The hyperparameters are:
--batch_size 1 --max_source_seq_len 250 --max_target_seq_len 150
Fine-tuning runs fine through the first epoch, but GPU memory is exhausted at the second epoch. Is there a bug? I have tried many values for --max_source_seq_len and --max_target_seq_len, and the same error appears at the second epoch every time:
OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 23.99 GiB total capacity; 22.95 GiB already allocated; 0 bytes free; 23.00 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
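An OOM that appears only at the second epoch usually points not to sequence length but to memory that accumulates across epochs: a common cause is storing the loss tensor itself (which keeps its whole autograd graph alive) instead of a Python float, or running validation without `torch.no_grad()`. Independently, the error message itself suggests trying `max_split_size_mb` to reduce allocator fragmentation. A minimal sketch of both mitigations, assuming a typical PyTorch training script (the value 128 is taken from the error message and is tunable):

```python
import os

# Must be set before CUDA is first initialized (i.e., before any CUDA tensor
# is created); mitigates allocator fragmentation per the error's own hint.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# Common epoch-boundary leak inside the training loop:
#   epoch_losses.append(loss)         # keeps the full autograd graph alive
#   epoch_losses.append(loss.item())  # detaches to a plain Python float
# Likewise, wrap any per-epoch evaluation in `with torch.no_grad():`.
```

If the leak persists after these changes, comparing `torch.cuda.memory_allocated()` at the end of epoch 1 and epoch 2 can confirm whether allocated memory is actually growing or the failure is pure fragmentation.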