Memory leak after OOM (maybe RRelu specific) #38966
Closed
Labels
high priority · module: cuda (Related to torch.cuda, and CUDA support in general) · module: memory usage (PyTorch is using more memory than it should, or it is leaking memory) · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
🐛 Bug
PyTorch 1.5
Python 3.7
Windows 10
NVIDIA RTX 2080 8GB (6GB)
To reproduce, catch the "RuntimeError: CUDA out of memory." exception raised by torch.rrelu(); GPU memory leaks after the OOM is caught.
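A minimal repro sketch of the pattern described above (the function name, sizing heuristic, and the leak check are my own, not from the report): trigger an OOM inside torch.rrelu, catch the RuntimeError, drop all references, and compare allocated CUDA memory against the pre-call baseline. With training=True, rrelu allocates an extra noise tensor the same size as the input, so an input using well over half of free memory can itself fit while the rrelu call OOMs.

```python
import gc
import torch


def rrelu_oom_leak_check():
    """Catch a CUDA OOM raised by torch.rrelu, then report whether
    allocated GPU memory returns to the pre-call baseline."""
    if not torch.cuda.is_available():
        return "skipped"  # requires a CUDA device
    torch.cuda.empty_cache()
    baseline = torch.cuda.memory_allocated()
    x = None
    try:
        # Size the input at ~60% of free memory (float32, 4 bytes/element):
        # the input allocation succeeds, but rrelu's same-sized noise tensor
        # should not fit, forcing the OOM inside the rrelu call itself.
        free_bytes, _ = torch.cuda.mem_get_info()
        x = torch.empty(int(free_bytes * 0.6) // 4, device="cuda")
        torch.rrelu(x, training=True)
        return "no-oom"  # could not trigger the OOM on this GPU
    except RuntimeError as err:
        if "out of memory" not in str(err):
            raise  # some other failure; re-raise it
    finally:
        # Drop every local reference so only leaked memory stays allocated.
        x = None
        gc.collect()
    leaked = torch.cuda.memory_allocated() - baseline
    return "leaked" if leaked > 0 else "ok"


if __name__ == "__main__":
    print(rrelu_oom_leak_check())
```

On a machine without a CUDA device the function simply returns "skipped"; on an affected build, "leaked" indicates bytes still charged to the allocator after the exception handler has released every reference.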
cc @ezyang @gchanan @zou3519 @ngimel