Support pin_memory() during CUDA stream capture.#146924
galv wants to merge 3 commits into pytorch:main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/146924
Note: Links to docs will display an error until the docs builds have been completed.
❌ 8 New Failures, 1 Unrelated Failure as of commit c83e708 with merge base 99da439. One job is marked as unstable, possibly due to flakiness on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Okay, so this "works" now, but the current problem is that CUDA Graphs don't have a way to "own" their pinned allocations created by CachingHostAllocator.cpp right now. This means that the user must somehow keep pinned allocations alive for the duration of the corresponding cuda graph, which is not a very good UX at all. |
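Until graphs own these allocations, the workaround falls on the caller. A minimal sketch of what that might look like, assuming this PR's behavior that pin_memory() succeeds during capture; the `keep_alive` list is just an illustrative convention, not an API:

```python
import torch

graph = torch.cuda.CUDAGraph()
keep_alive = []  # illustrative: caller-held references to pinned tensors

with torch.cuda.graph(graph, capture_error_mode="global"):
    pinned = torch.randn(8).pin_memory()
    keep_alive.append(pinned)  # must stay referenced as long as `graph` exists
    gpu = torch.randn(8, device="cuda")
    # Captured async D2H copy; each replay writes into the same pinned buffer.
    pinned.copy_(gpu, non_blocking=True)

graph.replay()  # only valid while `keep_alive` keeps the pinned storage live
```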
Force-pushed from a1957ba to 2c6d98c.
The current code actually works as intended, though it is rudimentary and probably has subtle bugs.
This code previously did not work:
```python
import torch

def test():
    # First capture: calling pin_memory() during capture previously failed,
    # because the caching host allocator queries CUDA events.
    graph = torch.cuda.CUDAGraph()
    with torch.cuda.graph(graph, capture_error_mode="global"):
        data = torch.randn(8)
        data_gpu = torch.randn(8, device="cuda")
        data = data.pin_memory()
        data_gpu.to(data, non_blocking=True)  # enqueue an async D2H copy

    # Second, independent capture: reusing cached pinned blocks also goes
    # through the allocator's event queries.
    graph2 = torch.cuda.CUDAGraph()
    with torch.cuda.graph(graph2, capture_error_mode="global"):
        data2 = torch.randn(8)
        data2_gpu = torch.randn(8, device="cuda")
        data2 = data2.pin_memory()
        data2_gpu.to(data2, non_blocking=True)

if __name__ == "__main__":
    test()
```
We use events to signal when a particular usage of a pinned host memory block has completed. Every time we call pin_memory(), cudaEventQuery() gets called to see if we can reuse existing blocks rather than allocating new ones. cudaEventQuery() is not allowed during stream capture unless we set the thread to relaxed capture mode. (This is safe in this case so long as we make sure that the pinned buffer stays live until the CUDA graph is destroyed.)
I haven't fully thought this through. I need to make sure that a pinned memory tensor does in fact stay live until its corresponding CUDA graph is destroyed. (I haven't done this yet!)
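To make the relaxed-mode point concrete, here is a minimal Python sketch using torch.cuda.Event as a stand-in for the allocator's internal events. The real change lives in C++ in CachingHostAllocator.cpp, so this is only illustrative:

```python
import torch

ev = torch.cuda.Event()
ev.record()  # stand-in for the event recorded when a pinned block is returned

g = torch.cuda.CUDAGraph()
# Under "global" or "thread_local" capture modes, cudaEventQuery() is
# disallowed and would invalidate the capture. "relaxed" mode permits it,
# which is what the allocator relies on when pin_memory() runs mid-capture.
with torch.cuda.graph(g, capture_error_mode="relaxed"):
    x = torch.zeros(8, device="cuda")   # give the graph at least one node
    block_reusable = ev.query()         # cudaEventQuery() under the hood
```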
Draft.
Use external flag only when in stream capture mode.
Initial correct implementation with proper ownership.
Cleanup + working on pools across multiple graphs.
Rudimentary test that capture into same pool across multiple graphs works.
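The last commit above tests that capture into the same pool across multiple graphs works. On the Python side, sharing a pool between graphs goes through torch's existing pool-sharing API, roughly as sketched here; the pinned-host-block pooling in this PR happens inside the allocator, so this is only for orientation:

```python
import torch

g1 = torch.cuda.CUDAGraph()
g2 = torch.cuda.CUDAGraph()

with torch.cuda.graph(g1):
    a = torch.randn(8, device="cuda") * 2

# Capture g2 into g1's memory pool so both captures share one pool;
# g2 may consume g1's outputs.
with torch.cuda.graph(g2, pool=g1.pool()):
    b = a + 1

# Replaying in capture order (g1, then g2) is the safe pattern for
# graphs that share a pool.
g1.replay()
g2.replay()
```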
Force-pushed from 28ae6f2 to dc79019.
Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Discovered in #146145 (comment)
Not a high priority, but I wanted to start to figure out what proper support might look like.