[core][gpu-objects] Support intra-process communication #53798
Merged
edoakes merged 6 commits into ray-project:master on Jun 16, 2025
Conversation
Contributor
Pull Request Overview
This PR enables intra-process GPU tensor communication by bypassing unnecessary out-of-band transfers and adds tests to validate that behavior.
- Skip NCCL-style transfers when source and destination ranks match, allowing direct in-process tensor passing.
- Introduce `test_intra_gpu_tensor_transfer` to cover pure GPU, mixed CPU/GPU, and large-tensor intra-process transfers.
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| python/ray/_private/gpu_object_manager.py | Replace exception on same-rank transfers with a no-op continue |
| python/ray/tests/test_gpu_objects.py | Add test_intra_gpu_tensor_transfer for various intra-process scenarios |
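For context, a minimal sketch of what the same-rank skip might look like (hypothetical function and variable names; the actual logic in python/ray/_private/gpu_object_manager.py differs):

```
# Hypothetical sketch only: illustrative names, not Ray's real internals.
def schedule_transfers(transfers):
    """transfers: iterable of (src_rank, dst_rank, gpu_object_id) tuples."""
    out_of_band = []
    for src_rank, dst_rank, gpu_object_id in transfers:
        if src_rank == dst_rank:
            # Sender and receiver are the same actor process: the tensor is
            # already in the in-process store, and an NCCL send/recv to
            # oneself would block indefinitely, so skip the transfer.
            continue
        out_of_band.append((src_rank, dst_rank, gpu_object_id))
    return out_of_band
```

Same-rank refs are then resolved by reading the tensor directly from the in-process actor store instead of going out of band.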
Comments suppressed due to low confidence (2)
python/ray/tests/test_gpu_objects.py:70
- [nitpick] Consider adding a test scenario with multiple actors in the same process group to ensure the skip logic works correctly when more than one actor shares the same rank.
def test_intra_gpu_tensor_transfer(ray_start_regular):
python/ray/tests/test_gpu_objects.py:82
- The `random` module is used here but not imported; add `import random` at the top of the file to avoid a NameError.
cpu_data = random.randint(0, 100)
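For illustration, the suggested scenario might look like the following (a hedged sketch; the actor class, fixture, and assertions mirror the style of the existing tests but are assumed, not taken from this PR):

```
import pytest
import ray
import torch


@ray.remote
class EchoActor:  # hypothetical actor, mirroring the test file's style
    def echo(self, data):
        return data


def test_multi_actor_intra_process(ray_start_regular):
    # Hypothetical: two actors, each passing a ref back to itself, so the
    # source and destination ranks match and the skip path is exercised
    # for more than one actor at once.
    actors = [EchoActor.remote() for _ in range(2)]
    tensor = torch.randn((1,))
    for actor in actors:
        ref = actor.echo.remote(tensor)
        result = actor.echo.remote(ref)
        assert ray.get(result) == pytest.approx(tensor)
```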
stephanie-wang approved these changes on Jun 13, 2025
Contributor
stephanie-wang
left a comment
Could you add a description to the PR?
Member
Author
Done
Member
Author
@stephanie-wang CI passes!
Member
Author
cc @edoakes would you mind merging this PR? Thanks!
elliot-barn pushed a commit that referenced this pull request on Jun 18, 2025
If we pass GPU object refs within the same actor, NCCL send/recv will block indefinitely and the transfer is also unnecessary. This PR allows intra-process communication to retrieve tensors directly from the in-process actor store. Example:

```
small_tensor = torch.randn((1,))

# Intra-actor communication for pure GPU tensors
ref = actor.echo.remote(small_tensor)
result = actor.double.remote(ref)
assert ray.get(result) == pytest.approx(small_tensor * 2)
```

## Related issue number

Closes #51685

Signed-off-by: Kai-Hsun Chen <kaihsun@anyscale.com>
Signed-off-by: elliot-barn <elliot.barnwell@anyscale.com>
minerharry pushed a commit to minerharry/ray that referenced this pull request on Jun 27, 2025
elliot-barn pushed a commit that referenced this pull request on Jul 2, 2025

Why are these changes needed?
If we pass GPU object refs within the same actor, NCCL send/recv will block indefinitely and the transfer is also unnecessary. This PR allows intra-process communication to retrieve tensors directly from the in-process actor store.
Example:
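```
small_tensor = torch.randn((1,))

# Intra-actor communication for pure GPU tensors
ref = actor.echo.remote(small_tensor)
result = actor.double.remote(ref)
assert ray.get(result) == pytest.approx(small_tensor * 2)
```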
Related issue number
Closes #51685
Checks
- I've signed off every commit (`git commit -s`) in this PR.
- I've run `scripts/format.sh` to lint the changes in this PR.
- If I added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file.