[core] Deflake torch tensor transport test #62743
Merged
richardliaw merged 3 commits into ray-project:master · Apr 18, 2026
Conversation
Signed-off-by: Joshua Lee <joshlee@anyscale.com>
Contributor
Code Review
This pull request updates the test_torch_tensor_transport.py test suite to improve reliability by asserting the correct expected outcomes and gating on GPU availability via torch.cuda.is_available(). It also removes the deprecation warning for the container field in runtime_env.py. Feedback was given that this removal seems unrelated to the test updates and should likely be restored so users remain aware of the deprecation.
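The gating pattern the review describes can be sketched roughly as follows. This is a hypothetical fragment, not the actual test code: `expected_device` and `check_device` are illustrative names, and the real suite's assertions differ.

```python
# Sketch of the deflaking pattern: pick the expected device based on
# runtime GPU availability rather than hard-coding a GPU result.
# `torch` is imported optionally so the snippet also runs where it is
# not installed.
try:
    import torch
    cuda_available = torch.cuda.is_available()
except ImportError:  # torch not installed at all
    cuda_available = False

# Hypothetical stand-in for the device string a remote task reports back.
expected_device = "cuda:0" if cuda_available else "cpu"

def check_device(reported: str) -> None:
    # Assert against whichever device the environment actually provides,
    # instead of unconditionally expecting a GPU.
    assert reported == expected_device, (reported, expected_device)

check_device(expected_device)  # passes on both GPU and CPU machines
```

The point of the pattern is that the same test body produces a valid expectation on CPU-only CI workers and on GPU workers, which is what removes the flakiness.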
edoakes approved these changes · Apr 18, 2026
HLDKNotFound pushed a commit to chichic21039/ray that referenced this pull request · Apr 22, 2026
After ray-project#62492 we no longer set CUDA_VISIBLE_DEVICES="" when num_gpus=0 or unset. Torch throws a runtime error if it detects CUDA_VISIBLE_DEVICES=""; now that CUDA_VISIBLE_DEVICES is not set at all, it falls back to the NVIDIA driver to get the device ids. This follows up on ray-project#62653 by instead checking for the default cuda:0 GPU id in these tests. --------- Signed-off-by: Joshua Lee <joshlee@anyscale.com>