Fixing get_local_rank() variable missing when compiled #165432

Closed

arkadip-maitra wants to merge 3 commits into pytorch:main from arkadip-maitra:fix_#165215

Conversation

@arkadip-maitra (Collaborator) commented Oct 14, 2025

pytorch-bot bot commented Oct 14, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/165432

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit e1ffbdf with merge base 74db92b:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@arkadip-maitra (Collaborator, Author) commented:

@pytorchbot label "topic: not user facing"

pytorch-bot bot added the "topic: not user facing" label Oct 14, 2025
pytorch-bot bot commented Oct 14, 2025

Didn't find following labels among repository labels: module:DeviceMesh

@arkadip-maitra (Collaborator, Author) commented:

@pytorchbot label "module: DeviceMesh"

Review thread on the changed lines:

const_args = [x.as_python_constant() for x in args]
const_kwargs = {k: v.as_python_constant() for k, v in kwargs.items()}
return ConstantVariable.create(
    self.value.get_local_rank(*const_args, **const_kwargs)
)
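For context, a minimal sketch of how this branch plausibly sits inside Dynamo's DeviceMeshVariable.call_method in torch/_dynamo/variables/distributed.py; only the four lines above come from the PR, the surrounding method structure is an assumption:

def call_method(self, tx, name, args, kwargs):
    # A DeviceMesh is a trace-time constant, so get_local_rank() can be
    # evaluated eagerly during tracing and its int result baked into the
    # graph, instead of Dynamo erroring on an unsupported method call.
    if name == "get_local_rank":
        # Any arguments (e.g. a mesh_dim name or index) must themselves
        # be trace-time constants for this folding to be sound.
        const_args = [x.as_python_constant() for x in args]
        const_kwargs = {k: v.as_python_constant() for k, v in kwargs.items()}
        return ConstantVariable.create(
            self.value.get_local_rank(*const_args, **const_kwargs)
        )
    return super().call_method(tx, name, args, kwargs)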
Collaborator commented:

can you add a test? you can put it in test_dtensor_compile.py, similar to https://github.com/pytorch/pytorch/blob/main/test/distributed/tensor/test_dtensor_compile.py#L219 (the test should compile something like the below, and test that the output is the same under compile and eager)

def f(dtensor):
    local_rank = dtensor.device_mesh.get_local_rank("dp")
    return dtensor * local_rank
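For reference, a rough sketch of what such a test could look like; the harness details (self.world_size, the DTensor import path, the aot_eager backend) are assumptions modeled on the existing tests in test_dtensor_compile.py, not the PR's final test:

import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import DTensor, Replicate

def test_compile_device_mesh_get_local_rank(self):
    # 1-D mesh with a named dim so get_local_rank("dp") has a dim to resolve.
    mesh = init_device_mesh("cpu", (self.world_size,), mesh_dim_names=("dp",))

    def f(dtensor):
        local_rank = dtensor.device_mesh.get_local_rank("dp")
        return dtensor * local_rank

    x = DTensor.from_local(torch.ones(4), mesh, [Replicate()])
    eager_out = f(x)
    compiled_out = torch.compile(f, backend="aot_eager", fullgraph=True)(x)
    # Compiled and eager results should match exactly.
    self.assertEqual(eager_out, compiled_out)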

Collaborator commented:

we should also add a test that this handles the mesh_dim argument being optional

@arkadip-maitra (Collaborator, Author) commented:

> we should also add a test that this handles the mesh_dim argument being optional

mesh_dim argument not being passed should fail? Can you explain what you mean?

@arkadip-maitra (Collaborator, Author) commented:

> can you add a test? you can put it in test_dtensor_compile.py, similar to https://github.com/pytorch/pytorch/blob/main/test/distributed/tensor/test_dtensor_compile.py#L219 (the test should compile something like the below, and test that the output is the same under compile and eager)
>
> def f(dtensor):
>     local_rank = dtensor.device_mesh.get_local_rank("dp")
>     return dtensor * local_rank

Added test.

Collaborator commented:

> mesh_dim argument not being passed should fail? Can you explain what you mean?

in the eager API, it looks like you don't have to pass in a mesh_dim; we won't error as long as ndim == 1: https://github.com/pytorch/pytorch/blob/main/torch/distributed/device_mesh.py#L1050. So we should have a separate test for that.
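To make that eager behavior concrete, a small illustration (world_size is a placeholder and the snippet assumes an initialized process group; the exact line linked above may drift):

from torch.distributed.device_mesh import init_device_mesh

world_size = 4  # placeholder; in practice dist.get_world_size()

# 1-D mesh: mesh_dim may be omitted, since there is only one dim to pick.
mesh_1d = init_device_mesh("cpu", (world_size,))
rank = mesh_1d.get_local_rank()  # OK when mesh_1d.ndim == 1

# 2-D mesh: mesh_dim is required; omitting it errors in eager mode too.
mesh_2d = init_device_mesh("cpu", (2, world_size // 2), mesh_dim_names=("dp", "tp"))
rank_dp = mesh_2d.get_local_rank("dp")
# mesh_2d.get_local_rank()  # would raise: ambiguous when ndim > 1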

@arkadip-maitra (Collaborator, Author) commented Oct 16, 2025:

> > mesh_dim argument not being passed should fail? Can you explain what you mean?
>
> in the eager API, it looks like you don't have to pass in a mesh_dim; we won't error as long as ndim == 1: https://github.com/pytorch/pytorch/blob/main/torch/distributed/device_mesh.py#L1050. So we should have a separate test for that.

Thanks. Added that test case. It should be good now.

pytorch-bot bot added the "oncall: distributed" label Oct 15, 2025
@bdhirsh (Collaborator) left a comment:

thanks for the fix!

@bdhirsh (Collaborator) commented Oct 16, 2025:

@pytorchbot merge

pytorch-bot bot added the "ciflow/trunk" label Oct 16, 2025
@pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

Chao1Han pushed a commit to Chao1Han/pytorch that referenced this pull request Oct 21, 2025
zhudada0120 pushed a commit to zhudada0120/pytorch that referenced this pull request Oct 22, 2025

Labels

ciflow/trunk, Merged, module: DeviceMesh, module: dynamo, oncall: distributed, open source, topic: not user facing


Development

Successfully merging this pull request may close these issues:

[DeviceMesh] DeviceMesh.get_local_rank() failing when inside of torch.compile with wrong error message
