
Fix type annotation for _sym_get_coordinate #177446

Closed
aorenste wants to merge 9 commits into gh/aorenste/219/base from gh/aorenste/219/head

Conversation

@aorenste
Contributor

aorenste commented Mar 14, 2026

@pytorch-bot

pytorch-bot bot commented Mar 14, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/177446

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures, 2 Unrelated Failures

As of commit dac6fce with merge base 417a890:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following jobs failed but were already failing on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

pytorch-bot bot added the ciflow/dtensor (Run DTensor specific tests) label Mar 16, 2026
aorenste marked this pull request as ready for review March 18, 2026 02:49
aorenste added the ciflow/trunk (Trigger trunk jobs on your pull request) label Mar 18, 2026
@aorenste
Contributor Author

The BC linter error (Function Shard.local_shard_size_and_offset: curr_local_size changed from int to IntLikeType) is expected.
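The widened annotation can be illustrated with a minimal sketch. This is not the actual PyTorch source; FakeSymInt is a hypothetical stand-in for torch.SymInt, and only the IntLikeType name is borrowed from the BC note:

```python
from typing import Union


class FakeSymInt:
    """Hypothetical stand-in for torch.SymInt, used only for illustration."""

    def __init__(self, hint: int) -> None:
        self.hint = hint

    def __int__(self) -> int:
        return self.hint


# Union of a plain eager int and a symbolic integer, per the BC note's name.
IntLikeType = Union[int, FakeSymInt]


def local_shard_size_and_offset(curr_local_size: IntLikeType, offset: int) -> tuple:
    # The wider annotation lets callers pass either a concrete int (eager)
    # or a symbolic value (traced); normalize before using it as an int.
    return (int(curr_local_size), offset)


print(local_shard_size_and_offset(8, 2))
print(local_shard_size_and_offset(FakeSymInt(8), 2))
```

Callers that previously assumed a concrete `int` return type would now see the wider union, which is exactly the kind of signature change the BC linter flags.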

aorenste added the suppress-bc-linter (Suppresses the failures of API backward-compatibility linter (Lint/bc_linter)) label Mar 18, 2026
@aorenste
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@aorenste
Contributor Author

@pytorchbot revert -m "blocking the revert of #177445 which is failing internal tests"

@pytorch-bot

pytorch-bot bot commented Mar 19, 2026

❌ 🤖 pytorchbot command failed:

@pytorchbot revert: error: the following arguments are required: -c/--classification

usage: @pytorchbot revert -m MESSAGE -c
                          {nosignal,ignoredsignal,landrace,weird,ghfirst,autorevert}

Try @pytorchbot --help for more info.

@aorenste
Contributor Author

@pytorchbot revert -m "blocking the revert of #177445 which is failing internal tests" -c nosignal

@yangw-dev
Contributor

@pytorchbot revert -m "the pr breaks lint test, please fix it, see https://github.com/pytorch/pytorch/actions/runs/23302872205/job/67768676562" -c nosignal

@pytorchmergebot
Collaborator

@pytorchbot successfully started a revert job. Check the current status here.
Questions? Feedback? Please reach out to the PyTorch DevX Team

pytorchmergebot added a commit that referenced this pull request Mar 19, 2026
@pytorchmergebot
Collaborator

@aorenste your PR has been successfully reverted.

ryanzhang22 pushed a commit to ryanzhang22/pytorch that referenced this pull request Mar 19, 2026
This reverts commit b8d53c6.

Reverted pytorch#177446 on behalf of https://github.com/aorenste due to blocking the revert of pytorch#177445 which is failing internal tests ([comment](pytorch#177446 (comment)))
ryanzhang22 pushed a commit to ryanzhang22/pytorch that referenced this pull request Mar 19, 2026
@aorenste
Contributor Author

The test/distributed/tensor/test_random_ops.py::DistTensorRandomOpTest::test_pipeline_parallel_manual_seed failure is pre-existing on trunk.
@pytorchbot merge -i

@pytorchmergebot
Collaborator

pytorchmergebot pushed a commit that referenced this pull request Mar 20, 2026
Use mesh._sym_get_coordinate() in _compute_rng_offsets so that RNG
offset values become symbolic SymInts (via _runtime_compute_coordinate_on_dim)
when compile_on_one_rank is active. Previously, mesh.get_coordinate()
returned concrete rank-specific integers that got baked into the compiled
graph, producing different graphs on different ranks.

Also refactors test_compile_on_one_rank.py to extract graph-comparison
helpers (_assert_graphs_identical_across_ranks, _compile_and_capture_graph)
and adds a test for DTensor random op graph consistency.

Authored with Claude.
Pull Request resolved: #177447
Approved by: https://github.com/yiming0416
ghstack dependencies: #177446
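The graph-divergence problem described above can be sketched with a toy tracer. This is not PyTorch internals; the function and the "graph" representation are purely illustrative of why a baked-in concrete rank yields per-rank graphs while a symbolic coordinate yields one shared graph:

```python
# Toy sketch: contrast a concrete rank captured as a constant (analogous to
# mesh.get_coordinate() baking rank-specific ints into the compiled graph)
# with a symbolic coordinate that stays an input to the graph (analogous to
# mesh._sym_get_coordinate() producing a SymInt). All names are illustrative.

def trace_rng_offset(rank_constant=None):
    ops = []  # pretend captured "graph" as a list of op strings
    if rank_constant is not None:
        # Concrete path: the rank folds into a constant at trace time,
        # so each rank records a different graph.
        ops.append(f"add(offset, {rank_constant * 4})")
    else:
        # Symbolic path: the coordinate remains an unresolved input,
        # so every rank records the identical graph.
        ops.append("add(offset, mul(sym_rank, 4))")
    return ops


# Baked-in constants produce rank-specific graphs:
print(trace_rng_offset(0))
print(trace_rng_offset(3))
# The symbolic form is identical on every rank:
print(trace_rng_offset())
```

In the concrete case the two "graphs" differ between ranks 0 and 3; in the symbolic case the same graph is reusable everywhere, which is what the graph-consistency test in the PR checks for.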
EmanueleCoradin pushed a commit to EmanueleCoradin/pytorch that referenced this pull request Mar 30, 2026
AaronWang04 pushed a commit to AaronWang04/pytorch that referenced this pull request Mar 31, 2026
nklshy-aws pushed a commit to nklshy-aws/pytorch that referenced this pull request Apr 7, 2026

Labels

ci-no-td (Do not run TD on this PR)
ciflow/dtensor (Run DTensor specific tests)
ciflow/inductor
ciflow/torchtitan (Run TorchTitan integration tests)
ciflow/trunk (Trigger trunk jobs on your pull request)
Merged
module: compiled autograd
compiled_autograd
module: dynamo
release notes: distributed (dtensor)
Reverted
suppress-bc-linter (Suppresses the failures of API backward-compatibility linter (Lint/bc_linter))


4 participants