[LocalTensor] Cache DeviceMesh.get_coordinate results in LocalTensorMode#173836

Closed
wconstab wants to merge 7 commits into gh/wconstab/511/base from gh/wconstab/511/head

Conversation

@wconstab
Contributor

@wconstab wconstab commented Jan 29, 2026

Stack from ghstack (oldest at bottom):

The get_coordinate method was being called repeatedly with the same
DeviceMesh during operations like DTensor.from_local, recomputing the
same coordinate mapping each time. This adds a per-mode cache keyed
by mesh id to avoid redundant computation.

In profiling of sharding rule validation, get_coordinate accounted for
~86% of from_local call time. With caching, from_local latency dropped
from 4.55ms to 0.76ms (83% reduction).

Authored with Claude.
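The caching pattern described above can be sketched as follows. This is a minimal, hypothetical model of the approach, not PyTorch's actual implementation: the `_compute_coordinate` helper and the use of `id(mesh)` as the cache key are illustrative assumptions.

```python
# Hypothetical sketch of a per-mode coordinate cache keyed by mesh identity.
# Class and method names mirror the PR description, not PyTorch internals.
class DeviceMesh:
    """Minimal stand-in: computing a rank's coordinate is assumed costly."""

    def __init__(self, shape):
        self.shape = shape

    def _compute_coordinate(self, rank):
        # Unravel a flat rank into an N-D coordinate (the "expensive" step
        # that was previously redone on every get_coordinate call).
        coord = []
        for dim in reversed(self.shape):
            coord.append(rank % dim)
            rank //= dim
        return tuple(reversed(coord))


class LocalTensorMode:
    def __init__(self, rank):
        self.rank = rank
        # Per-mode cache keyed by mesh id, as the PR describes. Using id()
        # assumes cached meshes outlive the mode; otherwise ids could be
        # reused after garbage collection.
        self._coordinate_cache = {}

    def get_coordinate(self, mesh):
        key = id(mesh)
        if key not in self._coordinate_cache:
            self._coordinate_cache[key] = mesh._compute_coordinate(self.rank)
        return self._coordinate_cache[key]
```

Repeated calls with the same mesh (as happens inside `DTensor.from_local`) then hit the cache instead of recomputing the mapping.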

@pytorch-bot

pytorch-bot Bot commented Jan 29, 2026

This PR needs a release notes: label

If your changes are user facing and intended to be a part of release notes, please use a label starting with release notes:.

If not, please add the topic: not user facing label.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "topic: not user facing"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@pytorch-bot

pytorch-bot Bot commented Jan 29, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/173836

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 1ad1fb9 with merge base 4b0f7fb:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

wconstab added a commit that referenced this pull request Jan 29, 2026
ghstack-source-id: 02756a0
Pull Request resolved: #173836
@wconstab wconstab requested a review from dzmitry-huba February 2, 2026 22:50
wconstab added a commit that referenced this pull request Feb 3, 2026
ghstack-source-id: df2f1e5
Pull Request resolved: #173836
wconstab added a commit that referenced this pull request Feb 4, 2026
ghstack-source-id: 97e54a8
Pull Request resolved: #173836
@wconstab
Contributor Author

wconstab commented Feb 6, 2026

@pytorchbot merge

@pytorch-bot pytorch-bot Bot added the ciflow/trunk Trigger trunk jobs on your pull request label Feb 6, 2026
@pytorchmergebot
Collaborator

Merge failed

Reason: This PR needs a release notes: label


@wconstab wconstab added the topic: not user facing topic category label Feb 6, 2026
@wconstab
Contributor Author

wconstab commented Feb 6, 2026

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

radeksm pushed a commit to radeksm/pytorch that referenced this pull request Feb 20, 2026
…ode (pytorch#173836)

Pull Request resolved: pytorch#173836
Approved by: https://github.com/dzmitry-huba
@github-actions github-actions Bot deleted the gh/wconstab/511/head branch March 9, 2026 02:22

Labels

ciflow/trunk (Trigger trunk jobs on your pull request) · Merged · topic: not user facing (topic category)


3 participants