
[shard prop] single-dim rules for LayerNorm, RMSNorm FW/BW#179173

Closed
pianpwk wants to merge 3 commits into gh/pianpwk/128/base from gh/pianpwk/128/head

Conversation

Contributor

@pianpwk pianpwk commented Apr 2, 2026

Removes the op strategies for LayerNorm and RMSNorm forward/backward, since they don't compose well with AutoParallel, in favor of single-dim strategies.

I think this should fix meta-pytorch/autoparallel#142, and maybe allow us to delete the overrides in meta-pytorch/autoparallel#399, meta-pytorch/autoparallel#373
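For context, a minimal sketch (not part of this PR, and independent of the DTensor API): LayerNorm and RMSNorm reduce only over the trailing normalized dimensions, so rows along any non-normalized (e.g. batch) dimension are computed independently. This is why a shard on such a dimension can propagate through the op without redistribution, which is the property single-dim strategies describe per mesh dimension. Illustrated with a plain NumPy RMSNorm:

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    # Reduce only over the last (feature) dim; each batch row is independent.
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return x / rms * weight

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w = np.ones(8)

full = rms_norm(x, w)
# "Shard" the batch dim across two ranks and compute each shard locally:
local = np.concatenate([rms_norm(x[:2], w), rms_norm(x[2:], w)])
assert np.allclose(full, local)  # batch-dim sharding needs no communication
```

Sharding the feature dim instead would split the mean reduction across ranks and require communication, which is why that placement is not free for these ops.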

Stack from ghstack (oldest at bottom):

[ghstack-poisoned]
pianpwk added a commit that referenced this pull request Apr 2, 2026
@pytorch-bot

pytorch-bot Bot commented Apr 2, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/179173

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 6f587c6 with merge base e3473e8:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot Bot added labels ciflow/dtensor (Run DTensor specific tests), ciflow/inductor, ciflow/torchtitan (Run TorchTitan integration tests), release notes: distributed (dtensor) (release notes category) Apr 2, 2026
@pianpwk pianpwk changed the title from "[shard prop] single-dim rules for LayerNorm, RMSNorm FW/BW" to "[shard prop] [shard prop] single-dim rules for conv, uniform, scatter, index ops" Apr 2, 2026
@pianpwk pianpwk changed the title from "[shard prop] [shard prop] single-dim rules for conv, uniform, scatter, index ops" to "[shard prop] single-dim rules for conv, uniform, scatter, index ops" Apr 2, 2026
@pianpwk pianpwk changed the title from "[shard prop] single-dim rules for conv, uniform, scatter, index ops" to "[shard prop] single-dim rules for LayerNorm, RMSNorm FW/BW" Apr 2, 2026
[ghstack-poisoned]
@pianpwk pianpwk requested review from anshul-si, fmassa and zpcore April 3, 2026 04:41
Member

@zpcore zpcore left a comment


LGTM!

@pianpwk
Contributor Author

pianpwk commented Apr 8, 2026

@pytorchbot merge

@pytorch-bot pytorch-bot Bot added the ciflow/trunk Trigger trunk jobs on your pull request label Apr 8, 2026
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@pytorchmergebot
Collaborator

Merge failed

Reason: 1 job has failed, the first of them being: inductor / unit-test / inductor-pallas-cpu-build / build


[ghstack-poisoned]
@pianpwk
Contributor Author

pianpwk commented Apr 9, 2026

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).



Labels

ciflow/dtensor (Run DTensor specific tests), ciflow/inductor, ciflow/torchtitan (Run TorchTitan integration tests), ciflow/trunk (Trigger trunk jobs on your pull request), Merged, release notes: distributed (dtensor) (release notes category)
