[DTensor] single-dim expander raises clear inplace error (#173572)
wconstab wants to merge 2 commits into gh/wconstab/508/base
Conversation
In-place ops for DTensor have a restriction: you're not allowed to redistribute the 'in-place' tensor. This means that in some cases sharding propagation has to fail, because the in-place input is not compatible with any of the possible sharding strategies. This PR makes sure this case raises the expected informative error rather than a confusing error about selecting the minimum cost over an empty sharding-strategy list.
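The check described above can be sketched as follows. This is a hypothetical, simplified stand-in for the actual PyTorch sharding-propagation code: the function name `filter_inplace_strategies`, the dict-based strategy representation, and the placement strings are all illustrative assumptions, not the real DTensor internals.

```python
# Hypothetical sketch (not the actual PyTorch implementation) of the check
# this PR adds: when propagating shardings for an in-place op, drop any
# strategy that would redistribute the in-place input, and raise an
# informative error if no strategy survives.

def filter_inplace_strategies(op_name, input_placement, strategies):
    """Keep only strategies whose input placement matches the in-place
    input's current placement; in-place ops cannot redistribute it."""
    valid = [s for s in strategies if s["input_placement"] == input_placement]
    if not valid:
        # Previously this fell through to "select min cost over an empty
        # strategies list"; now we raise a clear, actionable error instead.
        raise RuntimeError(
            f"{op_name}: in-place operations that require placement changes "
            f"are not supported. The input has placement {input_placement}, "
            f"but no valid strategy preserves this placement. "
            f"Please use the out-of-place version of this operation instead."
        )
    return valid

# Example: every strategy requires a 'Replicate' input, but the in-place
# input is sharded, so the informative error is raised.
strategies = [{"input_placement": "Replicate", "cost": 0.0}]
try:
    filter_inplace_strategies("aten.add_.Tensor", "Shard(0)", strategies)
except RuntimeError as e:
    print(e)
```

The point of the change is purely diagnostic: the failure itself is unavoidable, but the error now names the op, the blocking placement, and the workaround.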
🔗 See artifacts and rendered test results at hud.pytorch.org/pr/173572. ✅ No failures as of commit 2bb4e85 with merge base 7754b55.
f"{op_schema.op}: in-place operations that require placement changes "
f"are not supported. The input has placement {blocking_inplace_input_placements}, "
f"but no valid strategy preserves this placement. "
f"Please use the out-of-place version of this operation instead."
nit: or suggest redistributing to {one of the possible placements}?
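The reviewer's nit could look like the sketch below: append the set of placements the valid strategies would accept, so users know what to redistribute to before calling the op. The helper name `build_inplace_error` and its parameters are hypothetical, chosen only to illustrate the suggestion.

```python
# Hypothetical sketch of the reviewer's suggestion: extend the error
# message with the candidate placements, when any are known. This is
# illustrative, not the actual PyTorch code.

def build_inplace_error(op_name, input_placement, candidate_placements):
    suggestion = ""
    if candidate_placements:
        suggestion = (
            f" Alternatively, redistribute the input to one of the "
            f"supported placements: {sorted(candidate_placements)}."
        )
    return (
        f"{op_name}: in-place operations that require placement changes "
        f"are not supported. The input has placement {input_placement}, "
        f"but no valid strategy preserves this placement. "
        f"Please use the out-of-place version of this operation instead."
        + suggestion
    )

msg = build_inplace_error("aten.add_.Tensor", "Shard(0)", {"Replicate"})
print(msg)
```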
@pytorchbot merge
Merge started: your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Pull Request resolved: #173567 Approved by: https://github.com/tianyu-l, https://github.com/fegin ghstack dependencies: #172479, #173572
ghstack-source-id: 76bf5f7 Pull Request resolved: pytorch/pytorch#173572