[DTensor] Report strategy_validation results per aten op variant #175892

pianpwk wants to merge 2 commits into gh/pianpwk/104/base
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/175892
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (1 Unrelated Failure)
As of commit a65416d with merge base 7e0feca.
FLAKY - The following job failed but was likely due to flakiness present on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
_assert_keys_normalized(ground_truth_valid, input_shapes, output_shape)
_assert_keys_normalized(dtensor_rules, input_shapes, output_shape)
op_str = str(aten_op) if aten_op else "(unknown)"
When can aten_op be unknown? Is it just for the case where the aten op was not found and we are basically going to abort the validator?
Ah, seems like defensive code; we shouldn't reach this.
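The defensive fallback under discussion can be illustrated with a small standalone sketch (`format_op` is a hypothetical name here, not the PR's actual function; `aten_op` is just a placeholder value rather than a real torch OpOverload):

```python
# Standalone sketch of the defensive fallback discussed above.
def format_op(aten_op) -> str:
    # Defensive: aten_op should always be set, but fall back to a
    # readable label rather than crashing the validator report.
    return str(aten_op) if aten_op else "(unknown)"

print(format_op("aten.min.dim"))  # -> aten.min.dim
print(format_op(None))            # -> (unknown)
```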
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 1 job has failed, first few of them are: trunk / macos-py3-arm64 / test (default, 1, 3, macos-m1-stable). Details for Dev Infra team: Raised by workflow job.
@pytorchbot merge -i
Merge started. Your change will be merged while ignoring the following 1 check: trunk / macos-py3-arm64 / test (default, 1, 3, macos-m1-stable). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
…y validation (#175893)

For multi-output ops like aten.min.dim (returns values + indices), the tool now tracks each output's placement separately instead of using a single output placement for all outputs. This makes the display explicit: `S(1) -> (P(min), P(min))` shows both outputs get P(min). ComboKey changes from (inputs, single_output_str) to (inputs, output_strs_tuple). PlacementCombination is simplified to a plain type alias. normalize_combo_key normalizes each output against its own shape.

Pull Request resolved: #175893
Approved by: https://github.com/wconstab
ghstack dependencies: #175892
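The ComboKey change described above can be sketched roughly as follows. This is an illustrative reconstruction, not the PR's actual code: the names mirror the description (ComboKey, normalize_combo_key), but the normalization rule shown here is a hypothetical stand-in.

```python
# Sketch of the #175893 change: the key's output component becomes a
# tuple of per-output placement strings, and each output is normalized
# against its own shape instead of a single shared output placement.

# ComboKey: (input placement strings, tuple of output placement strings)
ComboKey = tuple[tuple[str, ...], tuple[str, ...]]

def normalize_placement(placement: str, shape: tuple[int, ...]) -> str:
    # Illustrative rule only: a shard on a size-1 dim is equivalent to
    # Replicate, so normalize it to "R".
    if placement.startswith("S(") and shape[int(placement[2:-1])] == 1:
        return "R"
    return placement

def normalize_combo_key(
    inputs: tuple[str, ...],
    outputs: tuple[str, ...],
    input_shapes: tuple[tuple[int, ...], ...],
    output_shapes: tuple[tuple[int, ...], ...],
) -> ComboKey:
    norm_inputs = tuple(
        normalize_placement(p, s) for p, s in zip(inputs, input_shapes)
    )
    # Each output is normalized against its own shape (the key change).
    norm_outputs = tuple(
        normalize_placement(p, s) for p, s in zip(outputs, output_shapes)
    )
    return (norm_inputs, norm_outputs)
```

For a multi-output op like aten.min.dim, the second tuple would carry one placement string per output (values and indices), e.g. `("P(min)", "P(min)")`.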
When an OpInfo covers multiple aten variants (e.g. aten.min.default, aten.min.dim, aten.minimum.default), break down the correct/incorrect/missing counts per variant so it's clear which variant has issues.

ghstack-source-id: 927212a
Pull-Request: pytorch/pytorch#175892
Stack from ghstack (oldest at bottom):
When an OpInfo covers multiple aten variants (e.g. aten.min.default, aten.min.dim, aten.minimum.default), break down the correct/incorrect/missing counts per variant so it's clear which variant has issues.
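The per-variant breakdown described above could look roughly like this. This is a minimal sketch, not the PR's actual implementation; `summarize_by_variant` and the `(variant, status)` input shape are hypothetical names chosen for illustration.

```python
from collections import defaultdict

# Sketch: instead of one aggregate tally per OpInfo, keep
# correct/incorrect/missing counts keyed by the aten op variant,
# so a failing variant (e.g. aten.min.dim) is visible on its own.
def summarize_by_variant(results):
    # results: iterable of (aten_variant, status) pairs, where status
    # is one of "correct", "incorrect", "missing".
    counts = defaultdict(lambda: {"correct": 0, "incorrect": 0, "missing": 0})
    for variant, status in results:
        counts[variant][status] += 1
    return dict(counts)

report = summarize_by_variant([
    ("aten.min.default", "correct"),
    ("aten.min.dim", "incorrect"),
    ("aten.min.dim", "missing"),
    ("aten.minimum.default", "correct"),
])
for variant, c in sorted(report.items()):
    print(f"{variant}: {c['correct']} correct, "
          f"{c['incorrect']} incorrect, {c['missing']} missing")
```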