[DTensor] report registered ops#176034
Conversation
python -m torch.distributed.tensor._ops.strategy_validation --report

```
======================================================================
DTensor operator registration report
======================================================================
Directly registered:
  rule (register_prop_rule): 4
  op_strategy (register_op_strategy): 612
  single_dim_strategy: 13
  total: 625

Decomposition table (not directly registered): 669
  These ops have entries in torch._decomp.decomposition_table but no
  direct DTensor strategy. They may work at runtime via
  DecompShardingStrategy if all decomposed sub-ops are supported.
  Additional ops beyond this count may also be reachable via CIA
  (CompositeImplicitAutograd) decompositions.
```

with --report-full

```
rule (4):
  aten.convolution.default
  aten.convolution_backward.default
  aten.index.Tensor
  aten.index_select.default

op_strategy (612):
  aten.__ilshift__.Scalar
  aten.__ilshift__.Tensor
  aten.__irshift__.Scalar
  aten.__irshift__.Tensor
  ...(truncated for git commit msg)

single_dim_strategy (13):
  aten._fft_c2c.default
  aten._fft_c2r.default
  aten._fft_r2c.default
  aten._index_put_impl_.default
  ...(truncated for git commit msg)

decomp table (not directly registered) (669):
  aten.__iand__.Scalar
  aten.__iand__.Tensor
  aten.__ior__.Scalar
  aten.__ior__.Tensor
  ...(truncated for git commit msg)
```
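The report above groups ops by registration mechanism and then diffs the decomposition table against the set of directly registered ops. A minimal sketch of that reporting pattern, using plain dicts with illustrative stand-in data (`REGISTRY` and `DECOMP_TABLE` are hypothetical names, not the actual DTensor data structures):

```python
from collections import Counter

# op name -> registration mechanism (illustrative stand-in data)
REGISTRY = {
    "aten.convolution.default": "rule",
    "aten.index.Tensor": "rule",
    "aten.mm.default": "op_strategy",
    "aten._fft_c2c.default": "single_dim_strategy",
}

# ops that have a decomposition (illustrative stand-in data)
DECOMP_TABLE = {"aten.mm.default", "aten.__iand__.Scalar", "aten.__ior__.Scalar"}

def report(registry, decomp_table):
    # tally directly registered ops by mechanism
    by_kind = Counter(registry.values())
    # decomposable ops with no direct registration; per the report text,
    # these may still work at runtime if all decomposed sub-ops are supported
    decomp_only = sorted(decomp_table - registry.keys())
    return by_kind, decomp_only

kinds, decomp_only = report(REGISTRY, DECOMP_TABLE)
print(dict(kinds))       # {'rule': 2, 'op_strategy': 1, 'single_dim_strategy': 1}
print(len(decomp_only))  # 2
```

The real tool additionally accounts for CIA (CompositeImplicitAutograd) decompositions, which this sketch does not model.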
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/176034

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 4417437 with merge base 7eeab8a: FLAKY - the following job failed but was likely due to flakiness present on trunk.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
ghstack-source-id: 1fe849c
Pull Request resolved: #176034
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
The merge job was canceled or timed out. This most often happens if two merge requests were issued for the same PR, or if the merge job was waiting for more than 6 hours for tests to finish. In the latter case, please do not hesitate to reissue the merge command.
@pytorchbot merge
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Pull Request resolved: pytorch#176034
Approved by: https://github.com/pianpwk
ghstack dependencies: pytorch#175821