
[DTensor] report registered ops #176034

Closed

wconstab wants to merge 1 commit into gh/wconstab/552/base from gh/wconstab/552/head

Conversation

@wconstab
Contributor

@wconstab wconstab commented Feb 28, 2026

Stack from ghstack (oldest at bottom):

`python -m torch.distributed.tensor._ops.strategy_validation --report`:

```
======================================================================
DTensor operator registration report
======================================================================

Directly registered:
  rule (register_prop_rule):               4
  op_strategy (register_op_strategy):    612
  single_dim_strategy:                    13
  total:                                 625

Decomposition table (not directly registered): 669
  These ops have entries in torch._decomp.decomposition_table but no
  direct DTensor strategy. They may work at runtime via
  DecompShardingStrategy if all decomposed sub-ops are supported.
  Additional ops beyond this count may also be reachable via CIA
  (CompositeImplicitAutograd) decompositions.
```
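The split between "directly registered" and "decomposition table (not directly registered)" is set arithmetic over operator-overload names. A minimal, self-contained sketch of that bookkeeping (the op names and counts below are illustrative stand-ins, not the real registries):

```python
# Illustrative model of the report's counting: each registry is a set of
# operator-overload names, and an op lands in "decomp table (not directly
# registered)" only if it has a decomposition but no direct DTensor strategy.
# All names here are made up for the example.
prop_rules = {"aten.convolution.default", "aten.index.Tensor"}
op_strategies = {"aten.add.Tensor", "aten.mul.Tensor"}
single_dim_strategies = {"aten._fft_c2c.default"}
decomp_table = {"aten.add.Tensor", "aten.__iand__.Scalar", "aten.__ior__.Tensor"}

directly_registered = prop_rules | op_strategies | single_dim_strategies
decomp_only = decomp_table - directly_registered

print("total directly registered:", len(directly_registered))        # 5
print("decomp table (not directly registered):", len(decomp_only))   # 2
```

The subtraction is why an op that is directly registered (like `aten.add.Tensor` above) does not also appear in the decomposition-table count, even when it has an entry in the decomposition table.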

With `--report-full`:

```
rule (4):
  aten.convolution.default
  aten.convolution_backward.default
  aten.index.Tensor
  aten.index_select.default

op_strategy (612):
  aten.__ilshift__.Scalar
  aten.__ilshift__.Tensor
  aten.__irshift__.Scalar
  aten.__irshift__.Tensor
  ...(truncated for git commit msg)

single_dim_strategy (13):
  aten._fft_c2c.default
  aten._fft_c2r.default
  aten._fft_r2c.default
  aten._index_put_impl_.default
  ...(truncated for git commit msg)

decomp table (not directly registered) (669):
  aten.__iand__.Scalar
  aten.__iand__.Tensor
  aten.__ior__.Scalar
  aten.__ior__.Tensor
  ...(truncated for git commit msg)

```
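The caveat in the report ("may work at runtime via DecompShardingStrategy if all decomposed sub-ops are supported") amounts to a recursive reachability check. A hypothetical sketch under invented registries, not the real PyTorch classes or decompositions:

```python
# Hypothetical fallback logic: an op with no direct strategy is usable only if
# every op in its decomposition is itself usable. DIRECT and DECOMP are
# stand-ins for the real registries; the decompositions are invented.
DIRECT = {"aten.add.Tensor", "aten.mul.Tensor"}
DECOMP = {
    "aten.addcmul.default": ["aten.add.Tensor", "aten.mul.Tensor"],
    "aten.huber_loss.default": ["aten.sub.Tensor", "aten.where.self"],
}

def supported(op: str) -> bool:
    if op in DIRECT:
        return True  # has a direct DTensor strategy
    if op in DECOMP:
        # usable only if the decomposition bottoms out in supported ops
        return all(supported(sub) for sub in DECOMP[op])
    return False

print(supported("aten.addcmul.default"))     # True: both sub-ops are direct
print(supported("aten.huber_loss.default"))  # False: sub-ops unregistered
```

This also illustrates why the report hedges with "may work": the count only says a decomposition exists, not that every sub-op it produces is covered.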

[ghstack-poisoned]
@pytorch-bot

pytorch-bot Bot commented Feb 28, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/176034

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 4417437 with merge base 7eeab8a:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

wconstab added a commit that referenced this pull request Feb 28, 2026
ghstack-source-id: 1fe849c
Pull Request resolved: #176034
@wconstab
Contributor Author

wconstab commented Mar 2, 2026

@pytorchbot merge

@pytorch-bot pytorch-bot Bot added the ciflow/trunk label Mar 2, 2026
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@pytorchmergebot
Copy link
Copy Markdown
Collaborator

The merge job was canceled or timed out. This most often happens if two merge requests were issued for the same PR, or if the merge job waited more than 6 hours for tests to finish. In the latter case, please do not hesitate to reissue the merge command.
For more information see pytorch-bot wiki.

@wconstab
Contributor Author

wconstab commented Mar 3, 2026

@pytorchbot merge

@wconstab
Contributor Author

wconstab commented Mar 4, 2026

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

pytorchmergebot pushed a commit to anatoliylitv/pytorch that referenced this pull request Mar 4, 2026
Pull Request resolved: pytorch#176034
Approved by: https://github.com/pianpwk
ghstack dependencies: pytorch#175821
EmanueleCoradin pushed a commit to EmanueleCoradin/pytorch that referenced this pull request Mar 30, 2026
@github-actions github-actions Bot deleted the gh/wconstab/552/head branch April 4, 2026 02:23

Labels

ciflow/inductor · ciflow/trunk · Merged · release notes: distributed (dtensor)
