
torchdim Python port#160236

Closed
ezyang wants to merge 16 commits into gh/ezyang/3127/base from gh/ezyang/3127/head

Conversation

@ezyang
Contributor

@ezyang ezyang commented Aug 9, 2025

Stack from ghstack (oldest at bottom):

The big semantic change (and the reason for this port) is that we no longer monkeypatch Tensor with torchdim's special methods. The new dispatch algorithm is that we first land in `__torch_function__` and check whether a special FCD implementation needs to be dispatched to first; if there is none, we fall back to the standard level strategy.
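The dispatch order described above can be sketched roughly as follows. All names here (`register_fcd`, `FCDTensor`, `fallback_level_strategy`) are illustrative stand-ins, not the actual torchdim internals:

```python
# Hypothetical sketch of the "FCD override first, level strategy second"
# dispatch order; names are illustrative, not the real torchdim code.

_FCD_IMPLS = {}  # per-op overrides with first-class-dim semantics


def register_fcd(op):
    """Register a first-class-dim override for `op`."""
    def deco(fn):
        _FCD_IMPLS[op] = fn
        return fn
    return deco


def fallback_level_strategy(func, args, kwargs):
    # Stand-in for torchdim's standard level-based handling.
    return ("fallback", func.__name__)


class FCDTensor:
    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        impl = _FCD_IMPLS.get(func)
        if impl is not None:
            # 1) a registered FCD implementation wins
            return impl(*args, **kwargs)
        # 2) otherwise, fall back to the standard level strategy
        return fallback_level_strategy(func, args, kwargs)
```

With a real torch in play, `__torch_function__` would be invoked by the torch dispatcher rather than called directly.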

Because there are no longer C binding equivalents of the classes, we've condensed _C.Dim and Dim together, and similarly for Tensor. This resulted in some bugs, as the Python API sometimes differs from the C API. I've attempted to disambiguate these, but there may still be mistakes (many early bugs were due to this problem). Dim and DimEntry are especially painful, as Dim must abide by Tensor equality semantics but uses pointer equality in C (DimEntry doesn't have this problem). Another subtle C/Python difference is that we no longer get implicit conversions from Dim to DimEntry; this also caused some bugs.
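The equality mismatch can be illustrated with a simplified, hypothetical pair of classes (not the real implementation): a Dim whose `==` is an operation rather than an identity test, and a DimEntry wrapper that restores plain equality for bookkeeping:

```python
# Hypothetical, simplified illustration of the Dim/DimEntry equality issue.

class Dim:
    """A first-class dim. `==` behaves Tensor-style: it is an operation
    producing a new value, NOT an identity test, so it cannot be used
    for bookkeeping comparisons."""
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        # Stand-in for an elementwise Tensor comparison result.
        return ("eq", self, other)

    # Hash by identity, matching the `is` checks used for bookkeeping.
    __hash__ = object.__hash__


class DimEntry:
    """Bookkeeping wrapper with plain equality: identity for dims,
    value equality for positional entries."""
    def __init__(self, dim=None, pos=None):
        self.dim, self.pos = dim, pos

    def __eq__(self, other):
        return (isinstance(other, DimEntry)
                and self.dim is other.dim
                and self.pos == other.pos)
```

In C++ the implicit `Dim -> DimEntry` conversion papers over this at call sites; in Python the wrapping has to be done explicitly, which is the class of bug mentioned above.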

Much of the mechanical porting work was done by Claude Code. I have a separate PR that deletes functorch._C, but it was useful to have dim.cpp to point Claude at, so I haven't done that in this PR. From a reviewing perspective, I need to re-verify that I didn't forget to port anything; one noticeably missing "small" thing is patched_dim_method. I am still in the process of carefully doing a side-by-side review of the ports; "simplifications" from Claude Code were also a major source of bugs.

There are two major feature gaps in the implementation:

  • DelayedTensor and dot handling are not implemented yet. This should be reasonably easy; we just need to do it. However, for the purposes of sharded propagation it is actually better not to reconstruct matmuls.
  • Splitting dimensions with an index like `[x, y]` doesn't work. The problem is that `__getitem__` interprets this as advanced indexing and sends the list to torch.tensor to turn into a tensor, instead of it being eligible for `__torch_function__`. I think I might need to hard-code a special case for this or something?
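One possible shape for such a special case (purely a hypothetical sketch, not what this PR implements): scan the index expression for first-class dims inside `__getitem__` before the normal advanced-indexing path can coerce the list into an index tensor.

```python
# Hypothetical sketch of a __getitem__ special case for dim-splitting
# indices; all names and return values are illustrative stand-ins.

class Dim:
    def __init__(self, name):
        self.name = name


def _contains_dim(index):
    """Recursively check whether an index expression mentions a Dim."""
    if isinstance(index, Dim):
        return True
    if isinstance(index, (list, tuple)):
        return any(_contains_dim(i) for i in index)
    return False


class FCDTensor:
    def __getitem__(self, index):
        if _contains_dim(index):
            # Stand-in for dim-splitting logic: the list is treated as a
            # request to split a dimension, not as an advanced index.
            return ("split", index)
        # Normal path, where a bare list would be coerced into an index
        # tensor before __torch_function__ ever saw it.
        return ("advanced-indexing", index)
```

The cost of this approach is that every `__getitem__` call pays for the recursive scan, which is presumably why the PR description hedges on whether to hard-code it.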

Signed-off-by: Edward Yang <ezyang@meta.com>

cc @albanD

[ghstack-poisoned]
@pytorch-bot

pytorch-bot bot commented Aug 9, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/160236

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 3a34b08 with merge base af8c232:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

ezyang added a commit that referenced this pull request Aug 9, 2025
Signed-off-by: Edward Yang <ezyang@meta.com>
ghstack-source-id: 0db2026
Pull-Request: #160236
[ghstack-poisoned]
ezyang added a commit that referenced this pull request Aug 10, 2025
Signed-off-by: Edward Yang <ezyang@meta.com>
ghstack-source-id: 1960626
Pull-Request: #160236
@ezyang ezyang changed the title from "[WIP] torchdim Python port" to "torchdim Python port" Aug 10, 2025
@ezyang ezyang requested a review from zdevito August 10, 2025 04:18
@ezyang
Contributor Author

ezyang commented Aug 10, 2025

This is ready for review!

[ghstack-poisoned]
ezyang added a commit that referenced this pull request Aug 10, 2025
Signed-off-by: Edward Yang <ezyang@meta.com>
ghstack-source-id: 70c67dd
Pull-Request: #160236
[ghstack-poisoned]
ezyang added a commit that referenced this pull request Aug 12, 2025
Signed-off-by: Edward Yang <ezyang@meta.com>
ghstack-source-id: 1c0e378
Pull-Request: #160236
[ghstack-poisoned]
ezyang added a commit that referenced this pull request Aug 12, 2025
Signed-off-by: Edward Yang <ezyang@meta.com>
ghstack-source-id: 6351385
Pull-Request: #160236
[ghstack-poisoned]
ezyang added a commit that referenced this pull request Aug 12, 2025
Signed-off-by: Edward Yang <ezyang@meta.com>
ghstack-source-id: d70a502
Pull-Request: #160236
[ghstack-poisoned]
ezyang added a commit that referenced this pull request Aug 12, 2025
Signed-off-by: Edward Yang <ezyang@meta.com>
ghstack-source-id: 96c16d3
Pull-Request: #160236
@ezyang ezyang added the suppress-bc-linter label (suppresses failures of the API backward-compatibility linter, Lint/bc_linter) Aug 12, 2025
[ghstack-poisoned]
ezyang added a commit that referenced this pull request Aug 12, 2025
Signed-off-by: Edward Yang <ezyang@meta.com>
ghstack-source-id: 36e04aa
Pull-Request: #160236
ezyang added a commit that referenced this pull request Sep 18, 2025
Signed-off-by: Edward Yang <ezyang@meta.com>
ghstack-source-id: 5b2acaf
Pull-Request: #160236
[ghstack-poisoned]
ezyang added a commit that referenced this pull request Sep 19, 2025
Signed-off-by: Edward Yang <ezyang@meta.com>
ghstack-source-id: 530e4fa
Pull-Request: #160236
@ezyang
Contributor Author

ezyang commented Sep 19, 2025

Comments have been addressed. I think I'll handle csrc removal next PR.

[ghstack-poisoned]
ezyang added a commit that referenced this pull request Sep 19, 2025
Signed-off-by: Edward Yang <ezyang@meta.com>
ghstack-source-id: 0dff4de
Pull-Request: #160236
@ezyang
Contributor Author

ezyang commented Sep 19, 2025

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here

@pytorchmergebot
Collaborator

Merge failed

Reason: 1 jobs have failed, first few of them are: linux-aarch64 / linux-jammy-aarch64-py3.10 / build

Details for Dev Infra team. Raised by workflow job.

[ghstack-poisoned]
ezyang added a commit that referenced this pull request Sep 20, 2025
Signed-off-by: Edward Yang <ezyang@meta.com>
ghstack-source-id: 0b9d3d1
Pull-Request: #160236
@ezyang
Contributor Author

ezyang commented Sep 21, 2025

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here

pytorchmergebot pushed a commit that referenced this pull request Sep 21, 2025
Signed-off-by: Edward Yang <ezyang@meta.com>
ghstack-source-id: 0b9d3d1
Pull-Request: #160236
pytorchmergebot pushed a commit that referenced this pull request Sep 21, 2025
Signed-off-by: Edward Yang <ezyang@meta.com>

Pull Request resolved: #163340
Approved by: https://github.com/aorenste
ghstack dependencies: #160236
mansiag05 pushed a commit to mansiag05/pytorch that referenced this pull request Sep 22, 2025

Signed-off-by: Edward Yang <ezyang@meta.com>

Pull Request resolved: pytorch#160236
Approved by: https://github.com/zdevito, https://github.com/albanD
mansiag05 pushed a commit to mansiag05/pytorch that referenced this pull request Sep 22, 2025
Signed-off-by: Edward Yang <ezyang@meta.com>

Pull Request resolved: pytorch#163340
Approved by: https://github.com/aorenste
ghstack dependencies: pytorch#160236
cleonard530 pushed a commit to cleonard530/pytorch that referenced this pull request Sep 22, 2025

Signed-off-by: Edward Yang <ezyang@meta.com>

Pull Request resolved: pytorch#160236
Approved by: https://github.com/zdevito, https://github.com/albanD
cleonard530 pushed a commit to cleonard530/pytorch that referenced this pull request Sep 22, 2025
Signed-off-by: Edward Yang <ezyang@meta.com>

Pull Request resolved: pytorch#163340
Approved by: https://github.com/aorenste
ghstack dependencies: pytorch#160236
dsashidh pushed a commit to dsashidh/pytorch that referenced this pull request Sep 26, 2025

Signed-off-by: Edward Yang <ezyang@meta.com>

Pull Request resolved: pytorch#160236
Approved by: https://github.com/zdevito, https://github.com/albanD
dsashidh pushed a commit to dsashidh/pytorch that referenced this pull request Sep 26, 2025
Signed-off-by: Edward Yang <ezyang@meta.com>

Pull Request resolved: pytorch#163340
Approved by: https://github.com/aorenste
ghstack dependencies: pytorch#160236
@github-actions github-actions bot deleted the gh/ezyang/3127/head branch October 22, 2025 02:15

Labels

ciflow/trunk, Merged, skip-pr-sanity-checks, suppress-bc-linter, topic: not user facing
