
Add torch.swapdims and torch.swapaxes #46041

Closed
kshitij12345 wants to merge 30 commits into pytorch:master from kshitij12345:develop/numpy/swapdim

Conversation

@kshitij12345 kshitij12345 (Collaborator) commented Oct 8, 2020

Reference #38349

Delegates to torch.transpose (not sure what is the best way to alias)

TODO:

  • Add test
  • Add documentation
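
For context, a minimal usage sketch of the aliases this PR adds (both delegate to torch.transpose, so all three calls below return the same view):

import torch

x = torch.randn(2, 3, 4)
a = torch.transpose(x, 0, 2)  # existing function
b = torch.swapdims(x, 0, 2)   # new alias added by this PR
c = torch.swapaxes(x, 0, 2)   # new NumPy-style alias added by this PR
assert a.shape == b.shape == c.shape == torch.Size([4, 3, 2])
assert torch.equal(a, b) and torch.equal(a, c)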

@mruberry mruberry self-requested a review October 8, 2020 21:28
@mruberry mruberry added the triaged label (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Oct 8, 2020
@dr-ci Bot commented Oct 8, 2020

💊 CI failures summary and remediations

As of commit 15ac3c5 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions on the GitHub issue tracker or post in the (internal) Dr. CI Users group.

This comment has been revised 72 times.

_(aten, sum) \
_(aten, sum_to_size) \
_(aten, svd) \
_(aten, swapdim) \
Collaborator

names should be "swapdims" and "swapaxes" (since "axes" is plural)

Collaborator Author

Ah right. Thanks!

Outdated comment thread: aten/src/ATen/native/TensorShape.cpp
Outdated comment thread: aten/src/ATen/native/native_functions.yaml
Outdated comment thread: docs/source/tensor_view.rst
Outdated comment thread: test/test_torch.py
Outdated comment thread: test/test_torch.py
with self.assertRaisesRegex(IndexError, "Dimension out of range"):
    torch.swapdim(x, 0, 5)

@unittest.skipIf(not TEST_NUMPY, "NumPy not found")
Collaborator

We now require NumPy to run the test suite and these skips are no longer needed.
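
For illustration, a hedged sketch of what such a comparison test can look like once the skip decorator is dropped (the test name and tensor shapes here are illustrative, not the PR's exact code):

import numpy as np
import torch

def test_swapaxes_matches_numpy(self):
    x = torch.randn(2, 3, 4)
    # compare directly against the NumPy reference; no skipIf guard needed
    np.testing.assert_array_equal(
        torch.swapaxes(x, 0, 2).numpy(),
        np.swapaxes(x.numpy(), 0, 2),
    )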

Outdated comment thread: test/test_torch.py
Outdated comment thread: test/test_torch.py
Outdated comment thread: test/test_torch.py
Outdated comment thread: tools/autograd/derivatives.yaml
Outdated comment thread: tools/autograd/gen_autograd.py
Outdated comment thread: torch/_torch_docs.py
@mruberry (Collaborator)

Hey @kshitij12345, thanks for this PR! I added a link to the instructions for adding an alias. Would you go through them and also add the "swapaxes" alias?

codecov Bot commented Oct 12, 2020

Codecov Report

Merging #46041 (15ac3c5) into master (49f0e5d) will increase coverage by 0.00%.
The diff coverage is 100.00%.

@@           Coverage Diff           @@
##           master   #46041   +/-   ##
=======================================
  Coverage   81.31%   81.31%           
=======================================
  Files        1839     1839           
  Lines      198692   198706   +14     
=======================================
+ Hits       161558   161571   +13     
- Misses      37134    37135    +1     

@kshitij12345 kshitij12345 requested a review from apaszke as a code owner October 17, 2020 09:36
@kshitij12345 (Collaborator Author)

@mruberry PTAL :)

Outdated comment thread: test/test_torch.py
- func: swapaxes_(Tensor(a!) self, int axis1, int axis2) -> Tensor(a!)
  use_c10_dispatcher: full
  variants: method
  device_guard: False
Collaborator

If we look at the dispatch entries for transpose:

- func: transpose.int(Tensor(a) self, int dim0, int dim1) -> Tensor(a)
  use_c10_dispatcher: full
  variants: function, method
  device_guard: False
  dispatch:
    DefaultBackend: transpose

- func: transpose_(Tensor(a!) self, int dim0, int dim1) -> Tensor(a!)
  use_c10_dispatcher: full
  variants: method
  device_guard: False
  dispatch:
    DefaultBackend: transpose_

We see that device_guard: False appears on both the in-place and out-of-place variants, and that the dispatch is DefaultBackend: transpose or DefaultBackend: transpose_.

Note also that transpose starts its dim indexing at zero, so swapaxes should have axis0 and axis1, and swapdims should have dim0 and dim1, too.

Collaborator Author

@mruberry So should we remove device_guard from the signature? Sorry, I didn't understand the comment.

Note also that transpose starts its dim indexing at zero, so swapaxes should have axis0 and axis1, and swapdims should have dim0 and dim1, too.

Sure. The reason for this was that NumPy names the arguments axis1 and axis2, but it makes sense to be more consistent with transpose.

@kshitij12345 kshitij12345 (Collaborator Author) commented Nov 12, 2020

@mruberry I think I understand now. device_guard should be on both signatures. Thanks!

Collaborator

Yep, this looks correct.
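
To make the outcome of this thread concrete, a small sketch of the settled argument naming (assuming the keyword form matches the merged schema):

import torch

x = torch.randn(2, 3)
# zero-based argument names, matching transpose's dim0/dim1 convention
y = torch.swapaxes(x, axis0=0, axis1=1)
z = torch.swapdims(x, dim0=0, dim1=1)
assert torch.equal(y, z) and torch.equal(y, torch.transpose(x, 0, 1))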

Outdated comment thread: torch/_tensor_docs.py
Outdated comment thread: torch/_torch_docs.py
Outdated comment thread: torch/_torch_docs.py
Outdated comment thread: torch/_torch_docs.py
Outdated comment thread: torch/overrides.py
@mruberry (Collaborator)

@kshitij12345 Yup, please open a GitHub issue on pt/xla; we will try to take a look soon!

We can skip the tests on XLA for now. XLA can restore them once it implements the functions (on its own time).

@mruberry mruberry self-requested a review November 12, 2020 07:54
@mruberry mruberry (Collaborator) left a comment

Hey @kshitij12345! This looks very good and close to being finished. I made some cleanup comments that should be easy to address.

* rename axis/dim{1,2} -> axis/dim{0,1}
* signature update: device_guard false for out-of-place variant
* update docs and override for argument change
* refactor test_movedim_view code
@kshitij12345 (Collaborator Author)

@mruberry PTAL :)

@mruberry mruberry (Collaborator) left a comment

Hey @kshitij12345! Everything looks great. Just have a final question about simplifying the docs.

Outdated comment thread: torch/_torch_docs.py
@mruberry mruberry (Collaborator) left a comment

Looks great!

Just one last very small change: write "Numpy" as "NumPy."

@kshitij12345 (Collaborator Author)

Updated. Thanks for the great review as always :)

@facebook-github-bot facebook-github-bot (Contributor) left a comment

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@mruberry (Collaborator)


You're welcome ;) Thanks for the PR!

@facebook-github-bot (Contributor)

@mruberry merged this pull request in 68a3a3f.

laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary:
Reference pytorch#38349

Delegates to `torch.transpose` (not sure what is the best way to alias)

TODO:
* [x] Add test
* [x] Add documentation

Pull Request resolved: pytorch#46041

Reviewed By: gchanan

Differential Revision: D25022816

Pulled By: mruberry

fbshipit-source-id: c80223d081cef84f523ef9b23fbedeb2f8c1efc5

Labels

cla signed · Merged · open source · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

6 participants