Add torch.swapdims and torch.swapaxes #46041

kshitij12345 wants to merge 30 commits into pytorch:master

Conversation
💊 CI failures summary (Dr. CI, as of commit 15ac3c5): 💚 Looks good so far! There are no failures yet.
```
_(aten, sum) \
_(aten, sum_to_size) \
_(aten, svd) \
_(aten, swapdim) \
```
names should be "swapdims" and "swapaxes" (since "axes" is plural)
Ah right. Thanks!
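The plural naming follows NumPy, whose function is `swapaxes` (it exchanges two axes, hence the plural). A quick NumPy illustration of the semantics the new PyTorch functions mirror:

```python
import numpy as np

# NumPy's function is the plural "swapaxes"; it exchanges two axes
a = np.arange(24).reshape(2, 3, 4)
b = np.swapaxes(a, 0, 2)  # exchange axes 0 and 2
print(b.shape)            # (4, 3, 2)
```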
```
with self.assertRaisesRegex(IndexError, "Dimension out of range"):
    torch.swapdim(x, 0, 5)

@unittest.skipIf(not TEST_NUMPY, "NumPy not found")
```
We now require NumPy to run the test suite and these skips are no longer needed.
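The `Dimension out of range` check exercised by the test above can be sketched in plain Python. This is a hypothetical helper, not PyTorch's actual implementation, but it mirrors the documented dim-wrapping rule (valid dims lie in `[-ndim, ndim - 1]`, negatives count from the end):

```python
def wrap_dim(dim, ndim):
    # hypothetical sketch of PyTorch-style dim wrapping:
    # a dim is valid iff -ndim <= dim <= ndim - 1
    if not -ndim <= dim <= ndim - 1:
        raise IndexError(
            f"Dimension out of range (expected to be in range of "
            f"[{-ndim}, {ndim - 1}], but got {dim})"
        )
    # negative dims count from the end, as in NumPy
    return dim + ndim if dim < 0 else dim

# for a 3-d tensor, valid dims are -3..2, so 5 is rejected
print(wrap_dim(-1, 3))  # 2
```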
Hey @kshitij12345, thanks for this PR! I added a link to the instructions for adding an alias. Would you go through them and also add the "swapaxes" alias?
Codecov Report
@@ Coverage Diff @@
## master #46041 +/- ##
=======================================
Coverage 81.31% 81.31%
=======================================
Files 1839 1839
Lines 198692 198706 +14
=======================================
+ Hits 161558 161571 +13
- Misses 37134 37135 +1
@mruberry PTAL :)
```
- func: swapaxes_(Tensor(a!) self, int axis1, int axis2) -> Tensor(a!)
  use_c10_dispatcher: full
  variants: method
  device_guard: False
```
If we look at the dispatch entries for transpose:
```
- func: transpose.int(Tensor(a) self, int dim0, int dim1) -> Tensor(a)
  use_c10_dispatcher: full
  variants: function, method
  device_guard: False
  dispatch:
    DefaultBackend: transpose
- func: transpose_(Tensor(a!) self, int dim0, int dim1) -> Tensor(a!)
  use_c10_dispatcher: full
  variants: method
  device_guard: False
  dispatch:
    DefaultBackend: transpose_
```
We see that `device_guard: False` appears on both the in-place and out-of-place variants, and that the dispatch is `DefaultBackend: transpose` or `DefaultBackend: transpose_`.
Note also that transpose starts its dim indexing at zero, so swapaxes should have axis0 and axis1, and swapdims should have dim0 and dim1, too.
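Since the PR's summary says the new functions delegate to `torch.transpose`, the aliasing can be sketched in plain Python with NumPy standing in for the tensor library. The helper name `my_swapaxes` is hypothetical; the point is that swapping two axes is just a transpose with two entries of the identity permutation exchanged:

```python
import numpy as np

def my_swapaxes(a, axis0, axis1):
    # hypothetical sketch: delegate to the existing transpose machinery
    # by exchanging two entries of the identity permutation
    perm = list(range(a.ndim))
    perm[axis0], perm[axis1] = perm[axis1], perm[axis0]
    return a.transpose(perm)

x = np.arange(24).reshape(2, 3, 4)
print(my_swapaxes(x, 0, 2).shape)  # (4, 3, 2)
```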
@mruberry So should we remove device_guard from the signature? Sorry, I didn't understand the comment.
> Note also that transpose starts its dim indexing at zero, so swapaxes should have axis0 and axis1, and swapdims should have dim0 and dim1, too.
Sure. The reason for this was that NumPy names the arguments axis1 and axis2, but it makes sense to be more similar to transpose.
@mruberry I think I understand now. device_guard should be on both signatures. Thanks!
Yep, this looks correct.
We can skip the tests on XLA for now. XLA can restore them once it implements the functions (on its own time).
mruberry
left a comment
Hey @kshitij12345! This looks very good and close to being finished. I made some cleanup comments that should be easy to address.
* rename axis/dim{1,2} -> axis/dim{0,1}
* signature update: device_guard false for out-of-place variant
* update docs and override for argument change
* refactor test_movedim_view code
@mruberry PTAL :)
mruberry
left a comment
Hey @kshitij12345! Everything looks great. Just have a final question about simplifying the docs.
mruberry
left a comment
Looks great!
Just a last very small change to write "Numpy" as "NumPy."
Updated. Thanks for the great review as always :)
facebook-github-bot
left a comment
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
You're welcome ;) Thanks for the PR!
Summary:
Reference pytorch#38349

Delegates to `torch.transpose` (not sure what is the best way to alias)

TODO:
* [x] Add test
* [x] Add documentation

Pull Request resolved: pytorch#46041
Reviewed By: gchanan
Differential Revision: D25022816
Pulled By: mruberry
fbshipit-source-id: c80223d081cef84f523ef9b23fbedeb2f8c1efc5