Add torch.flip{lr, ud} #38599
Conversation
💊 CI failures summary and remediations
As of commit 6c7aaeb (more details on the Dr. CI page):
ci.pytorch.org: 1 failed
This comment was automatically generated by Dr. CI.
mruberry
left a comment
This is great, thanks @kshitij12345! I made some comments to improve the documentation. Also, I'd like to add more testing. We should add test_fliplr and test_flipud to TestTorchDeviceType in test_torch.py. That will let us test them on CPU and CUDA and on multiple device types, too. Further, that test class defines a helper, _np_compare, that you can use to implement most of the test for you. It will compare the results of torch.flip{lr, ud} to NumPy's results. Take a look at some of the tests using _np_compare already and let me know if you have any questions.
Once the docs are improved and we get the new tests in I think this PR will be good to go!
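To make the suggested NumPy comparison concrete, here is a minimal sketch of the semantics the new functions are expected to match. It uses plain NumPy (`np.fliplr`/`np.flipud`) rather than torch so it is self-contained; the torch versions are intended to mirror these results, with `torch.fliplr(x)` behaving like `torch.flip(x, [1])` and `torch.flipud(x)` like `torch.flip(x, [0])`:

```python
import numpy as np

x = np.arange(6).reshape(2, 3)  # [[0, 1, 2], [3, 4, 5]]

# fliplr reverses the order of columns (axis 1); input must be at least 2-D
assert np.array_equal(np.fliplr(x), x[:, ::-1])

# flipud reverses the order of rows (axis 0); works on 1-D and higher input
assert np.array_equal(np.flipud(x), x[::-1])
```

A device-generic test in `TestTorchDeviceType` would run the torch function on each device/dtype and compare against these NumPy references.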
* update docs.
Sounds smart. Just re-request when you're ready. I pointed out a couple of doc nits, and @anjali411 has a suggestion for autograd testing, too.
* fix `note`
Please review. Thanks.
mruberry
left a comment
Looks very good. Made a couple notes that we can reduce the test surface a little and should avoid a merge conflict by waiting a moment to rebase this PR. This PR also needs an update to:
- Add the function names to the list of interned strings (see )
- Update tensors.rst (https://github.com/pytorch/pytorch/blob/master/docs/source/tensors.rst) and torch.rst (https://github.com/pytorch/pytorch/blob/master/docs/source/torch.rst) to register the docs
Then we'll just need to verify the tests pass.
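The comparison helper mentioned above could be sketched roughly as follows. This is a hypothetical stand-in for the `compare_with_numpy` helper in `test_torch.py`, written with plain NumPy arrays so the sketch runs on its own; the real helper builds torch tensors on each device/dtype and converts back before comparing:

```python
import numpy as np

def compare_with_numpy(fn_under_test, np_reference, np_input):
    # Hypothetical stand-in for test_torch.py's compare_with_numpy helper:
    # run the function under test and its NumPy reference on the same input
    # and assert that the results agree elementwise.
    actual = fn_under_test(np_input)
    expected = np_reference(np_input)
    assert np.array_equal(actual, expected), (actual, expected)

x = np.arange(12).reshape(3, 4)
compare_with_numpy(lambda a: a[:, ::-1], np.fliplr, x)  # fliplr semantics
compare_with_numpy(lambda a: a[::-1], np.flipud, x)     # flipud semantics
```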
…lop/numpy/fliplr-flipud
* reduce test dtypes.
* use compare_with_numpy.
* update docs rst file.
* add entry to aten_interned_strings.
@mruberry
The Flake8 failure looks unrelated to this PR.
You're correct. I submitted a fix, but you'll have to rebase to see it.
Gentle ping :)
mruberry
left a comment
Awesome! Thanks @kshitij12345!
If you'd like to implement another NumPy function that Torch is missing, maybe maximum and minimum or broadcast_to?
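For reference, the NumPy behavior a torch port of those suggestions would need to match can be sketched briefly (again using plain NumPy so the example is self-contained; the torch counterparts are not part of this PR):

```python
import numpy as np

# np.maximum / np.minimum: elementwise comparison, with broadcasting
a = np.array([1, 5, 3])
b = np.array([4, 2, 3])
assert np.array_equal(np.maximum(a, b), [4, 5, 3])
assert np.array_equal(np.minimum(a, b), [1, 2, 3])

# np.broadcast_to: a read-only view broadcast to a larger shape
row = np.array([1, 2, 3])
tiled = np.broadcast_to(row, (2, 3))
assert tiled.shape == (2, 3)
assert np.array_equal(tiled, [[1, 2, 3], [1, 2, 3]])
```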
facebook-github-bot
left a comment
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Thanks. Sure. Would love to try.
Summary:
Reference: pytorch#38349
TODO:
* [x] Add Tests
Pull Request resolved: pytorch#38599
Differential Revision: D21941884
Pulled By: mruberry
fbshipit-source-id: 7a442ff11051c2c868cf8e3c04e4bba0f1a1d426
Reference: #38349
TODO:
* [x] Add Tests