add absolute alias for abs #36597
Conversation
facebook-github-bot
left a comment
@jessebrizzi has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
This is a good change, however it would be nice if there was a mechanism to register an alias for an operator. This PR duplicates logic across pytorch as a platform. One solution is a canonicalization pass, and another is just to only emit `aten::abs` in the first place. cc @jamesr66a
An alias mechanism would be nice, especially since we'll likely have several more aliases in the future. Let's talk briefly offline about possible implementations?
NumPy has a bunch of back-compat legacy of its own, e.g. numpy.diff vs the older numpy.ediff1d and the related numpy.gradient. I hope PyTorch doesn't import all these older/quirky variants in pursuit of numpy-compat...
np.absolute is the actual function name in NumPy, and np.abs is an alias for it. Don't worry, we're not implementing legacy NumPy names.
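As a quick illustration of the NumPy naming being matched here (a minimal sketch, not code from the PR):

```python
import numpy as np

# In NumPy, np.absolute is the canonical function and np.abs is an
# alias for it; both compute the element-wise absolute value.
x = np.array([-1.5, 0.0, 2.0])
print(np.array_equal(np.abs(x), np.absolute(x)))  # identical results
```

This is the pairing the PR mirrors in PyTorch: `torch.absolute` as an alias for `torch.abs`.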
💊 Build failures summary (Dr. CI, as of commit ffa8139):
XLA failure: job pytorch_xla_linux_bionic_py3_6_clang9_build is failing.
Force-pushed from e3394d1 to 7cea9ca.
mruberry
left a comment
Cool first addition to PyTorch, Jesse!
facebook-github-bot
left a comment
@jessebrizzi has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@jessebrizzi merged this pull request in 28f439d.
Summary: Pull Request resolved: #38735

Follow up to my comment on #36597. This adds a pass to convert op aliases into a normalized form. Having two ops generated in our IR that do the same thing makes the IR harder for downstream consumers of the IR, such as TorchScript passes but also ONNX, Glow, etc. Another solution would have been to fix our code generation to only emit `aten::abs` from the start. This seems trickier, and doesn't really buy us much if we still have to expose `aten::absolute` in C++, as glaringlee of the C++ API thinks we should. Bike-shedding: maybe this should be `CanonicalizeOps` instead.

Test Plan: Imported from OSS
Differential Revision: D21673108
Pulled By: eellison
fbshipit-source-id: c328618907de1af22e07f57fd27fa619978c2817
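The idea of the normalization pass can be sketched in a few lines. This is an illustrative toy only: the real pass runs in C++ over the JIT Graph, and `Node`, `ALIAS_MAP`, and `normalize_ops` below are hypothetical names invented for this sketch.

```python
# Rewrite alias ops to one canonical name so downstream consumers of
# the IR (TorchScript passes, ONNX export, Glow, ...) see a single op.
ALIAS_MAP = {
    "aten::absolute": "aten::abs",  # absolute is an alias for abs
}

class Node:
    def __init__(self, op):
        self.op = op

def normalize_ops(graph):
    """Rewrite every alias op in `graph` to its canonical form."""
    for node in graph:
        node.op = ALIAS_MAP.get(node.op, node.op)
    return graph

graph = [Node("aten::absolute"), Node("aten::relu")]
normalize_ops(graph)
print([n.op for n in graph])  # ['aten::abs', 'aten::relu']
```

Because the map is consulted per node, adding future aliases is a one-line change, which is the maintenance argument for a pass over fixing code generation.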
Summary: Adds an absolute alias for the abs function to match NumPy's use of both: https://docs.scipy.org/doc/numpy/reference/generated/numpy.absolute.html

Adds a test to ensure the output from abs and absolute are the same.

Pull Request resolved: pytorch#36597
Differential Revision: D21024458
Pulled By: jessebrizzi
fbshipit-source-id: 4f2987e7bc7cde444d0a93e833a0350844b48d44
Adds an absolute alias for the abs function to match NumPy's use of both:
https://docs.scipy.org/doc/numpy/reference/generated/numpy.absolute.html
Adds a test to ensure the output from abs and absolute are the same.
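The kind of parity check described above can be sketched as follows (a minimal sketch, not the actual test from the PR):

```python
import torch

# After this PR, torch.absolute is an alias for torch.abs, so the two
# must produce identical results on the same input.
x = torch.tensor([-2.0, -0.5, 0.0, 3.0])
print(torch.equal(torch.abs(x), torch.absolute(x)))  # True
```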