
add absolute alias for abs #36597

Closed
jes-bz wants to merge 1 commit into pytorch:master from jes-bz:abs_absolute

Conversation

@jes-bz
Contributor

@jes-bz jes-bz commented Apr 14, 2020

Adds an absolute alias for the abs function to match NumPy's use of both:
https://docs.scipy.org/doc/numpy/reference/generated/numpy.absolute.html

Adds a test to ensure the outputs from abs and absolute are the same.
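The NumPy behavior being matched can be checked directly — absolute and abs already agree elementwise. This is an illustrative sketch using NumPy (the PR's actual test does the analogous comparison on torch tensors):

```python
import numpy as np

# Illustration of the parity the PR's test asserts for torch.abs/torch.absolute:
# in NumPy, absolute and abs produce identical results for the same input.
a = np.array([-1.5, 0.0, 2.0])
assert np.array_equal(np.absolute(a), np.abs(a))
print(np.absolute(a))  # [1.5 0.  2. ]
```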

@jes-bz jes-bz requested a review from mruberry April 14, 2020 19:20
@jes-bz jes-bz requested a review from apaszke as a code owner April 14, 2020 19:20
@facebook-github-bot facebook-github-bot added the oncall: jit label (Add this issue/PR to JIT oncall triage queue) Apr 14, 2020
Contributor

@facebook-github-bot facebook-github-bot left a comment


@jessebrizzi has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

Review comment thread on test/test_torch.py (outdated)
@eellison
Contributor

This is a good change, however it would be nice if there were a mechanism to register an alias for an operator. This PR duplicates logic for abs and makes the IR harder for downstream consumers, such as TorchScript passes. E.g. guard elimination (and therefore codegen) will now work for torch.abs but not for torch.absolute: https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/passes/guard_elimination.cpp#L261

cc @jamesr66a (pytorch as a platform). One solution is a canonicalization pass; another is to only emit aten::abs in the IR from the start.
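The canonicalization idea mentioned above can be sketched as a simple name-rewriting step. This is a hypothetical illustration only — the real TorchScript pass rewrites graph nodes, not strings, and the mapping table here is invented for this example:

```python
# Hypothetical sketch of alias canonicalization: map alias op names to their
# canonical form, so downstream passes (guard elimination, codegen, exporters)
# only need to recognize one name per operation.
ALIAS_TO_CANONICAL = {
    "aten::absolute": "aten::abs",  # the alias introduced by this PR
}

def canonicalize(ops):
    """Return the op list with every alias replaced by its canonical name."""
    return [ALIAS_TO_CANONICAL.get(op, op) for op in ops]

print(canonicalize(["aten::absolute", "aten::relu"]))  # ['aten::abs', 'aten::relu']
```

With such a pass run early, guard elimination and similar consumers never see aten::absolute at all, so no per-pass special-casing is needed.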

@mruberry
Collaborator

An alias mechanism would be nice, especially since we'll likely have several more aliases in the future. Let's talk briefly offline about possible implementations?

@vadimkantorov
Contributor

vadimkantorov commented Apr 14, 2020

NumPy has a bunch of back-compat legacy of itself e.g. numpy.diff vs older numpy.ediff1d and a related numpy.gradient. I hope PyTorch doesn't import all these older/quirky variants in the goal of numpy-compat...

@mruberry
Collaborator

> NumPy has a bunch of back-compat legacy of itself e.g. numpy.diff vs older numpy.ediff1d and a related numpy.gradient. I hope PyTorch doesn't import all these older/quirky variants in the goal of numpy-compat...

np.absolute is the actual function name in NumPy, and np.abs is an alias for it. Don't worry, we're not implementing legacy NumPy names.
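The alias relationship described here is observable in NumPy directly — np.abs is the very same function object as np.absolute:

```python
import numpy as np

# np.abs is defined as an alias of np.absolute, so they compare identical.
print(np.abs is np.absolute)  # True
```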

Review comment thread on test/test_torch.py (outdated)
@vadimkantorov
Contributor

Then there are also subtract, power, multiply, etc.

@dr-ci

dr-ci Bot commented Apr 14, 2020

💊 Build failures summary and remediations

As of commit ffa8139 (more details on the Dr. CI page):


  • 1/1 failures introduced in this PR

XLA failure

Job pytorch_xla_linux_bionic_py3_6_clang9_build is failing. Please create an issue with title prefixed by [PT_BREAK] in pytorch/xla and link to this PR. If you have questions, please reach out to @ailzhang / @dlibenzi / @JackCaoG.



Review comment thread on torch/_tensor_docs.py (outdated)
Review comment thread on test/test_torch.py (outdated)
@jes-bz jes-bz force-pushed the abs_absolute branch 2 times, most recently from e3394d1 to 7cea9ca on April 14, 2020 23:30
Collaborator

@mruberry mruberry left a comment


Cool first addition to PyTorch, Jesse!

Contributor

@facebook-github-bot facebook-github-bot left a comment


@jessebrizzi has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

Contributor

@facebook-github-bot facebook-github-bot left a comment


@jessebrizzi has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@jessebrizzi merged this pull request in 28f439d.

@jes-bz jes-bz deleted the abs_absolute branch April 20, 2020 22:23
eellison pushed a commit that referenced this pull request May 20, 2020
Follow up to my comment #36597

This adds a pass to convert op aliases into a normalized form. Having two ops generated in our IR that do the same thing makes the IR harder for downstream consumers of the IR, such as TorchScript passes but also ONNX, glow, etc. 

Another solution would have been to fix our code generation to only emit `aten::abs` from the start. This seems trickier, and doesn't really buy us much if we still have to expose `aten::absolute` in C++, as @glaringlee of the C++ API thinks we should.

Bike shedding: maybe this should be `CanonicalizeOps` instead

[ghstack-poisoned]
facebook-github-bot pushed a commit that referenced this pull request May 22, 2020
Summary:
Pull Request resolved: #38735

Follow up to my comment #36597

This adds a pass to convert op aliases into a normalized form. Having two ops generated in our IR that do the same thing makes the IR harder for downstream consumers of the IR, such as TorchScript passes but also ONNX, glow, etc.

Another solution would have been to fix our code generation to only emit `aten::abs` from the start. This seems trickier, and doesn't really buy us much if we still have to expose `aten::absolute` in C++, as glaringlee of the C++ API thinks we should.

Bike shedding: maybe this should be `CanonicalizeOps` instead

Test Plan: Imported from OSS

Differential Revision: D21673108

Pulled By: eellison

fbshipit-source-id: c328618907de1af22e07f57fd27fa619978c2817
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary:
Adds an absolute alias for the abs function to match Numpy's use of both:
https://docs.scipy.org/doc/numpy/reference/generated/numpy.absolute.html

Adds test to ensure the output from abs and absolute are the same.
Pull Request resolved: pytorch#36597

Differential Revision: D21024458

Pulled By: jessebrizzi

fbshipit-source-id: 4f2987e7bc7cde444d0a93e833a0350844b48d44

Labels

Merged · oncall: jit (Add this issue/PR to JIT oncall triage queue)


5 participants