[numpy] Add torch.xlogy #48777

Closed
kshitij12345 wants to merge 30 commits into pytorch:master from kshitij12345:develop/numpy/xlogy

Conversation

@kshitij12345 (Collaborator) commented Dec 3, 2020

Reference #38349
Fixes #22656

TODO:

  • Add docs
  • Add tests
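
For context, a quick illustration of the intended semantics (a sketch, assuming the op follows scipy.special.xlogy's convention of returning 0 wherever the first argument is 0):

import torch

x = torch.tensor([0., 1., 2.])
y = torch.tensor([0., 1., 4.])

# Entries where x == 0 yield 0, sidestepping the nan that
# 0 * log(0) = 0 * (-inf) would otherwise produce.
torch.xlogy(x, y)   # tensor([0.0000, 0.0000, 2.7726])
x * torch.log(y)    # tensor([   nan, 0.0000, 2.7726])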

@kshitij12345 changed the title from [WIP] [numpy] Add torch.xlogy to [numpy] Add torch.xlogy Dec 3, 2020
@kshitij12345 marked this pull request as ready for review December 3, 2020 16:24
@kshitij12345 requested a review from mruberry December 3, 2020 16:25
@heitorschueroff added the module: numpy (Related to numpy support, and also numpy compatibility of our operators) and triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) labels Dec 3, 2020
Comment thread aten/src/ATen/native/native_functions.yaml
Comment thread test/test_autograd.py
self._test_atleast(device, torch.atleast_2d)
self._test_atleast(device, torch.atleast_3d)

def test_xlogy(self, device):
Collaborator

grad and gradgrad checks are performed for every OpInfo in test_ops.py. Are those tests not sufficient for some reason? Is it because of xlogy's unique behavior at (0, 0)?

Collaborator Author

Yes. This test specifically verifies the behaviour at zero.

Collaborator

This is really interesting, cc @albanD. I wonder if we can better support this use case in the future by extending OpInfos with metadata like "special_autograd_values."

The OpInfo test may also generate zeros, although it's not very likely. If it did, would those cause the regular autograd check to fail? I'm trying to understand what the OpInfo-based autograd tests can cover vs. what this extra test needs to validate.

Comment thread test/test_binary_ufuncs.py
Comment thread tools/autograd/derivatives.yaml
other: grad / (1 + pow(2, self - other))

- name: xlogy.Tensor(Tensor self, Tensor other) -> Tensor
self: grad * at::xlogy((self != 0), other)
Collaborator

cc @albanD for this derivative
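
To make the masking concrete, a minimal sketch of what this derivative does at zero (illustrative values, not the PR's actual test):

import torch

# The backward formula masks d/dself = log(other) to 0 wherever
# self == 0, keeping the gradient finite even when other <= 0.
# This intentionally differs from the mathematical derivative
# log(other) at self == 0, which is why a finite-difference
# gradcheck that sampled exact zeros would flag a mismatch.
x = torch.tensor([0.0, 0.0, 2.0], dtype=torch.double, requires_grad=True)
y = torch.tensor([-1.0, 0.0, 3.0], dtype=torch.double)

torch.xlogy(x, y).sum().backward()
x.grad   # tensor([0.0000, 0.0000, 1.0986])  (log(3) at the nonzero entry)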

Comment thread torch/_torch_docs.py Outdated
Comment thread torch/_torch_docs.py Outdated
Comment thread torch/_torch_docs.py Outdated
Comment thread torch/_torch_docs.py Outdated
Comment thread torch/_torch_docs.py Outdated
Comment thread torch/_torch_docs.py Outdated
Comment thread torch/_torch_docs.py Outdated
Comment thread torch/_torch_docs.py
@mruberry (Collaborator) left a comment

Hey @kshitij12345! Thanks for submitting this PR adding xlogy. The issue it fixes has been open for a long time.

I made some comments/suggestions, especially about extending this to include method, inplace, and out= variants and using the OpInfo pattern.

Eventually we'll want to add a BinaryUfuncInfo class to the OpInfo architecture. But I think we could add xlogy as a regular OpInfo for now.
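
For reference, a hedged sketch of what a regular OpInfo entry might look like; the field names mirror the OpInfo API of this era, but the sample-input helper and exact values here are illustrative, not the PR's actual code:

# Hypothetical sketch, modeled on entries in
# torch/testing/_internal/common_methods_invocations.py at the time.
def sample_inputs_xlogy(op_info, device, dtype, requires_grad):
    # One (self, other) pair; `other` is kept positive so log stays finite.
    return (SampleInput(
        make_tensor((S, S), device, dtype, requires_grad=requires_grad),
        args=(make_tensor((S, S), device, dtype, low=0,
                          requires_grad=requires_grad),)),)

OpInfo('xlogy',
       dtypes=all_types_and(torch.bool, torch.half, torch.bfloat16),
       dtypesIfCUDA=all_types_and(torch.bool, torch.half, torch.bfloat16),
       test_inplace_grad=True,
       supports_tensor_out=True,
       sample_inputs_func=sample_inputs_xlogy)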

Comment thread aten/src/ATen/native/cuda/BinaryMiscOpsKernels.cu Outdated
Comment thread aten/src/ATen/native/native_functions.yaml Outdated
Comment thread torch/_torch_docs.py Outdated
r"""
xlogy(self, other, *, out=None) -> Tensor

Computes ``self * log(other)`` except it treats ``0 * log(0)`` as ``0``.
Collaborator

I think we looked at this briefly before, but I don't think we can say "it treats..." because if self is zero then the result is 0 for every value of other, including negative ones. And PyTorch treats the logarithm of negative numbers as nan, too:

>>> t = torch.tensor((-2., -1, 0, 1, 2))
>>> torch.log(t)
tensor([   nan,    nan,   -inf, 0.0000, 0.6931])

The problem with the "it treats" clause here is that it implies this is the ONLY exception to the normal computation.

What about saying, "Computes ``self * log(other)`` but with the following special cases:"

Then putting the mathematical cases below that line, then the sentence about this being similar to SciPy's xlogy after that?
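
Concretely, the ordering of those cases matters; assuming the final cases land as written later in this thread, a NaN in other wins over a zero in self:

import torch

zero = torch.tensor(0.)
torch.xlogy(zero, torch.tensor(-1.))           # tensor(0.), even though log(-1) is nan
torch.xlogy(zero, torch.tensor(float('nan')))  # tensor(nan), NaN in `other` takes precedence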

Comment thread torch/_torch_docs.py Outdated
self (Number or Tensor)
other (Number or Tensor)

.. note:: Both attr:`self` and attr:`other` cannot be number.
Collaborator

"Both attr:self and attr:other cannot be number." -> "At least one of :attr:`self` or :attr:`other` must be a tensor."

dtypesIfCUDA=all_types_and(torch.bool, torch.half, torch.bfloat16),
test_inplace_grad=True,
supports_tensor_out=True,
skips=(
Collaborator

This skip should be removable after a rebase.

@mruberry (Collaborator) left a comment

Hey @kshitij12345!

Overall this looks very good and thorough. Just a few small comments inline.

@mruberry (Collaborator)

The current coverage_test2 failure is unrelated to this PR, sorry about that.

Just let me know when this is ready for another review!

@kshitij12345 (Collaborator Author)

@mruberry This is ready for review; I've addressed the comments.

Comment thread torch/_torch_docs.py Outdated
\text{out}_{i} = \begin{cases}
\text{NaN} & \text{if } \text{other}_{i} = \text{NaN} \\
0 & \text{if } \text{self}_{i} = 0.0 \\
self_i * \log{(other_i)} & \text{otherwise}
Collaborator

self_i -> \text{self}_i and other_i -> \text{other}_i on this line

And one last change is that "self" should be renamed to "input" in this doc, sorry I forgot to mention that earlier. Just like how the parameters are named in add:

https://pytorch.org/docs/master/generated/torch.add.html?highlight=add#torch.add
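
Applying both changes, the cases block would read roughly as follows (a sketch; the merged docstring may differ in detail):

.. math::
    \text{out}_{i} = \begin{cases}
        \text{NaN} & \text{if } \text{other}_{i} = \text{NaN} \\
        0 & \text{if } \text{input}_{i} = 0.0 \\
        \text{input}_{i} * \log{(\text{other}_{i})} & \text{otherwise}
    \end{cases}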

Comment thread torch/_torch_docs.py Outdated
""" + r"""

Args:
self (Number or Tensor)
Collaborator

"self" -> "input" here, too

Comment thread torch/_torch_docs.py Outdated
self (Number or Tensor)
other (Number or Tensor)

.. note:: At least one of :attr:`self` or :attr:`other` must be a tensor.
Collaborator

"self" -> "input" here as well

@mruberry self-requested a review December 21, 2020 11:40
@mruberry (Collaborator) left a comment

Looks great, thanks @kshitij12345!

One last nit is the naming of the params (should be 'input', not 'self') and a bit of LaTeX formatting. Just ping me when that's done and I'll merge this.

@kshitij12345 (Collaborator Author)

@mruberry PTAL :)

@mruberry (Collaborator) left a comment

Awesome! Thanks @kshitij12345!

@facebook-github-bot (Contributor) left a comment

@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor)

@mruberry merged this pull request in 2780400.

laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary:
Reference pytorch#38349
Fixes pytorch#22656

TODO:
* [x] Add docs
* [x] Add tests

Pull Request resolved: pytorch#48777

Reviewed By: ngimel

Differential Revision: D25681346

Pulled By: mruberry

fbshipit-source-id: 369e0a29ac8a2c44de95eec115bf75943fe1aa45

Labels

cla signed, Merged, module: numpy (Related to numpy support, and also numpy compatibility of our operators), open source, triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Implement torch.xlogy

6 participants