[numpy] Add torch.xlogy #48777
Conversation
| self._test_atleast(device, torch.atleast_2d)
| self._test_atleast(device, torch.atleast_3d)
|
| def test_xlogy(self, device):
grad and gradgrad checks are performed for every OpInfo in test_ops.py. Are those tests not sufficient for some reason? Is it because of xlogy's unique behavior at (0, 0)?
Yes. This is specifically to verify the behaviour at zero.
This is really interesting, cc @albanD. I wonder if we can better support this use case in the future by extending OpInfos with metadata like "special_autograd_values."
The OpInfo test may also generate zeros, although it's not very likely. If it did, would those cause the regular autograd check to fail? I'm trying to understand what the OpInfo-based autograd tests can cover vs. what this extra test needs to validate.
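To make the concern concrete, here is a pure-Python sketch (hypothetical helper functions, not the PR's code) of why a finite-difference gradient check struggles at (0, 0) even though the analytic gradient there is well defined:

```python
import math

def xlogy(x, y):
    # Minimal sketch of the semantics under discussion: x == 0 forces the
    # result to 0; otherwise the result is x * log(y).
    if math.isnan(y):
        return math.nan
    if x == 0.0:
        return 0.0
    if y < 0.0:
        return math.nan                          # x * log(negative)
    if y == 0.0:
        return -math.inf if x > 0 else math.inf  # x * log(0)
    return x * math.log(y)

def analytic_grad_x(x, y):
    # The PR's backward formula, grad * xlogy(x != 0, y), in miniature:
    # log(y) away from x == 0, and exactly 0 at x == 0.
    return xlogy(0.0 if x == 0.0 else 1.0, y)

def fd_grad_x(x, y, eps=1e-6):
    # Central finite difference, as a numerical gradcheck would compute it.
    return (xlogy(x + eps, y) - xlogy(x - eps, y)) / (2 * eps)

# At (0, 0) the analytic gradient is 0, but the finite difference diverges,
# since xlogy(+eps, 0) = -inf and xlogy(-eps, 0) = +inf.
print(analytic_grad_x(0.0, 0.0))  # 0.0
print(fd_grad_x(0.0, 0.0))        # -inf
```

This is the sense in which a zero sampled by the generic OpInfo gradcheck could fail numerically while the dedicated test still passes analytically.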
| other: grad / (1 + pow(2, self - other))
|
| - name: xlogy.Tensor(Tensor self, Tensor other) -> Tensor
|   self: grad * at::xlogy((self != 0), other)
mruberry
left a comment
Hey @kshitij12345! Thanks for submitting this PR adding xlogy. The issue it fixes has been open for a long time.
I made some comments/suggestions, especially about extending this to include method, inplace, and out= variants and using the OpInfo pattern.
Eventually we'll want to add a BinaryUfuncInfo class to the OpInfo architecture. But I think we could add xlogy as a regular OpInfo for now.
| r"""
| xlogy(self, other, *, out=None) -> Tensor
|
| Computes ``self * log(other)`` except it treats ``0 * log(0)`` as ``0``.
I think we looked at this briefly before, but I don't think we can say "it treats..." because if self is zero then the result is 0 for every value of other, including negative values. And PyTorch treats the logarithm of negative numbers as nan, too:
t = torch.tensor((-2., -1, 0, 1, 2))
torch.log(t)
: tensor([ nan, nan, -inf, 0.0000, 0.6931])
The problem with the "it treats" clause here is that this implies it's the ONLY exception from the normal computation.
What about saying, "Computes ``self * log(other)`` but with the following special cases:"
Then putting the mathematical cases below that line, then the sentence about this being similar to SciPy's xlogy after that?
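The full set of special cases the suggested wording would spell out can be sketched in pure Python (a hypothetical helper, assuming the semantics discussed above, not the real implementation):

```python
import math

def xlogy(x, y):
    # Special cases, in precedence order:
    # - y is nan -> nan
    # - x == 0   -> 0, regardless of y (even y < 0 or y == 0)
    # - else     -> x * log(y), which is nan for y < 0 and -inf at y == 0
    if math.isnan(y):
        return math.nan
    if x == 0.0:
        return 0.0
    if y < 0.0:
        return math.nan                          # log of a negative number
    if y == 0.0:
        return -math.inf if x > 0 else math.inf  # x * log(0)
    return x * math.log(y)

# "0 * log(0)" is only one of the cases where x == 0 forces a 0 result:
print(xlogy(0.0, 0.0))   # 0.0
print(xlogy(0.0, -3.0))  # 0.0, even though log(-3) alone would be nan
print(xlogy(2.0, 1.0))   # 0.0, the ordinary 2 * log(1) case
```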
| self (Number or Tensor)
| other (Number or Tensor)
|
| .. note:: Both attr:`self` and attr:`other` cannot be number.
"Both attr:self and attr:other cannot be number." -> "At least one of :attr:`self` or :attr:`other` must be a tensor."
| dtypesIfCUDA=all_types_and(torch.bool, torch.half, torch.bfloat16),
| test_inplace_grad=True,
| supports_tensor_out=True,
| skips=(
This skip should be removable after a rebase.
mruberry
left a comment
Hey @kshitij12345!
Overall this looks very good and thorough. Just a few small comments inline.
The current coverage_test2 failure is unrelated to this PR, sorry about that. Just let me know when this is ready for another review!

@mruberry This is ready for review. Have addressed comments.
| \text{out}_{i} = \begin{cases}
|     \text{NaN} & \text{if } \text{other}_{i} = \text{NaN} \\
|     0 & \text{if } \text{self}_{i} = 0.0 \\
|     self_i * \log{(other_i)} & \text{otherwise}
self_i -> \text{self}_i and other_i -> \text{other}_i on this line
And one last change is that "self" should be renamed to "input" in this doc, sorry I forgot to mention that earlier. Just like how the parameters are named in add:
https://pytorch.org/docs/master/generated/torch.add.html?highlight=add#torch.add
| """ + r"""
|
| Args:
|     self (Number or Tensor)
"self" -> "input" here, too
| self (Number or Tensor)
| other (Number or Tensor)
|
| .. note:: At least one of :attr:`self` or :attr:`other` must be a tensor.
"self" -> "input" here as well
mruberry
left a comment
Looks great, thanks @kshitij12345!
One last nit is the naming of the params (should be 'input', not 'self') and a bit of Latex formatting. Just ping me when that's done and I'll merge this.
* self -> input
* update math formatting
@mruberry PTAL :)
mruberry
left a comment
Awesome! Thanks @kshitij12345!
facebook-github-bot
left a comment
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary:
Reference pytorch#38349
Fixes pytorch#22656

TODO:
* [x] Add docs
* [x] Add tests

Pull Request resolved: pytorch#48777
Reviewed By: ngimel
Differential Revision: D25681346
Pulled By: mruberry
fbshipit-source-id: 369e0a29ac8a2c44de95eec115bf75943fe1aa45