Implement logaddexp #38384
Conversation
💊 CI failures summary and remediations — As of commit 97ab0ca (more details on the Dr. CI page): 💚 Looks good so far! There are no failures yet. 💚 (This comment was automatically generated by Dr. CI.)
Remove WIP from the title when you're ready for review
@mruberry anything left?
Just mark it as no longer WIP/draft when you're ready for review. |
Do you need a /*check_mem_overlap=*/true here? What if result is self or other?
There will be no problem if result is self or other, but a problem might occur when a view of the tensor overlaps with another view, so check_mem_overlap is definitely needed here.
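The aliasing hazard being discussed can be illustrated in plain Python (a hypothetical sketch, not PyTorch's actual TensorIterator / assert_no_internal_overlap machinery): when the output buffer is a shifted view of the input, writes from earlier iterations are consumed by later reads, silently corrupting the result.

```python
# Hypothetical illustration of why overlapping input/output views are unsafe.
# We compute pairwise sums of neighbors, once out-of-place and once "in place"
# with the output shifted one slot forward so it overlaps the input.
src = [1.0, 2.0, 3.0, 4.0]

# Out-of-place: every read sees only original values.
correct = [src[i] + src[i + 1] for i in range(3)]   # [3.0, 5.0, 7.0]

# In-place with a shifted (overlapping) output view: each write is read
# again by the next iteration, so the results compound incorrectly.
buf = list(src)
for i in range(3):
    buf[i + 1] = buf[i] + buf[i + 1]   # reads a value already overwritten
overlapped = buf[1:]                    # [3.0, 6.0, 10.0]

print(correct, overlapped)
```

This is the class of bug the overlap check guards against: the op itself is fine when result *is* self or other (full, exact aliasing), but partially overlapping views produce wrong answers.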
Same check_mem_overlap question as above.
What happens if a == b == inf or -inf?
Same question when a == b == +/- inf
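For reference, the usual numerically stable formulation answers this question with an explicit equality guard: the general formula max(a, b) + log1p(exp(-|a - b|)) produces nan when a == b == ±inf, because a - b is inf - inf. A minimal sketch of that standard technique (not the PR's actual kernel):

```python
import math

def logaddexp(a, b):
    """Compute log(exp(a) + exp(b)) without overflow."""
    # Guard a == b first: for a == b == +/-inf the general formula below
    # hits inf - inf == nan, but the right answer is a + log(2), which is
    # itself +/-inf in those cases.
    if a == b:
        return a + math.log(2.0)
    # Factor out the max so exp never overflows:
    # log(exp(a) + exp(b)) = m + log(1 + exp(-|a - b|)), where m = max(a, b)
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))
```

With the guard, logaddexp(inf, inf) is inf and logaddexp(-inf, -inf) is -inf, matching NumPy's behavior; the mixed case logaddexp(inf, -inf) falls through to the general formula and correctly yields inf.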
@albanD What's our plan for testing the derivatives of new functions like this?
I am also curious.
Don't import NumPy, instead assert TEST_NUMPY
You should move these tests into TestTorchDeviceType so you can run them on both the CPU and GPU. Right now you're just testing the CPU. Then see the helper function "_np_compare," which you can use to simplify your tests, and the @dtypes decorator so you can test multiple dtypes. Since you're only enabling logaddexp on float types, you should test at least torch.long (assert throws RuntimeError), torch.float32, and torch.complex64 (assert throws RuntimeError). That way if someone implements logaddexp for complex64, for example, they'll know to update this test.
_np_compare works by taking the values to test at. Your value generation is OK, but you should add some more "interesting" values like -math.pi, 0, and math.pi, as well as extremal values like nan, -inf, and inf.
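The kind of value sweep the reviewer is asking for can be sketched in plain Python (illustrative names only — this is not the PyTorch _np_compare helper): a stable implementation stands in for the op under test, and the naive log(exp(a) + exp(b)) serves as the reference wherever it doesn't overflow or underflow.

```python
import math

def logaddexp(a, b):
    # Stable stand-in for the op under test (see earlier discussion).
    if a == b:
        return a + math.log(2.0)
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

# Ordinary values plus the "interesting" and extremal ones the review asks for.
values = [-100.0, -math.pi, -1.0, 0.0, 1.0, math.pi, 100.0,
          float('nan'), float('-inf'), float('inf')]

for a in values:
    for b in values:
        got = logaddexp(a, b)
        if math.isnan(a) or math.isnan(b):
            assert math.isnan(got)  # nan propagates through the op
            continue
        try:
            expected = math.log(math.exp(a) + math.exp(b))  # naive reference
        except (OverflowError, ValueError):
            continue  # reference overflows (exp(inf)) or hits log(0); skip
        assert math.isclose(got, expected, rel_tol=1e-9, abs_tol=1e-12)
```

In the real test suite this sweep would additionally be run per device and per dtype via the decorators mentioned above.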
This comment is a little misleading. Maybe something like: "the tensor whose exponential is added to the exponential of input before the log is taken"?
Your suggestion is too verbose; maybe I should just delete it and let the generator produce the description for the second input tensor?
Same language/doc changes here as with the above.
mruberry
left a comment
This PR is looking really good. I requested changes to the tests and docs and have a couple additional questions. I also want to check with @albanD if/how we're planning to validate the gradients of these functions.
Once we get that fixed up I think this PR will be good to go!
facebook-github-bot
left a comment
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: Resolve pytorch#38377 Related pytorch#38349 This op should be disambiguated with `logsumexp`, which does a reduction on a tensor over a specific axis. Pull Request resolved: pytorch#38384 Differential Revision: D21737336 Pulled By: mruberry fbshipit-source-id: 7864d04ca304c0fb2937bb083583e3e3d6ef205d
Resolve #38377
Related #38349
This op should be disambiguated with `logsumexp`, which does a reduction on a tensor over a specific axis.
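The distinction drawn here can be sketched in a few lines of plain Python (hypothetical helpers, not the PyTorch API): logaddexp is an elementwise *binary* op on two values, while logsumexp is a *reduction* that collapses a whole collection.

```python
import math
from functools import reduce

def logaddexp(a, b):
    # Elementwise binary op: log(exp(a) + exp(b)), computed stably.
    if a == b:
        return a + math.log(2.0)
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def logsumexp(xs):
    # Reduction over a sequence: log(sum(exp(x) for x in xs)).
    return reduce(logaddexp, xs)

pair = logaddexp(0.0, 0.0)          # log(2): combines exactly two values
total = logsumexp([0.0, 0.0, 0.0])  # log(3): collapses the whole sequence
```

In PyTorch terms, torch.logaddexp broadcasts two tensors elementwise, whereas torch.logsumexp takes a single tensor and a dim argument to reduce over.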