Implement torch.nextafter (#42580)
Conversation
💊 CI failures summary and remediations
As of commit bdfabb0 (more details on the Dr. CI page):
🕵️ 1 new failure recognized by patterns
The following CI failures do not appear to be due to upstream breakages.
413481a to b6aba39 (compare)
b6aba39 to 82c24b0 (compare)
@mruberry PTAL
r"""
nextafter(input, other, *, out=None) -> Tensor

Return the next floating-point value after input towards other, elementwise.
Use :attr:`input` and :attr:`other` here and in the following paragraph.
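For context, the documented behavior can be illustrated with Python's stdlib `math.nextafter` (available since Python 3.9), which follows the same IEEE 754 semantics as the proposed `torch.nextafter`:

```python
import math

# Next representable double after 1.0 towards 2.0 is 1.0 + 2**-52
print(math.nextafter(1.0, 2.0))       # 1.0000000000000002

# Stepping from 0.0 towards +inf yields the smallest subnormal double
print(math.nextafter(0.0, math.inf))  # 5e-324

# If both arguments are equal, the value is returned unchanged
print(math.nextafter(10.0, 10.0))     # 10.0
```

The torch version applies the same per-element step across whole tensors, with broadcasting.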
def test_nextafter(self, device, dtype):
    # Test special cases
    t1 = torch.tensor([0, 0, 10], device=device, dtype=dtype)
    t2 = torch.tensor([inf, -inf, 10], device=device, dtype=dtype)
    actual = torch.nextafter(t1, t2)
    expected = np.nextafter(t1.cpu().numpy(), t2.cpu().numpy())
Better to test the flip of these, torch.nextafter(t2, t1), as well.
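The flipped direction matters because nextafter is not symmetric in its arguments. A minimal sketch with the stdlib `math.nextafter` (assuming double precision) shows why testing both orders is worthwhile:

```python
import math
import sys

# Towards +inf from 0.0: the smallest subnormal double
up = math.nextafter(0.0, math.inf)

# Flipped: stepping down from +inf lands on the largest finite double
down = math.nextafter(math.inf, 0.0)

print(up)                          # 5e-324
print(down == sys.float_info.max)  # True
```

The two directions exercise entirely different special-case branches, so covering only one order could hide a bug in the other.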
This looks really good, @muthuArivoli, nice work! I just made a few suggestions for the test and doc.
@mruberry, I removed the half implementation since it didn't work. I have also made all the other changes requested.
Everything looks very good. Would you just add 'nan' as another special value in the test (along with inf/-inf)?
I added nan in the test after the one mentioned. Other tests seemed to use …
Aha! Thank you for pointing that out. My mistake. |
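For reference, IEEE 754 nextafter propagates NaN from either argument, which is what the added nan special case exercises. Sketched with the stdlib equivalent:

```python
import math

# nextafter returns NaN whenever either argument is NaN
print(math.isnan(math.nextafter(math.nan, 1.0)))  # True
print(math.isnan(math.nextafter(1.0, math.nan)))  # True
```

Since NaN compares unequal to everything, the test has to compare against NumPy's output (or use an isnan check) rather than plain equality.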
mruberry left a comment:
Awesome work, @muthuArivoli! Thanks for this PR.
I just updated #38349, too, if you're interested in other functions.
facebook-github-bot left a comment:
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
We have some internal mobile builds that are failing after this PR (cc @dreiss). I'm not very familiar with mobile, but I can follow up with them to get a better understanding of what's going on here and how to address this issue. One option, if you like, would be to remove the CPU vectorized code paths for now, add this function, and then add the vectorized code paths back once we have a plan to deal with this mobile issue. Either that, or we can wait for the mobile team to weigh in.
Now that #42291 is in, this PR will need analogous updates.
Just updated with the mobile fixes. I realized, however, that I haven't done anything for autograd. Do I need to do anything for autograd, and if so, how would I? I don't think the gradient is well defined for nextafter.
You could add a stub, which would throw a nicer error message if someone tries to backward through nextafter.
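The stub idea can be sketched in plain Python (the class name, method shapes, and error message below are illustrative, not PyTorch's actual autograd machinery): the forward pass computes the value as usual, while the backward pass unconditionally raises, so users get a clear error instead of a silently wrong gradient.

```python
import math

class NextafterWithStubBackward:
    """Illustrative stand-in for an op whose derivative is undefined."""

    def forward(self, x: float, y: float) -> float:
        # The forward pass is well defined: step x one ULP towards y.
        return math.nextafter(x, y)

    def backward(self, grad_output: float) -> float:
        # The derivative of nextafter is not well defined, so fail loudly.
        raise NotImplementedError(
            "the derivative for 'nextafter' is not implemented"
        )

op = NextafterWithStubBackward()
print(op.forward(1.0, 2.0))  # 1.0000000000000002
```

In real PyTorch this is done declaratively in the autograd derivative definitions rather than with a hand-written class, but the effect is the same: a descriptive error at backward time.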
Thanks! Just added that.
@mruberry Could you run this against the internal mobile builds?
facebook-github-bot left a comment:
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Android builds look good. Starting the land process.
Summary: Related to pytorch#38349.
Pull Request resolved: pytorch#42580
Reviewed By: smessmer
Differential Revision: D23012260
Pulled By: mruberry
fbshipit-source-id: ce82a63c4ad407ec6ffea795f575ca7c58cd6137
Related to #38349.