Clarify, make consistent, and test the behavior of logspace when dtype is integral #47647
xuhdev wants to merge 4 commits into pytorch:master from
Conversation
💊 CI failures summary and remediations — as of commit 878a465 (more details on the Dr. CI page):
🕵️ 1 new failure recognized by patterns. The following CI failures do not appear to be due to upstream breakages.
Force-pushed from 8343c4d to cf01217
Force-pushed from 326b298 to 375a47b
walterddr:
Could you clarify your change here, please?
Based on the test_tensor_creation_ops.py change, you are comparing within the device type instead of crossing CPU/GPU. Maybe adding a test that does the CPU/GPU cross-comparison would help us review this change.
xuhdev:
I have added the test. This is to make the implementation consistent with the CPU, because the CPU implementation uses double instead of float here.
walterddr:
Does your added CPU/GPU comparison test break when you use the original float here?
I guess my point was: the CUDA impl used float for a reason, and according to the comment on the CPU side, it was "autopromoted", i.e. we might not need that extra precision.
xuhdev:
@walterddr Yes, it broke when I kept them as float. Actually, it broke the tests on CUDA alone. I guess this is likely about log/exp, where low precision can easily cause trouble.

> I guess my point was: the CUDA impl used float for a reason, and according to the comment on the CPU side, it was "autopromoted", i.e. we might not need that extra precision.

My speculation is that the CUDA implementation used float perhaps for performance reasons? cc @colesbury

Also, I'm having a hard time understanding the current ASAN test failure, which doesn't show what is broken (it still failed for the same reason when I did not add anything to the CUDA implementation).
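To make the precision point concrete, here is a minimal sketch (not part of the PR) comparing base**exponent computed in float32 versus float64 and then truncated to an integer dtype. Whether the truncated values differ depends on the platform's pow implementation, which is the kind of CPU/CUDA mismatch discussed above.

import torch

# Hedged illustration only: compute 10**k in double vs. single precision and
# truncate to int64; exact outcomes depend on the hardware/libm in use.
exponents = torch.arange(0, 16, dtype=torch.float64)
vals64 = torch.pow(10.0, exponents)                    # double precision, analogous to the CPU path
vals32 = torch.pow(10.0, exponents.to(torch.float32))  # single precision, analogous to the old CUDA path
print(vals64.to(torch.int64))
print(vals32.to(torch.int64))
# Any entries that differ after truncation illustrate why the extra precision can matter.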
walterddr left a comment:
Looks good. I have one more comment and then it should be good to go.
walterddr:
You can just create a separate test:

@skipCPUIf(True, "compares with CPU")
@dtypesIfCUDA(torch.uint8, torch.int8, torch.int16, torch.int32, torch.int64)
def test_logspace_integral(self, device, dtype):
    ...

See how the other specially typed logspace tests were written before this one.
xuhdev:
I added this. I chose not to skip all the CPU tests because some parts do not concern CPUs.
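For context, a minimal sketch of what such a CPU/CUDA cross-comparison test could look like, assuming PyTorch's internal device-type test harness (torch.testing._internal); the class name and values are illustrative and this is not the exact test added in the PR:

import torch
from torch.testing._internal.common_device_type import (
    dtypesIfCUDA, instantiate_device_type_tests, skipCPUIf)
from torch.testing._internal.common_utils import TestCase, run_tests

class TestLogspaceIntegral(TestCase):
    # Skipped on CPU because the test itself compares CUDA output against CPU output.
    @skipCPUIf(True, "compares with CPU")
    @dtypesIfCUDA(torch.uint8, torch.int8, torch.int16, torch.int32, torch.int64)
    def test_logspace_integral(self, device, dtype):
        # The integral results on the current device should match the CPU results.
        actual = torch.logspace(0, 2, steps=5, dtype=dtype, device=device)
        expected = torch.logspace(0, 2, steps=5, dtype=dtype, device='cpu')
        self.assertEqual(actual.cpu(), expected)

instantiate_device_type_tests(TestLogspaceIntegral, globals())

if __name__ == '__main__':
    run_tests()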
torch.logspace doesn't seem to have explained how integers are handled. Add some clarification and some tests for when dtype is integral.
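As a small, hedged illustration of the behavior being documented (outputs are printed rather than asserted, since how the intermediate floating-point values are rounded is exactly what the PR clarifies):

import torch

# logspace generates points on a log scale; with an integral dtype the
# returned values end up as integers of the requested type.
print(torch.logspace(0, 3, steps=4))                     # floating-point points: 10**0 .. 10**3
print(torch.logspace(0, 3, steps=4, dtype=torch.int32))  # the same points with an integer dtype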
facebook-github-bot left a comment:
@walterddr has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
walterddr:
Thanks for the change. LGTM now. I'll add @colesbury to see if there's any concern about the float->double type change on the CUDA side.
@walterddr merged this pull request in 0ae0fac.
This pull request has been reverted by 3df5f9c.
@walterddr Is there a reason for the reversion?
The change actually broke ASAN instead of being flaky.
Also, I should've checked this more carefully before, but it seems like torch.logspace doesn't produce the same expected results as numpy.logspace. It's a matter of whether we should convert the start/stop into integers before or after the fact. I will look into it and iterate on top of this PR.
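To make the ambiguity concrete, a hedged sketch of the two interpretations (illustrative values only; which library implements which semantics is not asserted here):

import numpy as np
import torch

start, end, steps, base = 0, 5, 6, 10

# Interpretation 1: compute the points in floating point, then cast the result to the integer dtype.
cast_after = torch.logspace(start, end, steps).to(torch.int64)

# Interpretation 2: cast the exponents to integers first, then exponentiate.
cast_before = base ** torch.linspace(start, end, steps).to(torch.int64)

print(cast_after)
print(cast_before)
print(np.logspace(start, end, steps, dtype=np.int64))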
Clarify, make consistent, and test the behavior of logspace when dtype is integral

torch.logspace doesn't seem to have explained how integers are handled. Add some clarification and some tests for when dtype is integral. The CUDA implementation is also updated to be consistent with the CPU implementation. Following up pytorch#47647
@walterddr Thanks, please keep me posted.
@walterddr Do we have any plan to further resolve this issue?
Clarify, make consistent, and test the behavior of logspace when dtype is integral (pytorch#47647)

Summary: torch.logspace doesn't seem to have explained how integers are handled. Add some clarification and some tests for when dtype is integral. The CUDA implementation is also updated to be consistent with the CPU implementation.

Pull Request resolved: pytorch#47647
Reviewed By: gchanan
Differential Revision: D25843351
Pulled By: walterddr
fbshipit-source-id: 45237574d04c56992c18766667ff1ed71be77ac3
torch.logspace doesn't seem to have explained how integers are handled.
Add some clarification and some tests for when dtype is integral.
The CUDA implementation is also updated to be consistent with the CPU implementation.