[numpy] torch.lgamma: promote integer inputs to float #50140
kshitij12345 wants to merge 17 commits into pytorch:master from
Conversation
💊 CI failures summary and remediations: as of commit 341664f (more details on the Dr. CI page): 💚 Looks good so far! There are no failures yet. 💚 (This comment was automatically generated by Dr. CI.)
dtypesIfCUDA=all_types_and(torch.bool, torch.half),
skips=(
    SkipInfo('TestUnaryUfuncs', 'test_reference_numerics',
             dtypes=[torch.bfloat16]),
Skipping Bfloat16
>>> import torch
>>> torch.tensor(500.0, dtype=torch.bfloat16)
tensor(500., dtype=torch.bfloat16)
>>> t = torch.tensor(500.0, dtype=torch.bfloat16)
>>> torch.lgamma(t)
tensor(2608., dtype=torch.bfloat16)
>>> torch.lgamma(t.to(torch.float32))
tensor(2605.1160)
Good skip. This again highlights that we should break up test_reference_numerics into different value ranges (small values, large values, and extremal values, for example).
It would be helpful if ranges could be specified per dtype or for a group of dtypes (maybe something similar to the precision override), because we don't want to decrease the range for all dtypes just to be able to support one dtype with a smaller range.
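To make the suggestion concrete, here is a hypothetical shape such a per-dtype override could take (the reference_numerics_domain name and structure are purely illustrative, not an existing OpInfo field):

import torch

# Hypothetical: per-dtype sampling ranges for test_reference_numerics.
# Dtypes not listed would fall back to the default sampling range.
reference_numerics_domain = {
    torch.bfloat16: (-50.0, 50.0),  # keep inputs where bfloat16 is still accurate
}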
Codecov Report
@@ Coverage Diff @@
## master #50140 +/- ##
==========================================
- Coverage 80.91% 80.91% -0.01%
==========================================
Files 1926 1926
Lines 210014 210023 +9
==========================================
+ Hits 169943 169945 +2
- Misses 40071 40078 +7
return scipy.special.expit(x)

def reference_lgamma(x):
    # scipy.special.gammaln returns `-inf` when input is `-inf`.
This is a bug in SciPy.
scipy.special.gammaln is documented as equivalent to math.lgamma, but math.lgamma returns inf when given -inf.
Would you file an issue with SciPy, @kshitij12345?
cc @rgommers
def reference_lgamma(x):
    # scipy.special.gammaln returns `-inf` when input is `-inf`.
    # While Pytorch, C and C++, all return `inf` when input is `-inf`.
This is a good comment. Would you extend it to include Python's math.lgamma, too?
    # https://en.cppreference.com/w/c/numeric/math/lgamma

    # To handle the above discrepancy,
    # we never pass `-inf` to scipy.special.gammaln.
This workaround is more clever than this comment suggests. It's not only that this never passes -inf to scipy.special.gammaln, but that it replaces -inf with inf so the mapping -inf->inf occurs as expected.
Should we phrase it as,
# To handle the above discrepancy,
# we never pass `-inf` to scipy.special.gammaln and replace them with `inf`.
That's pretty good, but it requires the reader to make the connection about how -inf is mapped to inf.
"this replaces -inf with inf so values that were originally -inf map to inf as expected"?
I think it is very verbose but at least clear. Will use that, thanks!
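For reference, a minimal sketch of that workaround (not the exact PR code; x is assumed to be a NumPy array, and numpy/scipy are assumed available):

import numpy as np
import scipy.special

def reference_lgamma(x):
    # scipy.special.gammaln maps -inf to -inf, while math.lgamma, PyTorch,
    # C, and C++ all map -inf to inf. Replace -inf with inf first, so values
    # that were originally -inf map to inf as expected after the call.
    if np.issubdtype(x.dtype, np.floating):
        x = np.where(np.isneginf(x), np.array(np.inf, dtype=x.dtype), x)
    return scipy.special.gammaln(x)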
# Reference: https://github.com/pytorch/pytorch/pull/50140#issuecomment-756150214
SkipInfo('TestUnaryUfuncs', 'test_reference_numerics',
         dtypes=[torch.float32, torch.float64], active_if=IS_WINDOWS),
# Backward of `lgamma` uses `digamma` but `digamma`
This is interesting. From the perspective of this PR this skip is correct, but I thought test_variant_consistency_jit wrapped the backward call in a try/except block to catch these errors?
I have verified. I don't see any try/except block in the failing test.
Stack Trace
======================================================================
ERROR: test_variant_consistency_jit_lgamma_cpu_bfloat16 (__main__.TestCommonCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/kshiteej/.conda/envs/PytorchENV/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 284, in instantiated_test
raise rte
File "/home/kshiteej/.conda/envs/PytorchENV/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 279, in instantiated_test
result = test_fn(self, *args)
File "/home/kshiteej/.conda/envs/PytorchENV/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 253, in test_wrapper
return test(*args, **kwargs)
File "test/test_ops.py", line 290, in test_variant_consistency_jit
no_grad=(dtype not in dtypes_to_grad_check))
File "/home/kshiteej/.conda/envs/PytorchENV/lib/python3.7/site-packages/torch/testing/_internal/common_jit.py", line 77, in check_against_reference
allow_unused=allow_unused)
File "/home/kshiteej/.conda/envs/PytorchENV/lib/python3.7/site-packages/torch/autograd/__init__.py", line 225, in grad
inputs, allow_unused, accumulate_grad=False)
RuntimeError: "digamma" not implemented for 'BFloat16'
----------------------------------------------------------------------
Ran 94 tests in 5.282s
FAILED (errors=1, skipped=8)
Code fails in check_against_reference
Lines 272 to 300 in 5546a12
The code for check_against_reference also doesn't have any try/except
pytorch/torch/testing/_internal/common_jit.py
Lines 39 to 108 in 5546a12
Note:
The test_variant_consistency_eager test has a try/except block which checks if the eager mode raises or not.
Lines 183 to 237 in 5546a12
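For context, a minimal repro of the failure that the test_variant_consistency_jit skip guards against (assuming digamma is still unimplemented for BFloat16, as the trace above shows):

import torch

t = torch.tensor([0.5, 1.5], dtype=torch.bfloat16, requires_grad=True)
out = torch.lgamma(t)   # forward works for bfloat16 (see the REPL example above)
# Backward of lgamma calls digamma, so this raises:
# RuntimeError: "digamma" not implemented for 'BFloat16'
out.sum().backward()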
Aha! That's why I was confused. Thank you for verifying. I'll update the test suite tracker.
Clever workaround for that SciPy reference bug. Nice job, @kshitij12345. I think we can kill this, too: Line 6920 in 78e71ce. Would you also take a look at test_variant_consistency_jit briefly to see what's going on in backward? I really thought we had written that test so backward could fail and we would just verify it failed in both eager mode and the jit.
Thanks for the review. Changes:
Have checked the failing
Note:
Thanks!
@mruberry PTAL :)
mruberry
left a comment
Great work as usual, @kshitij12345. The test failure looks unrelated.
facebook-github-bot
left a comment
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
SkipInfo('TestCommon', 'test_variant_consistency_jit',
         dtypes=[torch.bfloat16]),
),
promotes_integers_to_float=True),
This PR has to be rebased because it will cause a logical merge conflict with @peterbell10's
I think it just needs to be changed to safe_casts_outputs.
Thanks! Updated.
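A quick sketch of the user-visible behavior that flag (under either name) describes: integer and bool inputs now produce a floating point result, in the default dtype, instead of erroring out. The values below are illustrative:

import torch

t = torch.arange(1, 5)      # dtype: torch.int64
out = torch.lgamma(t)       # input promoted to the default float dtype
print(out.dtype)            # torch.float32 (assuming the default dtype is unchanged)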
facebook-github-bot
left a comment
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary:
Reference: pytorch#42515
Pull Request resolved: pytorch#50140
Reviewed By: mrshenli
Differential Revision: D25951094
Pulled By: mruberry
fbshipit-source-id: e53f1dbddff889710f05d43dbc9587382d3decb0
Reference: #42515