PyTorch master@92a0f78
Our digamma single-precision floating point accuracy is bad near the poles. This is not caught by our tests because we only test on a few values:
pytorch/test/test_torch.py, lines 272 to 274 in 92a0f78:

    def test_digamma(self):
        from scipy.special import digamma
        self._testMath(torch.digamma, digamma, large=False, precs=(2e-8, 3e-4))
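The test above only checks digamma on a shared set of generic inputs, none of which sit close to a pole. A denser test could explicitly sample points clustered on both sides of each pole at 0, -1, -2, ...; a sketch of such a point generator (the helper name and parameters are hypothetical):

```python
def points_near_poles(num_poles=5, offsets=(1e-6, 1e-4, 1e-2)):
    """Yield test inputs just left and right of each digamma pole.

    The poles sit at 0, -1, -2, ...; a test can evaluate digamma at
    these points in float32 and float64 and compare relative error.
    """
    for n in range(num_poles):
        pole = -float(n)
        for eps in offsets:
            yield pole - eps  # approach the pole from the left
            yield pole + eps  # approach the pole from the right
```

A test would then loop over `points_near_poles()` and assert that the float32 result stays within a relative tolerance of the float64 one.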
Reproduction near the pole at -2:

    input = torch.tensor([-1.99999994], dtype=torch.float32)
    fp32 = input.digamma()
    fp64 = input.double().digamma()
    print(((fp32.double() - fp64) / fp64).item())  # 0.24012388522631392
This is a relative error of about 24%.
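An error of this size is consistent with evaluating the reflection term pi/tan(pi*x) after rounding pi*x to float32: that rounding absorbs most of the tiny distance between x and the pole. A standalone sketch (pure Python, emulating float32 rounding with a hypothetical `f32` helper) reproduces an error of the same magnitude:

```python
import math
import struct

def f32(v):
    """Round a Python float to the nearest IEEE-754 float32 (demo helper)."""
    return struct.unpack('f', struct.pack('f', v))[0]

x = f32(-1.99999994)   # the float32 input from the repro above
d = x + 2.0            # exact distance to the pole at -2 (about 1.19e-7)

# tan has period pi, so tan(pi*x) == tan(pi*d); reducing the argument
# to d first keeps the distance to the pole intact.
accurate = math.pi / math.tan(math.pi * d)

# Multiplying first and rounding pi*x to float32 loses most of that
# distance, so the reflection term pi/tan(pi*x) comes out badly off.
naive = math.pi / math.tan(f32(math.pi * x))

rel_err = abs(naive - accurate) / abs(accurate)
print(rel_err)  # on the order of the 24% relative error reported above
```

This points at argument reduction before the tan, not just extra precision, as the fix for the single-precision path.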
We are also returning finite real numbers at the poles (for both float32 and float64) when we should be returning inf.
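A reference implementation with the proposed pole convention might look like the following sketch (pure Python, using the standard reflection formula, recurrence, and asymptotic series; this is an illustration, not PyTorch's kernel):

```python
import math

def digamma_ref(x):
    """Sketch of a digamma that returns inf at the poles (0, -1, -2, ...).

    Uses the reflection formula psi(x) = psi(1 - x) - pi / tan(pi * x)
    for x < 0, the recurrence psi(x + 1) = psi(x) + 1/x to push x into
    the asymptotic range, and a short asymptotic series for large x.
    """
    if x <= 0.0 and x == math.floor(x):
        return math.inf  # pole: return inf, not a finite garbage value

    result = 0.0
    if x < 0.0:
        # reflection: psi(x) = psi(1 - x) - pi * cot(pi * x)
        result -= math.pi / math.tan(math.pi * x)
        x = 1.0 - x

    # recurrence until x is large enough for the asymptotic expansion
    while x < 6.0:
        result -= 1.0 / x
        x += 1.0

    # psi(x) ~ ln x - 1/(2x) - 1/(12x^2) + 1/(120x^4) - 1/(252x^6) + ...
    inv2 = 1.0 / (x * x)
    series = inv2 * (1.0 / 12.0 - inv2 * (1.0 / 120.0 - inv2 / 252.0))
    result += math.log(x) - 0.5 / x - series
    return result
```

For sanity, `digamma_ref(1.0)` should be close to minus the Euler-Mascheroni constant (about -0.5772156649), and exact nonpositive integers should yield inf.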