
More precise digamma #6517

Merged

zou3519 merged 4 commits into pytorch:master from zou3519:digamma2 on Apr 13, 2018

Conversation

Contributor

@zou3519 zou3519 commented Apr 11, 2018

Fixes #6190.

This is a rebase of #3955 with some tweaks for better accuracy around the
poles. The code is ported over from cephes with permission.

By itself, the cephes code returns inf at the poles.

For better accuracy around the poles with float32, one intermediate step,
`PI / tan(PI * input)`, is always computed in double precision, regardless
of dtype. This is necessary because small (~1e-6) rounding errors in the
input to tan have a strong effect on the output (i.e., the derivative of
tan is very large near its poles).

cc @colesbury @apaszke
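The sensitivity described in the PR text is easy to demonstrate: rounding the input to float32 before evaluating `PI / tan(PI * input)` near a pole shifts the result by several percent, while the same computation with a double-precision input is accurate. A minimal Python sketch (illustrative only, not the PyTorch kernel; `struct` is used here to simulate float32 rounding):

```python
import math
import struct

def to_float32(x):
    # Round a Python float (double) to the nearest float32 value.
    return struct.unpack('f', struct.pack('f', x))[0]

def reflection_term(x):
    # The intermediate step from the PR description: PI / tan(PI * input).
    return math.pi / math.tan(math.pi * x)

# A point just off the digamma pole at -2.
x = -2.0 + 1e-6

exact = reflection_term(x)                # input kept in double precision
rounded = reflection_term(to_float32(x))  # input rounded to float32 first

rel_err = abs(rounded - exact) / abs(exact)
print(f"double input : {exact:.6e}")
print(f"float32 input: {rounded:.6e}")
print(f"relative error from float32 rounding: {rel_err:.2%}")
```

Near the pole, the float32 rounding of the input (an absolute error of roughly 5e-8 here) moves the tiny argument of tan by a few percent of its value, and the reciprocal amplifies that into a few-percent error in the result; computing this one step in double precision avoids it.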

zou3519 added 3 commits April 11, 2018 13:20
Contributor Author

zou3519 commented Apr 11, 2018

@pytorchbot retest this please

@ssnl ssnl self-assigned this Apr 12, 2018
Collaborator

ssnl commented Apr 12, 2018

Claiming to remind me to review later. If anyone feels like reviewing, please feel free to do so. :)

Contributor

@apaszke apaszke left a comment


Haven't reviewed the math (I assume it just comes from Cephes). Two minor things, but LGTM.

Comment thread aten/src/TH/THMath.h
return result;
}

static inline float TH_polevlf(float x, float *A, size_t len) {
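For context, `TH_polevlf` is the single-precision counterpart of cephes' `polevl` helper: it evaluates a polynomial whose coefficients are listed from highest to lowest degree, using Horner's method. A minimal Python sketch of the same scheme (illustrative, not the actual C implementation):

```python
def polevl(x, coeffs):
    # Horner's method: fold the coefficient list, highest degree first,
    # into ((c0 * x + c1) * x + c2) * x + ... (cephes' polevl convention).
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

# Evaluate 2x^2 + 3x + 1 at x = 2: ((0*2 + 2)*2 + 3)*2 + 1 = 15
print(polevl(2.0, [2.0, 3.0, 1.0]))  # 15.0
```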

This comment was marked as off-topic.


Comment thread aten/src/THC/THCTensorMathPointwise.cuh Outdated

compute_type x = ScalarConvert<real, compute_type>::to(*in);
if (x == 0) {
*out = ScalarConvert<float, real>::to(INFINITY);

This comment was marked as off-topic.
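The hunk above is where the CUDA kernel handles a pole: at `x == 0` it writes infinity instead of evaluating the series, matching the cephes behavior noted in the PR description. A small Python sketch of that convention (illustrative only; here the check is extended to every non-positive integer, where digamma also has poles):

```python
import math

def digamma_pole(x):
    # Digamma has simple poles at 0 and the negative integers; following
    # cephes (and the kernel above), return infinity there instead of
    # raising an error.
    if x <= 0 and x == math.floor(x):
        return math.inf
    return None  # not a pole: fall through to the real evaluation

print(digamma_pole(0.0))   # inf
print(digamma_pole(-3.0))  # inf
print(digamma_pole(0.5))   # None
```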

@zou3519 zou3519 merged commit 6c0f740 into pytorch:master Apr 13, 2018
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
* More precise digamma

* Replace usages of finite-differences digamma with newly implemented digamma

* Better behavior near and at poles

* ScalarConvert -> scalar_cast for readability
