
Beta function #78031

Closed
0x00b1 wants to merge 9 commits into pytorch:master from 0x00b1:beta

Conversation

@0x00b1 (Contributor) commented May 21, 2022

Euler beta function:

torch.special.beta(input, other, *, out=None) → Tensor

This PR also provides reentrant_gamma and reentrant_ln_gamma implementations (using Stirling’s approximation). I started working on this before I realized we were missing a gamma implementation (despite providing incomplete gamma implementations). It uses the coefficients computed by Steve Moshier to replicate SciPy’s implementation; likewise, it mimics SciPy’s behavior (instead of the behavior in Cephes).
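For reference, the Euler beta function the proposed operator computes is B(a, b) = Γ(a)Γ(b) / Γ(a + b). A minimal scalar sketch of the standard log-gamma formulation (illustrative only — this is not the PR's Stirling-based kernel, and it assumes positive arguments, so no sign handling is needed):

```python
import math

def beta(a: float, b: float) -> float:
    """Euler beta function B(a, b) = Γ(a)Γ(b) / Γ(a + b).

    Computed via log-gamma for numerical stability, since the raw
    gamma factors overflow for even moderately large arguments.
    Assumes a > 0 and b > 0 (negative arguments would require
    tracking the sign of each gamma factor separately).
    """
    return math.exp(math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b))

# B(1, 1) = 1; B(2, 3) = Γ(2)Γ(3)/Γ(5) = 1·2/24 = 1/12
print(beta(1.0, 1.0))  # 1.0
print(beta(2.0, 3.0))  # ≈ 0.0833333 (= 1/12)
```

The log-space formulation is the usual trick here: Γ(a)Γ(b) and Γ(a + b) can each overflow a double while their ratio stays well within range.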

@facebook-github-bot (Contributor) commented May 21, 2022

❌ 1 New Failures

As of commit d17c6b4 (more details on the Dr. CI page):

  • 1/1 failures introduced in this PR

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages

See GitHub Actions build pull / linux-xenial-py3.7-gcc5.4 / test (backwards_compat, 1, 1, linux.2xlarge) (1/1)

Step: "Test"

2022-05-24T15:55:44.7973563Z processing existing schema:  text(__torch__.torch.classes.profiling.SourceRef _0) -> (str _0)
2022-05-24T15:55:44.7974699Z processing existing schema:  count(__torch__.torch.classes.profiling.InstructionStats _0) -> (int _0)
2022-05-24T15:55:44.7975953Z processing existing schema:  duration_ns(__torch__.torch.classes.profiling.InstructionStats _0) -> (int _0)
2022-05-24T15:55:44.7977346Z processing existing schema:  source(__torch__.torch.classes.profiling.SourceStats _0) -> (__torch__.torch.classes.profiling.SourceRef _0)
2022-05-24T15:55:44.7978716Z processing existing schema:  line_map(__torch__.torch.classes.profiling.SourceStats _0) -> (Dict(int, __torch__.torch.classes.profiling.InstructionStats) _0)
2022-05-24T15:55:44.7979688Z processing existing schema:  __init__(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0)
2022-05-24T15:55:44.7981319Z processing existing schema:  enable(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0)
2022-05-24T15:55:44.7982211Z processing existing schema:  disable(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0)
2022-05-24T15:55:44.7983936Z processing existing schema:  _dump_stats(__torch__.torch.classes.profiling._ScriptProfile _0) -> (__torch__.torch.classes.profiling.SourceStats[] _0)
2022-05-24T15:55:44.7984974Z processing existing schema:  __init__(__torch__.torch.classes.dist_rpc.WorkerInfo _0, str _1, int _2) -> (NoneType _0)
2022-05-24T15:55:44.7985799Z The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not. 
2022-05-24T15:55:44.7986084Z 
2022-05-24T15:55:44.7986144Z Broken ops: [
2022-05-24T15:55:44.7986491Z 	prims::uniform(int[] shape, *, Scalar low, Scalar high, int dtype, Device device) -> (Tensor)
2022-05-24T15:55:44.7986928Z 	prims::empty_strided(int[] shape, int[] strides, *, int dtype, Device device, bool requires_grad) -> (Tensor)
2022-05-24T15:55:44.7997324Z 	prims::var(Tensor inp, int[]? dims, *, int correction, int? output_dtype=None) -> (Tensor)
2022-05-24T15:55:44.7997647Z 	prims::item(Tensor a) -> (Scalar)
2022-05-24T15:55:44.7997978Z 	prims::where(Tensor pred, Tensor a, Tensor b) -> (Tensor)
2022-05-24T15:55:44.7998290Z 	prims::cat(Tensor[] tensors, int dim) -> (Tensor)
2022-05-24T15:55:44.7998588Z 	prims::zeta(Tensor self, Tensor other) -> (Tensor)
2022-05-24T15:55:44.7998866Z 	prims::log10(Tensor self) -> (Tensor)

This comment was automatically generated by Dr. CI.

@0x00b1 force-pushed the beta branch 2 times, most recently from 3061248 to 5bb1fa4, May 21, 2022 15:43
@0x00b1 changed the title from torch.special.beta to Beta function May 21, 2022
@0x00b1 0x00b1 mentioned this pull request May 22, 2022
25 tasks
@0x00b1 force-pushed the beta branch 2 times, most recently from 79aff9b to a3bb1a2, May 23, 2022 15:16
@0x00b1 0x00b1 marked this pull request as ready for review May 23, 2022 18:00
@albanD albanD removed their request for review May 23, 2022 18:01
@mruberry (Collaborator) left a comment

Cool! Test failure is irrelevant.

I think this is fine to land as-is, EXCEPT see my question about citing sources/licenses for the function implementations. If we need citations, we should get them in with this PR.

@0x00b1 (Contributor, Author) commented May 24, 2022

@pytorchbot merge this please

@0x00b1 0x00b1 deleted the beta branch May 24, 2022 21:08
@github-actions
Hey @0x00b1.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

@jeffdaily (Collaborator)

This broke the ROCm build on trunk.

https://ossci-raw-job-status.s3.amazonaws.com/log/6581698413

@jeffdaily (Collaborator)

The beta template function is only defined for the jiterator, not the non-jiterator code path.

@suo (Member) commented May 24, 2022

@pytorchbot revert -m "broke trunk, see the above message" -c nosignal

@suo (Member) commented May 24, 2022

If you'd like to run rocm tests on this PR, add the ciflow/trunk label.

pytorchmergebot added a commit that referenced this pull request May 24, 2022
This reverts commit da16450.

Reverted #78031 on behalf of https://github.com/suo due to broke trunk, see the above message
@0x00b1 added the ciflow/trunk (Trigger trunk jobs on your pull request) label May 24, 2022
@0x00b1 (Contributor, Author) commented May 24, 2022

thanks @suo and @jeffdaily


Labels

ciflow/trunk (Trigger trunk jobs on your pull request), cla signed, Merged, Reverted

Projects

None yet

Development

Successfully merging this pull request may close these issues.

6 participants