Fix mvlgamma_ FPE crash on x86 with integer input #164230
RohitRathore1 wants to merge 2 commits into pytorch:main
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/164230
Note: Links to docs will display an error until the docs builds have been completed. ✅ You can merge normally! (1 unrelated failure) As of commit 7aec5bc with merge base 5623628. UNSTABLE: the following job is marked as unstable, possibly due to flakiness on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot label "topic: not user facing"
Hi @malfet please review this PR once you are available. Thanks!
Hi @malfet please review this PR, thanks!
```cpp
args = args.add(self.unsqueeze(-1));
const auto p2_sub_p = static_cast<double>(p * (p - 1));
return self.copy_(args.lgamma_().sum(-1).add_(p2_sub_p * std::log(c10::pi<double>) * QUARTER));
return at::mvlgamma_out(self, self, p);
```
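For reference, the replaced in-place computation evaluates the standard multivariate log-gamma formula: a sum of ordinary `lgamma` terms plus a `p*(p-1)/4 * log(pi)` constant. A minimal pure-Python sketch of that formula (the function name and validity check are illustrative, not PyTorch's implementation):

```python
import math

def mvlgamma(x: float, p: int) -> float:
    """Multivariate log-gamma of order p, mirroring the C++ snippet:
    sum of lgamma(x - i/2) terms plus the p*(p-1)/4 * log(pi) constant."""
    # mvlgamma is only defined for x > (p - 1) / 2 (all lgamma arguments positive).
    if x <= (p - 1) / 2:
        raise ValueError("mvlgamma requires x > (p - 1) / 2")
    terms = sum(math.lgamma(x - i / 2) for i in range(p))
    return terms + p * (p - 1) / 4 * math.log(math.pi)
```

For `p = 1` the constant vanishes and this reduces to plain `math.lgamma(x)`, which makes a quick sanity check.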
Do you know why the argument order is flipped compared to the function just below?
The argument order is flipped because line 911 calls the public API at::mvlgamma_out(out, self, p), while line 914 defines the native implementation with signature (self, p, result). These are two different functions: the public API wrapper and the native implementation. According to the generated header, the public API is at::Tensor & mvlgamma_out(at::Tensor & out, const at::Tensor & self, int64_t p), so the call at::mvlgamma_out(self, self, p) correctly maps to (out=self, self=self, p=p).
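The two-layer argument mapping can be illustrated with a small pure-Python sketch (function names and the dict return value are illustrative, not the actual generated code): the public wrapper takes `out` first, the native implementation takes it last, and the wrapper simply reorders the arguments.

```python
def mvlgamma_out(out, self_, p):
    # Public-API ordering, as in the generated header: (out, self, p).
    return _mvlgamma_out_native(self_, p, out)

def _mvlgamma_out_native(self_, p, result):
    # Native-implementation ordering: (self, p, result).
    # Body elided; just report how the arguments were bound.
    return {"self": self_, "p": p, "result": result}
```

Calling `mvlgamma_out(t, t, 3)` with the same tensor `t` for both arguments binds out=t, self=t, p=3, which is exactly what the in-place `at::mvlgamma_out(self, self, p)` call relies on.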
Force-pushed from 116536d to def3b05.
@pytorchbot rebase
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here
Successfully rebased
Force-pushed from def3b05 to 2a562e7.
albanD left a comment:
Thanks for the details. That sounds good!
Lint needs fixing but then you can merge!
Hi @albanD, can you please help me understand how to resolve this type of linting failure? https://github.com/pytorch/pytorch/actions/runs/19327144562/job/55280965920?pr=164230
@pytorchbot rebase
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here
Successfully rebased
Force-pushed from bbc4e69 to 7aec5bc.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Fixes pytorch#161871.
Behaviour on arm:
```
PyTorch version: 2.10.0a0+gitdef3b05
Architecture: arm64
Platform: Darwin
Processor: arm
Testing mvlgamma_ with integer tensor on arm64...
Got expected error: mvlgamma: result type Long can't be cast to the desired output type Float
```
and on x86:
```
PyTorch version: 2.10.0a0+git1310d6a
Architecture: x86_64
Platform: Linux
Processor: x86_64
Testing mvlgamma_ with integer tensor on x86_64...
Got expected error: mvlgamma: result type Long can't be cast to the desired output type Float
```
Pull Request resolved: pytorch#164230
Approved by: https://github.com/albanD
cc: @malfet