Nan propagation of min/max function broken for POWER #57537
Closed
Labels
module: NaNs and Infs (Problems related to NaN and Inf handling in floating point)
module: POWER (Issues specific to the POWER/ppc architecture)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
🐛 Bug
Running the tests shows a failure with NaN propagation for torch.max/torch.min, introduced in 1.8.0 via #41541.
To Reproduce
Run tests on PPC
See e.g. https://gist.github.com/Flamefire/3f11af7f908742908d0d035d3e430646
Expected behavior
Tests succeed
Environment
How you installed PyTorch (conda, pip, source): source
Additional context
This failure was (knowingly) introduced by the mentioned PR.
The question now is: is the behavior on NaNs defined by PyTorch? If so, there is a bug which needs fixing. Otherwise, the relevant tests should be removed or changed.
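For context on why the semantics need to be pinned down: NaN makes even a scalar minimum ill-defined, because every ordered comparison against NaN is false. A minimal Python sketch (the helper name here is illustrative, not from PyTorch):

```python
import math

nan = float("nan")

# Python's builtin min() keeps whichever argument comes first when the
# comparison against NaN fails, so its NaN behavior is order-dependent:
print(min(nan, 1.0))  # nan (NaN kept only by accident of ordering)
print(min(1.0, nan))  # 1.0 (NaN silently dropped)

# A NaN-propagating minimum, as the failing tests expect, must check
# both operands explicitly:
def nan_propagating_min(a, b):
    if math.isnan(a) or math.isnan(b):
        return nan
    return min(a, b)

print(nan_propagating_min(1.0, nan))  # nan
```

This is the same ambiguity the linked Eigen bug is about: hardware and library min/max primitives differ in which of these two behaviors they implement.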
Also see e.g. https://eigen.tuxfamily.org/bz/show_bug.cgi?id=1494 / https://gitlab.com/libeigen/eigen/-/issues/1494
If the NaN-propagating behavior is wanted, I don't see any other approach than not using vec_min etc. on VSX.
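One possible fallback on such platforms is the usual compare-and-blend pattern: compute the plain minimum, then overwrite positions where either input is NaN. Sketched here element-wise in Python as an assumption about the fix, not the actual PyTorch implementation (a real VSX version would build a vector NaN mask, e.g. from a self-comparison, instead of a scalar loop):

```python
import math

def nan_propagating_elementwise_min(xs, ys):
    # Plain min, then blend NaN back in wherever either input is NaN.
    out = []
    for x, y in zip(xs, ys):
        if math.isnan(x) or math.isnan(y):
            out.append(float("nan"))
        else:
            out.append(min(x, y))
    return out

print(nan_propagating_elementwise_min([1.0, float("nan")], [2.0, 3.0]))
# [1.0, nan]
```

The cost is one extra compare and select per element compared to a bare hardware min, which is why the PR chose the faster non-propagating instruction in the first place.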