baby steps on patching inf/nan behavior & aten::amin support in nvfuser#75646
jjsjann123 wants to merge 6 commits into pytorch:master
Conversation
cc'ing @kevinstephano @rdspring1
test/test_jit_cuda_fuser.py (outdated)

                        "Requires fusion optimization pass to be effective")
    @unittest.skipIf(is_pre_volta(), "reduction not supported in pre volta device")
    def test_inf_quick_patch(self):
        x = torch.tensor([-float('inf'), -float('inf'), 4.0], device="cuda")
This test is not enough to catch the previously problematic values: you need an input consisting entirely of -inf (nothing else) to catch that FLT_MIN is not the correct initializer, and similarly for amin. You also need yet another input for nan propagation (unless you have those tests elsewhere, in which case it's probably better to unify them).
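A test along the lines the reviewer suggests might look like the sketch below. This is a hypothetical illustration of the expected semantics using eager-mode PyTorch on CPU, not the PR's actual fuser test (the real test would run the fused graph on CUDA and compare against eager):

```python
import torch

# All-(-inf) input: a max reduction seeded with a finite value such as
# FLT_MIN would wrongly return that finite value; the correct
# initializer (-inf, the identity for max) preserves -inf.
x = torch.tensor([-float('inf'), -float('inf'), -float('inf')])
assert torch.amax(x).item() == -float('inf')

# All-(+inf) input: the analogous check for amin, whose identity is +inf.
y = torch.tensor([float('inf'), float('inf'), float('inf')])
assert torch.amin(y).item() == float('inf')

# NaN propagation: any nan in the input must make the reduction nan
# (amax/amin propagate NaNs, unlike fmax/fmin).
z = torch.tensor([1.0, float('nan'), 4.0])
assert torch.amax(z).isnan().item()
assert torch.amin(z).isnan().item()
```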
@pytorchbot merge this
Hey @jjsjann123. |
…er (#75646)

Summary: Fixes #75622

1. Instead of getting max/min_value for the reduction init value, we go with (-)infinity instead so we can properly preserve inf inputs;
2. Adding inf/(-)inf/nan for float value;
3. Adding aten::amin in nvfuser (kevinstephano rdspring1 for review).

Pull Request resolved: #75646
Approved by: https://github.com/rdspring1, https://github.com/kevinstephano, https://github.com/ngimel
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/692ebc8d8bbd10c21530254b29d458dc9b871386
Reviewed By: osalpekar
Differential Revision: D35618790
Pulled By: mehtanirav
fbshipit-source-id: 406965941919ad1777b74d36898709eb17580fa1
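The motivation for item 1 can be illustrated in plain Python (a hedged sketch: the finite `FLT_MAX`-style initializer below is a stand-in for the max/min_value init the PR removes, not nvfuser's actual code):

```python
import math
from functools import reduce

FLT_MAX = 3.4028235e38  # stand-in for a finite "lowest value" initializer

def amax_with_init(xs, init):
    # Fold a max reduction over xs starting from the given init value.
    return reduce(max, xs, init)

xs = [-math.inf, -math.inf, -math.inf]

# A finite initializer clamps the result and silently loses the -inf:
assert amax_with_init(xs, -FLT_MAX) == -FLT_MAX

# Seeding with -inf (the identity element for max) preserves it,
# matching eager-mode PyTorch semantics:
assert amax_with_init(xs, -math.inf) == -math.inf
```

The same reasoning applies symmetrically to amin, whose identity element is +inf.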