patching clamp for one sided clamp #75558
Conversation
💊 CI failures summary and remediations — As of commit 17c2c80 (more details on the Dr. CI page): 💚 Looks good so far! There are no failures yet. 💚
Cool, can you do more systematic testing for extreme values? E.g. min/max look like they should be codegening fmin/fmax and thus handle NaNs correctly, but without systematic testing it's hard to be sure. There might be other ops like relu/hardtanh/hardsigmoid and the like that might or might not do the correct thing for NaN/inf propagation, or produce 0s at the non-differentiable points.
I think our unary/binary tests do cover all these special numbers. pytorch/test/test_jit_cuda_fuser.py Lines 126 to 130 in 1af7105 pytorch/test/test_jit_cuda_fuser.py Lines 544 to 567 in 1af7105 pytorch/test/test_jit_cuda_fuser.py Lines 791 to 796 in 1af7105
Apparently, they weren't covering these special numbers, or these failures wouldn't have happened? Note that although this is arguably an error case, numpy, jax, and torch eager all have the same behavior in this case, with nvfuser being the outlier.
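As a standalone illustration of the eager semantics under discussion, numpy can be used as a stand-in for torch eager (which, per the comment above, agrees with numpy and jax here); this is a sketch of the expected special-value behavior, not the nvfuser code path:

```python
import numpy as np

# Special values fed through an eager-style clamp (numpy's clip):
x = np.array([np.nan, np.inf, -np.inf, 0.0])
clipped = np.clip(x, -1.0, 1.0)

# NaN propagates through the bounds rather than being swallowed,
# while infinities are clamped to the nearest bound.
print(clipped)  # → [nan  1. -1.  0.]
```

Any fuser codegen that lowers clamp to fmin/fmax-style intrinsics would need the same NaN-propagation behavior to match eager.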
As far as NVFuser is concerned, clamp is a unary op with kwargs; NVFuser accepts only the (Tensor, Scalar, Scalar) overload.
And yes, adding a test for this behavior (since it's documented) is necessary; I wasn't able to find an existing one. cc @mruberry
```cpp
bool has_high = value_map.count(node->inputs()[2]->unique()) != 0;
Val* high = has_high
    ? *value_map[node->inputs()[2]->unique()]
    : IrBuilder::create<Double>(std::numeric_limits<float>::max());
```
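For context on why the reviewer calls these assignments misleading: substituting `numeric_limits<float>::max()` for an unspecified bound is not a no-op, since any double value above float max would be wrongly clamped to it. A minimal numpy sketch of the difference (illustrative only; the actual lowering is the C++ above):

```python
import numpy as np

FLT_MAX = float(np.finfo(np.float32).max)  # numeric_limits<float>::max()
x = np.array([1e39], dtype=np.float64)     # exceeds float32 max

# Emulating a min-only clamp lowered with a fabricated high bound:
wrong = np.clip(x, 0.0, FLT_MAX)  # value is truncated to FLT_MAX
# A genuinely one-sided clamp leaves the value untouched:
right = np.clip(x, 0.0, None)     # → [1.e+39]

print(wrong, right)
```

This is why the fix avoids injecting a placeholder value for a non-specified bound and instead keeps the clamp one-sided.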
You still have these very misleading high and low assignments, you should be able to remove them?
@pytorchbot merge this
Hey @jjsjann123. |
Summary: Fixes #75088. The solution is just to avoid putting a random value in for a non-specified clamp bound, as pointed out in #75088 (comment).
Pull Request resolved: #75558
Approved by: https://github.com/ngimel
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/0203341bbde7cebdb9a04d9f021797b8bea7de2f
Reviewed By: mehtanirav
Differential Revision: D35582770
fbshipit-source-id: d7781249f568c8cf28ecbf4bbce3c4d3b0f947ce
Fixes pytorch#75558 (comment). Updated clamp logic to be consistent with aten. This avoids us producing a different result when clamp is given min/max arguments where min > max. We don't have an issue opened for this, and it is tricky to argue what the right behavior is, but better consistency with eager is always(?!) a good thing.
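The min > max corner case this commit refers to can be illustrated with numpy, whose clip applies the lower bound before the upper bound, matching aten's clamp in this case (a sketch of the eager behavior, not the nvfuser code):

```python
import numpy as np

x = np.array([0.0, 3.0, 10.0])

# With min=5 > max=1, the lower bound is applied first, then the upper
# bound, so the max bound wins and every element collapses to 1:
print(np.clip(x, 5.0, 1.0))  # → [1. 1. 1.]
```

Matching this apply-min-then-max order is what keeps the fused result consistent with eager even for this arguably-erroneous argument combination.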