Add meta tensor support for _amp_foreach_non_finite_check_and_unscale_ and nan_to_num #94633
Conversation
@bdhirsh, even with these changes cherry-picked onto my pytorch/functionalization branch, I still see the same errors. Anything obvious I'm missing in these changes? Thanks a lot!
@wonjoolee95 it looks like it's because nan_to_num's last 3 arguments are all defaultable (you need to include the defaults in your decomp; our tests probably try to call it without passing them).
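To illustrate the point above, here is a hypothetical sketch (floating-point inputs only, not the actual decomp from this PR) of a nan_to_num decomposition that declares the same defaultable trailing arguments (nan, posinf, neginf) as the real op, since test harnesses may call it with all of them omitted:

```python
import torch
from torch import Tensor

def nan_to_num_decomp(self: Tensor, nan=None, posinf=None, neginf=None) -> Tensor:
    # Resolve defaults the way the eager kernel documents them:
    # nan -> 0.0, posinf -> dtype max, neginf -> dtype min.
    nan = 0.0 if nan is None else nan
    posinf = torch.finfo(self.dtype).max if posinf is None else posinf
    neginf = torch.finfo(self.dtype).min if neginf is None else neginf
    result = torch.where(torch.isnan(self), torch.full_like(self, nan), self)
    result = torch.where(result == float("inf"), torch.full_like(result, posinf), result)
    return torch.where(result == float("-inf"), torch.full_like(result, neginf), result)
```

If the decomp's signature omitted those defaults, any caller relying on the schema's default values would fail before the body even ran.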
The CIs are now looking a lot greener; the remaining failing tests fail with a seemingly unrelated error, so I'll give them a retry. However, I'm still seeing the same error as in #94633 (comment) even with this. Looking into it more.

Synced with Brian offline, putting some information here. I was able to verify that With that said, the
Oddly enough, I can actually see that these ops work as intended in a Python interpreter:
Closed with pytorch/xla#4687. |
Fixes #92916
Add meta tensor support for _amp_foreach_non_finite_check_and_unscale_ and nan_to_num

cc @alanwaketan