Disable TF32 in pinv_jvp and pinv_backward #67948

crcrpar wants to merge 1 commit into pytorch:master

Conversation
I'll merge this, but can we have a more systematic solution? A lot of linear algebra functions use matmuls in the backward. We currently disable TF32 in the cuBLAS handle that MAGMA uses, but I think not in any other circumstances for linalg functions, and we don't run Ampere in CI.
@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Do you have more context on when it is OK to use TF32 and when not? (e.g. a doc or slides)
In linear algebra, never. The only context where it's OK to use TF32 is in the linear layers of neural networks.
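To see why TF32 is unacceptable for linear algebra, it helps to look at the precision loss numerically. The sketch below (an illustration, not PyTorch code) simulates TF32 by truncating a float32 mantissa from 23 bits to TF32's 10 bits, which caps the relative precision at about 2^-11 ≈ 5e-4 — roughly three decimal digits, versus float32's seven. Errors of that size get amplified badly by ill-conditioned problems like `pinv`.

```python
import struct

def to_tf32(x: float) -> float:
    """Simulate TF32 rounding: float32's 8 exponent bits are kept,
    but only 10 of its 23 mantissa bits (truncated, for simplicity)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits &= 0xFFFFE000  # zero out the 13 low mantissa bits
    return struct.unpack("<f", struct.pack("<I", bits))[0]

x = 1.2345678
rel_err = abs(to_tf32(x) - x) / x  # on the order of 1e-4: ~3 decimal digits survive
```

That per-element error is tolerable inside a neural-network linear layer, but it dominates the ~1e-7 accuracy expected of float32 linear-algebra routines and their gradients.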
Should we change the default then to never use it except when used by `nn.functional.linear`?
There's #67384 discussing it, where using it only in nn.functional.linear is listed as option 3. However, it's way too confusing when …
Summary: Disable TF32 in some linalg functions. See also pytorch/pytorch#67948, #50453, pytorch/pytorch#44240

Pull Request resolved: pytorch/pytorch#73460
Reviewed By: albanD
Differential Revision: D34493487
Pulled By: ngimel
fbshipit-source-id: 958cd968ea09df3b5a4d2b4a26aaf0dfddc53981
(cherry picked from commit cd75ec645b86c4b4a66c35696ce891d006f3833b)
Summary: Fixes pytorch#67947

cc ptrblck xwang233 zasdfgbnm

Pull Request resolved: pytorch#67948
Reviewed By: H-Huang
Differential Revision: D32251934
Pulled By: ngimel
fbshipit-source-id: a2b1a118337b38db61350c9e49f1ba19030d70ec
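The fix disables TF32 locally around the matmuls in `pinv_jvp`/`pinv_backward` rather than flipping the global flag (in ATen this is done with an RAII-style guard). The pure-Python sketch below mimics that save-disable-restore pattern; the `matmul_config` class here is a hypothetical stand-in for a global backend flag, whereas in real PyTorch the user-facing switch is `torch.backends.cuda.matmul.allow_tf32`.

```python
from contextlib import contextmanager

class matmul_config:
    # Hypothetical stand-in for a global matmul backend flag; PyTorch's real
    # user-facing switch is torch.backends.cuda.matmul.allow_tf32.
    allow_tf32 = True

@contextmanager
def no_tf32(cfg=matmul_config):
    """Disable TF32 for the duration of the block, then restore the old value,
    even if the wrapped code raises."""
    prev = cfg.allow_tf32
    cfg.allow_tf32 = False
    try:
        yield
    finally:
        cfg.allow_tf32 = prev
```

A backward formula would wrap its matmul calls in `with no_tf32(): ...`, so callers who enabled TF32 globally for their model still get full float32 precision inside the linalg gradient, and their setting is restored afterwards.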
Fixes #67947
cc @ptrblck @xwang233 @zasdfgbnm