Conversation
🔗 Helpful Links: 🧪 see artifacts and rendered test results at hud.pytorch.org/pr/158209
Note: links to docs will display an error until the docs builds have completed.
✅ You can merge normally! (2 unrelated failures) As of commit 1291029 with merge base c8c221c.
BROKEN TRUNK - the following job failed but was also present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
UNSTABLE - the following job is marked as unstable, possibly due to flakiness on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
aten/src/ATen/Context.cpp (outdated):

    "Suggest to use a new setting of API control of a more fine-grained TF32 behavior, e.g, "
    "torch.backends.cudnn.conv.fp32_precision = 'tf32' or torch.backends.cuda.matmul.fp32_precision = 'ieee'. "
    "Old setting, e.g, torch.backends.cuda.matmul.allow_tf32 = True, torch.backends.cudnn.allow_tf32 = True, "
    "allowTF32CuDNN() and allowTF32CuBLAS() are still supported, and is going to be deprecated. Please see "
Perhaps:
Please use the new API settings to control TF32 behavior, such as ...
    - Starting in PyTorch 2.4, there is a set of APIs to control the internal computation precision
    + Starting in PyTorch 2.9, there is a set of APIs to control the internal computation precision
No, #125888 was first drafted in the PyTorch 2.4 window. Before that PR, we could not control TF32 for the mkldnn backend. I updated this to 2.9 because the PR was merged in PyTorch 2.9.

If the APIs existed in 2.4, this should say 2.4.

No, the APIs were finally upstreamed in 2.9; 2.4 did not have them. #125888 was first drafted in the PyTorch 2.4 window, but it was upstreamed in 2.9. The mkldnn docs need to be updated to 2.9.
Force-pushed db58c80 to a2469b6.
aten/src/ATen/Context.cpp (outdated):

    C10_ALWAYS_INLINE void warn_deprecated_fp32_precision_api(){
      TORCH_WARN_ONCE(
        "This API is going to be deprecated, please see "
        "Please use the new API settings to control TF32 behavior, such as torch.backends.cudnn.conv.fp32_precision = tf32 "
I don't recall if we have a way, from here, to set the warning stacklevel argument.
Setting that value properly will make the warning point to the user code that set the value, which makes it much more obvious what they did wrong.
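For context, this is the standard pattern the reviewer is describing: in Python, the `stacklevel` argument to `warnings.warn` controls which frame the warning is attributed to, so the message points at the user's call site rather than the library internals. A minimal stdlib sketch (the setter name `set_allow_tf32` is hypothetical, not PyTorch's actual C++ warning machinery):

```python
import warnings

def set_allow_tf32(value: bool) -> None:
    """Hypothetical setter standing in for the deprecated TF32 flag."""
    # stacklevel=2 attributes the warning to the caller of this function,
    # i.e. the user code flipping the deprecated flag, rather than to
    # this line inside the library.
    warnings.warn(
        "allow_tf32 is deprecated; use fp32_precision instead",
        DeprecationWarning,
        stacklevel=2,
    )

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    set_allow_tf32(True)  # the recorded warning points at this line

# caught[0].filename / .lineno now reference the caller's frame,
# not the body of set_allow_tf32
print(caught[0].category.__name__)  # DeprecationWarning
```

With `stacklevel=1` (the default), the reported location would instead be the `warnings.warn` call inside the library, which is far less actionable for the user.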
aten/src/ATen/Context.cpp (outdated):

    C10_ALWAYS_INLINE void warn_deprecated_fp32_precision_api(){
      TORCH_WARN_ONCE(
        "This API is going to be deprecated, please see "
        "Please use the new API settings to control TF32 behavior, such as torch.backends.cudnn.conv.fp32_precision = tf32 "
Suggested change:

    - "Please use the new API settings to control TF32 behavior, such as torch.backends.cudnn.conv.fp32_precision = tf32 "
    + "Please use the new API settings to control TF32 behavior, such as torch.backends.cudnn.conv.fp32_precision = 'tf32' "
aten/src/ATen/Context.cpp (outdated):

      TORCH_WARN_ONCE(
        "This API is going to be deprecated, please see "
        "Please use the new API settings to control TF32 behavior, such as torch.backends.cudnn.conv.fp32_precision = tf32 "
        "or torch.backends.cuda.matmul.fp32_precision = ieee. Old settings, e.g, torch.backends.cuda.matmul.allow_tf32 = True, "
Suggested change:

    - "or torch.backends.cuda.matmul.fp32_precision = ieee. Old settings, e.g, torch.backends.cuda.matmul.allow_tf32 = True, "
    + "or torch.backends.cuda.matmul.fp32_precision = 'ieee'. Old settings, e.g, torch.backends.cuda.matmul.allow_tf32 = True, "
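The suggestions above all revolve around the same old-to-new correspondence. As an illustrative summary, the mapping from the deprecated boolean flags to the new string-valued `fp32_precision` settings quoted in this thread can be written as a small table; the attribute paths are taken from the warning text, and the helper function is hypothetical, existing only to demonstrate the correspondence (it does not touch torch):

```python
# Deprecated boolean TF32 flag -> new fine-grained fp32_precision setting,
# as quoted in the review thread. Illustrative only, not exhaustive.
OLD_TO_NEW = {
    "torch.backends.cuda.matmul.allow_tf32":
        "torch.backends.cuda.matmul.fp32_precision = 'tf32'",
    "torch.backends.cudnn.allow_tf32":
        "torch.backends.cudnn.conv.fp32_precision = 'tf32'",
}

def migration_hint(old_flag: str) -> str:
    """Return the suggested replacement for a deprecated TF32 flag."""
    return OLD_TO_NEW.get(old_flag, "no direct replacement known")

print(migration_hint("torch.backends.cudnn.allow_tf32"))
# torch.backends.cudnn.conv.fp32_precision = 'tf32'
```

Note the asymmetry the new API introduces: the old flags were per-backend booleans, while the new settings are per-operator strings (`'tf32'` to opt in, `'ieee'` to force strict FP32), which is what the thread means by "more fine-grained".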
Force-pushed 1480de7 to 1291029.
@pytorchbot merge

Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
…#159480) Summary: According to #158209, the API is deprecated and we should be using torch.backends.cuda.matmul.fp32_precision instead. Fixes #159440 Test Plan: CI Pull Request resolved: #159480 Approved by: https://github.com/xmfan, https://github.com/oulgen