
Update warning of TF32 #158209

Closed
yanbing-j wants to merge 2 commits into pytorch:main from yanbing-j:yanbing/update_tf32_warning

Conversation

@yanbing-j
Collaborator

Fixes #ISSUE_NUMBER

@pytorch-bot

pytorch-bot bot commented Jul 14, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/158209

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (2 Unrelated Failures)

As of commit 1291029 with merge base c8c221c (image):

BROKEN TRUNK - The following job failed but was present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

Contributor

@jansel jansel left a comment


If we are going to deprecate the old stuff, we should update the docs to not suggest using the old stuff. Things the docs tell you to do should not spam warnings at you.

A simpler fix would be to remove the warning.

Comment on lines +79 to +82
"Suggest to use a new setting of API control of a more fine-grained TF32 behavior, e.g, "
"torch.backends.cudnn.conv.fp32_precision = 'tf32' or torch.backends.cuda.matmul.fp32_precision = 'ieee'. "
"Old setting, e.g, torch.backends.cuda.matmul.allow_tf32 = True, torch.backends.cudnn.allow_tf32 = True, "
"allowTF32CuDNN() and allowTF32CuBLAS() are still supported, and is going to be deprecated. Please see "
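For reference, the distinction under review is between the old global booleans (`allow_tf32 = True`) and the newer per-backend string settings (`fp32_precision = 'tf32'` or `'ieee'`). The following is a hypothetical pure-Python shim sketching the shape of such a fine-grained setting; it is not PyTorch's implementation, and the `BackendConfig` name is invented for illustration:

```python
# Hypothetical shim illustrating a fine-grained fp32_precision setting,
# in the spirit of torch.backends.cudnn.conv.fp32_precision.
# This is NOT PyTorch internals.

_ALLOWED = {"ieee", "tf32"}

class BackendConfig:
    def __init__(self):
        self._fp32_precision = "ieee"  # full IEEE fp32 by default

    @property
    def fp32_precision(self):
        return self._fp32_precision

    @fp32_precision.setter
    def fp32_precision(self, value):
        # Reject anything outside the known precision modes.
        if value not in _ALLOWED:
            raise ValueError(
                f"fp32_precision must be one of {_ALLOWED}, got {value!r}")
        self._fp32_precision = value

conv = BackendConfig()
conv.fp32_precision = "tf32"  # fine-grained: only this backend uses TF32
print(conv.fp32_precision)    # -> tf32
```

The point of the per-backend setter is that TF32 can be enabled for, say, convolutions while matmuls stay at IEEE precision, which the old process-wide booleans could not express per operation family.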
Contributor

Perhaps:

Please use the new API settings to control TF32 behavior, such as ...

---------------------------------------------------

- Starting in PyTorch 2.4, there is a set of APIs to control the internal computation precision
+ Starting in PyTorch 2.9, there is a set of APIs to control the internal computation precision
Contributor

Was 2.4 wrong?

Collaborator Author

No, #125888 was first drafted in the PyTorch 2.4 window. Before that PR, we could not control TF32 for the mkldnn backend. I updated this to 2.9 because the PR was merged in PyTorch 2.9.

Contributor

If the APIs existed in 2.4, this should say 2.4.

Collaborator Author

No, the APIs were finally upstreamed in 2.9; 2.4 didn't have them. #125888 was first drafted in the PyTorch 2.4 window, but it was upstreamed in 2.9. The mkldnn docs need to be updated to 2.9.

@yanbing-j yanbing-j force-pushed the yanbing/update_tf32_warning branch from db58c80 to a2469b6 Compare July 14, 2025 06:48
@yanbing-j yanbing-j added the ciflow/trunk Trigger trunk jobs on your pull request label Jul 14, 2025
C10_ALWAYS_INLINE void warn_deprecated_fp32_precision_api(){
TORCH_WARN_ONCE(
"This API is going to be deprecated, please see "
"Please use the new API settings to control TF32 behavior, such as torch.backends.cudnn.conv.fp32_precision = tf32 "
Collaborator

I don't recall if we have a way, from here, to set the warning stacklevel argument.
Setting that value properly will make it point to the user code setting the value and will be more obvious on what they did wrong.
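The behavior being asked for here is what Python's `warnings.warn` does with its `stacklevel` argument: the warning is attributed to the caller's line rather than the helper that emits it. A small stdlib-only illustration (whether the C++ `TORCH_WARN` machinery exposes an equivalent knob is exactly the open question above):

```python
import warnings

def set_allow_tf32(value):
    # stacklevel=2 attributes the warning to the code that called
    # set_allow_tf32, not to this helper function itself.
    warnings.warn(
        "allow_tf32 is deprecated; use fp32_precision instead",
        DeprecationWarning,
        stacklevel=2,
    )

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    set_allow_tf32(True)  # the reported location is THIS line

print(caught[0].lineno)  # points at the call site, not inside the helper
```

With the default `stacklevel=1`, the traceback location would land inside `set_allow_tf32`, which tells the user nothing about which line of their own code triggered the deprecated path.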

C10_ALWAYS_INLINE void warn_deprecated_fp32_precision_api(){
TORCH_WARN_ONCE(
"This API is going to be deprecated, please see "
"Please use the new API settings to control TF32 behavior, such as torch.backends.cudnn.conv.fp32_precision = tf32 "
Contributor

Suggested change
"Please use the new API settings to control TF32 behavior, such as torch.backends.cudnn.conv.fp32_precision = tf32 "
"Please use the new API settings to control TF32 behavior, such as torch.backends.cudnn.conv.fp32_precision = 'tf32'"

TORCH_WARN_ONCE(
"This API is going to be deprecated, please see "
"Please use the new API settings to control TF32 behavior, such as torch.backends.cudnn.conv.fp32_precision = tf32 "
"or torch.backends.cuda.matmul.fp32_precision = ieee. Old settings, e.g, torch.backends.cuda.matmul.allow_tf32 = True, "
Contributor

Suggested change
"or torch.backends.cuda.matmul.fp32_precision = ieee. Old settings, e.g, torch.backends.cuda.matmul.allow_tf32 = True, "
"or torch.backends.cuda.matmul.fp32_precision = 'ieee'. Old settings, e.g, torch.backends.cuda.matmul.allow_tf32 = True, "
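The snippet under review uses `TORCH_WARN_ONCE`, which emits the message at most once per process rather than on every call. A rough Python analogue of that warn-once behavior (assumed semantics for illustration; this is not the C10 implementation):

```python
import warnings

_warned_messages = set()

def warn_once(message):
    # Emit each distinct message at most once per process, similar in
    # spirit to C10's TORCH_WARN_ONCE macro.
    if message not in _warned_messages:
        _warned_messages.add(message)
        warnings.warn(message, UserWarning, stacklevel=2)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    for _ in range(3):
        warn_once("This API is going to be deprecated")

print(len(caught))  # -> 1: only the first call actually warns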

@yanbing-j yanbing-j added the topic: not user facing topic category label Jul 15, 2025
@yanbing-j yanbing-j marked this pull request as ready for review July 16, 2025 01:21
@yanbing-j
Collaborator Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status
here

masnesral added a commit that referenced this pull request Jul 30, 2025
Summary: According to #158209, the API is deprecated and we should be using torch.backends.cuda.matmul.fp32_precision instead.

Fixes #159440

Test Plan: CI

[ghstack-poisoned]
masnesral added a commit that referenced this pull request Jul 30, 2025
Summary: According to #158209, the API is deprecated and we should be using torch.backends.cuda.matmul.fp32_precision instead.

Fixes #159440

Test Plan: CI

ghstack-source-id: a22b482
Pull Request resolved: #159480
pytorchmergebot pushed a commit that referenced this pull request Jul 30, 2025
…#159480)

Summary: According to #158209, the API is deprecated and we should be using torch.backends.cuda.matmul.fp32_precision instead.

Fixes #159440

Test Plan: CI

Pull Request resolved: #159480
Approved by: https://github.com/xmfan, https://github.com/oulgen
yangw-dev pushed a commit that referenced this pull request Aug 1, 2025
…#159480)

Summary: According to #158209, the API is deprecated and we should be using torch.backends.cuda.matmul.fp32_precision instead.

Fixes #159440

Test Plan: CI

Pull Request resolved: #159480
Approved by: https://github.com/xmfan, https://github.com/oulgen

Labels

ciflow/trunk Trigger trunk jobs on your pull request Merged open source topic: not user facing topic category
