
[MTIA] Allow users who know what they are doing to ignore all device mismatches in tracing and take a preferred device.#159931

Closed
patrick-toulme wants to merge 1 commit into pytorch:main from patrick-toulme:export-D79698438

Conversation

@patrick-toulme
Contributor

@patrick-toulme patrick-toulme commented Aug 6, 2025

Summary:
Device mismatches in tracing can most often be ignored: they are logical mismatches, not physical ones.

Take any intermediate computation: it will never actually materialize during execution of the compiled binary, so a device mismatch in the middle of the program is not real. The runtime never materializes those tensors on the CPU device during execution, because they are temporary allocations.

If users know that the tensors at graph input are all on the correct device, they can safely ignore these tracing errors. Users who know what they are doing should have an escape hatch to ignore any device mismatch in tracing.

Users can set

  torch._functorch.config.fake_tensor_prefer_device_type = 'mtia'

to forcibly override any mismatch and prefer the non-CPU device. This unblocks vLLM graph mode for MTIA.
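To illustrate what such a preference changes during fake-tensor tracing, here is a minimal, self-contained sketch of device propagation with a preferred device type. This is an analogue for illustration only, not PyTorch's actual implementation; the function name `resolve_device` and the string device representation are hypothetical:

```python
def resolve_device(devices, prefer_device_type=None):
    """Pick the output device for an op whose inputs report several devices.

    Illustrative analogue of FakeTensor device propagation with a
    preferred device type; not PyTorch's actual implementation.
    """
    # Only one device type among the inputs: no mismatch, propagate it.
    device_types = {d.split(":")[0] for d in devices}
    if len(device_types) == 1:
        return devices[0]
    # Mismatch: if the user declared a preferred device type, take the
    # first input device of that type instead of raising.
    if prefer_device_type is not None:
        for d in devices:
            if d.split(":")[0] == prefer_device_type:
                return d
    raise RuntimeError(
        f"Unhandled FakeTensor Device Propagation for devices {sorted(device_types)}"
    )

# Without a preference, a cpu/mtia mix raises; with one, tracing proceeds.
print(resolve_device(["mtia:0", "cpu"], prefer_device_type="mtia"))  # mtia:0
```

The point of the escape hatch is exactly the second branch: instead of failing the trace on a mixed-device intermediate, the tracer resolves it to the user's declared device and moves on.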

Test Plan:
Added two unit tests.

Rollback Plan:

Differential Revision: D79698438

@pytorch-bot

pytorch-bot bot commented Aug 6, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/159931

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit db686a4 with merge base 3a2c3c8:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

  • pull / linux-jammy-py3_9-clang9-xla / test (xla, 1, 1, lf.linux.12xlarge, unstable) (gh) (#158876)
    /var/lib/jenkins/workspace/xla/torch_xla/csrc/runtime/BUILD:476:14: Compiling torch_xla/csrc/runtime/xla_util_test.cpp failed: (Exit 1): gcc failed: error executing CppCompile command (from target //torch_xla/csrc/runtime:xla_util_test) /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... (remaining 229 arguments skipped)

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D79698438

@patrick-toulme
Contributor Author

patrick-toulme commented Aug 6, 2025

I have seen this FakeTensorDevicePropagation error in many issues. We should give users an escape hatch to get around an intermediate device mismatch when they are confident the mismatch is in intermediate (non-materialized) tensors and not in graph inputs.

#144748
#151670
#151296

I have also seen this issue many times internally.

patrick-toulme added a commit to patrick-toulme/pytorch that referenced this pull request Aug 6, 2025
…mismatches in tracing and take the non CPU device. (pytorch#159931)


patrick-toulme added a commit to patrick-toulme/pytorch that referenced this pull request Aug 6, 2025
…mismatches in tracing and take the non CPU device. (pytorch#159931)


@patrick-toulme
Contributor Author

Tests passed locally but failed on the PR. Will debug the tests and push again.

patrick-toulme added a commit to patrick-toulme/pytorch that referenced this pull request Aug 6, 2025
…mismatches in tracing and take the non CPU device. (pytorch#159931)


patrick-toulme added a commit to patrick-toulme/pytorch that referenced this pull request Aug 6, 2025
…mismatches in tracing and take the non CPU device. (pytorch#159931)


@patrick-toulme patrick-toulme requested review from aorenste, jansel, laithsakka and masnesral and removed request for aorenste and masnesral August 6, 2025 16:16
patrick-toulme added a commit to patrick-toulme/pytorch that referenced this pull request Aug 6, 2025
…mismatches in tracing and take the non CPU device. (pytorch#159931)

patrick-toulme added a commit to patrick-toulme/pytorch that referenced this pull request Aug 6, 2025
…mismatches in tracing and take the non CPU device. (pytorch#159931)

patrick-toulme added a commit to patrick-toulme/pytorch that referenced this pull request Aug 6, 2025
…mismatches in tracing and take the non CPU device. (pytorch#159931)

patrick-toulme added a commit to patrick-toulme/pytorch that referenced this pull request Aug 6, 2025
…mismatches in tracing and take the non CPU device. (pytorch#159931)


patrick-toulme added a commit to patrick-toulme/pytorch that referenced this pull request Aug 6, 2025
…mismatches in tracing and take the non CPU device. (pytorch#159931)


@patrick-toulme patrick-toulme requested a review from jansel August 6, 2025 22:54

@patrick-toulme
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge failed

Reason: This PR needs a release notes: label
If your changes are user facing and intended to be a part of release notes, please use a label starting with release notes:.

If not, please add the topic: not user facing label.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "topic: not user facing"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Details for Dev Infra team: raised by workflow job.

@patrick-toulme
Contributor Author

@pytorchbot label "topic: not user facing"

@pytorch-bot pytorch-bot bot added the topic: not user facing label Aug 7, 2025
@patrick-toulme
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

hinriksnaer pushed a commit to hinriksnaer/pytorch that referenced this pull request Aug 8, 2025
…mismatches in tracing and take a preferred device. (pytorch#159931)


Pull Request resolved: pytorch#159931
Approved by: https://github.com/jansel
markc-614 pushed a commit to markc-614/pytorch that referenced this pull request Sep 17, 2025
…mismatches in tracing and take a preferred device. (pytorch#159931)


Labels

ciflow/inductor, ciflow/trunk, fb-exported, Merged, topic: not user facing


4 participants