[Inductor-FX] Support Tensor.item #165599

Closed
blaine-rister wants to merge 4 commits into main from brister/item_fx

Conversation

@blaine-rister blaine-rister commented Oct 16, 2025

Feature

This PR supports compiling `Tensor.item` with Inductor's FX backend. This maps to a custom `WrapperCodeGen` method called `codegen_dynamic_scalar`.
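
As an illustration of what is being compiled (not code from this PR; the function and `backend="eager"` choice are assumptions for demonstration), a compiled function that calls `Tensor.item` looks like this:

```python
import torch

# Hypothetical usage sketch: compile a function whose output depends on
# Tensor.item(). With the default dynamo config, .item() causes a graph
# break but still yields the correct result; this PR's FX backend is what
# lets Inductor's wrapper codegen handle the dynamic scalar itself.
@torch.compile(backend="eager")
def scale_by_first(x):
    s = x[0].item()  # dynamic scalar extracted from a tensor
    return x * s

out = scale_by_first(torch.tensor([2.0, 3.0, 4.0]))
```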

Implementation

The implementation is fairly mechanical, following the usual flow for these types of PRs.

  1. Introduce a new Wrapper IR line for this, called `DynamicScalarLine`.
  2. Split `PythonWrapperCodegen.codegen_dynamic_scalar` into two parts: a public method which generates the Wrapper IR line, and a private one generating Python from Wrapper IR.
  3. Implement an FX codegen method for the wrapper IR line. This one calls `aten.where.Scalar` to handle code like `1 if x.item() else 0`, which is a bit tricky. It also calls `aten.item.default` to convert tensors to scalars.
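
For reference, the `aten.where.Scalar` trick from step 3 can be sketched in eager PyTorch (a minimal illustration of the idea, not the PR's actual codegen):

```python
import torch

# Minimal sketch of the bool-handling idea: map a boolean tensor to a
# 0/1 tensor via aten.where.Scalar, then extract the Python scalar with
# aten.item.default.
cond = torch.tensor(True)
as_int = torch.ops.aten.where.Scalar(cond, 1, 0)  # tensor(1)
scalar = torch.ops.aten.item.default(as_int)      # Python int 1
```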

Test plan

Added CI tests mirroring the AOTI ones. They test float, int, and bool types, with bool taking a distinct codegen path.
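
A test along these lines (names hypothetical, loosely mirroring the description above; `backend="eager"` stands in for the actual FX/Inductor backend) might check all three dtypes against eager execution:

```python
import torch

def roundtrip_item(x):
    # The item() result feeds back into tensor math, exercising the
    # dynamic-scalar path for each input dtype.
    return torch.ones(2) * x.sum().item()

compiled = torch.compile(roundtrip_item, backend="eager")
for dtype in (torch.float32, torch.int64, torch.bool):
    x = torch.ones(3, dtype=dtype)
    assert torch.equal(roundtrip_item(x), compiled(x))
```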

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben


pytorch-bot bot commented Oct 16, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/165599

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEVs

There is 1 currently active SEV. If your PR is affected, please view it below:

✅ You can merge normally! (1 Unrelated Failure)

As of commit 6c429e2 with merge base b4fd471 (image):

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@blaine-rister blaine-rister requested review from angelayi and jansel and removed request for jansel October 16, 2025 20:55
@blaine-rister blaine-rister marked this pull request as ready for review October 16, 2025 21:00
```python
    result_fx_node = generate_item(where_fx_node)
elif len(keypath) == 1 and isinstance(keypath[0], DivideByKey):
    result_fx_node = graph.call_function(
        operator.floordiv,
```

do you think you could add some tests for these paths? It's not really clear to me when we run into these cases 😅

@blaine-rister blaine-rister Oct 16, 2025

Sure. I think this DivideByKey path is only triggered by a pretty obscure optimization. Will add a CI test for it. The other paths are covered by the existing CI.

@blaine-rister blaine-rister Oct 21, 2025

I had difficulty constructing a test which would trigger this path, so I created #165954 to break the corresponding logic in the Python backend and see which tests failed. It seems that no tests exercise this path, so I decided to delete it from this PR.

@blaine-rister blaine-rister added the topic: not user facing topic category label Oct 16, 2025
@blaine-rister

@pytorchbot merge -i

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Oct 21, 2025
@pytorchmergebot

Merge started

Your change will be merged while ignoring the following 0 checks:

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here

Chao1Han pushed a commit to Chao1Han/pytorch that referenced this pull request Oct 21, 2025
Pull Request resolved: pytorch#165599
Approved by: https://github.com/angelayi, https://github.com/jansel
zhudada0120 pushed a commit to zhudada0120/pytorch that referenced this pull request Oct 22, 2025
@github-actions github-actions bot deleted the brister/item_fx branch November 21, 2025 02:15