Conversation
```python
    result_fx_node = generate_item(where_fx_node)
elif len(keypath) == 1 and isinstance(keypath[0], DivideByKey):
    result_fx_node = graph.call_function(
        operator.floordiv,
```
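For context on the node being generated above: `operator.floordiv` is the standard-library flooring division used as the FX call target. A minimal stdlib sketch of the scalar behavior the generated node computes (the example values are illustrative, not from the PR):

```python
import operator

# operator.floordiv(a, b) is equivalent to a // b: division rounded
# toward negative infinity, not truncated toward zero.
print(operator.floordiv(7, 2))   # 3
print(operator.floordiv(-7, 2))  # -4 (floored, not truncated to -3)
```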
do you think you could add some tests for these paths? It's not really clear to me when we run into these cases 😅
Sure. I think this DivideByKey path is only triggered by a pretty obscure optimization. Will add a CI test for it. The other paths are covered by the existing CI.
I had difficulty constructing a test which would trigger this path, so I created #165954 to break the corresponding logic in the Python backend and see which tests failed. It seems that no tests exercise this path, so I decided to delete it from this PR.
@pytorchbot merge -i
Merge started. Your change will be merged while ignoring the following 0 checks. Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
# Feature

This PR supports compiling `Tensor.item` with Inductor's FX backend. This maps to a custom WrapperCodeGen method called `codegen_dynamic_scalar`.

# Implementation

The implementation is fairly mechanical, following the usual flow for these types of PRs.

1. Introduce a new Wrapper IR line for this, called `DynamicScalarLine`.
2. Split `PythonWrapperCodegen.codegen_dynamic_scalar` into 2 parts: a public method which generates the Wrapper IR line, and a private one generating Python from Wrapper IR.
3. Implement an FX codegen method for the Wrapper IR line. This one calls `aten.where.Scalar` to handle code like `1 if x.item() else 0`, which is a bit tricky. It also calls `aten.item.default` to convert tensors to scalars.

# Test plan

Added CI tests mirroring the AOTI ones. They test float, int and bool types, the latter taking a distinct codegen path.

Pull Request resolved: pytorch#165599
Approved by: https://github.com/angelayi, https://github.com/jansel
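The bool case in step 3 hinges on the fact that `1 if x.item() else 0` can be expressed as a branch-free scalar select, which is what `aten.where.Scalar` provides. A plain-Python sketch of that rewrite (the `where_scalar` helper here is hypothetical, named only for illustration; it is not the PR's actual code):

```python
def where_scalar(pred, true_val, false_val):
    """Hypothetical plain-Python analogue of the aten.where.Scalar
    select: pick true_val when pred is truthy, else false_val,
    without a Python `if` at the selection site."""
    return (false_val, true_val)[bool(pred)]

# The codegen effectively rewrites `1 if x.item() else 0`
# into a select of this shape:
for pred in (True, False):
    assert where_scalar(pred, 1, 0) == (1 if pred else 0)
```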
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben