Fix codegen subclass wrapper dropping trailing args (e.g. rng state) #177202
aorenste wants to merge 1 commit into gh/aorenste/214/base
Conversation
The codegen'd subclass wrapper only iterated over inp_metas when building unwrapped_args, but FunctionalizedRngRuntimeWrapper appends rng seed/offset to args after inp_metas. The old data-driven loop in runtime_unwrap_tensor_subclasses passed these through because it iterated over all args. Add an extend() call to forward any trailing args not covered by inp_metas. Authored with Claude. [ghstack-poisoned]
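As a rough sketch of the runtime behavior the fix restores (names like `unwrap_one` and the dict-based "subclass" stand-in are illustrative, not PyTorch's actual API — the real wrapper is codegen'd):

```python
def unwrap_one(arg, meta):
    # Stand-in for subclass unwrapping: extract the inner value when
    # the input metadata marks the arg as a subclass.
    return arg["inner"] if meta == "subclass" else arg

def unwrap_args(args, inp_metas):
    # The codegen'd wrapper only walked inp_metas, so anything appended
    # to args after them (e.g. rng seed/offset added by
    # FunctionalizedRngRuntimeWrapper) was silently dropped.
    unwrapped = [unwrap_one(args[i], meta) for i, meta in enumerate(inp_metas)]
    # The fix: forward any trailing args not covered by inp_metas.
    unwrapped.extend(args[len(inp_metas):])
    return unwrapped

# Two "real" inputs plus rng seed/offset appended after inp_metas:
args = [{"inner": "tensor_a"}, "tensor_b", "rng_seed", "rng_offset"]
print(unwrap_args(args, ["subclass", "plain"]))
```

This mirrors what the old data-driven loop in `runtime_unwrap_tensor_subclasses` got for free by iterating over all of `args`.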
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/177202
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (1 Unrelated Failure)
As of commit dff529c with merge base 3d71ad9: the following job is marked as unstable, possibly due to flakiness on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
ghstack-source-id: 5e2d54b
Pull Request resolved: #177202
)
_codegen_unwrap_subclass(state, meta, inp_var, indent=1)

# Pass through any trailing args not covered by inp_metas
What if there are no trailing args here, i.e. len(inp_metas) == len(args)? Should we gate this emit behind that?
Actually - looking closer... it looks like we're supposed to pass along extra runtime parameters - so we don't know at codegen time how many extra there will be.
In that case I think this PR is good then - just wanted to make sure we did the diligence to think of this case :)
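To double-check the question above: emitting the extend() unconditionally should be safe even when len(inp_metas) == len(args), because the trailing slice is then empty and extend() is a no-op. A minimal check (illustrative values only):

```python
args = ["a", "b"]
inp_metas = ["m0", "m1"]  # covers every arg; no trailing rng state

# Slicing past the covered prefix yields [] when nothing trails,
# so extend() adds nothing in that case.
trailing = args[len(inp_metas):]
print(trailing)

result = list(args[:len(inp_metas)])
result.extend(trailing)
assert result == args
```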
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Pull Request resolved: pytorch#177202
Approved by: https://github.com/Lucaskabela