Pipeline _create_aot_dispatcher_function #158173

Closed

ezyang wants to merge 2 commits into gh/ezyang/3099/base from gh/ezyang/3099/head

Conversation

@ezyang
Contributor

@ezyang ezyang commented Jul 12, 2025

Stack from ghstack (oldest at bottom):

Two main things of note:

  • Review this diff without whitespace changes
  • To ensure that context managers correctly propagate to later pipeline
    stages, I am using the ExitStack trick: there is an ExitStack which is
    in scope for the entire pipeline, and inside of the individual
    pipeline stages we push context managers onto this stack when we want
    them to survive into the next pipeline stage. It is not obvious that
    this is the best final form of the code, but
    create_aot_dispatcher_function is called from multiple locations, so I
    can't just inline the context managers into the call site (see the
    sketch below).
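
For concreteness, here is a minimal, self-contained sketch of the ExitStack trick; the stage names and the tracing_context manager are illustrative stand-ins, not the actual AOTAutograd internals:

```python
# Sketch of the ExitStack trick: one stack scoped to the whole pipeline, onto
# which individual stages push context managers that must stay alive for the
# later stages. All names below are hypothetical stand-ins.
import contextlib


@contextlib.contextmanager
def tracing_context(name):
    print(f"enter {name}")
    try:
        yield
    finally:
        print(f"exit {name}")


def stage1_graph_capture(stack):
    # Entering via stack.enter_context means this context manager does NOT
    # exit when stage 1 returns; it stays active until the stack unwinds.
    stack.enter_context(tracing_context("fake_tensor_mode"))
    return "captured_graph"


def stage2_compile(graph):
    # Still inside "fake_tensor_mode" here, even though stage 1 has returned.
    return f"compiled({graph})"


def run_pipeline():
    with contextlib.ExitStack() as stack:
        graph = stage1_graph_capture(stack)
        result = stage2_compile(graph)
    # Every context manager pushed onto `stack` has exited by this point.
    return result


print(run_pipeline())  # "enter" prints before stage 2 runs; "exit" only after both stages finish
```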

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

[ghstack-poisoned]
@pytorch-bot

pytorch-bot bot commented Jul 12, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/158173

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 1 Unrelated Failure

As of commit 480f506 with merge base 4b9a6f7:

NEW FAILURE - The following job has failed:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

Contributor

@jamesjwu jamesjwu left a comment


Seems fine; can you run nanogpt or some other benchmark and confirm that chromium events, etc. are still created, as a test that this exit stack trick didn't miss anything?

@ezyang
Contributor Author

ezyang commented Jul 14, 2025

rm -rf /tmp/a && TORCH_TRACE=/tmp/a pytest test/dynamo/test_aot_autograd.py -k test_arg_dupe_via_dynamo_recompiles_many_args_param_non_tensor_arg

produces https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmp5CY6TA/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000


@pytorchmergebot
Collaborator

Starting merge as part of PR stack under #158176

)
return compiled_fn, fw_metadata
return (
    compiler_fn,
Contributor

ok, compiler_fn used to run under that `with` block, and now it runs in the parent function but under `stack`, which is the same

[ghstack-poisoned]
@pytorchmergebot
Collaborator

Starting merge as part of PR stack under #158176

@pytorchmergebot
Collaborator

Starting merge as part of PR stack under #158213

@pytorchmergebot
Collaborator

Starting merge as part of PR stack under #158251

@pytorchmergebot
Collaborator

Starting merge as part of PR stack under #158319

@pytorchmergebot
Collaborator

Starting merge as part of PR stack under #158176

2 similar comments

pytorchmergebot pushed a commit that referenced this pull request Jul 16, 2025
…8176)

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: #158176
Approved by: https://github.com/jamesjwu
ghstack dependencies: #158149, #158150, #158173
pytorchmergebot pushed a commit that referenced this pull request Jul 16, 2025
The starting point for this refactor is that I need access to the fully general joint graph representation in an export-like interface, but I then subsequently need a way to feed this joint graph into the rest of the compilation pipeline so I can get an actual callable that I can run once I've finished modifying it. Previously, people had added export capabilities to AOTAutograd by having an export flag that toggled what exactly the functions return and triggered aot_dispatch to go to a different "export" implementation, but I've found this difficult to understand, and it has led to a bit of duplicate code for the export path.

So the idea here is to reorganize the structure of the function calls in AOTAutograd. Here, it is helpful to first describe how things used to work:

* Start with aot_autograd.py top level functions like aot_function, _aot_export_function and aot_module_simplified. These call:
  * create_aot_dispatcher_function. This does a bunch of stuff (forward metadata collection) and adds many context managers. This calls:
    * One of aot_dispatch_base, aot_dispatch_export or aot_dispatch_autograd, which:
      * Call aot_dispatch_autograd_graph or aot_dispatch_base_graph to actually do the graph capture
      * Do some base/export/autograd specific post-processing on the graph

Notice that this pattern of nested function invocations means there is no way to easily get the graph capture result in the autograd case; furthermore, the export path is "bolted on", forcing the entire chain of functions to have a different return result than normal, with no way to *resume* the rest of the post-processing to actually get a callable.

Here is the new structure:

* Start with aot_autograd.py top level functions like aot_function, _aot_export_function and aot_module_simplified. These now orchestrate this top level flow:
  * Start a context manager (stack); this stateful context block takes care of all of the nested context managers which originally necessitated the nested call structure
  * Call create_aot_state to do initial setup and set up all the context managers on stack. These context managers do NOT exit when this call returns.
  * Call aot_stage1_graph_capture to do the graph capture
  * Call aot_stage2_compile or aot_stage2_export depending on what post-processing you want

With this new structure, it's now possible (although not done in this PR) to return the graph after aot_stage1_graph_capture and do something with it, before running aot_stage2_compile to finish the job.
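
As a rough illustration of that shape, here is a minimal, self-contained sketch of the new flow; the stage names come from the description above, but the bodies and argument lists are stand-in stubs, not the real signatures in torch/_functorch:

```python
# Rough shape of the new top-level flow: a pipeline-scoped ExitStack, staged
# functions, and a point where the captured graph is a first-class value.
# All bodies and parameters here are illustrative stubs.
from contextlib import ExitStack, nullcontext


def create_aot_state(stack, fn, args):
    # Initial setup; context managers that must outlive this call are pushed
    # onto the pipeline-scoped `stack` rather than exited here.
    stack.enter_context(nullcontext())  # stand-in for the real modes
    return {"fn": fn, "args": args}


def aot_stage1_graph_capture(aot_state):
    # Stand-in for tracing; the real code captures the (joint) graph here.
    return ("graph", aot_state)


def aot_stage2_compile(captured):
    graph, _state = captured
    return lambda *call_args: ("ran", graph, call_args)


def aot_function_like(fn, args):
    with ExitStack() as stack:
        aot_state = create_aot_state(stack, fn, args)
        captured = aot_stage1_graph_capture(aot_state)
        # A caller could stop here, inspect or transform `captured`, and only
        # then run aot_stage2_compile (or aot_stage2_export) to finish the job.
        compiled_fn = aot_stage2_compile(captured)
    return compiled_fn
```

The point of the sketch is only that the captured graph becomes a first-class intermediate between stage 1 and stage 2, rather than being buried inside nested calls.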

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: #158213
Approved by: https://github.com/jamesjwu
ghstack dependencies: #158149, #158150, #158173, #158176
pytorchmergebot pushed a commit that referenced this pull request Jul 16, 2025
…nd functions to frontend_utils (#158251)

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: #158251
Approved by: https://github.com/jamesjwu
ghstack dependencies: #158149, #158150, #158173, #158176, #158213
pytorchmergebot pushed a commit that referenced this pull request Jul 16, 2025
Also a small amount of extra code cleanup.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: #158319
Approved by: https://github.com/jingsh
ghstack dependencies: #158149, #158150, #158173, #158176, #158213, #158251
@github-actions github-actions bot deleted the gh/ezyang/3099/head branch August 16, 2025 02:18
Khanaksahu pushed a commit to Khanaksahu/pytorch that referenced this pull request Nov 17, 2025
Two main things of note:

- Review this diff without whitespace changes
- To ensure that context managers correctly propagate to later pipeline
  stages, I am using the ExitStack trick: there is an ExitStack which is
  in scope for the entire pipeline, and inside of the individual
  pipeline stages we push context managers onto this stack when we want
  them to survive into the next pipeline stage.  This is not obviously
  what the best final form of the code is, but
  create_aot_dispatcher_function is called from multiple locations so I
  can't just inline the context managers into the call site.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
ghstack-source-id: 58e968d
Pull-Request: pytorch/pytorch#158173