
pre_dispatch aot_export#115188

Closed
tugsbayasgalan wants to merge 19 commits into gh/tugsbayasgalan/175/base from gh/tugsbayasgalan/175/head

Conversation

@tugsbayasgalan
Contributor

@tugsbayasgalan tugsbayasgalan commented Dec 5, 2023

[ghstack-poisoned]
@pytorch-bot

pytorch-bot bot commented Dec 5, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/115188

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit d1f1c53 with merge base 36dccc2:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

# functorch transforms, since these transforms always run above __torch_dispatch__.
# That's why this util lives here, and not in functorch.
-def dispatch_functionalize(func):
+def dispatch_functionalize(func, mode=None):
Contributor Author


We should ban passing in mode as a follow-up. I didn't want to mess with auto_functionalize in this PR.

Collaborator

@bdhirsh bdhirsh Dec 18, 2023


tbh I don't think it would be a much larger PR if you use _detect_functional_mode() everywhere it's needed directly in this PR (and then you don't have to worry about the fact that you're changing the API for dispatch_functionalize only to have to unwind it later)

return None


def _detect_functional_mode():
Collaborator


nice - we could probably give proxy mode similar treatment


flattened_wrapped_args = pytree.arg_tree_leaves(*func_args)
flattened_wrapped_kwargs = pytree.arg_tree_leaves(**func_kwargs)
mode = mode or FunctionalTensorMode()
Collaborator


shouldn't this be using _detect_functional_mode()?

pytorchmergebot pushed a commit that referenced this pull request Dec 21, 2023
@jeanschmidt
Contributor

@pytorchbot revert -m "sadly, it is required to revert this commit in order to revert #115454" -c ghfirst

@pytorchmergebot
Collaborator

@pytorchbot successfully started a revert job. Check the current status here.
Questions? Feedback? Please reach out to the PyTorch DevX Team

@pytorchmergebot
Collaborator

@tugsbayasgalan your PR has been successfully reverted.

pytorchmergebot added a commit that referenced this pull request Dec 21, 2023
This reverts commit a267d67.

Reverted #115188 on behalf of https://github.com/jeanschmidt due to: sadly, it is required to revert this commit in order to revert #115454
dmenig pushed a commit to dmenig/pytorch that referenced this pull request Dec 21, 2023
@tugsbayasgalan
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@pytorchmergebot
Collaborator

Merge failed

Reason: Command git -C /home/runner/work/pytorch/pytorch cherry-pick -x 4eb220cdd5c139b8235ff1b3b2495d2845c470ba returned non-zero exit code 1

Auto-merging test/dynamo/test_functions.py
Auto-merging test/test_functionalization.py
Auto-merging test/test_proxy_tensor.py
Auto-merging torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py
CONFLICT (content): Merge conflict in torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py
Auto-merging torch/_functorch/_aot_autograd/traced_function_transforms.py
Auto-merging torch/_functorch/aot_autograd.py
CONFLICT (content): Merge conflict in torch/_functorch/aot_autograd.py
Auto-merging torch/_higher_order_ops/cond.py
Auto-merging torch/_utils.py
Auto-merging torch/fx/experimental/proxy_tensor.py
error: could not apply 4eb220cdd5c... [WIP] pre_dispatch aot_export
hint: After resolving the conflicts, mark them with
hint: "git add/rm <pathspec>", then run
hint: "git cherry-pick --continue".
hint: You can instead skip this commit with "git cherry-pick --skip".
hint: To abort and get back to the state before "git cherry-pick",
hint: run "git cherry-pick --abort".
Details for Dev Infra team: raised by workflow job.

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx chenyang78 aakhundov kadeng

[ghstack-poisoned]
@tugsbayasgalan
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

pytorchmergebot pushed a commit that referenced this pull request Dec 25, 2023
pytorchmergebot pushed a commit that referenced this pull request Dec 28, 2023
@facebook-github-bot facebook-github-bot deleted the gh/tugsbayasgalan/175/head branch December 28, 2023 15:22
