
[ONNX] Add preprocess pass for onnx export #41832

Closed
BowenBao wants to merge 5 commits into pytorch:master from BowenBao:onnx_pre_pass

Conversation

@BowenBao
Collaborator

In `_jit_pass_onnx`, symbolic functions are called on each node to convert it. However, some nodes cannot be converted without additional context. For example, the number of outputs from `split` (and whether it is static or dynamic) is unknown until the point where it is unpacked by a `prim::ListUnpack` node. This pass preprocesses the graph, preparing the nodes so that the symbolic functions receive enough context.

  • After preprocessing, `_jit_pass_onnx` should have enough context to produce valid ONNX nodes, instead of half-baked nodes that rely on fixes from later post-passes.
  • `_jit_pass_onnx_peephole` should be a pass that performs ONNX-specific optimizations, not ONNX-specific fixes.
  • Producing more valid ONNX nodes in `_jit_pass_onnx` enables better utilization of ONNX shape inference ([ONNX] Utilize ONNX shape inference for ONNX exporter #40628).
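The split/ListUnpack situation described above can be illustrated with a short sketch (`SplitModel` is a hypothetical example module, not code from this PR): scripting a module that unpacks the result of `torch.split` yields a TorchScript graph in which the number of split outputs only becomes concrete at the `prim::ListUnpack` node that consumes the split's list output.

```python
import torch

class SplitModel(torch.nn.Module):
    def forward(self, x):
        # torch.split returns a variable number of chunks. In the TorchScript
        # IR this becomes a split node whose list output is consumed by a
        # prim::ListUnpack node; the output count (3 here) is only visible
        # at the ListUnpack site, which is why the export needs a preprocess
        # step before calling the symbolic function for split.
        a, b, c = torch.split(x, 2, dim=0)
        return a + b + c

scripted = torch.jit.script(SplitModel())
print(scripted.graph)  # the printed IR contains a prim::ListUnpack node
```

Exporting such a module with `torch.onnx.export` exercises the conversion passes discussed in this PR.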

@BowenBao BowenBao added the module: onnx Related to torch.onnx label Jul 22, 2020
@BowenBao BowenBao requested a review from apaszke as a code owner July 22, 2020 01:42
@facebook-github-bot facebook-github-bot added the oncall: jit Add this issue/PR to JIT oncall triage queue label Jul 22, 2020
@dr-ci

dr-ci Bot commented Jul 22, 2020

💊 CI failures summary and remediations

As of commit c0a532f (more details on the Dr. CI page):


None of the CI failures appear to be your fault 💚



🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch:

Since your merge base is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

Check out the recency history of this "viable master" tracking branch.


This comment was automatically generated by Dr. CI and has been revised 24 times.

@mrshenli mrshenli added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label Jul 23, 2020
Comment thread torch/csrc/jit/python/init.cpp Outdated
Comment thread torch/csrc/jit/passes/onnx/preprocess.cpp Outdated
Comment thread torch/csrc/jit/passes/onnx/preprocess.cpp Outdated
Contributor

@neginraoof neginraoof left a comment


LGTM. Thanks!

Contributor

@facebook-github-bot facebook-github-bot left a comment


@bzinodev has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


@facebook-github-bot
Contributor

@bzinodev merged this pull request in a6c8730.

laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary:
In `_jit_pass_onnx`, symbolic functions are called on each node to convert it. However, some nodes cannot be converted without additional context. For example, the number of outputs from `split` (and whether it is static or dynamic) is unknown until the point where it is unpacked by a `prim::ListUnpack` node. This pass preprocesses the graph, preparing the nodes so that the symbolic functions receive enough context.
* After preprocessing, `_jit_pass_onnx` should have enough context to produce valid ONNX nodes, instead of half-baked nodes that rely on fixes from later post-passes.
* `_jit_pass_onnx_peephole` should be a pass that does ONNX specific optimizations instead of ONNX specific fixes.
* Producing more valid ONNX nodes in `_jit_pass_onnx` enables better utilization of the ONNX shape inference pytorch#40628.

Pull Request resolved: pytorch#41832

Reviewed By: ZolotukhinM

Differential Revision: D22968334

Pulled By: bzinodev

fbshipit-source-id: 8226f03c5b29968e8197d242ca8e620c6e1d42a5
