
[ONNX] Utilize ONNX shape inference for ONNX exporter #40628

Closed
BowenBao wants to merge 5 commits into pytorch:master from BowenBao:onnx_shape_inf

Conversation

@BowenBao
Collaborator

The conversion from a torch operator to an ONNX operator often requires the input rank/dtype/shape to be known. Previously, the conversion depended on the tracer to provide this information, leaving a gap in the conversion of scripted modules.

We are extending the exporter with support for ONNX shape inference. When enabled, ONNX shape inference is called whenever an ONNX node is created. This is the first PR, introducing the initial shape of the feature; more cases will be supported in follow-up PRs.

  • Added a pass that runs ONNX shape inference on a given node. The node must be in the onnx namespace.
  • Moved helper functions from export.cpp to a common location for re-use.
  • This feature is currently experimental, and can be turned on through the flag onnx_shape_inference in the internal api torch.onnx._export.
  • Currently skipping ONNX Sequence ops, If/Loop, and ConstantOfShape due to limitations; support will be added in the future.

@BowenBao BowenBao requested a review from apaszke as a code owner June 26, 2020 17:29
@dr-ci

dr-ci Bot commented Jun 26, 2020

💊 CI failures summary and remediations

As of commit 58158a0 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚



@facebook-github-bot added the oncall: jit label (Add this issue/PR to JIT oncall triage queue) Jun 26, 2020
@BowenBao added the module: onnx label (Related to torch.onnx) Jun 26, 2020
@spandantiwari

@BowenBao - Does this also include type propagation, or is that a separate PR?

@BowenBao
Collaborator Author

> @BowenBao - Does this also include type propagation, or is that a separate PR?

Yes, this includes type propagation as well.

@ngimel added the triaged label (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Jul 7, 2020

@spandantiwari spandantiwari left a comment


LGTM. It would be great for context if you could capture the upcoming work items related to this in a comment or in the opening description.

Contributor

@facebook-github-bot facebook-github-bot left a comment


@bzinodev has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

Contributor

@facebook-github-bot facebook-github-bot left a comment


@bzinodev has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@BowenBao force-pushed the onnx_shape_inf branch 2 times, most recently from 03c59c8 to 10155aa on August 3, 2020 23:39
Contributor

@facebook-github-bot facebook-github-bot left a comment


@bzinodev has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

Contributor

@facebook-github-bot facebook-github-bot left a comment


@bzinodev has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@BowenBao force-pushed the onnx_shape_inf branch 2 times, most recently from 0b56320 to 5d6f8cf on August 25, 2020 17:41
  • clang tidy
  • add helper func
  • remove useless comment
  • add missing type
  • fix flake and clang-tidy
  • update clang-tidy skip files
  • address comments
  • clang-tidy
  • add more comments
  • rebase with master
  • clang format
  • clang-tidy on unrelated file
  • clang-format
  • newline
@BowenBao
Collaborator Author

@bzinodev CI is green, please help import this one.

Member

@houseroad houseroad left a comment


Awesome, this is happening!

@spandantiwari

> Awesome, this is happening!

@houseroad - Yes, it is the first PR :-). We still have some ground to cover, but it is a good start. Thanks for all the brainstorming and discussions on this.

Contributor

@facebook-github-bot facebook-github-bot left a comment


@bzinodev has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@bzinodev merged this pull request in 08126c9.

facebook-github-bot pushed a commit that referenced this pull request Sep 17, 2020
…43991)

Summary:
Fixes the `true_divide` symbolic to cast tensors correctly.
The logic depends on knowing input types at export time, which is a known gap for exporting scripted modules. On that end, we are improving the exporter by enabling ONNX shape inference (#40628) and starting to increase coverage for scripting support.

Pull Request resolved: #43991

Reviewed By: mruberry

Differential Revision: D23674614

Pulled By: bzinodev

fbshipit-source-id: 1b1b85340eef641f664a14c4888781389c886a8b
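
As a concrete illustration of why the cast matters (plain PyTorch semantics, independent of the exporter): `true_divide` always performs floating-point division, so integer inputs are promoted, and the symbolic must know the input dtypes to emit a matching ONNX `Cast`:

```python
import torch

# true_divide on integer tensors promotes to the default float dtype;
# without knowing the input dtypes at export time, the exporter cannot
# tell whether a Cast is needed before the ONNX Div node.
a = torch.tensor([1, 2, 3])  # int64
b = torch.tensor([2, 2, 2])  # int64
out = torch.true_divide(a, b)

assert out.dtype == torch.float32
assert torch.allclose(out, torch.tensor([0.5, 1.0, 1.5]))
```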
xuzhao9 pushed a commit that referenced this pull request Sep 18, 2020
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary:
In `_jit_pass_onnx`, symbolic functions are called for each node for conversion. However, some nodes cannot be converted without additional context. For example, the number of outputs from split (and whether it is static or dynamic) is unknown until the point where it is unpacked by a ListUnpack node. This pass performs preprocessing and prepares the nodes so that enough context is available to the symbolic function.
* After preprocessing, `_jit_pass_onnx` should have enough context to produce valid ONNX nodes, instead of half-baked nodes that rely on fixes from later post-passes.
* `_jit_pass_onnx_peephole` should be a pass that performs ONNX-specific optimizations instead of ONNX-specific fixes.
* Producing more valid ONNX nodes in `_jit_pass_onnx` enables better utilization of ONNX shape inference pytorch#40628.

Pull Request resolved: pytorch#41832

Reviewed By: ZolotukhinM

Differential Revision: D22968334

Pulled By: bzinodev

fbshipit-source-id: 8226f03c5b29968e8197d242ca8e620c6e1d42a5
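
A minimal sketch of the `split`/`ListUnpack` situation described above (assuming standard TorchScript behavior; not code from the PR):

```python
import torch

# In TorchScript, aten::split produces a Tensor[]; how many tensors it
# holds only becomes explicit where a prim::ListUnpack consumes it.
# Preprocessing the graph around that unpack gives the symbolic
# function the output count it needs.
@torch.jit.script
def halves(x: torch.Tensor):
    a, b = torch.split(x, 2, dim=0)  # unpacking fixes the count at two
    return a + b

print(halves.graph)  # shows aten::split followed by prim::ListUnpack
```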
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary:
The conversion from a torch operator to an ONNX operator often requires the input rank/dtype/shape to be known. Previously, the conversion depended on the tracer to provide this information, leaving a gap in the conversion of scripted modules.

We are extending the exporter with support for ONNX shape inference. When enabled, ONNX shape inference is called whenever an ONNX node is created. This is the first PR, introducing the initial shape of the feature; more cases will be supported in follow-up PRs.

* Added a pass that runs ONNX shape inference on a given node. The node must be in the `onnx` namespace.
* Moved helper functions from `export.cpp` to a common location for re-use.
* This feature is currently experimental, and can be turned on through the flag `onnx_shape_inference` in the internal api `torch.onnx._export`.
* Currently skipping ONNX Sequence ops, If/Loop, and ConstantOfShape due to limitations; support will be added in the future.

Pull Request resolved: pytorch#40628

Reviewed By: mrshenli

Differential Revision: D22709746

Pulled By: bzinodev

fbshipit-source-id: b52aeeae00667e66e0b0c1144022f7af9a8b2948
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
…ytorch#43991)

Summary:
Fixes the `true_divide` symbolic to cast tensors correctly.
The logic depends on knowing input types at export time, which is a known gap for exporting scripted modules. On that end, we are improving the exporter by enabling ONNX shape inference (pytorch#40628) and starting to increase coverage for scripting support.

Pull Request resolved: pytorch#43991

Reviewed By: mruberry

Differential Revision: D23674614

Pulled By: bzinodev

fbshipit-source-id: 1b1b85340eef641f664a14c4888781389c886a8b

Labels

Merged · module: onnx (Related to torch.onnx) · oncall: jit (Add this issue/PR to JIT oncall triage queue) · open source · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)


9 participants