[ONNX] Utilize ONNX shape inference for ONNX exporter #40628
BowenBao wants to merge 5 commits into pytorch:master
Conversation
💊 CI failures summary (as of commit 58158a0, via Dr. CI): 💚 Looks good so far! There are no failures yet. 💚
@BowenBao - Does this also include type propagation, or is that a separate PR?
Yes, this includes type propagation as well.
spandantiwari left a comment:
LGTM. If you could please capture any of the upcoming work items related to this in a comment or the opening description, that would be great for context.
facebook-github-bot left a comment:
@bzinodev has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Force-pushed 03c59c8 to 10155aa
Force-pushed 26dd024 to ea7a49f
Force-pushed ea7a49f to ae98158
Force-pushed 0b56320 to 5d6f8cf
Commits: clang tidy; add helper func; remove useless comment; add missing type; fix flake and clang-tidy; update clang-tidy skip files; address comments; clang-tidy; add more comments; rebase with master; clang format; clang-tidy on unrelated file; clang-format newline
Force-pushed 5d6f8cf to 58158a0
@bzinodev CI is green, please help import this one.
houseroad left a comment:
Awesome, this is happening!
@houseroad - Yes, it is the first PR :-). We still have some ground to cover, but it's a good start. Thanks for all the brainstorming and discussions on this.
facebook-github-bot left a comment:
@bzinodev has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
…43991)

Summary: Fixes the `true_divide` symbolic to cast tensors correctly. The logic depends on knowing input types at export time, which is a known gap for exporting scripted modules. On that end we are improving the exporter by enabling ONNX shape inference (#40628), and starting to increase coverage for scripting support.

Pull Request resolved: #43991
Reviewed By: mruberry
Differential Revision: D23674614
Pulled By: bzinodev
fbshipit-source-id: 1b1b85340eef641f664a14c4888781389c886a8b
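As background on why input types matter here (a sketch of the operator's semantics, not code from this PR): `true_divide` always produces a floating-point result, so integer inputs require an explicit cast in the exported graph, and emitting that cast requires knowing the dtypes at export time.

```python
# Sketch of the semantics the symbolic must reproduce: true_divide always
# yields a floating-point result, so integer inputs need an explicit Cast
# in the exported ONNX graph.
import torch

a = torch.tensor([3, 4])        # int64
b = torch.tensor([2, 2])        # int64
print(torch.true_divide(a, b))  # tensor([1.5000, 2.0000]) -- float32
```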
Summary: In `_jit_pass_onnx`, symbolic functions are called for each node for conversion. However, some nodes cannot be converted without additional context. For example, the number of outputs from split (and whether it is static or dynamic) is unknown until the point where it is unpacked by the listUnpack node. This pass runs a preprocessing step that prepares the nodes so that enough context is available to the symbolic function; a small illustration follows this summary.

* After preprocessing, `_jit_pass_onnx` should have enough context to produce valid ONNX nodes, instead of half-baked nodes that rely on fixes from later post-passes.
* `_jit_pass_onnx_peephole` should be a pass that performs ONNX-specific optimizations instead of ONNX-specific fixes.
* Producing more valid ONNX nodes in `_jit_pass_onnx` enables better utilization of the ONNX shape inference (pytorch#40628).

Pull Request resolved: pytorch#41832
Reviewed By: ZolotukhinM
Differential Revision: D22968334
Pulled By: bzinodev
fbshipit-source-id: 8226f03c5b29968e8197d242ca8e620c6e1d42a5
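The illustration referenced above (a hypothetical scripted function, not code from this PR): the output count of `aten::split` is only pinned down by the `prim::ListUnpack` that consumes it, which is exactly the context the prepare pass surfaces for the symbolic function.

```python
# Hypothetical scripted function showing where the output count of split
# becomes known: only at the unpack that consumes the list.
import torch

@torch.jit.script
def split_and_unpack(x: torch.Tensor):
    # aten::split produces a List[Tensor]; unpacking into three variables
    # inserts a prim::ListUnpack that fixes the count at 3.
    a, b, c = torch.split(x, 2, dim=0)
    return a + b + c

print(split_and_unpack.graph)  # aten::split followed by prim::ListUnpack
```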
Summary: The conversion from a torch operator to an ONNX operator often requires the input rank/dtype/shape to be known. Previously, the conversion depended on the tracer to provide this info, leaving a gap in the conversion of scripted modules. We are extending the export with support from ONNX shape inference. If enabled, ONNX shape inference will be called whenever an ONNX node is created. This is the first PR introducing the initial look of the feature. More and more cases will be supported following this PR.

* Added a pass to run ONNX shape inference on a given node. The node has to have namespace `onnx`.
* Moved helper functions from `export.cpp` to a common place for re-use.
* This feature is currently experimental, and can be turned on through the flag `onnx_shape_inference` in the internal api `torch.onnx._export`.
* Currently skipping ONNX Sequence ops, If/Loop and ConstantOfShape due to limitations. Support will be added in the future.

Pull Request resolved: pytorch#40628
Reviewed By: mrshenli
Differential Revision: D22709746
Pulled By: bzinodev
fbshipit-source-id: b52aeeae00667e66e0b0c1144022f7af9a8b2948
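For background on what the ONNX-side inference contributes (a standalone sketch using the `onnx` Python package, independent of the exporter changes in this PR): given a graph where only the input shape is declared, shape inference fills in the shapes and types of intermediate values.

```python
# Standalone sketch of ONNX shape inference: x -> Relu -> t -> Relu -> y,
# with only x's shape declared up front.
import onnx
from onnx import TensorProto, helper, shape_inference

nodes = [
    helper.make_node("Relu", ["x"], ["t"]),
    helper.make_node("Relu", ["t"], ["y"]),
]
graph = helper.make_graph(
    nodes,
    "g",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [2, 3])],
    [helper.make_tensor_value_info("y", TensorProto.FLOAT, [2, 3])],
)
model = helper.make_model(graph)

inferred = shape_inference.infer_shapes(model)
print(inferred.graph.value_info)  # intermediate `t` is now FLOAT[2, 3]
```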
It is often the case that the conversion from a torch operator to an ONNX operator requires the input rank/dtype/shape to be known. Previously, the conversion depended on the tracer to provide this info, leaving a gap in the conversion of scripted modules.

We are extending the export with support from ONNX shape inference. If enabled, ONNX shape inference will be called whenever an ONNX node is created. This is the first PR introducing the initial look of the feature. More and more cases will be supported following this PR.

* Added a pass to run ONNX shape inference on a given node. The node has to have namespace `onnx`.
* Moved helper functions from `export.cpp` to a common place for re-use.
* This feature is currently experimental, and can be turned on through the flag `onnx_shape_inference` in the internal api `torch.onnx._export`.
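A minimal usage sketch of the experimental flag: the flag name `onnx_shape_inference` and the internal API `torch.onnx._export` come from this PR, while the model and the remaining arguments are illustrative assumptions.

```python
# Minimal sketch: enabling the experimental ONNX shape inference on export.
# The flag and the internal API are named in this PR; the model and the
# other arguments are illustrative assumptions.
import torch

class TinyModel(torch.nn.Module):
    def forward(self, x):
        return x.relu()

model = torch.jit.script(TinyModel())  # scripted module, the motivating case
dummy_input = torch.randn(2, 3)

torch.onnx._export(
    model,
    (dummy_input,),
    "tiny_model.onnx",
    example_outputs=model(dummy_input),  # assumption: required for scripted modules at the time
    onnx_shape_inference=True,           # experimental flag added in this PR
)
```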