Allow caffe2-specific graph transformations for OperatorExportTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON#67460
Merged
thiagocrepaldi merged 3 commits into pytorch:onnx_ms_1 on Nov 8, 2021
Conversation
BowenBao
reviewed
Oct 29, 2021
added 2 commits
November 4, 2021 12:15
The use of ATen as a fallback operator during ONNX conversion is important for increasing operator coverage, or even for providing more efficient implementations than some ONNX ops. Currently this feature is available through `OperatorExportTypes.ONNX_ATEN_FALLBACK`, but it also performs changes to the graph that are runnable only by Caffe2. This PR introduces `ONNX_ATEN_STRICT_FALLBACK` as an alternative fallback for other backend implementations, such as ONNX Runtime, that support only ONNX and ATen operators. One alternative considered was using `PYTORCH_ONNX_CAFFE2_BUNDLE` along with `ONNX_ATEN_FALLBACK` to gate the Caffe2-specific changes, but it seems Caffe2 is supported even when `PYTORCH_ONNX_CAFFE2_BUNDLE` is `False`.
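The distinction between the two export types can be sketched with a toy enum and predicate. This is an illustrative sketch only, not the actual torch.onnx internals; the enum values mirror the names discussed above, and the helper function is made up for the example:

```python
from enum import Enum


class OperatorExportTypes(Enum):
    """Illustrative mirror of the torch.onnx export-type enum."""
    ONNX = 0
    ONNX_ATEN = 1
    ONNX_ATEN_FALLBACK = 2
    ONNX_ATEN_STRICT_FALLBACK = 3  # new type proposed by this PR's first version


def applies_caffe2_transforms(export_type: OperatorExportTypes) -> bool:
    """Under the PR's first version, only the original fallback type keeps the
    Caffe2-specific graph transformations; the STRICT variant would emit a
    backend-neutral graph (ONNX + ATen nodes) that ONNX Runtime can consume."""
    return export_type is OperatorExportTypes.ONNX_ATEN_FALLBACK


print(applies_caffe2_transforms(OperatorExportTypes.ONNX_ATEN_FALLBACK))         # True
print(applies_caffe2_transforms(OperatorExportTypes.ONNX_ATEN_STRICT_FALLBACK))  # False
```

A backend that cannot execute Caffe2 ops would pick the STRICT variant and keep ATen fallback nodes without the Caffe2-only rewrites.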
garymm
requested changes
Nov 5, 2021
BowenBao
added a commit
that referenced
this pull request
Nov 16, 2021
…NNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (#67460)

The use of ATen as a fallback operator during ONNX conversion is important for increasing operator coverage, or even for providing more efficient implementations than some ONNX ops. Currently this feature is available through `OperatorExportTypes.ONNX_ATEN_FALLBACK`, but it also performs changes to the graph that are runnable only by Caffe2.

This PR restricts Caffe2-specific graph transformations for the `ONNX_ATEN_FALLBACK` operator export type to builds where PyTorch has Caffe2 support (i.e. BUILD_CAFFE2=1 at build time).

The first version of this PR introduced a new operator export type, `ONNX_ATEN_STRICT_FALLBACK`, which is essentially the same as `ONNX_ATEN_FALLBACK` but without the Caffe2 transformations. It was preferred not to introduce a new operator export type but to refine the existing ATen fallback one.

## BC-breaking note

### The global constant `torch.onnx.PYTORCH_ONNX_CAFFE2_BUNDLE` is removed in favor of a less visible `torch.onnx._CAFFE2_ATEN_FALLBACK`.

`PYTORCH_ONNX_CAFFE2_BUNDLE` is really a dead-code flag that is always set to `False`. One alternative would be fixing it, but #66658 disables the Caffe2 build by default, so making a Caffe2 feature private seems to make more sense for future deprecation.

### The method `torch.onnx.export` now defaults to ONNX when `operator_export_type` is not specified.

Previously, `torch.onnx.export`'s `operator_export_type` was intended to default to `ONNX_ATEN_FALLBACK` when `PYTORCH_ONNX_CAFFE2_BUNDLE` was set, but that would never happen because `PYTORCH_ONNX_CAFFE2_BUNDLE` is always undefined.
garymm
added a commit
to garymm/pytorch
that referenced
this pull request
Dec 1, 2021
1. Make the description of ONNX_ATEN_FALLBACK more accurate (after pytorch#67460). 2. Specify the minimum and maximum supported values for opset_version. This is important information, and we shouldn't make users dig through the source code to find it.
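Surfacing the opset_version bounds could look like a small validation helper. The bounds below are hypothetical placeholders, since the real supported range lives in torch.onnx and changes between PyTorch releases; the function name is made up for the example:

```python
# Hypothetical bounds for illustration; the actual minimum/maximum supported
# opset versions are defined inside torch.onnx and vary by PyTorch release.
MIN_OPSET = 7
MAX_OPSET = 15


def check_opset_version(opset_version: int) -> int:
    """Reject opset versions outside the supported range with a clear message,
    instead of forcing users to dig through source code for the bounds."""
    if not MIN_OPSET <= opset_version <= MAX_OPSET:
        raise ValueError(
            f"Unsupported opset_version {opset_version}; "
            f"expected a value in [{MIN_OPSET}, {MAX_OPSET}]"
        )
    return opset_version


print(check_opset_version(13))  # 13
```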
garymm
added a commit
that referenced
this pull request
Dec 6, 2021
[ONNX] minor clarifications of docstrings
BowenBao
added a commit
that referenced
this pull request
Dec 7, 2021
…s for OperatorExportTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (#67460)" Differential Revision: [D32483781](https://our.internmc.facebook.com/intern/diff/D32483781)
BowenBao
added a commit
that referenced
this pull request
Dec 7, 2021
…portTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (#67460)" Differential Revision: [D32483781](https://our.internmc.facebook.com/intern/diff/D32483781)
BowenBao
added a commit
that referenced
this pull request
Dec 7, 2021
[ONNX] minor clarifications of docstrings
BowenBao
added a commit
to BowenBao/pytorch
that referenced
this pull request
Jan 5, 2022
…NNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (pytorch#67460) ghstack-source-id: 8ae8ac0 Pull Request resolved: pytorch#68490
BowenBao
added a commit
to BowenBao/pytorch
that referenced
this pull request
Jan 5, 2022
[ONNX] minor clarifications of docstrings ghstack-source-id: 6572ba2 Pull Request resolved: pytorch#69549
BowenBao
added a commit
that referenced
this pull request
Jan 7, 2022
…s for OperatorExportTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (#67460)" Differential Revision: [D32483781](https://our.internmc.facebook.com/intern/diff/D32483781)
BowenBao
added a commit
that referenced
this pull request
Jan 7, 2022
…69260)" [ONNX] minor clarifications of docstrings Differential Revision: [D32994267](https://our.internmc.facebook.com/intern/diff/D32994267)
BowenBao
added a commit
that referenced
this pull request
Jan 7, 2022
[ONNX] minor clarifications of docstrings Differential Revision: [D32994267](https://our.internmc.facebook.com/intern/diff/D32994267)
BowenBao
added a commit
that referenced
this pull request
Jan 7, 2022
…portTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (#67460)" Differential Revision: [D32483781](https://our.internmc.facebook.com/intern/diff/D32483781)
BowenBao
added a commit
that referenced
this pull request
Jan 11, 2022
…s for OperatorExportTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (#67460)" Differential Revision: [D32483781](https://our.internmc.facebook.com/intern/diff/D32483781)
BowenBao
added a commit
that referenced
this pull request
Jan 11, 2022
…portTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (#67460)" Differential Revision: [D32483781](https://our.internmc.facebook.com/intern/diff/D32483781)
BowenBao
added a commit
that referenced
this pull request
Jan 11, 2022
…69260)" [ONNX] minor clarifications of docstrings Differential Revision: [D32994267](https://our.internmc.facebook.com/intern/diff/D32994267)
BowenBao
added a commit
that referenced
this pull request
Jan 11, 2022
[ONNX] minor clarifications of docstrings Differential Revision: [D32994267](https://our.internmc.facebook.com/intern/diff/D32994267)
facebook-github-bot
pushed a commit
that referenced
this pull request
Jan 14, 2022
Summary: Pull Request resolved: #69549 ([ONNX] minor clarifications of docstrings). Test Plan: Imported from OSS. Reviewed By: malfet. Differential Revision: D32994267. Pulled By: msaroufim. fbshipit-source-id: ba641404107baa23506d337eca742fc1fe9f0772
BowenBao
added a commit
to BowenBao/pytorch
that referenced
this pull request
Jan 21, 2022
…NNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (pytorch#67460) ghstack-source-id: 8ae8ac0 Pull Request resolved: pytorch#68490
The use of ATen as a fallback operator during ONNX conversion is important for increasing operator coverage, or even for providing more efficient implementations than some ONNX ops. Currently this feature is available through `OperatorExportTypes.ONNX_ATEN_FALLBACK`, but it also performs changes to the graph that are runnable only by Caffe2.

This PR restricts Caffe2-specific graph transformations for the `ONNX_ATEN_FALLBACK` operator export type to builds where PyTorch has Caffe2 support (i.e. BUILD_CAFFE2=1 at build time).

The first version of this PR introduced a new operator export type, `ONNX_ATEN_STRICT_FALLBACK`, which is essentially the same as `ONNX_ATEN_FALLBACK` but without the Caffe2 transformations. It was preferred not to introduce a new operator export type but to refine the existing ATen fallback one.

## BC-breaking note

### The global constant `torch.onnx.PYTORCH_ONNX_CAFFE2_BUNDLE` is removed in favor of a less visible `torch.onnx._CAFFE2_ATEN_FALLBACK`.

`PYTORCH_ONNX_CAFFE2_BUNDLE` is really a dead-code flag that is always set to `False`. One alternative would be fixing it, but #66658 disables the Caffe2 build by default, so making a Caffe2 feature private seems to make more sense for future deprecation.

### The method `torch.onnx.export` now defaults to ONNX when `operator_export_type` is not specified.

Previously, `torch.onnx.export`'s `operator_export_type` was intended to default to `ONNX_ATEN_FALLBACK` when `PYTORCH_ONNX_CAFFE2_BUNDLE` was set, but that would never happen because `PYTORCH_ONNX_CAFFE2_BUNDLE` is always undefined.
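The merged behavior, gating the Caffe2-specific transformations on the build flag rather than on a new export type, can be sketched as a tiny predicate. This is illustrative only: the real gating happens inside the export passes, and the function name and string argument here are made up for the example:

```python
def run_caffe2_transforms(export_type: str, built_with_caffe2: bool) -> bool:
    """As merged, ONNX_ATEN_FALLBACK keeps its Caffe2-specific graph
    transformations only when PyTorch was built with BUILD_CAFFE2=1;
    the same export type otherwise yields a backend-neutral graph."""
    return export_type == "ONNX_ATEN_FALLBACK" and built_with_caffe2


# Same export type, different build flavors:
print(run_caffe2_transforms("ONNX_ATEN_FALLBACK", True))   # True
print(run_caffe2_transforms("ONNX_ATEN_FALLBACK", False))  # False
```

This is why no new enum value was needed: ONNX Runtime users on a default (Caffe2-free) build get the stricter behavior automatically.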