Do not build PyTorch with caffe2 by default #66658

malfet wants to merge 12 commits into pytorch:master
Conversation
CI Flow Status ⚛️ — CI Flow Ruleset

You can add a comment to the PR and tag @pytorchbot with the following commands:

# ciflow rerun ("ciflow/default" will always be added automatically)
@pytorchbot ciflow rerun
# ciflow rerun with additional labels "-l <ciflow/label_name>", equivalent to adding these labels manually and triggering the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.
💊 CI failures summary and remediations: As of commit a1072dc (more details on the Dr. CI page): 💚 Looks good so far! There are no failures yet. (This comment was automatically generated by Dr. CI.)
@malfet has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Force-pushed from f52c1e7 to f4c9288
Skip `test_c10_layer_norm` if PyTorch is compiled without caffe2
Force-pushed from de984d3 to e02d36f
@pytorchbot ciflow rerun -l ciflow/all
Summary: Caffe2 has been deprecated for a while, but is still included in every PyTorch build. We should stop building it by default, although CI should still validate that the caffe2 code is buildable.

- Build even fewer dependencies when compiling mobile builds without Caffe2
- Introduce `TEST_CAFFE2` in torch.common.utils
- Skip `TestQuantizedEmbeddingOps` and `TestJit.test_old_models_bc` if code is compiled without Caffe2

Should be landed after pytorch/builder#864

Pull Request resolved: #66658
Reviewed By: driazati, seemethere, janeyx99
Differential Revision: D31669156
Pulled By: malfet
fbshipit-source-id: 1cc45e2d402daf913a4685eb9f841cc3863e458d
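After this change, a build that still needs Caffe2 must opt in explicitly. The exact toggle below is an assumption based on PyTorch's usual environment-variable build flags; a sketch, not verified against the final PR:

```shell
# Opt back in to the Caffe2 build (the default after this PR is off).
# BUILD_CAFFE2 is assumed to be the env toggle read by setup.py/CMake.
BUILD_CAFFE2=1 python setup.py develop
```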
…for OperatorExportTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (#67460)

The use of ATen as a fallback operator during ONNX conversion is important for increasing operator coverage, or even for providing more efficient implementations than some ONNX ops. Currently this feature is available through `OperatorExportTypes.ONNX_ATEN_FALLBACK`, but it also performs changes to the graph that are runnable only by Caffe2. This PR restricts caffe2-specific graph transformations for the `ONNX_ATEN_FALLBACK` operator export type to builds where PyTorch is compiled with Caffe2 support (i.e. BUILD_CAFFE2=1 at build time).

The first version of this PR introduced a new operator export type, `ONNX_ATEN__STRICT_FALLBACK`, which is essentially the same as `ONNX_ATEN_FALLBACK` but without caffe2 transformations. It was preferred not to introduce a new operator export type, but to refine the existing ATen fallback one.

## BC-breaking note

### The global constant `torch.onnx.PYTORCH_ONNX_CAFFE2_BUNDLE` is removed in favor of a less visible `torch.onnx._CAFFE2_ATEN_FALLBACK`.

`PYTORCH_ONNX_CAFFE2_BUNDLE` is really a dead-code flag, always set to False. One alternative would be fixing it, but #66658 disables the Caffe2 build by default. Making a Caffe2 feature private seems to make more sense for future deprecation.

### The method `torch.onnx.export` now defaults to ONNX when `operator_export_type` is not specified.

Previously `torch.onnx.export`'s `operator_export_type` was intended to default to `ONNX_ATEN_FALLBACK` when `PYTORCH_ONNX_CAFFE2_BUNDLE` was set, but that would never happen because `PYTORCH_ONNX_CAFFE2_BUNDLE` is always undefined.
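The BC-breaking default change above can be sketched as plain Python. `OperatorExportTypes` here is a local stand-in for `torch.onnx.OperatorExportTypes`, and `caffe2_bundle` stands in for the removed `PYTORCH_ONNX_CAFFE2_BUNDLE` flag; this is an illustration of the described behavior, not the actual torch.onnx code:

```python
from enum import Enum

class OperatorExportTypes(Enum):
    ONNX = 0
    ONNX_ATEN_FALLBACK = 1

def resolve_export_type(operator_export_type=None, caffe2_bundle=False):
    """Pick the effective export type for a torch.onnx.export-style call."""
    if operator_export_type is not None:
        # An explicit choice always wins, before and after this PR.
        return operator_export_type
    # Old intent: fall back to ONNX_ATEN_FALLBACK when the Caffe2 bundle flag
    # was set -- but that flag was dead code, always False in practice.
    # New behavior: default to plain ONNX regardless of build flags.
    return OperatorExportTypes.ONNX

print(resolve_export_type().name)  # prints "ONNX"
```

An explicitly passed `ONNX_ATEN_FALLBACK` still takes effect; only the implicit default changed.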
…for OperatorExportTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (#67460) (#68490)

Summary: Pull Request resolved: #68490. Same description as #67460 above.

Co-authored-by: Nikita Shulga <nshulga@fb.com>
Test Plan: Imported from OSS
Reviewed By: jansel
Differential Revision: D32483781
Pulled By: malfet
fbshipit-source-id: e9b447db9466b369e77d747188685495aec3f124
(cherry picked from commit 5fb1eb1)
Caffe2 has been deprecated for a while, but is still included in every PyTorch build.
We should stop building it by default, although CI should still validate that the caffe2 code is buildable.
Build even fewer dependencies when compiling mobile builds without Caffe2.
Introduce `TEST_CAFFE2` in torch.common.utils.
Skip `TestQuantizedEmbeddingOps` and `TestJit.test_old_models_bc` if code is compiled without Caffe2.
Should be landed after pytorch/builder#864
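The skip pattern described above can be sketched with stdlib `unittest`. `TEST_CAFFE2` here is a local stand-in for the flag this PR adds to torch's common test utils; in PyTorch it would reflect whether the running torch build actually includes Caffe2:

```python
import unittest

# Stand-in for the TEST_CAFFE2 flag introduced by this PR; in a real torch
# test run it would be derived from the build configuration.
TEST_CAFFE2 = False

class QuantizedEmbeddingOpsSketch(unittest.TestCase):
    @unittest.skipIf(not TEST_CAFFE2, "PyTorch compiled without Caffe2")
    def test_caffe2_backed_op(self):
        # Would exercise a Caffe2-backed operator here.
        pass

suite = unittest.defaultTestLoader.loadTestsFromTestCase(QuantizedEmbeddingOpsSketch)
result = unittest.TestResult()
suite.run(result)
print(result.testsRun, len(result.skipped))  # prints "1 1": one test ran, and it was skipped
```

With `TEST_CAFFE2 = False`, the test is reported as skipped rather than failed, which is exactly what the PR wants for caffe2-free builds.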