
Allow caffe2-specific graph transformations for OperatorExportTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON#67460

Merged
thiagocrepaldi merged 3 commits into pytorch:onnx_ms_1 from thiagocrepaldi:thiagofc/add-onnx-aten-strict-fallback
Nov 8, 2021

Conversation

@thiagocrepaldi (Collaborator) commented Oct 28, 2021

The use of ATen as a fallback operator during ONNX conversion is important for increasing operator coverage, or even for providing more efficient implementations than some ONNX ops.

Currently this feature is available through OperatorExportTypes.ONNX_ATEN_FALLBACK, but it also performs changes to the graph that are runnable only by Caffe2.

This PR restricts caffe2-specific graph transformations for the ONNX_ATEN_FALLBACK operator export type to builds where PyTorch has Caffe2 support (i.e., BUILD_CAFFE2=1 at build time).

The first version of this PR introduced a new operator export type, ONNX_ATEN_STRICT_FALLBACK, which is essentially the same as ONNX_ATEN_FALLBACK but without the caffe2 transformations. It was preferred not to introduce a new operator export type, but to refine the existing ATen fallback one.
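
The gating described above can be sketched in plain Python. This is a minimal illustration only; the names `BUILD_CAFFE2` and `should_apply_caffe2_transforms` are hypothetical stand-ins, not the actual torch internals:

```python
# Illustrative sketch of the gating this PR introduces (names are hypothetical):
# caffe2-specific graph transformations run only when BOTH conditions hold --
# the export type is ONNX_ATEN_FALLBACK *and* PyTorch was built with caffe2.
from enum import Enum

class OperatorExportTypes(Enum):
    ONNX = 0
    ONNX_ATEN = 1
    ONNX_ATEN_FALLBACK = 2

# Mirrors the compile-time BUILD_CAFFE2 flag; False by default after #66658.
BUILD_CAFFE2 = False

def should_apply_caffe2_transforms(export_type: OperatorExportTypes) -> bool:
    """After this PR, ATen fallback alone no longer implies caffe2 rewrites."""
    return (
        export_type is OperatorExportTypes.ONNX_ATEN_FALLBACK
        and BUILD_CAFFE2
    )
```

With `BUILD_CAFFE2 = False` (the default build), `should_apply_caffe2_transforms(OperatorExportTypes.ONNX_ATEN_FALLBACK)` returns `False`, so the exported graph stays free of caffe2-only nodes.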

BC-breaking note

The global constant torch.onnx.PYTORCH_ONNX_CAFFE2_BUNDLE is removed in favor of a less visible torch.onnx._CAFFE2_ATEN_FALLBACK.

PYTORCH_ONNX_CAFFE2_BUNDLE is effectively a dead-code flag that is always set to False. One alternative would be to fix it, but #66658 disables the Caffe2 build by default, so making this Caffe2 feature private makes more sense ahead of its future deprecation.

The method torch.onnx.export now defaults to ONNX when operator_export_type is not specified.

Previously, torch.onnx.export's operator_export_type was intended to default to ONNX_ATEN_FALLBACK when PYTORCH_ONNX_CAFFE2_BUNDLE was set, but that would never happen because PYTORCH_ONNX_CAFFE2_BUNDLE is always undefined.
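
The new default resolution can be sketched as follows. `resolve_operator_export_type` is a hypothetical helper for illustration, and `_CAFFE2_ATEN_FALLBACK` stands in for the private flag this PR introduces:

```python
# Illustrative sketch of default selection in torch.onnx.export (simplified):
# True only when PyTorch is built with BUILD_CAFFE2=1.
_CAFFE2_ATEN_FALLBACK = False

def resolve_operator_export_type(operator_export_type=None):
    if operator_export_type is not None:
        return operator_export_type  # the caller's explicit choice always wins
    if _CAFFE2_ATEN_FALLBACK:
        return "ONNX_ATEN_FALLBACK"  # caffe2 builds keep the ATen-fallback default
    return "ONNX"                    # the new default everywhere else
```

So a plain `torch.onnx.export(model, args, f)` on a default (non-caffe2) build now behaves as if `operator_export_type=OperatorExportTypes.ONNX` had been passed.
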

pytorch-probot bot commented Oct 28, 2021

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/thiagocrepaldi/pytorch/blob/6659836cf86b944a81d41242a0545a942cad25b8/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default

Workflows Labels (bold enabled) Status
Triggered Workflows
linux-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/xla ✅ triggered
linux-vulkan-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/vulkan ✅ triggered
linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3-clang5-mobile-build ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-dynamic ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-static ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3.6-clang7-asan ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers ✅ triggered
linux-xenial-py3.6-clang7-onnx ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx ✅ triggered
linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc7 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc7-bazel-test ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single-full-jit ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
win-vs2019-cpu-py3 ciflow/all, ciflow/cpu, ciflow/default, ciflow/win ✅ triggered
win-vs2019-cuda11.3-py3 ciflow/all, ciflow/cuda, ciflow/default, ciflow/win ✅ triggered
Skipped Workflows
caffe2-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped
docker-builds ciflow/all 🚫 skipped
ios-12-5-1-arm64 ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-coreml ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-custom-ops ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-full-jit ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-metal ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-x86-64 ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-x86-64-coreml ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-x86-64-full-jit ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
libtorch-linux-xenial-cuda10.2-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
libtorch-linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
linux-bionic-cuda10.2-py3.9-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow 🚫 skipped
linux-xenial-py3-clang5-mobile-code-analysis ciflow/all, ciflow/linux, ciflow/mobile 🚫 skipped
parallelnative-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped
periodic-libtorch-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck 🚫 skipped
periodic-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-win-vs2019-cuda11.1-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and trigger the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

facebook-github-bot (Contributor) commented Oct 28, 2021

🔗 Helpful links

💊 CI failures summary and remediations

As of commit 6659836 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


@facebook-github-bot facebook-github-bot added the oncall: jit label Oct 28, 2021
@thiagocrepaldi thiagocrepaldi force-pushed the thiagofc/add-onnx-aten-strict-fallback branch from 62624d3 to aecd2e6 on October 28, 2021 19:36
@thiagocrepaldi thiagocrepaldi changed the title [WIP] Add ONNX_ATEN_FALLBACK as new torch::onnx::OperatorExportTypes Add ONNX_ATEN_FALLBACK as new torch::onnx::OperatorExportTypes Oct 29, 2021
@thiagocrepaldi thiagocrepaldi marked this pull request as ready for review October 29, 2021 19:26
@thiagocrepaldi thiagocrepaldi force-pushed the thiagofc/add-onnx-aten-strict-fallback branch from 7af56fc to 8acc3d3 on November 2, 2021 20:41
@thiagocrepaldi thiagocrepaldi requested a review from garymm November 2, 2021 20:41
@thiagocrepaldi thiagocrepaldi changed the title Add ONNX_ATEN_FALLBACK as new torch::onnx::OperatorExportTypes Restrict caffe2-specific changes for torch::onnx::OperatorExportTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON Nov 2, 2021
@thiagocrepaldi thiagocrepaldi changed the title Restrict caffe2-specific changes for torch::onnx::OperatorExportTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON Allow caffe2-specific graph transformations for OperatorExportTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON Nov 2, 2021
@thiagocrepaldi thiagocrepaldi force-pushed the thiagofc/add-onnx-aten-strict-fallback branch 2 times, most recently from a6aa2a5 to 0440fb9 on November 3, 2021 15:42
Thiago Crepaldi added 2 commits November 4, 2021 12:15
The use of ATen as a fallback operator during ONNX conversion is
important for increasing operator coverage, or even for providing more
efficient implementations than some ONNX ops.

Currently this feature is available through
OperatorExportTypes.ONNX_ATEN_FALLBACK, but it also performs changes to
the graph that are runnable only by Caffe2.

This PR introduces ONNX_ATEN_STRICT_FALLBACK as an alternative fallback
for other backend implementations, such as ONNX Runtime that only
support ONNX and ATEN operators.

One alternative that was considered was using
`PYTORCH_ONNX_CAFFE2_BUNDLE` along with `ONNX_ATEN_FALLBACK` to
introduce Caffe2-specific changes, but it seems Caffe2 is supported even
when `PYTORCH_ONNX_CAFFE2_BUNDLE` is `False`.
@thiagocrepaldi thiagocrepaldi force-pushed the thiagofc/add-onnx-aten-strict-fallback branch from 0440fb9 to 3f86347 on November 4, 2021 19:16
@garymm (Collaborator) left a comment

Thanks for this!

@thiagocrepaldi thiagocrepaldi requested a review from garymm November 5, 2021 16:44
@thiagocrepaldi thiagocrepaldi force-pushed the thiagofc/add-onnx-aten-strict-fallback branch from a34e12f to b99931a on November 5, 2021 16:53
@thiagocrepaldi thiagocrepaldi force-pushed the thiagofc/add-onnx-aten-strict-fallback branch from b99931a to 38ffc5c on November 5, 2021 17:29
@thiagocrepaldi thiagocrepaldi requested a review from garymm November 5, 2021 17:30
@thiagocrepaldi thiagocrepaldi force-pushed the thiagofc/add-onnx-aten-strict-fallback branch from 5ac32b3 to c4d57ca on November 5, 2021 18:06
@thiagocrepaldi thiagocrepaldi requested a review from garymm November 5, 2021 18:08
@garymm (Collaborator) left a comment

Thanks!

@thiagocrepaldi thiagocrepaldi force-pushed the thiagofc/add-onnx-aten-strict-fallback branch from c4d57ca to fef50d3 on November 5, 2021 18:24
@thiagocrepaldi thiagocrepaldi force-pushed the thiagofc/add-onnx-aten-strict-fallback branch from fef50d3 to 6659836 on November 5, 2021 18:56
@thiagocrepaldi thiagocrepaldi merged commit a14c317 into pytorch:onnx_ms_1 Nov 8, 2021
BowenBao added a commit that referenced this pull request Nov 16, 2021
…NNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (#67460)

garymm added a commit to garymm/pytorch that referenced this pull request Dec 1, 2021
1. Make description of ONNX_ATEN_FALLBACK more accurate (after pytorch#67460).
2. Specify minimum and maximum values for opset_version. This is pretty
   important information and we should make users dig through source
   code to find it.
garymm added a commit that referenced this pull request Dec 6, 2021
[ONNX] minor clarifications of docstrings

BowenBao added a commit that referenced this pull request Dec 7, 2021
…s for OperatorExportTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (#67460)"


Differential Revision: [D32483781](https://our.internmc.facebook.com/intern/diff/D32483781)

BowenBao added a commit that referenced this pull request Dec 7, 2021
…portTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (#67460)"


Differential Revision: [D32483781](https://our.internmc.facebook.com/intern/diff/D32483781)

BowenBao added a commit that referenced this pull request Dec 7, 2021
[ONNX] minor clarifications of docstrings

BowenBao added a commit to BowenBao/pytorch that referenced this pull request Jan 5, 2022
…NNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (pytorch#67460)


ghstack-source-id: 8ae8ac0
Pull Request resolved: pytorch#68490
BowenBao added a commit to BowenBao/pytorch that referenced this pull request Jan 5, 2022
[ONNX] minor clarifications of docstrings


ghstack-source-id: 6572ba2
Pull Request resolved: pytorch#69549
BowenBao added a commit that referenced this pull request Jan 7, 2022
…s for OperatorExportTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (#67460)"


Differential Revision: [D32483781](https://our.internmc.facebook.com/intern/diff/D32483781)

BowenBao added a commit that referenced this pull request Jan 7, 2022
…69260)"

[ONNX] minor clarifications of docstrings


Differential Revision: [D32994267](https://our.internmc.facebook.com/intern/diff/D32994267)

BowenBao added a commit that referenced this pull request Jan 7, 2022
[ONNX] minor clarifications of docstrings


Differential Revision: [D32994267](https://our.internmc.facebook.com/intern/diff/D32994267)

BowenBao added a commit that referenced this pull request Jan 7, 2022
…portTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (#67460)"


Differential Revision: [D32483781](https://our.internmc.facebook.com/intern/diff/D32483781)

BowenBao added a commit that referenced this pull request Jan 11, 2022
…s for OperatorExportTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (#67460)"


Differential Revision: [D32483781](https://our.internmc.facebook.com/intern/diff/D32483781)

BowenBao added a commit that referenced this pull request Jan 11, 2022
…portTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (#67460)"


Differential Revision: [D32483781](https://our.internmc.facebook.com/intern/diff/D32483781)

BowenBao added a commit that referenced this pull request Jan 11, 2022
…69260)"

[ONNX] minor clarifications of docstrings


Differential Revision: [D32994267](https://our.internmc.facebook.com/intern/diff/D32994267)

BowenBao added a commit that referenced this pull request Jan 11, 2022
[ONNX] minor clarifications of docstrings


Differential Revision: [D32994267](https://our.internmc.facebook.com/intern/diff/D32994267)

facebook-github-bot pushed a commit that referenced this pull request Jan 14, 2022
Summary:
Pull Request resolved: #69549

[ONNX] minor clarifications of docstrings


Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D32994267

Pulled By: msaroufim

fbshipit-source-id: ba641404107baa23506d337eca742fc1fe9f0772
BowenBao added a commit to BowenBao/pytorch that referenced this pull request Jan 21, 2022
…NNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (pytorch#67460)


ghstack-source-id: 8ae8ac0
Pull Request resolved: pytorch#68490

Labels

cla signed, oncall: jit, open source