[ONNX] Add quantization support to more single output ops#83008
justinchuby wants to merge 16 commits into gh/justinchuby/4/base from
Conversation
#80039 - Remove magic number and assign it to INT64_MAX [ghstack-poisoned]
❌ 1 new failure as of commit cd6feb0 (more details on the Dr. CI page). The failure was recognized by patterns and does not appear to be due to upstream breakages.
Please check out the CI failure.
#80039 - Implement quantization support for single output ops

- quantized::sigmoid
- quantized::instance_norm
- aten::reshape
- aten::reshape_as
- aten::sum
- aten::mean
- aten::prod
- aten::t
- aten::numpy_T
- aten::expand
- aten::expand_as
- aten::embedding
- aten::embedding_bag
- aten::view
- aten::select
- aten::eq
- aten::ne
- aten::gt
- aten::lt
- aten::le
- aten::ge
- quantized::layer_norm
- aten::elu
- aten::selu
- aten::maximum
- aten::minimum
- aten::amax
- aten::amin
- aten::hardtanh
- aten::hardswish
- quantized::group_norm
- aten::as_strided
- quantized::leaky_relu
- aten::transpose

Also:

- Avoid modifying functions in `quantized_args` and have the wrapper closed over `scale` and `zero_point` instead (for purity)
- Remove the magic number and assign it to INT64_MAX
- Implement `_unpack_quantized_tensor` for handling quantized tensor unpacking, to separate the logic from tuple unpacking and for clearer error handling

[ghstack-poisoned]
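The purity point above, closing the wrapper over `scale` and `zero_point` instead of mutating the decorated function, can be sketched roughly as follows. This is a simplified, hypothetical illustration that operates on plain `(value, scale, zero_point)` tuples, not the actual `torch.onnx` symbolic API:

```python
import functools
import math


def quantized_args(*arg_is_quantized):
    """Hypothetical sketch of a closure-based decorator.

    For each positional argument flagged True, unpack a
    (value, scale, zero_point) tuple before calling the wrapped
    function, then re-attach the captured scale/zero_point to the
    single output. The wrapped function itself is never modified;
    all state lives in the wrapper's closure.
    """

    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            scale = None
            zero_point = None
            new_args = []
            for arg, is_quantized in zip(args, arg_is_quantized):
                if is_quantized and isinstance(arg, tuple):
                    value, scale, zero_point = arg
                    new_args.append(value)
                else:
                    new_args.append(arg)
            result = fn(*new_args, **kwargs)
            if scale is not None:
                # Single-output op: requantize with the captured params.
                return (result, scale, zero_point)
            return result

        return wrapper

    return decorator


@quantized_args(True)
def sigmoid(x):
    # Stand-in for a symbolic function; works on plain floats here.
    return 1.0 / (1.0 + math.exp(-x))
```

Calling `sigmoid((0.0, 0.1, 0))` unpacks the tuple, applies the function to `0.0`, and returns `(0.5, 0.1, 0)` with the quantization parameters reattached, while `sigmoid` itself remains an unmodified plain function.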
@BowenBao please take another look

@pytorchbot merge -g

@pytorchbot successfully started a merge job. Check the current status here.

@pytorchbot merge -f "All checks passed"

The merge job was canceled. If you believe this is a mistake, then you can re-trigger it through pytorch-bot.

@pytorchbot successfully started a merge job. Check the current status here.
…83008)

Summary: #80039 - Implement quantization support for single output ops (see the op list and notes above).

Pull Request resolved: #83008
Approved by: https://github.com/BowenBao
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/80cfafc3857c981d67507f135232bf43da4a1caa
Reviewed By: weiwangmeta
Differential Revision: D38947052
fbshipit-source-id: 02b0be6c3d27e0a816fc2e97658d119021be3cb1
Stack from ghstack (oldest at bottom):
#80039
- Avoid modifying functions in `quantized_args` and have the wrapper closed over `scale` and `zero_point` instead (for purity)
- Implement `_unpack_quantized_tensor` for handling quantized tensor unpacking, to separate the logic from tuple unpacking and for clearer error handling
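The `_unpack_quantized_tensor` change separates quantized-tensor unpacking from generic tuple unpacking so that malformed inputs fail with a descriptive error. A minimal sketch of that idea (hypothetical signature and error text, not the actual torch.onnx helper):

```python
def _unpack_quantized_tensor(value):
    """Unpack a quantized value into (tensor, scale, zero_point).

    Keeping this in a dedicated helper separates quantized-tensor
    handling from ordinary tuple unpacking, so callers get a clear
    error message instead of a generic destructuring failure.
    """
    if not isinstance(value, tuple) or len(value) != 3:
        raise RuntimeError(
            "Expected a (tensor, scale, zero_point) tuple for a "
            f"quantized tensor, but got: {value!r}"
        )
    tensor, scale, zero_point = value
    return tensor, scale, zero_point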