[quant] Add support for quantized::conv1d operator #38248

Closed

supriyar wants to merge 6 commits into gh/supriyar/112/base from gh/supriyar/112/head

Conversation

@supriyar (Contributor) commented May 11, 2020

Stack from ghstack:

Summary:

Test Plan:

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: D21553661

dr-ci Bot commented May 11, 2020

💊 CI failures summary and remediations

As of commit 85b68b1 (more details on the Dr. CI page):


  • 1/1 failures possibly* introduced in this PR
    • 1/1 non-CircleCI failure(s)

ci.pytorch.org: 1 failed



@supriyar supriyar requested a review from jerryzh168 May 11, 2020 18:21
supriyar added a commit that referenced this pull request May 11, 2020
ghstack-source-id: bb6dd5b
Pull Request resolved: #38248
```cpp
if (ctx.qEngine() == at::QEngine::QNNPACK) {
  TORCH_CHECK(
      kSpatialDim == 2,
      "quantized::conv2d_unpack (qnnpack): QNNPACK only supports Conv2d "
```
The comment here is in error; we do support unpack for QConv1d here.
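The situation the reviewer describes can be modeled concretely: the conv1d weight is carried inside the conv2d packed format with a dummy kernel-height of 1, so unpacking amounts to squeezing that dimension back out. Below is a minimal pure-Python sketch of the round trip; the function names and nested-list layout are invented for illustration (the real packed parameters live in C++):

```python
# Illustrative model only: conv1d pack/unpack reusing the 2D path.
# A conv1d weight of shape (out_ch, in_ch, k) is packed as a conv2d
# weight of shape (out_ch, in_ch, 1, k); unpack squeezes the dummy
# kernel-height dimension back out.

def pack_conv1d_as_2d(weight_1d):
    """(out_ch, in_ch, k) nested lists -> (out_ch, in_ch, 1, k)."""
    return [[[kernel] for kernel in oc] for oc in weight_1d]

def unpack_conv1d_from_2d(weight_2d):
    """Inverse: drop the height-1 kernel dimension."""
    return [[rows[0] for rows in oc] for oc in weight_2d]

w = [[[1, 2, 3]], [[4, 5, 6]]]           # out_ch=2, in_ch=1, k=3
packed = pack_conv1d_as_2d(w)            # shape (2, 1, 1, 3)
assert packed[0][0] == [[1, 2, 3]]       # kernel now sits under a height-1 dim
assert unpack_conv1d_from_2d(packed) == w
```

This is why the `kSpatialDim == 2` check above does not actually block QConv1d: the 1D case passes through the same 2D packed representation.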

```python
input_channels_per_group, (length, ),
output_channels_per_group, groups, kernel, [stride], [pad],
[dilation], X_scale, X_zero_point, W_scale, W_zero_point,
Y_scale, Y_zero_point, use_bias, use_relu, use_channelwise
```

For the work to be complete, the following need to be done (these can be done in later PRs):

  1. Ensure that conv1d quantized module calls these ops + module level tests https://github.com/pytorch/pytorch/blob/master/torch/nn/quantized/modules/conv.py#L224
  2. Support for Conv1dRelu module
  3. Op level benchmarks for Conv1d (Please align shapes with float benchmark so that we can compare speedup easily) (https://github.com/pytorch/pytorch/blob/master/benchmarks/operator_benchmark/pt/qconv_test.py#L62)
  4. Fusion support: https://github.com/pytorch/pytorch/blob/master/torch/quantization/fuse_modules.py#L95 (For Conv1d+Relu, Conv1d+Bn1d+Relu)
  5. QAT support for Conv1d+Bn1d (This can be low priority)

Reply from the author (supriyar):

@raghuramank100 1, 2 and 3 are done (with graph mode support) in this stack. I'll work on the fusion support next.
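For reference, the X_scale / X_zero_point / Y_scale / Y_zero_point parameters in the quoted test follow the usual affine quantization mapping for uint8, q = clamp(round(x / scale) + zero_point, 0, 255). A quick self-contained illustration of that mapping (not PyTorch's actual kernels):

```python
def quantize(x, scale, zero_point, qmin=0, qmax=255):
    """Affine quantization to uint8: clamp(round(x / scale) + zero_point)."""
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Inverse affine mapping back to float: (q - zero_point) * scale."""
    return (q - zero_point) * scale

scale, zp = 0.5, 128
assert quantize(2.5, scale, zp) == 133        # round(5.0) + 128
assert dequantize(133, scale, zp) == 2.5
assert quantize(1000.0, scale, zp) == 255     # saturates at qmax
```

The per-tensor (scale, zero_point) pairs in the test parameterize exactly this mapping for the input X, weight W, and output Y.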

@supriyar supriyar requested a review from raghuramank100 May 11, 2020 22:07
supriyar added a commit that referenced this pull request May 11, 2020
ghstack-source-id: fe93833
Pull Request resolved: #38248
```cpp
}
};

template <int kSpatialDim, bool kReluFused>
```

Is kSpatialDim always 2 here? Maybe remove it if that is the case.
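However kSpatialDim is resolved, the output size along each spatial dimension follows standard convolution arithmetic, applied once per dimension (once for conv1d, twice for conv2d; the dummy height-1 dimension of a packed conv1d stays at 1). A small sketch of that formula, for reference only:

```python
def conv_out_size(length, kernel, stride=1, pad=0, dilation=1):
    """Output length of a convolution along one spatial dimension:
    floor((L + 2*pad - dilation*(kernel - 1) - 1) / stride) + 1."""
    return (length + 2 * pad - dilation * (kernel - 1) - 1) // stride + 1

assert conv_out_size(10, 3) == 8
assert conv_out_size(10, 3, stride=2, pad=1) == 5
assert conv_out_size(1, 1) == 1   # the dummy height dim of packed conv1d stays 1
```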

@supriyar supriyar requested a review from jerryzh168 May 13, 2020 15:36
@facebook-github-bot
This pull request has been merged in 2d221df.

@facebook-github-bot deleted the gh/supriyar/112/head branch May 17, 2020 14:18
titaiwangms added a commit that referenced this pull request Oct 5, 2022

According to #38248, quantized::conv1d_relu shares its packing parameters with Conv2d (kSpatialDim is also 2) but needs a different unpacking path. Therefore a new `QuantizedParamsType=Conv1D` is used to differentiate the two, and the exporter has to extract 1D information from the 2D packed parameters.
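Since the packed form is always 2D here, the export side has to project the per-dimension attributes back down to 1D. A pure-Python sketch of that extraction under the assumed layout (kernel stored as (1, k), with a dummy leading entry in stride/padding/dilation); the dict layout and names are invented for illustration:

```python
# Hypothetical model of recovering conv1d attributes from the
# 2D-packed parameters (kSpatialDim == 2): drop the dummy leading
# entry of each per-dimension attribute and squeeze the kernel height.

def extract_conv1d_params(packed):
    assert packed["kernel"][0] == 1, "dummy kernel height must be 1"
    return {
        "kernel":   packed["kernel"][1],
        "stride":   packed["stride"][1],
        "padding":  packed["padding"][1],
        "dilation": packed["dilation"][1],
    }

packed = {"kernel": (1, 3), "stride": [1, 2], "padding": [0, 1], "dilation": [1, 1]}
assert extract_conv1d_params(packed) == {
    "kernel": 3, "stride": 2, "padding": 1, "dilation": 1,
}
```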

pytorchmergebot pushed a commit that referenced this pull request Oct 25, 2022
Pull Request resolved: #85997
Approved by: https://github.com/BowenBao
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary: Pull Request resolved: pytorch#38248

Test Plan: Imported from OSS

Differential Revision: D21553661

fbshipit-source-id: 430b4c3244be0cf1a18bdf16788a2023c524c10b