
Base class for the quantized ConvTranspose #35370

Closed
z-a-f wants to merge 9 commits into gh/z-a-f/4/base from gh/z-a-f/4/head

Conversation


@z-a-f z-a-f commented Mar 25, 2020

Stack from ghstack:

Differential Revision: D20641812

@z-a-f z-a-f requested a review from apaszke as a code owner March 25, 2020 07:46
z-a-f pushed a commit that referenced this pull request Mar 25, 2020
ghstack-source-id: 573934f
Pull Request resolved: #35370

dr-ci Bot commented Mar 25, 2020

💊 CircleCI build failures summary and remediations

As of commit d985b9b (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no CircleCI failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

Comment thread torch/nn/quantized/modules/conv.py Outdated
@z-a-f z-a-f requested a review from jerryzh168 March 25, 2020 20:42
Contributor

@jerryzh168 jerryzh168 left a comment


had a few comments

Comment thread torch/nn/quantized/modules/conv.py
Comment thread torch/nn/quantized/modules/conv.py Outdated
        res.append(pad)
        return res

    def _output_padding(self, input, output_size, stride, padding, kernel_size):
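For context on what `_output_padding` has to do: a transposed convolution maps an input size to `(in - 1) * stride - 2 * padding + kernel_size + output_padding`, so given a requested `output_size` the needed `output_padding` is the residual above the minimum reachable size. A minimal one-dimensional sketch (not the PyTorch implementation; the helper name and error handling are illustrative):

```python
def output_padding_1d(in_size, output_size, stride, padding, kernel_size):
    # Smallest output a transposed conv can produce for this input size.
    min_size = (in_size - 1) * stride - 2 * padding + kernel_size
    pad = output_size - min_size
    # Any residual must be absorbable within one stride step.
    if not 0 <= pad < stride:
        raise ValueError(f"output size {output_size} is not reachable")
    return pad
```

For example, with `in_size=4`, `stride=2`, `padding=1`, `kernel_size=3`, the minimum output is 7, so requesting an output of 8 yields an `output_padding` of 1.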

Author

@z-a-f z-a-f Mar 26, 2020


Yes, I can inherit from the non-quantized class. However, I will have to inherit from both _ConvNd and nn.ConvTransposeNd. This will create a diamond problem, and we gotta be careful with the MRO.
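The diamond in question can be sketched in miniature (class names here are illustrative stand-ins, not the actual PyTorch classes): two conv bases share a common ancestor, and a class inheriting from both relies on Python's C3 linearization (the MRO) to order cooperative `super().__init__()` calls.

```python
class Base:
    def __init__(self):
        self.tags = ["Base"]

class QuantizedConvNd(Base):
    def __init__(self):
        super().__init__()
        self.tags.append("QuantizedConvNd")

class FloatConvTransposeNd(Base):
    def __init__(self):
        super().__init__()
        self.tags.append("FloatConvTransposeNd")

# Diamond: both parents derive from Base; the MRO visits each class
# exactly once, left to right, before the shared base.
class QuantizedConvTransposeNd(QuantizedConvNd, FloatConvTransposeNd):
    pass

print([c.__name__ for c in QuantizedConvTransposeNd.__mro__])
```

Because each `__init__` calls `super().__init__()`, `Base.__init__` runs once even though two parents inherit from it; swapping the base-class order changes which parent's overrides win, which is why the ordering needs care.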


class _ConvTransposeNd(_ConvNd):
    def __init__(self, in_channels, out_channels, kernel_size, stride,
                 padding, dilation, transposed, output_padding,
Contributor


Also, I think we probably don't need `transposed`; it should always be true for ConvTranspose, right?

Author


This is for consistency with the FP codebase.

Contributor


This doesn't make sense; shall we change it?

Contributor


can be a different PR

@z-a-f z-a-f requested a review from jerryzh168 March 26, 2020 07:15
Contributor

@jerryzh168 jerryzh168 left a comment


Looks good, shall we remove transpose argument in _ConvTransposeNd?

Author

z-a-f commented Mar 27, 2020

> Looks good, shall we remove transpose argument in _ConvTransposeNd?

We could, but in that case it will be inconsistent with nn.modules.conv._ConvTransposeNd. Do you think we should still do it?

Contributor

jerryzh168 commented Mar 28, 2020

> > Looks good, shall we remove transpose argument in _ConvTransposeNd?
>
> We could, but in that case it will be inconsistent with the nn.modules.conv._ConvTransposeNd. Do you think we should still do it?

I think we should modify the float module as well, but you can have a separate PR

@facebook-github-bot
Contributor

@z-a-f merged this pull request in c5662dd.

@facebook-github-bot facebook-github-bot deleted the gh/z-a-f/4/head branch April 13, 2020 14:16
ashishfarmer pushed a commit to ashishfarmer/pytorch that referenced this pull request Apr 13, 2020
Summary: Pull Request resolved: pytorch#35370

Test Plan: Imported from OSS

Differential Revision: D20641812

Pulled By: z-a-f

fbshipit-source-id: 42bb1ed96d6b6e0a5da6e693d02ff616c33d9ef6
MauiDesign pushed a commit to MauiDesign/PyTorchPyTorch that referenced this pull request Aug 16, 2020
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary: Pull Request resolved: pytorch#35370

Test Plan: Imported from OSS

Differential Revision: D20641812

Pulled By: z-a-f

fbshipit-source-id: 42bb1ed96d6b6e0a5da6e693d02ff616c33d9ef6

4 participants