Base class for the quantized ConvTranspose #35370

z-a-f wants to merge 9 commits into gh/z-a-f/4/base
Conversation
💊 CircleCI build failures summary and remediations (Dr. CI, as of commit d985b9b): 💚 Looks good so far! There are no CircleCI failures yet.
Differential Revision: [D20641812](https://our.internmc.facebook.com/intern/diff/D20641812)
```python
            res.append(pad)
        return res

    def _output_padding(self, input, output_size, stride, padding, kernel_size):
```
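For context on what a helper like this computes, here is a hedged, standalone sketch of the arithmetic for a single spatial dimension (dilation assumed to be 1; the function name `output_padding_1d` is illustrative, not the actual PyTorch implementation):

```python
# Sketch of the output_padding arithmetic for a transposed convolution,
# simplified to one spatial dimension with dilation = 1. This is an
# illustration, NOT the actual PyTorch _output_padding implementation.
def output_padding_1d(in_size, target_out, stride, padding, kernel_size):
    # Smallest output size a ConvTranspose can produce for these parameters:
    min_out = (in_size - 1) * stride - 2 * padding + kernel_size
    # output_padding adds extra size on one side to reach the requested
    # output size; it must be non-negative and smaller than the stride.
    pad = target_out - min_out
    if not 0 <= pad < stride:
        raise ValueError(f"requested output size {target_out} is out of range")
    return pad

# Example: in_size=4, stride=2, padding=1, kernel_size=3 gives min_out=7,
# so a requested output size of 8 needs output_padding=1.
```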
Is this the same as https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/conv.py#L496? Can we reuse that code?
Yes, I can inherit from the non-quantized class. However, I would have to inherit from both _ConvNd and nn.ConvTransposeNd. That creates a diamond problem, so we have to be careful with the MRO.
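To illustrate the diamond being discussed, here is a minimal sketch with stand-in class names (these are not the real PyTorch classes): two bases that share a common ancestor, combined in one subclass. Python's C3 linearization resolves the diamond so the shared ancestor is initialized exactly once, as long as every `__init__` uses cooperative `super()` calls:

```python
# Stand-in classes illustrating the inheritance diamond; the real classes
# would be nn.Module, the quantized _ConvNd, and the float _ConvTransposeNd.
class Module:                        # stands in for nn.Module
    def __init__(self):
        super().__init__()

class QConvNd(Module):               # stands in for the quantized _ConvNd
    def __init__(self):
        super().__init__()           # cooperative super() keeps the MRO linear

class FloatConvTransposeNd(Module):  # stands in for nn.modules.conv._ConvTransposeNd
    def __init__(self):
        super().__init__()

class QConvTransposeNd(QConvNd, FloatConvTransposeNd):
    pass

# C3 linearization visits each base once, shared ancestor last:
print([c.__name__ for c in QConvTransposeNd.__mro__])
# → ['QConvTransposeNd', 'QConvNd', 'FloatConvTransposeNd', 'Module', 'object']
```

The base-class order matters: listing the quantized base first makes its methods win any name clashes with the float base.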
```python
class _ConvTransposeNd(_ConvNd):
    def __init__(self, in_channels, out_channels, kernel_size, stride,
                 padding, dilation, transposed, output_padding,
```
Also, I think we probably don't need transposed; it should always be True for ConvTranspose, right?
This is for consistency with the FP codebase.

This doesn't make sense, shall we change it?
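For background on why the flag exists at all: in the float `_ConvNd`, `transposed` selects the weight layout, since transposed convolutions swap the channel axes of the weight tensor. A simplified sketch (the helper name `weight_shape` is illustrative, not the actual PyTorch source):

```python
# Simplified sketch of how the float _ConvNd uses the transposed flag to
# choose the weight tensor layout. Illustrative only, not the real source.
def weight_shape(in_channels, out_channels, kernel_size, groups, transposed):
    if transposed:
        # Transposed conv: weight is (in_channels, out_channels // groups, *k)
        return (in_channels, out_channels // groups, *kernel_size)
    # Regular conv: weight is (out_channels, in_channels // groups, *k)
    return (out_channels, in_channels // groups, *kernel_size)

# e.g. a ConvTranspose2d with in=8, out=16, kernel=(3, 3), groups=1:
#   weight_shape(8, 16, (3, 3), 1, True) → (8, 16, 3, 3)
```

Since a ConvTranspose base class would always pass `transposed=True`, the argument is redundant there; keeping it only mirrors the float module's signature.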
jerryzh168 left a comment:

Looks good, shall we remove the transposed argument in _ConvTransposeNd?
We could, but in that case it will be inconsistent with the float module.
I think we should modify the float module as well, but you can do that in a separate PR.
Summary: Pull Request resolved: pytorch#35370
Test Plan: Imported from OSS
Differential Revision: D20641812
Pulled By: z-a-f
fbshipit-source-id: 42bb1ed96d6b6e0a5da6e693d02ff616c33d9ef6
ghstack-source-id: fc2ec1c
Stack from ghstack:
Differential Revision: D20641812