add type annotations to torch.nn.quantized.modules.conv #49702
guilhermeleobas wants to merge 5 commits into pytorch:master from guilhermeleobas:quantized-conv
Conversation
💊 CI failures summary and remediations
As of commit 2f41a83 (more details on the Dr. CI page):
1 job timed out.
🚧 1 fixed upstream failure: this was probably caused by an upstream breakage that has already been fixed.
Codecov Report

@@            Coverage Diff             @@
##           master   #49702      +/-   ##
==========================================
+ Coverage   75.14%   80.69%    +5.54%
==========================================
  Files        1891     1895        +4
  Lines      205325   205464      +139
==========================================
+ Hits       154291   165795    +11504
+ Misses      51034    39669    -11365
rgommers left a comment
Thanks @guilhermeleobas. This looks good. I made a couple of comments on how the ignores could be avoided, but don't act on them (yet), because I'm not sure the changes are desirable. Let's see what @ezyang and @malfet think.
The PR could also be merged as is, that'd be perfectly fine in practice.
So the issue is that the subclass __init__ methods have two fewer parameters (no transposed, output_padding) than _ConvNd. The current fix works, but technically the way the class hierarchy is organized is not correct; a better way to do it would be:
def __init__(self, in_channels, out_channels, kernel_size, stride=1,
             padding=0, dilation=1, groups=1, bias=True,
             padding_mode='zeros'):
    # All subclasses have this signature
    raise NotImplementedError

def _init(self, in_channels, out_channels, kernel_size, stride,
          padding, dilation,
          transposed, output_padding,
          groups, bias,
          padding_mode='zeros'):
    super(_ConvNd, self).__init__()
and then subclasses should call super(Conv1d, self)._init(...) (see the sketch after this comment).
I think that would be a good change, but it is only possible if there are no subclasses of _ConvNd floating around (it's private, but you never know).
Let's see what other reviewers think.
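For illustration, a minimal sketch of the proposed restructuring, using a hypothetical simplified Conv1d subclass (the real quantized module carries extra arguments and state that are elided here):

```python
import torch.nn as nn

class _ConvNd(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1,
                 padding=0, dilation=1, groups=1, bias=True,
                 padding_mode='zeros'):
        # All subclasses share this public signature; the base class
        # itself is never constructed directly.
        raise NotImplementedError

    def _init(self, in_channels, out_channels, kernel_size, stride,
              padding, dilation, transposed, output_padding,
              groups, bias, padding_mode='zeros'):
        super(_ConvNd, self).__init__()
        # ... store attributes and initialize parameters (elided)

class Conv1d(_ConvNd):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1,
                 padding=0, dilation=1, groups=1, bias=True,
                 padding_mode='zeros'):
        # transposed and output_padding are fixed by the dimensionality,
        # so the public signature matches _ConvNd.__init__ exactly and
        # the override no longer upsets the type checker.
        super(Conv1d, self)._init(
            in_channels, out_channels, kernel_size, stride,
            padding, dilation, False, (0,),
            groups, bias, padding_mode)
```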
OK, I'll wait for the other reviewers before changing this part of the code.
Soooo it is a little tricky. torch.nn._ConvNd is sadly de facto public API; see https://github.com/pytorch/fairseq/blob/master/fairseq/modules/quantization/scalar/modules/qconv.py#L14 for an example. But this is quantized ConvNd, and this code is relatively new, so there ought to be fewer uses. cc @jerryzh168 for more thoughts.
I'm trying to decide if I agree that the alternate version is better.
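To make the concern concrete, here is a hypothetical sketch of third-party code in the spirit of the fairseq module linked above (names are illustrative, not fairseq's actual code):

```python
class ThirdPartyQConv(_ConvNd):
    def __init__(self, *args, **kwargs):
        # Works against the current hierarchy, but under the proposed
        # restructuring this would raise NotImplementedError, because
        # the real construction logic moves to _init.
        super(ThirdPartyQConv, self).__init__(*args, **kwargs)
```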
I guess the high level question I have is why the subclass init methods have fewer parameters. This seems like a blatant oversight to me.
> I guess the high level question I have is why the subclass init methods have fewer parameters
Because those two parameters are implicitly defined by the dimensionality of the subclass: e.g. Conv1d hardcodes transposed, output_padding to False, (0,). It would not make sense for the user to set those when initializing Conv1d.
transposed is always False for these modules, so exposing it really doesn't make much sense; the code was probably just copied from the non-quantized version or something like that. Concretely, the delegation looks roughly like the sketch below.
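A simplified sketch of that pattern as it stands today (not the module's exact code):

```python
class Conv1d(_ConvNd):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1,
                 padding=0, dilation=1, groups=1, bias=True,
                 padding_mode='zeros'):
        # transposed and output_padding never vary for a 1d convolution,
        # so they are supplied here rather than exposed to the user.
        super(Conv1d, self).__init__(
            in_channels, out_channels, kernel_size, stride, padding,
            dilation, False, (0,), groups, bias, padding_mode)
```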
OK fair enough. The suggested alternative seems like a reasonable way to shut up the type checker then.
I've updated the code to use _init.
I see, are we going to have similar changes in https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/conv.py as well?
facebook-github-bot left a comment
@walterddr has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
facebook-github-bot left a comment
@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@walterddr I accidentally commandeered the diff, feel free to commandeer it back

No problem at all @ezyang, please go ahead and merge it. :-)
facebook-github-bot left a comment
@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: closes gh-49700

No mypy issues were found in the first three entries deleted from `mypy.ini`:

```
[mypy-torch.nn.qat.modules.activations]
ignore_errors = True

[mypy-torch.nn.qat.modules.conv]
ignore_errors = True

[mypy-torch.nn.quantized.dynamic.modules.linear]
ignore_errors = True
```

Pull Request resolved: pytorch#49702
Reviewed By: walterddr, zou3519
Differential Revision: D25767119
Pulled By: ezyang
fbshipit-source-id: cb83e53549a299538e1b154cf8b79e3280f7392a
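As a side note, with those ignore_errors entries removed these modules are type-checked like the rest of the codebase; one way to verify locally, assuming a PyTorch checkout with mypy installed, is:

```
mypy --config-file mypy.ini torch/nn/quantized/modules/conv.py
```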