add type annotations to torch.nn.modules.conv #49564

Closed

guilhermeleobas wants to merge 17 commits into pytorch:master from guilhermeleobas:conv

Conversation

@guilhermeleobas (Collaborator)

closes gh-49563

@guilhermeleobas added the module: typing label Dec 17, 2020
@guilhermeleobas self-assigned this Dec 17, 2020
@facebook-github-bot (Contributor) commented Dec 17, 2020

💊 CI failures summary and remediations

As of commit 7e0e612 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

@codecov (bot) commented Dec 25, 2020

Codecov Report

Merging #49564 (2239d74) into master (b76822e) will increase coverage by 39.29%.
The diff coverage is 100.00%.

@@             Coverage Diff             @@
##           master   #49564       +/-   ##
===========================================
+ Coverage   41.40%   80.69%   +39.29%     
===========================================
  Files         511     1895     +1384     
  Lines       68789   205461   +136672     
===========================================
+ Hits        28480   165800   +137320     
+ Misses      40309    39661      -648     

@guilhermeleobas marked this pull request as ready for review December 25, 2020 19:04
Comment thread: torch/nn/modules/conv.py (outdated)
Collaborator

With "pytorch does not support" I think you mean that TorchScript doesn't support it, right? It would be useful to document this List/Tuple/Sequence stuff (maybe on the wiki), because it will come back a lot.

  • Here all these things actually are tuples (because they're coerced to that with _single, _pair, etc.), and Tuple is supported by TorchScript. But everything is still annotated as List[int], and the JIT seems happy with that. It's unclear to me why it shouldn't be annotated as Tuple[int, ...] instead. I tried that on this PR, and it seems not to fail the tests in test/jit/test_models.py (however, it doesn't fail for Sequence[int] either, which I did not expect).
  • Conv*D.__init__ uses _size_1_t etc. annotations, which are unions. The JIT seems happy with that as well; it's unclear to me why, since Union is not supported.
  • It'd be useful to add Sequence to https://pytorch.org/docs/master/jit_language_reference.html#unsupported-typing-constructs
  • Can we rely on tests breaking if we change one of these annotations in a way that would break TorchScript? I assume yes, but if not we need a test/check procedure (e.g. check if the feature is in https://pytorch.org/docs/master/jit_builtin_functions.html#supported-pytorch-functions).

@suo, @ezyang could you comment on this?
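
For reference, the coercion mentioned in the first bullet is done by small helpers in torch/nn/modules/utils.py; a paraphrased sketch (details may differ from the actual source):

import collections.abc
from itertools import repeat

def _ntuple(n):
    # Accept either a scalar or an iterable; always return a tuple of length n.
    def parse(x):
        if isinstance(x, collections.abc.Iterable):
            return tuple(x)
        return tuple(repeat(x, n))
    return parse

_single = _ntuple(1)  # Conv1d
_pair = _ntuple(2)    # Conv2d
_triple = _ntuple(3)  # Conv3d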

Collaborator Author

I've added Sequence to the list of unsupported typing constructs.

Contributor

Actually, I am not sure this is necessary. I guess the reason you want to use Sequence here is that _output_padding returns a List[int] but its callers expect a Tuple[int, ...]. But it is in fact a helper function used only by this module, so we can change its return type.

If that's the case, you could change the argument types to Tuple[int, ...] as well.

Contributor

Sorry for the late reply, but yes, the problem with Tuple[int, ...] is that torch.jit.script cannot parse it correctly.
I tried to fix that in #44774, but it requires a bit more work.
_size_N_t is a special type defined as Union[int, Tuple[int * N]], which the torch.jit runtime automagically promotes to a list.
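
For context, the _size_N_t aliases are defined in torch/nn/common_types.py; roughly like this (paraphrased; the actual file defines more sizes and aliases):

from typing import Tuple, TypeVar, Union

T = TypeVar('T')

# A scalar is broadcast to every spatial dimension; a tuple sets each one.
_scalar_or_tuple_1_t = Union[T, Tuple[T]]
_scalar_or_tuple_2_t = Union[T, Tuple[T, T]]

_size_1_t = _scalar_or_tuple_1_t[int]
_size_2_t = _scalar_or_tuple_2_t[int]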

@mruberry requested a review from walterddr December 28, 2020 17:11

@mruberry (Collaborator)

@walterddr Are you the correct person to review this?

@mruberry added the triaged label Dec 28, 2020
Comment thread: torch/nn/modules/conv.py, lines 535 to 537 (outdated)
Contributor

Suggested change

-    def _output_padding(self, input: Tensor, output_size: Optional[List[int]],
-                        stride: List[int], padding: List[int], kernel_size: List[int],
-                        dilation: Optional[List[int]] = None) -> List[int]:
+    def _output_padding(self, input: Tensor, output_size: Optional[List[int]],
+                        stride: Tuple[int, ...], padding: Tuple[int, ...], kernel_size: Tuple[int, ...],
+                        dilation: Optional[Tuple[int, ...]] = None) -> Tuple[int, ...]:

Then change the final return to

-        return ret
+        return tuple(ret)

Collaborator Author

Thanks! I've updated the source code with the suggested changes

Collaborator Author

I'll revert the commit since one of the jit tests is failing with the message:

RuntimeError: 
Expression of type dots cannot be used in a type expression:
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 536
    def _output_padding(self, input: Tensor, output_size: Optional[List[int]],
                        stride: Tuple[int, ...], padding: Tuple[int, ...], kernel_size: Tuple[int, ...],
                                           ~~~ <--- HERE
                        dilation: Optional[Tuple[int, ...]] = None) -> Tuple[int, ...]:
        if output_size is None:
'ConvTranspose1d._output_padding' is being compiled since it was called from 'ConvTranspose1d.forward'
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 688
            raise ValueError('Only `zeros` padding mode is supported for ConvTranspose1d')
    
        output_padding = self._output_padding(
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            input, output_size, self.stride, self.padding, self.kernel_size, self.dilation)
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
        return F.conv_transpose1d(
            input, self.weight, self.bias, self.stride, self.padding,
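
For reference, the failure reproduces outside the conv module; a minimal standalone sketch:

import torch
from typing import Tuple

def probe(x: Tuple[int, ...]) -> int:
    return len(x)

# At the time of this PR, this raised:
# RuntimeError: Expression of type dots cannot be used in a type expression
torch.jit.script(probe)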

Collaborator

Expression of type dots cannot be used in a type expression:

That seems to disagree with the Tuple docs in https://pytorch.org/docs/master/jit_language_reference.html#supported-type. It looks like only fixed-length tuples are supported? If so, it would be good to fix that docs entry.

This would explain my question above about everything being typed List[int].

Contributor

Yes, I agree the doc is confusing. It should be Tuple[T1, T2] instead of Tuple[T1, T2, ...], since the latter means a variable-length tuple; see https://docs.python.org/3/library/typing.html#typing.Tuple

Contributor

Did a bit of research. It seems it is in fact only affecting the Tuple[T, ...] annotation. I checked that the following actually works, so the JIT can parse an ellipsis correctly:

import torch

@torch.jit.script
def foo():
    # Scripting succeeds: the JIT handles the ellipsis in indexing.
    n = torch.ones(16).reshape(2, 2, 2, 2)
    return n[1, ..., 1]  # equivalent to n[1, :, :, 1]

@walterddr (Contributor) left a comment

Looks good to me overall. Please fix the remaining comments/docs and we should be good to go.

@malfet (Contributor) commented Dec 30, 2020

@walterddr Are you the correct person to review this?

I think those changes need to be done in coordination with the JIT team, because we seem to lack a set of tests that validates that classes in torch.nn.modules remain JITable; see #47528 for an example of a regression I introduced a while back after adding type annotations.

@rgommers (Collaborator)

I think those changes need to be done in coordination with the JIT team, because we seem to lack a set of tests that validates that classes in torch.nn.modules remain JITable

That was my concern indeed. In general that's true for all type annotation PRs: they could affect whether something is jit-able. We can't go bother the JIT team for all those PRs, so if test coverage is lacking, maybe we need a test procedure to verify that anything touched in nontrivial ways has test coverage. Maybe something as simple as: insert an annotation known to not work with TorchScript, see if any tests fail, and report on the PR that that was indeed the case. And if we find untested functionality, include a new JIT test with the PR.

Something like that would be a hassle, so I hope it's not necessary, but it'd still be better than introducing more regressions.
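
A minimal sketch of what such a JIT coverage test could look like (hypothetical; not an existing test in the repo, and the function name is illustrative):

import torch
import torch.nn as nn

def test_conv_modules_scriptable():
    # Scripting fails loudly if a type annotation change breaks TorchScript
    # support for any of these modules.
    for module in (nn.Conv1d(3, 8, 3), nn.Conv2d(3, 8, 3), nn.Conv3d(3, 8, 3),
                   nn.ConvTranspose1d(3, 8, 3), nn.ConvTranspose2d(3, 8, 3)):
        torch.jit.script(module)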

@guilhermeleobas (Collaborator Author)

I have a few other PRs open that annotate other torch.nn.modules.* submodules. Would it be smart to put those on hold until we can verify the annotations do not introduce any regressions?

@rgommers (Collaborator)

I have a few other PRs open that annotate other torch.nn.modules.* submodules. Would it be smart to put those on hold until we can verify the annotations do not introduce any regressions?

I think so. Can you comment on each one that it should not be merged until we've resolved the discussion here?

@guilhermeleobas (Collaborator Author)

@walterddr Are you the correct person to review this?

I think those changes need to be done in coordination with the JIT team, because we seem to lack a set of tests that validates that classes in torch.nn.modules remain JITable; see #47528 for an example of a regression I introduced a while back after adding type annotations.

@malfet Is the code posted on #47528 supposed to work?

import torch
from torch import nn

class Pad(torch.nn.Module):
    def forward(self, x):
        pad_op = nn.ZeroPad2d(padding=(10, 20, 0, 0))
        return pad_op(x)

m = torch.jit.script(Pad())

@malfet (Contributor) commented Jan 5, 2021

@guilhermeleobas it needs to be modified a bit, to something like the following (class instance creation is not allowed inside jit-compiled forward()/backward() methods):

import torch
from torch import nn

class Pad(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.pad_op = nn.ZeroPad2d(padding=(10, 20, 0, 0))

    def forward(self, x):
        return self.pad_op(x)

m = torch.jit.script(Pad())

@walterddr (Contributor)

Could you also add tests for the conv1/2/3d modules, similar to #49494? Thanks.
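
The exact tests in #49494 are not reproduced here; a sketch of the kind of check being requested, which also exercises _output_padding via output_size (the test name is illustrative):

import torch
import torch.nn as nn

def test_conv_transpose2d_scriptable_forward():
    m = torch.jit.script(nn.ConvTranspose2d(3, 8, kernel_size=3, stride=2))
    x = torch.randn(1, 3, 10, 10)
    # output_size drives _output_padding: the base output is (10 - 1) * 2 + 3 = 21,
    # so requesting 22 requires an output_padding of 1, which _output_padding computes.
    y = m(x, [22, 22])
    assert y.shape == (1, 8, 22, 22)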

@guilhermeleobas (Collaborator Author)

The PR is ready for review. I've included tests for the Conv(Transpose)Nd classes.

@walterddr (Contributor)

The change is failing a JIT test, though.

@walterddr closed this Jan 14, 2021

@walterddr (Contributor)

closing & reopening PR to trigger CI again

@walterddr reopened this Jan 14, 2021
@facebook-github-bot (Contributor) left a comment

@walterddr has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor)

@walterddr merged this pull request in 0d981ee.

laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary:
closes pytorch#49563

Pull Request resolved: pytorch#49564

Reviewed By: albanD

Differential Revision: D25917441

Pulled By: walterddr

fbshipit-source-id: 491dc06cfc1bbf694dfd9ccefca4f55488a931b2

Labels

cla signed, Merged, module: typing, open source, triaged

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Enable torch.nn.modules.conv typechecks during CI

7 participants