add type annotations to torch.nn.modules.conv #49564
guilhermeleobas wants to merge 17 commits into pytorch:master from guilhermeleobas:conv
Conversation
💊 CI failures summary and remediations: as of commit 7e0e612 (more details on the Dr. CI page): 💚 Looks good so far! There are no failures yet. 💚
Codecov Report
@@            Coverage Diff             @@
##           master   #49564      +/-   ##
===========================================
+ Coverage   41.40%   80.69%   +39.29%
===========================================
  Files         511     1895     +1384
  Lines       68789   205461   +136672
===========================================
+ Hits        28480   165800   +137320
+ Misses      40309    39661      -648
With "pytorch does not support" I think you mean that TorchScript doesn't support it, right? It would be useful to document this List/Tuple/Sequence stuff (maybe on the wiki), because it will come back a lot.
- Here all these things actually are tuples (because they're coerced to that with
_single,_double, etc.). andTupleis supported by torchscript. But everything is still annotated asList[int], and the JIT seems happy with that. Unclear to me why it shouldn't be annotated asTuple[int, ...]instead. I tried that on this PR, and it seems to not fail the tests intest/jit/test_models.py(however, it doesn't fail forSequence[int]either, which was unexpected for me). Conv*D.__init__uses_size_1_tetc. annotations, which are unions. The JIT seems happy with that as well, it's unclear to me why (Unionis not supported).- It'd be useful to add
Sequenceto https://pytorch.org/docs/master/jit_language_reference.html#unsupported-typing-constructs - Can we rely on tests breaking if we change one of these annotations in a way that would break torchscript? I assume yes, but if not we need a test/check procedure (e.g. check if feature is in https://pytorch.org/docs/master/jit_builtin_functions.html#supported-pytorch-functions).
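For reference, a minimal sketch of the coercion mentioned in the first bullet (these are internal helpers from torch.nn.modules.utils; behavior shown as I understand it, not verified against every version):

import torch
from torch.nn.modules.utils import _single, _pair, _triple

# Each helper repeats a scalar N times and passes iterables through as tuples:
print(_single(3))     # (3,)
print(_pair(3))       # (3, 3)
print(_pair((2, 4)))  # (2, 4)
print(_triple(5))     # (5, 5, 5)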
I've added Sequence to the list of unsupported typing constructs.
Actually, I am not sure this is necessary. I guess the reason you want to use Sequence here is that _output_padding returns a List[int] while the others expect a Tuple[int, ...]. But in fact it is a helper function used only by this module, so we can change its return type.
If that's the case, you could change the argument types to Tuple[int, ...] as well, no?
Sorry for the late reply, but yes, the problem with Tuple[int, ...] is that torch.jit.script cannot parse it correctly.
I've tried to fix that in #44774, but it requires a bit more work.
_size_N_t is a special type, defined as Union[int, Tuple[int * N]] (i.e. an int or a tuple of N ints), which gets automagically promoted to a list by the torch.jit runtime.
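For context, roughly how these aliases are defined in torch.nn.common_types (paraphrased from memory; the exact names and layout may differ):

from typing import Tuple, TypeVar, Union

T = TypeVar('T')

# An argument that may be a scalar or a 2-tuple of the same type:
_scalar_or_tuple_2_t = Union[T, Tuple[T, T]]

# The alias used by Conv2d and friends: an int or a pair of ints.
_size_2_t = _scalar_or_tuple_2_t[int]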
@walterddr Are you the correct person to review this?
Suggested change:

-    def _output_padding(self, input: Tensor, output_size: Optional[List[int]],
-                        stride: List[int], padding: List[int], kernel_size: List[int],
-                        dilation: Optional[List[int]] = None) -> List[int]:
+    def _output_padding(self, input: Tensor, output_size: Optional[List[int]],
+                        stride: Tuple[int, ...], padding: Tuple[int, ...], kernel_size: Tuple[int, ...],
+                        dilation: Optional[Tuple[int, ...]] = None) -> Tuple[int, ...]:
Then change the final return to
- return ret
+ return tuple(ret)
Thanks! I've updated the source code with the suggested changes.
I'll revert the commit since one of the JIT tests is failing with the message:
RuntimeError:
Expression of type dots cannot be used in a type expression:
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 536
def _output_padding(self, input: Tensor, output_size: Optional[List[int]],
stride: Tuple[int, ...], padding: Tuple[int, ...], kernel_size: Tuple[int, ...],
~~~ <--- HERE
dilation: Optional[Tuple[int, ...]] = None) -> Tuple[int, ...]:
if output_size is None:
'ConvTranspose1d._output_padding' is being compiled since it was called from 'ConvTranspose1d.forward'
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 688
raise ValueError('Only `zeros` padding mode is supported for ConvTranspose1d')
output_padding = self._output_padding(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
input, output_size, self.stride, self.padding, self.kernel_size, self.dilation)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
return F.conv_transpose1d(
input, self.weight, self.bias, self.stride, self.padding,
> Expression of type dots cannot be used in a type expression

That seems to disagree with the Tuple docs at https://pytorch.org/docs/master/jit_language_reference.html#supported-type. It looks like only fixed-length tuples are supported? If so, it would be good to fix that docs entry.
This would explain my question above about everything being typed List[int].
Yes, I agree the doc is confusing. It should be Tuple[T1, T2] instead of Tuple[T1, T2, ...], which means a variable-length tuple; see https://docs.python.org/3/library/typing.html#typing.Tuple
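To make the distinction concrete, a minimal sketch (not from the PR) of how the two annotation styles behave under scripting:

import torch
from typing import Tuple

# Fixed-length tuple annotations are accepted by torch.jit.script:
@torch.jit.script
def fixed(x: Tuple[int, int]) -> int:
    return x[0] + x[1]

print(fixed((1, 2)))  # 3

# Variadic tuples are not; uncommenting this reproduces the error above:
# @torch.jit.script
# def variadic(x: Tuple[int, ...]) -> int:
#     return x[0]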
Did a bit of research. It seems like in fact it is only affecting the Tuple[T, ...] annotation. I checked that the following actually works, so the JIT can parse ellipsis correctly:

import torch

@torch.jit.script
def foo():
    n = torch.ones(16).reshape(2, 2, 2, 2)
    return n[1, ..., 1]  # equivalent to n[1, :, :, 1]
walterddr left a comment
Looks good to me overall. Please fix the remaining comments/docs and we should be good to go.
> did a bit of research. seems like in fact it is only affecting the Tuple[T, ...] annotation. [...]
I think those changes need to be done in coordination with the JIT team, because we seem to lack a set of tests validating that the classes in torch.nn.modules remain JIT-able; see #47528 for an example of a regression that I introduced a while back after adding type annotations.
That was my concern indeed. In general that's true for all type-annotation PRs: they could affect whether something is jit-able. We can't go bother the JIT team for all those PRs, so if test coverage is lacking maybe we need some test procedure to verify that anything touched in nontrivial ways has coverage. Maybe something as simple as: insert an annotation known to not work with TorchScript, see if any tests fail, and report on the PR that they did. And if we find untested functionality, include a new JIT test with the PR. Something like that would be a hassle, so I hope it's not necessary, but it'd still be better than introducing more regressions.
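As an illustration, a minimal sketch of the kind of scriptability check being discussed (a hypothetical test, not the one actually added in this PR):

import torch

def test_conv2d_is_scriptable():
    # Scripting walks the type annotations on Conv2d and its helpers,
    # so a TorchScript-incompatible annotation would raise here.
    m = torch.jit.script(torch.nn.Conv2d(3, 8, kernel_size=3))
    out = m(torch.randn(1, 3, 16, 16))
    assert out.shape == (1, 8, 14, 14)

test_conv2d_is_scriptable()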
I have a few other PRs open which annotate other modules.
I think so. Can you comment on each one to not merge until we've resolved the discussion here?
This reverts commit 8af4568276943146fa804fcbc05e295dd345ce44.
@malfet Is the code posted on #47528 supposed to work?

import torch
from torch import nn

class Pad(torch.nn.Module):
    def forward(self, x):
        pad_op = nn.ZeroPad2d(padding=(10, 20, 0, 0))
        return pad_op(x)

m = torch.jit.script(Pad())
@guilhermeleobas it needs to be modified a bit, to something like the following, as class instance creation is not allowed inside jit-compiled forward()/backward() methods:
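(The code that followed was not captured here; a plausible sketch, assuming the suggested fix is to construct the submodule in __init__ rather than inside the scripted forward():)

import torch
from torch import nn

class Pad(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Build the submodule once at construction time, outside the
        # scripted forward(), where class instantiation is not allowed.
        self.pad_op = nn.ZeroPad2d(padding=(10, 20, 0, 0))

    def forward(self, x):
        return self.pad_op(x)

m = torch.jit.script(Pad())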
Could you also add tests for the conv1/2/3d modules similar to #49494? Thanks.
PR is ready for review. I've included tests for the conv1/2/3d modules.
The change is failing the JIT test, though.
closing & reopening PR to trigger CI again |
facebook-github-bot left a comment
@walterddr has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@walterddr merged this pull request in 0d981ee.
Summary: closes gh-49563
Pull Request resolved: pytorch#49564
Reviewed By: albanD
Differential Revision: D25917441
Pulled By: walterddr
fbshipit-source-id: 491dc06cfc1bbf694dfd9ccefca4f55488a931b2