add type annotations to torch.nn.modules.normalization #49035
guilhermeleobas wants to merge 6 commits into pytorch:master from guilhermeleobas:normalization
Conversation
Codecov Report
```
@@            Coverage Diff             @@
##           master   #49035      +/-   ##
==========================================
- Coverage   80.71%   80.70%   -0.01%
==========================================
  Files        1904     1904
  Lines      206598   206598
==========================================
- Hits       166750   166742       -8
- Misses      39848    39856       +8
```
rgommers left a comment
Thanks @guilhermeleobas. Just a couple of minor comments.
rgommers left a comment
LGTM now, thanks @guilhermeleobas
```diff
             input, self.normalized_shape, self.weight, self.bias, self.eps)

-    def extra_repr(self) -> Tensor:
+    def extra_repr(self) -> str:
```
wonder why it was not caught before.
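For context, `extra_repr` feeds `nn.Module.__repr__`, so it must return a string; a minimal sketch of the convention (an illustrative module, not the PyTorch source):

```python
import torch

class TinyNorm(torch.nn.Module):
    """Hypothetical module illustrating the extra_repr convention."""
    def __init__(self, normalized_shape: int, eps: float = 1e-5) -> None:
        super().__init__()
        self.normalized_shape = (normalized_shape,)
        self.eps = eps

    def extra_repr(self) -> str:
        # nn.Module.__repr__ embeds this string, hence the str return type;
        # annotating it as Tensor could never have been correct.
        return f'{self.normalized_shape}, eps={self.eps}'

print(TinyNorm(10))  # TinyNorm((10,), eps=1e-05)
```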
```diff
-_shape_t = Union[int, List[int], Size]
+_shape_t = Union[int, Sequence[int], Size]
```
Do we have a test to validate JIT-ability? Since we know Sequence & Tuple don't work with JIT.
Actually this is not necessary. I am reverting this particular change.
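For reference, a minimal way one might sanity-check JIT-ability of the annotated module (a sketch, not a test this PR adds):

```python
import torch

# Scripting fails at compile time if the annotated types are not
# supported by TorchScript, so this doubles as a JIT-ability check.
scripted = torch.jit.script(torch.nn.LayerNorm([10]))
out = scripted(torch.randn(2, 10))
print(out.shape)  # torch.Size([2, 10])
```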
Do not merge this PR until one checks whether the annotations introduce any regression. See:
facebook-github-bot left a comment
@walterddr has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Tests are covered by test cases in the …
walterddr left a comment
Looks good. One minor comment.

Also, could you confirm that the coverage for the LayerNorm1d constructor arguments in torch.testing._internal.common_nn actually consists of non-tuple-type input? From what I see, all of those constructor_args are lists; not sure if they collapse into a single integer.
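As a quick illustration (my own check, not part of the PR), LayerNorm accepts an int, a list, or a torch.Size, and normalizes all of them to a tuple:

```python
import torch

# normalized_shape may be an int, a list, or a torch.Size;
# the constructor stores a tuple in every case.
for shape in (10, [10], torch.Size([10])):
    ln = torch.nn.LayerNorm(shape)
    print(type(ln.normalized_shape), ln.normalized_shape)  # <class 'tuple'> (10,)
```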
```diff
             normalized_shape = (normalized_shape,)  # type: ignore[assignment]
         self.normalized_shape = tuple(normalized_shape)  # type: ignore[arg-type]
```

normalized_

Suggested change:

```diff
-            normalized_shape = (normalized_shape,)  # type: ignore[assignment]
-        self.normalized_shape = tuple(normalized_shape)  # type: ignore[arg-type]
+            normalized_shape_ = (normalized_shape,)
+        self.normalized_shape = tuple(normalized_shape_)
```
Also wondering: do we even need the second tuple() wrapper?
I guess the second tuple() is to make sure self.normalized_shape is a tuple and not a list, for instance.
Also, I do not think the suggested change works as expected: if the input argument normalized_shape is a list, the if block will never be executed and normalized_shape_ will not be defined.
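For context, a minimal sketch of the constructor logic in question (paraphrased from the usual LayerNorm shape handling, not the verbatim source), showing why binding a new name only inside the if branch would leave it undefined on the list path:

```python
import numbers
from typing import List, Union

import torch

_shape_t = Union[int, List[int], torch.Size]

def normalize_shape(normalized_shape: _shape_t) -> tuple:
    # Paraphrased: wrap a bare int, then force a tuple so lists and
    # torch.Size are stored uniformly.
    if isinstance(normalized_shape, numbers.Integral):
        normalized_shape = (normalized_shape,)  # runs only on the int path
    # If the line above had bound a new name (normalized_shape_), this
    # return would raise NameError whenever a list was passed in.
    return tuple(normalized_shape)

print(normalize_shape(10))               # (10,)
print(normalize_shape([3, 4]))           # (3, 4)
print(normalize_shape(torch.Size([5])))  # (5,)
```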
facebook-github-bot left a comment
@walterddr has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@walterddr merged this pull request in 4411b5a.
Summary: Fixes pytorch#49034

Pull Request resolved: pytorch#49035

Test Plan: Imported from GitHub, without a `Test Plan:` line. Force rebased to deal with merge conflicts.

Reviewed By: zhangguanheng66

Differential Revision: D25767065

Pulled By: walterddr

fbshipit-source-id: ffb904e449f137825824e3f43f3775a55e9b011b
Fixes #49034