
Add type annotations to torch.nn.modules.padding #49494

Closed
guilhermeleobas wants to merge 7 commits into pytorch:master from guilhermeleobas:padding

Conversation

@guilhermeleobas
Collaborator

Closes gh-49492

@guilhermeleobas added the module: typing (Related to mypy type annotations) label Dec 16, 2020
@guilhermeleobas self-assigned this Dec 16, 2020
@facebook-github-bot
Contributor

facebook-github-bot commented Dec 16, 2020

💊 CI failures summary and remediations

As of commit 9c15796 (more details on the Dr. CI page):


  • 1/1 failures possibly* introduced in this PR
    • 1/1 non-CircleCI failure(s)

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

This comment has been revised 46 times.


"""
padding: _size_2_t
padding: Tuple[int, int]

Contributor

good catch!
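
For context, a minimal sketch of the annotation pattern under discussion (MyPad is a hypothetical class, not the PR's diff): the constructor parameter keeps the flexible _size_2_t (an int or a 2-tuple), while the attribute is annotated as a concrete Tuple[int, int] because _pair() normalizes it.

from typing import Tuple

import torch.nn as nn
from torch.nn.common_types import _size_2_t
from torch.nn.modules.utils import _pair

class MyPad(nn.Module):
    # Attribute annotation: after __init__, padding is always a 2-tuple.
    padding: Tuple[int, int]

    def __init__(self, padding: _size_2_t) -> None:
        super().__init__()
        self.padding = _pair(padding)  # promotes an int p to (p, p)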

Contributor

Before merging the change, please check that this code is still JIT-compilable and that scalar-to-tuple promotions work as expected (_size_N_t are special types which get automatic promotion from integer to list).
Requesting changes just to ask that the PR description mention which test covers that torch.nn.modules.padding is still JIT-able after the change.
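
A minimal sketch of the kind of check being requested (not the PR's actual test; the module and input here are illustrative): script a padding module built from a scalar argument and compare it against eager mode.

import torch
import torch.nn as nn

m = nn.ReflectionPad2d(2)       # the scalar 2 is promoted to a 4-tuple internally
scripted = torch.jit.script(m)  # raises if the annotations break scriptability
x = torch.randn(1, 1, 3, 3)
assert torch.equal(scripted(x), m(x))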

Contributor

Actually, in this particular class, padding is always set by

self.padding = _pair(padding)

which always returns a tuple of two.
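
A quick check of that promotion (illustrative, not part of the PR):

from torch.nn.modules.utils import _pair

assert _pair(3) == (3, 3)       # a scalar is promoted to a 2-tuple
assert _pair((1, 2)) == (1, 2)  # an existing pair passes through unchanged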

@rgommers
Collaborator

One CI failure is unrelated.

@rgommers requested review from ezyang and malfet December 19, 2020 13:05

@facebook-github-bot (Contributor) left a comment

@walterddr has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@malfet requested a review from suo December 29, 2020 18:29

@malfet (Contributor) left a comment

Please specify in a comment which test covers that the padding classes are still torch.jit-scriptable after the change.


"""
padding: _size_2_t
padding: Tuple[int, int]
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Before merging the change, please check that this code is still JIT compilable after the change and scalar to tuple promotions work as expected? (_size_N_t are special types which get automatic promotions from integer to list)
Requesting changes to just mention in the PR description which test covers that torch.nn.modules.padding is JITable after the change

@mruberry added the triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) label Dec 30, 2020
@guilhermeleobas marked this pull request as draft December 31, 2020 17:30
@guilhermeleobas
Collaborator Author

Do not merge this PR until someone checks whether the annotations introduce any regression. See:
#49564 (comment)

@walterddr (Contributor) left a comment

Looks good. Let's wait for the test results.

Comment thread test/test_jit.py Outdated
Comment on lines +904 to +905
m = Mod(nn.ConstantPad1d(2, 3.5))
inp = torch.randn(1, 2, 4)

Contributor

Was wondering if we could create a list of Tuple[nn.Module, Tensor] pairs to simplify this, but we can probably do that refactoring in another pass.
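
A hedged sketch of the suggested refactor (plain modules instead of the test's Mod wrapper; the cases are illustrative): keep (module, input) pairs in a list and script/compare them in a loop.

import torch
import torch.nn as nn

cases = [  # List[Tuple[nn.Module, torch.Tensor]]
    (nn.ConstantPad1d(2, 3.5), torch.randn(1, 2, 4)),
    (nn.ReflectionPad2d(2), torch.arange(9, dtype=torch.float).reshape(1, 1, 3, 3)),
]
for module, inp in cases:
    scripted = torch.jit.script(module)
    assert torch.equal(scripted(inp), module(inp))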

Collaborator Author

Done!

@facebook-github-bot (Contributor) left a comment

@walterddr has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@guilhermeleobas marked this pull request as ready for review January 8, 2021 17:18

@facebook-github-bot (Contributor) left a comment

@walterddr has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@walterddr
Contributor

FYI on the Facebook-internal failure: not sure why the added padding test timed out internally. I am taking a look.

Comment thread test/test_jit.py Outdated
(Mod(nn.ReflectionPad2d(2)), torch.arange(9, dtype=torch.float).reshape(1, 1, 3, 3)),
(Mod(nn.ReplicationPad1d(2)), torch.arange(8, dtype=torch.float).reshape(1, 2, 4)),
(Mod(nn.ReplicationPad2d(2)), torch.arange(9, dtype=torch.float).reshape(1, 1, 3, 3)),
(Mod(nn.ReplicationPad3d(3)), torch.randn(16, 3, 8, 320, 480)),

Contributor

Just checked, and this test seems to take significantly longer than the other TestJIT items. Do we really need a 320x480 size for the 3D replication padding JIT test, or can we use a smaller size?

FYI, I changed this to 32x48 and it passed the internal test without a timeout.
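
A runnable sketch of the shrunken case (a plain module rather than the test's Mod wrapper): the scriptability check depends on types, not spatial extent, so a 32x48 input exercises the same path far faster.

import torch
import torch.nn as nn

pad = nn.ReplicationPad3d(3)
small = torch.randn(16, 3, 8, 32, 48)  # 32x48 instead of 320x480
scripted = torch.jit.script(pad)
assert torch.equal(scripted(small), pad(small))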

Collaborator Author

Done!

@facebook-github-bot (Contributor) left a comment

@walterddr has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@codecov

codecov Bot commented Jan 12, 2021

Codecov Report

Merging #49494 (9c15796) into master (5834438) will increase coverage by 0.00%.
The diff coverage is 100.00%.

@@           Coverage Diff           @@
##           master   #49494   +/-   ##
=======================================
  Coverage   80.70%   80.71%           
=======================================
  Files        1905     1905           
  Lines      206813   206817    +4     
=======================================
+ Hits       166916   166927   +11     
+ Misses      39897    39890    -7     

@facebook-github-bot
Contributor

@walterddr merged this pull request in 374951d.

laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary:
Closes pytorchgh-49492

Pull Request resolved: pytorch#49494

Reviewed By: mruberry

Differential Revision: D25723837

Pulled By: walterddr

fbshipit-source-id: 92af0100f6d9e2bb25b259f5a7fe9d449ffb6443

Labels

cla signed · Merged · module: typing (Related to mypy type annotations) · open source · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)


Development

Successfully merging this pull request may close these issues.

Enable torch.nn.modules.padding typechecks during CI

7 participants