
Fix in documentation of convolutional modules#30079

Closed
davda54 wants to merge 3 commits into pytorch:master from davda54:master

Conversation


@davda54 davda54 commented Nov 19, 2019

I noticed the description of the initialization of convolutional modules is inconsistent with the actual implementation. There are two such cases:

  1. k in the initialization of ConvTranspose modules is not dependent on the input channels but on the output channels (kaiming_uniform_ uses the size of the second dimension of weight which is transposed in the first two dimensions).

  2. Both the normal convolutions and the transposed ones use k divided by groups.
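The corrected formula can be sketched without PyTorch. The helper below is hypothetical, written for illustration: it mirrors what `kaiming_uniform_` with `a=sqrt(5)` does in `reset_parameters`, computing the fan-in from the second weight dimension times the kernel size, which is why the bound depends on `out_channels` for transposed convolutions and is divided by `groups` in both cases.

```python
import math

def conv_init_bound(weight_shape):
    # kaiming_uniform_(w, a=sqrt(5)) derives fan_in from the *second*
    # dimension of the weight tensor times the receptive field:
    #   Conv2d weight:          (out_channels, in_channels // groups, kH, kW)
    #   ConvTranspose2d weight: (in_channels, out_channels // groups, kH, kW)
    fan_in = weight_shape[1] * math.prod(weight_shape[2:])
    gain = math.sqrt(2.0 / (1 + 5))          # a = sqrt(5)  =>  gain = sqrt(1/3)
    return gain * math.sqrt(3.0 / fan_in)    # simplifies to sqrt(1 / fan_in)

# Conv2d(16, 32, 3, groups=4): bound = sqrt(groups / (in_channels * 3 * 3))
print(conv_init_bound((32, 16 // 4, 3, 3)))
# ConvTranspose2d(16, 32, 3, groups=4): bound depends on out_channels instead
print(conv_init_bound((16, 32 // 4, 3, 3)))
```

So for both module families the documented `k` becomes `groups / (C * prod(kernel_size))`, with `C = in_channels` for regular convolutions and `C = out_channels` for transposed ones.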

Initialization of transposed convolutions is dependent on `out_channels` not `in_channels`.
The initialization is actually influenced by the number of groups, too.
@davda54 davda54 requested a review from apaszke as a code owner November 19, 2019 14:39
@ezyang ezyang added the triaged label (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Feb 3, 2020

@facebook-github-bot facebook-github-bot left a comment


@ezyang is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot

@ezyang merged this pull request in 00c6b90.

BowenBao pushed a commit to BowenBao/pytorch that referenced this pull request Feb 12, 2020
Summary:
I noticed the description of the initialization of convolutional modules is inconsistent with the actual implementation. There are two such cases:

1) `k` in the initialization of ConvTranspose modules is not dependent on the input channels but on the output channels (`kaiming_uniform_` uses the size of the second dimension of `weight` which is transposed in the first two dimensions).

2) Both the normal convolutions and the transposed ones use `k` divided by `groups`.
Pull Request resolved: pytorch#30079

Differential Revision: D19698511

Pulled By: ezyang

fbshipit-source-id: 1ba938fbbd97663eaf29fd1245872179d2761fff
ttumiel pushed a commit to ttumiel/pytorch that referenced this pull request Mar 4, 2020

Labels

Merged, open source, triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)


5 participants