fix doc for functional.dropout*#10417
Conversation
torch/nn/functional.py
Outdated
Force-pushed b685fc2 to b7181fa (Compare)
torch/nn/modules/dropout.py
Outdated
```diff
 class Dropout2d(_DropoutNd):
-    r"""Randomly zeroes whole channels of the input tensor.
+    r"""Randomly zeroes whole channels (a channel is a (N, C) pair) of the input tensor.
```
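The doc change above is about channel-wise (not elementwise) zeroing: one Bernoulli draw per (sample, channel) pair wipes out the whole 2D feature map. A minimal numpy sketch of that semantics (`dropout2d_sketch` is a hypothetical helper for illustration, not PyTorch's implementation) might look like:

```python
import numpy as np

def dropout2d_sketch(x, p=0.5, training=True, rng=None):
    """Zero entire (H, W) feature maps of a (N, C, H, W) input.

    Each of the N*C channels is zeroed independently with probability p;
    surviving channels are scaled by 1/(1-p) so the expectation is preserved.
    Hypothetical sketch, not the actual PyTorch implementation.
    """
    if not training or p == 0.0:
        return x  # dropout is the identity outside of training
    rng = rng or np.random.default_rng()
    n, c = x.shape[:2]
    # One Bernoulli draw per (sample, channel) pair, broadcast over H and W.
    mask = (rng.random((n, c, 1, 1)) >= p).astype(x.dtype)
    return x * mask / (1.0 - p)
```

With this semantics every channel comes out either all-zero or uniformly rescaled, which is the behavior the corrected docstring describes.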
li-roy
left a comment
looks good, couple of nits. only commented on dropout3d but applies to all of them
torch/nn/functional.py
Outdated
```diff
     Args:
         p: probability of an element to be zeroed. Default: 0.5
         training: apply dropout if is True. Defualt: True
```
torch/nn/functional.py
Outdated
```diff
-def dropout3d(input, p=0.5, training=False, inplace=False):
+def dropout3d(input, p=0.5, training=True, inplace=False):
     r"""
     Randomly zeroes whole channels (a channel is a 3D slice of dimensions D, H, W) of the input tensor.
```
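The signature change above flips the functional default to `training=True`, since callers of `F.dropout3d` expect dropout to be active unless they say otherwise. A numpy sketch of the 3D case (`dropout3d_sketch` is a hypothetical helper, not PyTorch's code) shows both the default and the eval-mode identity:

```python
import numpy as np

def dropout3d_sketch(x, p=0.5, training=True, rng=None):
    """Zero entire (D, H, W) volumes of a (N, C, D, H, W) input.

    training=True by default, mirroring the corrected functional signature;
    with training=False the input passes through unchanged.
    Hypothetical sketch, not the actual PyTorch implementation.
    """
    if not training or p == 0.0:
        return x
    rng = rng or np.random.default_rng()
    n, c = x.shape[:2]
    # One draw per (sample, channel), broadcast over the D, H, W volume.
    mask = (rng.random((n, c, 1, 1, 1)) >= p).astype(x.dtype)
    return x * mask / (1.0 - p)
```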
torch/nn/modules/dropout.py
Outdated
```diff
 class Dropout2d(_DropoutNd):
-    r"""Randomly zeroes whole channels of the input tensor.
-    The channels to zero-out are randomized on every forward call.
+    r"""Randomly zero-out entire channels (a channel a 2D feature map of
```
torch/nn/modules/dropout.py
Outdated
```diff
 r"""During training, randomly zeroes some of the elements of the input
 tensor with probability :attr:`p` using samples from a Bernoulli
-distribution. The elements to zero are randomized on every forward call.
+distribution. ach channel will be zero-out indipendently on every forward
```
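For contrast with the channel-wise variants discussed above, plain `F.dropout` zeroes each *element* independently with a Bernoulli draw and rescales survivors by `1/(1-p)` so the expected value is unchanged. A numpy sketch of that elementwise semantics (`dropout_sketch` is a hypothetical helper, not PyTorch's implementation):

```python
import numpy as np

def dropout_sketch(x, p=0.5, training=True, rng=None):
    """Elementwise dropout: each element is zeroed independently with
    probability p; survivors are scaled by 1/(1-p) so E[out] == x.
    Hypothetical sketch, not the actual PyTorch implementation.
    """
    if not training or p == 0.0:
        return x
    rng = rng or np.random.default_rng()
    # One Bernoulli draw per element, unlike the per-channel variants.
    mask = (rng.random(x.shape) >= p).astype(x.dtype)
    return x * mask / (1.0 - p)
```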
facebook-github-bot
left a comment
weiyangfb has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@ssnl is this good to go?
torch/nn/modules/dropout.py
Outdated
```diff
-r"""Randomly zeroes whole channels of the input tensor.
-The channels to zero are randomized on every forward call.
+r"""Randomly zero-out entire channels (a channel is a 3D feature map of
+dimensions D, H, W) of the input tensor. Each channel will be zero-out
```
torch/nn/modules/dropout.py
Outdated
```diff
-The channels to zero-out are randomized on every forward call.
+r"""Randomly zero-out entire channels (a channel is a 2D feature map of
+dimensions H, W) of the input tensor. Each channel will be zero-out
+indipendently on every forward call. with probability :attr:`p` using
```
torch/nn/modules/dropout.py
Outdated
```diff
 class Dropout2d(_DropoutNd):
-    r"""Randomly zeroes whole channels of the input tensor.
-    The channels to zero-out are randomized on every forward call.
+    r"""Randomly zero-out entire channels (a channel is a 2D feature map of
```
Force-pushed 7b56fbb to aa649a3 (Compare)
facebook-github-bot
left a comment
weiyangfb has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Force-pushed aa649a3 to 534085f (Compare)
torch/nn/modules/dropout.py
Outdated
```diff
-The channels to zero-out are randomized on every forward call.
+r"""Randomly zero out entire channels (a channel is a 2D feature map,
+e.g., the :math:`j`-th channel of the :math:`i`-th sample in the
+batched input is a 2D tensor :math:`input[i, j]`) of the input tensor).
```
Force-pushed 837f517 to cc98606 (Compare)
facebook-github-bot
left a comment
weiyangfb has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
* upstream/master: (26 commits)
  - cudnn 7 upgrade with spatialBN fix (pytorch#11291)
  - Ignore FuseGraph Call on Windows (pytorch#11015)
  - defer resolution of mkl to a cmake wrapper library (pytorch#11298)
  - Cleanup dependency of distributed flags (pytorch#11221)
  - Move minimal wrapdim functionality to core, remove THTensor include i… (pytorch#11283)
  - Change includes from ATen/Storage.h to ATen/core/Storage.h (pytorch#11217)
  - Fix scalar tensor assert in fusion compiler (pytorch#10952)
  - Add dead code elimination pass (pytorch#10101)
  - Distributed Data Parallel CPU module for C10D (pytorch#11168)
  - Back out "[pt1][tensor] Add strides to caffe2::Tensor"
  - Fix conv gradient conversion (pytorch#11312)
  - Bag of clang tidy fixes for torch/csrc/ and torch/csrc/autograd (pytorch#11050)
  - Sparse tensor printing; add NotImplemented autograd fn (pytorch#10181)
  - Add convertToCaffe2Proto to python API
  - fix doc for functional.dropout* (pytorch#10417)
  - typo fix Tranpose2D -> Transpose2D (pytorch#11281)
  - Remove THFinalizer
  - Forward declarations of needed curand functions (pytorch#10911)
  - nomnigraph - simplify core graph API and test (pytorch#11256)
  - Small fixes to cppdocs for sync script (pytorch#11300)
  - ...
Summary:
- fixes pytorch#4177

Pull Request resolved: pytorch#10417
Differential Revision: D9542876
Pulled By: weiyangfb
fbshipit-source-id: 480ed973d1fe0364f4acb5cd596c2031895b82df