Conversation
apaszke
commented
Dec 29, 2016
- Fixed all warnings from nn docs
- Fixed all warnings from optim docs
- Added autograd docs
torch/autograd/__init__.py
Outdated

    @@ -1,10 +1,41 @@
    """
    torch.autograd provides classes and functions implementing automatic
    differentiation of arbitrary scalar valued functions. It requires only minimal
    changes to the existing code - you only need to wrap all tensors in
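The docstring above describes the core idea: wrap values in an object that records the operations applied to it, then replay those records to get gradients. A minimal pure-Python sketch of that idea for scalars (illustrative only — the class and method names here are invented, not PyTorch's implementation):

```python
# Minimal scalar reverse-mode autodiff: a wrapper that records the
# operations applied to it, as the torch.autograd docstring describes.
# Illustrative sketch only, not PyTorch's actual implementation.

class Var:
    def __init__(self, value, parents=(), grad_fns=()):
        self.value = value        # wrapped scalar data
        self.parents = parents    # inputs that produced this node
        self.grad_fns = grad_fns  # local derivative w.r.t. each parent
        self.grad = 0.0           # accumulated gradient

    def __mul__(self, other):
        return Var(self.value * other.value,
                   parents=(self, other),
                   grad_fns=(lambda g: g * other.value,
                             lambda g: g * self.value))

    def __add__(self, other):
        return Var(self.value + other.value,
                   parents=(self, other),
                   grad_fns=(lambda g: g, lambda g: g))

    def backward(self, grad=1.0):
        # Accumulate the incoming gradient, then propagate to parents.
        self.grad += grad
        for parent, grad_fn in zip(self.parents, self.grad_fns):
            parent.backward(grad_fn(grad))

x = Var(3.0)
y = Var(4.0)
z = x * y + x          # dz/dx = y + 1 = 5, dz/dy = x = 3
z.backward()
print(x.grad, y.grad)  # 5.0 3.0
```

Note how the "minimal changes to the existing code" claim plays out: the arithmetic expression `x * y + x` is written exactly as it would be with plain numbers; only the initial wrapping differs.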
torch/autograd/variable.py
Outdated

    class Variable(_C._VariableBase):
        """Wraps a tensor and records operations applied to it.

        Variable is only a thin wrapper around a Tensor object, that also holds
        the gradient w.r.t. to it, and a reference to a function that output it.
        This reference allows of retracing the whole chain of operations that
        created the data. If the Variable has been created by the user, its creator
        will be ``None`` and we call such objects leaf Variables.
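The "creator" chain and leaf Variables described in the quoted docstring can be sketched in a few lines. This is a hypothetical illustration of the concept under review, not PyTorch's actual internals:

```python
# Sketch of the "creator" reference the docstring describes: each
# non-leaf Variable keeps a reference to the function that produced it,
# so the whole chain of operations can be retraced. Names are
# illustrative, not PyTorch's actual internals.

class Function:
    def __init__(self, name, inputs):
        self.name = name
        self.inputs = inputs      # the Variables this function consumed

class Variable:
    def __init__(self, data, creator=None):
        self.data = data
        self.creator = creator    # None for user-created (leaf) Variables

    def is_leaf(self):
        return self.creator is None

def mul(a, b):
    # Every produced Variable records the Function that output it.
    return Variable(a.data * b.data, creator=Function("Mul", (a, b)))

def retrace(var):
    """Walk the creator chain and list the operations that made `var`."""
    ops = []
    if var.creator is not None:
        ops.append(var.creator.name)
        for inp in var.creator.inputs:
            ops.extend(retrace(inp))
    return ops

x = Variable(2.0)          # leaf: created by the user, creator is None
y = Variable(5.0)          # leaf
z = mul(mul(x, y), y)      # non-leaf: has a creator chain
print(z.is_leaf(), retrace(z))   # False ['Mul', 'Mul']
```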
torch/autograd/variable.py
Outdated

        Parameters:
            reward(Tensor): Tensor with per-element rewards. It has to match
                the location and shape of Variable's data.
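A per-element `reward` parameter like the one documented above is used by score-function (REINFORCE-style) gradient estimators: the gradient of an expected reward is estimated as `reward * d log p(sample) / d theta`. A pure-Python sketch for a single Bernoulli(p) sample (illustrative only; the function names here are invented, not the API under review):

```python
# Sketch of the REINFORCE estimator that a per-element reward feeds:
# grad ~= reward * d/dp log p(sample). Illustrative only.

import math

def bernoulli_log_prob(sample, p):
    """Log-probability of a Bernoulli(p) sample (0 or 1)."""
    return math.log(p) if sample == 1 else math.log(1.0 - p)

def reinforce_grad(sample, p, reward):
    """One-sample gradient estimate of E[reward] w.r.t. p."""
    # d/dp log p(sample): 1/p for sample == 1, -1/(1-p) for sample == 0
    dlogp = 1.0 / p if sample == 1 else -1.0 / (1.0 - p)
    return reward * dlogp

# A sample of 1 with positive reward pushes p up; a sample of 0 with
# the same reward pushes p down.
print(reinforce_grad(1, 0.5, 2.0))   # 4.0
print(reinforce_grad(0, 0.5, 2.0))   # -4.0
```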
torch/autograd/function.py
Outdated
I fixed the typos and merged.
* [UPDATED] AWS distributed training tutorial
* Minor fix
* Get a crazy test example working.
* Change problem size and tile size, still an issue with N > 32.
* Add sync threads in loops that read from smem, to make sure we finish reading before writing.
* Predicate off threads bound to a broadcast dim of an output when its in shared memory.
* Predicate smem tiling writing based on broadcasted dims in consumer.
* Cleanup example a bit.
* Revert "Add sync threads in loops that read from smem, to make sure we finish reading before writing." This reverts commit dffaa76. Revert this in favor of pytorch#383
* Add _syncthreads for Write-After-Read Race (pytorch#383)
* Basic Write-After-Read (WAR) check to add __syncthreads to end of for-loop
* Enable Tiled GEMM example
* Check that IterDomain iterates from zero to some positive integer

Co-authored-by: Ryan Spring <rspring@nvidia.com>

* Refactor thread predication for writes to smem

Co-authored-by: Naoya Maruyama <nmaruyama@nvidia.com>
Co-authored-by: Ryan Spring <rdspring1@gmail.com>
Co-authored-by: Ryan Spring <rspring@nvidia.com>
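The Write-After-Read (WAR) race these commits address arises when one thread overwrites a shared-memory slot before another thread has finished reading it; the fix is a `__syncthreads` barrier between the read phase and the write phase. A rough CPU analogue using Python threads and a `threading.Barrier` (illustrative sketch only, not the CUDA codegen in question):

```python
# CPU analogue of the Write-After-Read hazard: each "thread" first
# reads its neighbor's slot of a shared buffer, then writes its own
# slot. The barrier between the two phases plays the role of the
# __syncthreads the commits insert at the end of the smem loop:
# without it, a fast thread could overwrite a slot before its
# neighbor has read the original value. Illustrative sketch only.

import threading

N = 4
shared = list(range(N))          # stands in for a shared-memory tile
results = [0] * N
barrier = threading.Barrier(N)   # analogue of __syncthreads

def worker(tid):
    neighbor = shared[(tid + 1) % N]   # read phase
    barrier.wait()                     # all reads finish before any write
    shared[tid] = -1                   # write phase (races without the barrier)
    results[tid] = neighbor

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)   # [1, 2, 3, 0] - every thread saw its neighbor's original value
```

Removing the `barrier.wait()` makes the outcome timing-dependent, which is exactly why the automated WAR check adds the sync rather than relying on the schedule happening to be safe.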
* Fixes pytorch#320 Signed-off-by: Jason Plurad <pluradj@us.ibm.com> Co-authored-by: Vinitra Swamy <vinitras@gmail.com>
Summary:
X-link: meta-pytorch/data#368

This PR aims to expose the right data-related API. Two more changes in this PR convert public APIs to private ones:
`check_lambda_fn` -> `_check_lambda_fn`
`deprecation_warning` -> `_deprecation_warning`

Pull Request resolved: #76143
Reviewed By: albanD, NivekT
Differential Revision: D35798311
Pulled By: ejguan
fbshipit-source-id: b13fded5c88a533c706702fb2070c918c839dca4
(cherry picked from commit 0b534b8)
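A common way to carry out a public-to-private rename like `check_lambda_fn` -> `_check_lambda_fn` is to keep the old name as a thin deprecated alias so existing callers keep working during a transition. The shim below mirrors the names from the summary, but its body is an illustrative sketch, not the actual torchdata code:

```python
# Sketch of a public-to-private rename with a deprecated alias.
# The names mirror the PR summary; the bodies are illustrative only.

import warnings

def _check_lambda_fn(fn):
    """Private helper: warn if `fn` is a lambda (example check only)."""
    if getattr(fn, "__name__", None) == "<lambda>":
        warnings.warn("Lambda function is not supported for pickle")

def check_lambda_fn(fn):
    """Deprecated public alias retained for backward compatibility."""
    warnings.warn(
        "check_lambda_fn is deprecated, use _check_lambda_fn instead",
        DeprecationWarning,
    )
    return _check_lambda_fn(fn)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    check_lambda_fn(lambda x: x)

# One DeprecationWarning from the alias, one warning from the helper.
print([str(w.message) for w in caught])
```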