
Docs improvements #368

Merged
soumith merged 4 commits into master from docs2 on Dec 30, 2016
Conversation

@apaszke (Contributor) commented Dec 29, 2016

  • Fixed all warnings from nn docs
  • Fixed all warnings from optim docs
  • Added autograd docs

@soumith soumith closed this Dec 29, 2016
@soumith soumith reopened this Dec 29, 2016
@@ -1,10 +1,41 @@
"""
torch.autograd provides classes and functions implementing automatic
differentiation of arbitrary scalar valued functions. It requires only minimal
changes to the existing code - you only need to wrap all tensors in

class Variable(_C._VariableBase):
    """Wraps a tensor and records operations applied to it.

    Variable is only a thin wrapper around a Tensor object, that also holds
    the gradient w.r.t. to it, and a reference to a function that output it.
    This reference allows of retracing the whole chain of operations that
    created the data. If the Variable has been created by the user, its creator
    will be ``None`` and we call such objects leaf Variables.

    Parameters:
        reward(Tensor): Tensor with per-element rewards. It has to match
            the location and shape of Variable's data.

(Several inline review comments on this diff were marked as off-topic.)
@soumith (Collaborator) commented Dec 30, 2016

i fixed the typos and merged.

@soumith soumith deleted the docs2 branch January 2, 2017 00:08
mrshenli pushed a commit to mrshenli/pytorch that referenced this pull request Apr 11, 2020
* [UPDATED] AWS distributed training tutorial

* Minor fix
jjsjann123 pushed a commit to jjsjann123/pytorch that referenced this pull request Sep 23, 2020
* Get a crazy test example working.

* Change problem size and tile size, still an issue with N > 32.

* Add sync threads in loops that read from smem, to make sure we finish reading before writing.

* Predicate off threads bound to a broadcast dim of an output when its in shared memory.

* Predicate smem tiling writing based on broadcasted dims in consumer.

* Cleanup example a bit.

* Revert "Add sync threads in loops that read from smem, to make sure we finish reading before writing."

This reverts commit dffaa76.

Revert this in favor of pytorch#383

* Add _syncthreads for Write-After-Read Race (pytorch#383)

* Basic Write-After-Read (WAR) check to add __syncthreads to end of for-loop

* Enable Tiled GEMM example

* Check that IterDomain iterates from zero to some positive integer

Co-authored-by: Ryan Spring <rspring@nvidia.com>

* Refactor thread predication for writes to smem

Co-authored-by: Naoya Maruyama <nmaruyama@nvidia.com>
Co-authored-by: Ryan Spring <rdspring1@gmail.com>
Co-authored-by: Ryan Spring <rspring@nvidia.com>
jjsjann123 pushed a commit to jjsjann123/pytorch that referenced this pull request Sep 24, 2020
KsenijaS pushed a commit to KsenijaS/pytorch that referenced this pull request Dec 14, 2020
* Fixes pytorch#320

Signed-off-by: Jason Plurad <pluradj@us.ibm.com>

Co-authored-by: Vinitra Swamy <vinitras@gmail.com>
KyleCZH pushed a commit to KyleCZH/pytorch that referenced this pull request Sep 20, 2021
facebook-github-bot pushed a commit that referenced this pull request Apr 21, 2022
Summary:
X-link: meta-pytorch/data#368

This PR aims to expose the right data-related API.

Two more changes in this PR convert a public API to a private API:
`check_lambda_fn` -> `_check_lambda_fn`
`deprecation_warning` -> `_deprecation_warning`

Pull Request resolved: #76143

Reviewed By: albanD, NivekT

Differential Revision: D35798311

Pulled By: ejguan

fbshipit-source-id: b13fded5c88a533c706702fb2070c918c839dca4
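A public-to-private rename like the one described above is commonly paired with a deprecated alias under the old name so existing callers keep working. A minimal sketch of that pattern (the helper's body is a simplified assumption for illustration, not torchdata's actual code):

```python
import warnings

def _check_lambda_fn(fn):
    # Private helper (simplified assumption of the real check):
    # warn when a lambda is passed, since lambdas cannot be pickled.
    if getattr(fn, "__name__", None) == "<lambda>":
        warnings.warn("Lambda function is not supported for pickle")

def check_lambda_fn(fn):
    # Old public name kept as a thin wrapper that emits a deprecation
    # warning, then forwards to the private helper.
    warnings.warn(
        "check_lambda_fn is deprecated; it is now a private helper",
        DeprecationWarning,
    )
    return _check_lambda_fn(fn)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    check_lambda_fn(lambda x: x)
print(len(caught))  # 2: the deprecation warning plus the lambda warning
```

Keeping the alias for a release or two gives downstream code time to migrate before the public name is removed entirely.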
malfet pushed a commit that referenced this pull request Apr 22, 2022
(cherry picked from commit 0b534b8)

3 participants