Update autograd docs #5907

Merged: ezyang merged 7 commits into pytorch:master from zou3519:update-autograd-docs on Mar 30, 2018

Conversation

@zou3519 (Contributor) commented Mar 20, 2018

There are some FIXMEs right now that I'll make another pass through. Opening this PR for discussion.

(Several review comment threads on torch/autograd/__init__.py, torch/autograd/function.py, and docs/source/autograd.rst were marked as outdated; their comments were hidden as off-topic.)

zou3519 force-pushed the update-autograd-docs branch 2 times, most recently from 541c339 to 0cf136c (March 26, 2018 23:44)
zou3519 force-pushed the update-autograd-docs branch from 0cf136c to deeee80 (March 27, 2018 00:03)
zou3519 changed the title from "[WIP] Update autograd docs" to "Update autograd docs" on Mar 27, 2018
@ssnl (Collaborator) left a review:

Generally looks great! Have some very minor nits and suggestions. :)

Comment thread docs/source/autograd.rst Outdated
declare them with ``requires_grad=False`` or detach them from the computation graph with
:func:`torch.Tensor.detach`.
- Methods such as ``var.backward(), var.detach(), var.register_hook()`` now work on tensors
with the same name.
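The detach semantics quoted above can be sketched with a toy class (hypothetical ``ToyTensor``, pure Python, not the real ``torch.Tensor``): detaching yields a tensor that shares storage but is cut off from gradient tracking.

```python
class ToyTensor:
    """Minimal stand-in for a tensor that tracks whether it records gradients."""

    def __init__(self, data, requires_grad=False):
        self.data = data
        self.requires_grad = requires_grad

    def detach(self):
        # Returns a tensor sharing the same data but cut off from the graph:
        # the result never requires grad.
        return ToyTensor(self.data, requires_grad=False)


x = ToyTensor([1.0, 2.0], requires_grad=True)
y = x.detach()
assert y.data is x.data          # shares storage with x
assert y.requires_grad is False  # no longer tracked by autograd
```

The real :func:`torch.Tensor.detach` behaves analogously: same underlying storage, no gradient history.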


outputs (sequence of Tensor): outputs of the differentiated function.
inputs (sequence of Tensor): Inputs w.r.t. which the gradient will be
returned (and not accumulated into ``.grad``).
grad_outputs (sequence of Tensor): Gradients w.r.t. each output.
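The ``grad_outputs`` argument supplies the vector in a vector-Jacobian product. A numerical sketch of that product (pure Python, central finite differences, hypothetical helper name ``numerical_vjp``) shows what the combination of ``outputs``, ``inputs``, and ``grad_outputs`` computes:

```python
def numerical_vjp(f, x, v, eps=1e-6):
    """Approximate the vector-Jacobian product v^T J of f at x.
    v plays the role of grad_outputs in torch.autograd.grad."""
    grads = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        yp, ym = f(xp), f(xm)
        # d(v . f)/dx_i by central differences
        grads.append(sum(vj * (a - b) for vj, a, b in zip(v, yp, ym)) / (2 * eps))
    return grads


# f(x) = [x0*x1, x0+x1]; with v = [1, 0] the result is the gradient of the
# first output alone, i.e. [x1, x0] = [3.0, 2.0] at x = [2.0, 3.0].
g = numerical_vjp(lambda x: [x[0] * x[1], x[0] + x[1]], [2.0, 3.0], [1.0, 0.0])
```

Autograd computes the same quantity analytically via reverse-mode differentiation rather than by perturbation.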


None values can be specified for scalar Tensors or ones that don't require
grad. If a None value would be acceptable for all grad_tensors, then this
argument is optional.
retain_graph (bool, optional): If ``False``, the graph used to compute the grad
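Why a second backward pass fails without ``retain_graph`` can be sketched with a toy graph object (hypothetical ``ToyGraph``, pure Python): backward frees the saved intermediate buffers unless asked to keep them.

```python
class ToyGraph:
    """Sketch of retain_graph semantics: backward frees saved buffers
    unless retain_graph=True, so a later backward has nothing to use."""

    def __init__(self):
        self.buffers = {"saved": True}  # stand-in for saved intermediates

    def backward(self, retain_graph=False):
        if self.buffers is None:
            raise RuntimeError("Trying to backward through the graph a second time")
        if not retain_graph:
            self.buffers = None  # free intermediate results


g = ToyGraph()
g.backward(retain_graph=True)  # buffers kept; backward can run again
g.backward()                   # frees buffers this time
try:
    g.backward()
    second_ok = True
except RuntimeError:
    second_ok = False          # third call fails: graph already freed
```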


Comment thread docs/source/autograd.rst Outdated
a :class:`torch.Tensor`. Below please find a quick guide on what has changed:

- ``Variable(tensor, requires_grad=True)`` is now ``torch.tensor(tensor, requires_grad=True)``
- ``var.data`` is now just ``tensor``. If you don't want to record operations on tensors,


Comment thread torch/autograd/grad_mode.py Outdated
r"""Context-manager that sets gradient calculation to on or off.

- `set_grad_enabled` will enable or disable grads based on its argument `mode`.
+ ``set_grad_enabled`` will enable or disable grads based on its argument ``mode``.
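A toy version of such a context manager (pure Python; the real ``torch.set_grad_enabled`` wraps autograd's C++ global state and can also be used as a plain function) shows the save/set/restore pattern:

```python
import contextlib

_grad_enabled = True  # module-level flag, mirroring autograd's global state


@contextlib.contextmanager
def set_grad_enabled(mode):
    """Toy sketch: flip the global flag for the duration of the `with`
    block, then restore the previous value even if an error occurs."""
    global _grad_enabled
    prev = _grad_enabled
    _grad_enabled = mode
    try:
        yield
    finally:
        _grad_enabled = prev


with set_grad_enabled(False):
    inside = _grad_enabled  # False inside the block
after = _grad_enabled       # restored to True afterwards
```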


@ssnl (Collaborator) commented Mar 29, 2018

Will do another pass tomorrow after CI finishes

Comment thread docs/source/autograd.rst Outdated
``requires_grad`` set to ``True``. In addition, ``Variable(tensor)`` now returns
a :class:`torch.Tensor`. Below please find a quick guide on what has changed:

- ``Variable(tensor, requires_grad=True)`` is now ``torch.tensor(tensor, requires_grad=True)``


@ssnl (Collaborator) left a review:

LGTM

Comment thread docs/source/autograd.rst Outdated
``requires_grad`` set to ``True``. In addition, ``Variable(tensor)`` now returns
a :class:`torch.Tensor`. Below please find a quick guide on what has changed:

- ``Variable(tensor, requires_grad=True)`` is now ``torch.tensor(tensor, requires_grad=True)``.


Comment thread torch/autograd/__init__.py Outdated
if grad_tensors is None:
    grad_tensors = grad_variables
else:
    warnings.warn("'grad_tensors' and 'grad_variables' (deprecated) arguments both "


Comment thread torch/autograd/__init__.py Outdated
warnings.warn("'grad_tensors' and 'grad_variables' (deprecated) arguments both "
              "passed to backward(). Using 'grad_tensors'.")

tensors = (tensors,) if isinstance(tensors, Variable) else tuple(tensors)
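The renamed-argument handling quoted above follows a standard deprecation pattern. A self-contained pure-Python sketch (simplified signature, no torch dependency) of how the old ``grad_variables`` name is accepted while steering callers toward ``grad_tensors``:

```python
import warnings


def backward(tensors, grad_tensors=None, grad_variables=None):
    """Sketch of renamed-argument handling: accept the deprecated
    'grad_variables' name, warn, and prefer the new 'grad_tensors'."""
    if grad_variables is not None:
        warnings.warn("'grad_variables' is deprecated. Use 'grad_tensors' instead.")
        if grad_tensors is None:
            grad_tensors = grad_variables
        else:
            warnings.warn("'grad_tensors' and 'grad_variables' (deprecated) arguments "
                          "both passed to backward(). Using 'grad_tensors'.")
    return grad_tensors  # the sketch just returns the resolved value


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = backward("t", grad_variables="old")  # old name still works, warns once
```

Resolving the alias before any real work keeps the rest of the function free of the old name.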


ezyang merged commit 1449c9f into pytorch:master on Mar 30, 2018
@lianzhibin commented:
module 'torch' has no attribute 'no_grad'

@ssnl (Collaborator) commented Apr 6, 2018

@lianzhibin if you are using 0.3.1 or earlier, you are looking at the wrong docs. This PR only updates the docs for master.

laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
* Update autograd docs

* Deprecate 'grad_variables' in backward().

Advise to replace with 'grad_tensors'.

* Resolve saved_variables/saved_tensors

* Tensor section

* Address comments

* Address comments

* Address comments

6 participants