
Add vectorize flag to torch.autograd.functional.{jacobian, hessian} #50915

Closed
zou3519 wants to merge 3 commits into gh/zou3519/345/base from gh/zou3519/345/head

Conversation

@zou3519 zou3519 commented Jan 21, 2021

Stack from ghstack:

Fixes #50584
Add a vectorize flag to torch.autograd.functional.jacobian and
torch.autograd.functional.hessian (default: False). Under the hood, the
vectorize flag uses vmap as the backend to compute the jacobian and
hessian, respectively, providing speedups to users.

Test Plan:

  • I updated all of the jacobian and hessian tests to also run with
    vectorize=True.
  • I added some simple sanity-check tests that compare, e.g., jacobian with
    vectorize=False against jacobian with vectorize=True.
  • The mechanism for vectorize=True goes through batched gradient
    computation. We have separate tests for that (see the other PRs in this
    stack).

Differential Revision: D26057674
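
A minimal usage sketch of the flag described above. The example functions f and g and the input x are illustrative; only the vectorize keyword argument is the API added by this PR:

```python
import torch
from torch.autograd.functional import jacobian, hessian

# Illustrative function and input; only the vectorize argument comes from this PR.
def f(x):
    return x.exp() * x.sin()

x = torch.randn(3)

# Same values either way; vectorize=True computes the rows of the jacobian
# with vmap-based batched gradients instead of a Python for-loop over outputs.
jac_loop = jacobian(f, x)                  # default: vectorize=False
jac_vmap = jacobian(f, x, vectorize=True)
assert torch.allclose(jac_loop, jac_vmap)

# hessian expects a scalar-valued function; the same flag applies.
def g(x):
    return (x.exp() * x.sin()).sum()

hess_vmap = hessian(g, x, vectorize=True)
```

Depending on the PyTorch version, vectorize=True may emit a beta/experimental warning, but the returned values match the default loop-based path.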

@zou3519 zou3519 requested a review from albanD as a code owner January 21, 2021 22:07
zou3519 added a commit that referenced this pull request Jan 21, 2021
ghstack-source-id: ff522f1
Pull Request resolved: #50915

facebook-github-bot commented Jan 21, 2021

💊 CI failures summary and remediations

As of commit af22dac (more details on the Dr. CI page):


  • 1/1 failures possibly* introduced in this PR
    • 1/1 non-CircleCI failure(s)

This comment was automatically generated by Dr. CI. Follow this link to opt out of these comments for your Pull Requests.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

zou3519 added a commit that referenced this pull request Jan 25, 2021
Fixes #50584
Add a vectorize flag to torch.autograd.functional.jacobian and
torch.autograd.functional.hessian (default: False). Under the hood, the
vectorize flag uses vmap as the backend to compute the jacobian and
hessian, respectively, providing speedups to users.

The details of this can be a little complicated. "NOTE: [Computing
jacobian with vmap and grad for multiple outputs]" in the code explains
what is going on (a rough sketch of the idea follows this commit message).

Test Plan:
- I updated all of the jacobian and hessian tests to also run with
vectorize=True
- Unit tests for the `_test_construct_standard_basis_for` helper function
- I added some simple sanity check tests that compare, e.g., jacobian with
vectorize=False against jacobian with vectorize=True. The sanity tests include:
  - functions with multiple inputs
  - functions with output(s) that are unrelated to their inputs
  - functions with multiple outputs (jacobian only)
  - functions with multiple inputs and multiple outputs (jacobian only)
  - inputs or outputs that are zero-dim (jacobian only)
  - outputs that are on different devices (jacobian only)
  - outputs that have different dtypes (jacobian only)
- The mechanism for vectorize=True goes through batched gradient
computation. We have separate tests for those (see the other PRs in this
stack).

ghstack-source-id: 6ff412d
Pull Request resolved: #50915
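
Per the NOTE referenced in the commit message above, vectorize=True builds a standard basis of grad_outputs (one one-hot vector per output element) and pushes all of the corresponding vector-Jacobian products through a single batched backward pass via vmap, instead of looping over output elements in Python. Below is a rough sketch of that idea for a single tensor input and a single tensor output. It is not the PR's actual helper code; it uses the is_grads_batched argument of torch.autograd.grad, a later public entry point to the same batched-VJP machinery, and the helper name is made up:

```python
import torch
from torch.autograd.functional import jacobian

def sketch_vectorized_jacobian(f, x):
    # Illustrative sketch only: single tensor input, single tensor output.
    x = x.detach().requires_grad_(True)
    y = f(x)
    num_out = y.numel()

    # Standard basis of grad_outputs: row i is the one-hot vector selecting
    # output element i, reshaped to y's shape.
    basis = torch.eye(num_out, dtype=y.dtype, device=y.device).view(num_out, *y.shape)

    # One batched call computes all num_out vector-Jacobian products at once
    # (is_grads_batched uses vmap-based batched gradients under the hood),
    # instead of a Python loop of num_out separate grad() calls.
    (rows,) = torch.autograd.grad(y, (x,), (basis,), is_grads_batched=True)
    return rows.view(*y.shape, *x.shape)

# Sanity check against the stock implementation.
f = lambda t: t.exp() * t.sin()
x = torch.randn(3)
assert torch.allclose(sketch_vectorized_jacobian(f, x), jacobian(f, x))
```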
zou3519 added a commit that referenced this pull request Jan 26, 2021
ghstack-source-id: bfabc65
Pull Request resolved: #50915
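
As a rough illustration of the sanity checks listed in the commit message above, here is a hypothetical comparison of vectorize=False and vectorize=True for a function with multiple inputs and multiple outputs (the function and shapes are made up; this is not the PR's actual test code):

```python
import torch
from torch.autograd.functional import jacobian

# A function with two tensor inputs and two tensor outputs.
def f(x, y):
    return x * y.sum(), (x + y).pow(2)

inputs = (torch.randn(3), torch.randn(3))

# With tuple inputs and tuple outputs, jacobian returns a tuple of tuples:
# jac[i][j] is the Jacobian of output i with respect to input j.
jac_loop = jacobian(f, inputs)                  # default: vectorize=False
jac_vmap = jacobian(f, inputs, vectorize=True)

for row_loop, row_vmap in zip(jac_loop, jac_vmap):
    for block_loop, block_vmap in zip(row_loop, row_vmap):
        assert torch.allclose(block_loop, block_vmap)
```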

@albanD albanD left a comment


Thanks for the update!


codecov Bot commented Jan 26, 2021

Codecov Report

Merging #50915 (af22dac) into gh/zou3519/345/base (05d4ac4) will increase coverage by 0.24%.
The diff coverage is 100.00%.

@@                   Coverage Diff                   @@
##           gh/zou3519/345/base   #50915      +/-   ##
=======================================================
+ Coverage                80.66%   80.91%   +0.24%     
=======================================================
  Files                     1924     1924              
  Lines                   210009   210043      +34     
=======================================================
+ Hits                    169414   169948     +534     
+ Misses                   40595    40095     -500     


@zou3519 merged this pull request in 22ac4f3.

@facebook-github-bot facebook-github-bot deleted the gh/zou3519/345/head branch January 31, 2021 15:18
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
…pytorch#50915)

Summary:
Pull Request resolved: pytorch#50915


Reviewed By: heitorschueroff

Differential Revision: D26057674

Pulled By: zou3519

fbshipit-source-id: a8ae7ca0d2028ffb478abd1b377f5b49ee39e4a1
