Add `vectorize` flag to torch.autograd.functional.{jacobian, hessian} #50915
Closed
zou3519 wants to merge 3 commits into gh/zou3519/345/base from …
Conversation
Fixes #50584

Add a `vectorize` flag to torch.autograd.functional.jacobian and torch.autograd.functional.hessian (default: False). Under the hood, the vectorize flag uses vmap as the backend to compute the jacobian and hessian, providing speedups to users.

Test Plan:
- Updated all of the jacobian and hessian tests to also use vectorize=True.
- Added simple sanity-check tests that compare, e.g., jacobian with vectorize=False against jacobian with vectorize=True.
- The mechanism for vectorize=True goes through batched gradient computation. We have separate tests for those (see other PRs in this stack).
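For readers unfamiliar with the API, a minimal usage sketch of the new flag (the function and shapes below are illustrative, not taken from the PR's test suite):

```python
import torch
from torch.autograd.functional import jacobian, hessian

def f(x):
    return (x * x).sum(dim=1)  # maps a (5, 3) input to a (5,) output

x = torch.randn(5, 3)
j_loop = jacobian(f, x)                  # default: one backward pass per output element
j_vmap = jacobian(f, x, vectorize=True)  # batches those passes through vmap
assert torch.allclose(j_loop, j_vmap)

def g(x):
    return (x ** 3).sum()  # hessian requires a scalar-valued function

h = hessian(g, torch.randn(4), vectorize=True)  # shape (4, 4), here diag(6 * x)
```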
This was referenced Jan 21, 2021
zou3519 added a commit that referenced this pull request on Jan 21, 2021:
Add `vectorize` flag to torch.autograd.functional.{jacobian, hessian}

ghstack-source-id: ff522f1
Pull Request resolved: #50915
💊 CI failures summary and remediations as of commit af22dac (more details on the Dr. CI page).
albanD reviewed on Jan 25, 2021
Update on "Add `vectorize` flag to torch.autograd.functional.{jacobian, hessian}"
zou3519 added a commit that referenced this pull request on Jan 25, 2021:
Fixes #50584. Adds a `vectorize` flag to torch.autograd.functional.jacobian and torch.autograd.functional.hessian (default: False). Under the hood, the vectorize flag uses vmap as the backend to compute the jacobian and hessian, providing speedups to users.

The details can be a little complicated; "NOTE: [Computing jacobian with vmap and grad for multiple outputs]" in the code explains what is going on.

Test Plan:
- Updated all of the jacobian and hessian tests to also use vectorize=True.
- Unit tests for the `_test_construct_standard_basis_for` helper function.
- Added simple sanity-check tests that compare jacobian with vectorize=False against jacobian with vectorize=True. The sanity tests cover:
  - functions with multiple inputs
  - functions with output(s) that are unrelated to their inputs
  - functions with multiple outputs (jacobian only)
  - functions with multiple inputs and multiple outputs (jacobian only)
  - inputs or outputs that are zero-dim (jacobian only)
  - outputs that are on different devices (jacobian only)
  - outputs that have different dtypes (jacobian only)
- The mechanism for vectorize=True goes through batched gradient computation; we have separate tests for those (see other PRs in this stack).

ghstack-source-id: 6ff412d
Pull Request resolved: #50915
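To make the standard-basis construction mentioned above concrete, here is a hypothetical re-implementation of the idea. This is a sketch, not the PR's actual helper; the real code also has to handle per-output dtypes and devices:

```python
import torch

def construct_standard_basis_for(outputs):
    # Hypothetical sketch: for outputs with numels (n1, n2, ...), build one
    # basis tensor per output whose rows together form the identity over all
    # N = n1 + n2 + ... output elements, i.e. the grad_outputs fed to vmap.
    numels = [out.numel() for out in outputs]
    total = sum(numels)
    chunks = torch.eye(total).split(numels, dim=1)
    return tuple(chunk.reshape(total, *out.shape)
                 for chunk, out in zip(chunks, outputs))

# Two outputs of shapes (2,) and (3,) yield basis tensors of shapes (5, 2)
# and (5, 3); concatenated along dim=1 they reconstruct eye(5).
basis = construct_standard_basis_for((torch.empty(2), torch.empty(3)))
```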
zou3519 commented on Jan 25, 2021
albanD reviewed on Jan 26, 2021
Update on "Add `vectorize` flag to torch.autograd.functional.{jacobian, hessian}"

Differential Revision: [D26057674](https://our.internmc.facebook.com/intern/diff/D26057674)
zou3519 added a commit that referenced this pull request on Jan 26, 2021:
Add `vectorize` flag to torch.autograd.functional.{jacobian, hessian}

ghstack-source-id: bfabc65
Pull Request resolved: #50915
Codecov Report
```
@@               Coverage Diff                @@
##    gh/zou3519/345/base    #50915      +/-  ##
=================================================
+ Coverage         80.66%    80.91%    +0.24%
=================================================
  Files              1924      1924
  Lines            210009    210043       +34
=================================================
+ Hits             169414    169948      +534
+ Misses            40595     40095      -500
```
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request on Apr 24, 2026:
Add `vectorize` flag to torch.autograd.functional.{jacobian, hessian} (pytorch#50915)

Summary: Pull Request resolved: pytorch#50915. Fixes pytorch#50584.

Reviewed By: heitorschueroff
Differential Revision: D26057674
Pulled By: zou3519
fbshipit-source-id: a8ae7ca0d2028ffb478abd1b377f5b49ee39e4a1
Stack from ghstack:
- #50915 Add `vectorize` flag to torch.autograd.functional.{jacobian, hessian} (this PR)

Fixes #50584
Add a `vectorize` flag to torch.autograd.functional.jacobian and torch.autograd.functional.hessian (default: False). Under the hood, the vectorize flag uses vmap as the backend to compute the jacobian and hessian, providing speedups to users.
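Conceptually, the vectorized path replaces a Python loop of vector-Jacobian products, one per output element, with a single batched call. A rough sketch of that idea using today's public torch.func API (the PR itself used the then-prototype internal vmap, so this is illustrative only):

```python
import torch

def f(x):
    return x.sin()

x = torch.randn(3)
y, vjp_fn = torch.func.vjp(f, x)
cotangents = torch.eye(y.numel())        # standard basis of the output space
(jac,) = torch.vmap(vjp_fn)(cotangents)  # one batched VJP instead of a loop
assert torch.allclose(jac, torch.diag(x.cos()))  # d/dx sin(x) = cos(x)
```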
Test Plan:
- Updated all of the jacobian and hessian tests to also use vectorize=True.
- Added simple sanity-check tests that compare jacobian with vectorize=False against jacobian with vectorize=True.
- The mechanism for vectorize=True goes through batched gradient computation. We have separate tests for those (see other PRs in this stack).
Differential Revision: D26057674