Small optimization for adam #12107
Closed
jma127 wants to merge 1 commit into pytorch:master from jma127:master
Conversation
soumith approved these changes on Sep 26, 2018
facebook-github-bot (Contributor) left a comment:
soumith is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Collaborator:
well... I would certainly expect .grad to not change after optimizer step.
Contributor (Author):
Hmm, then the SGD implementation should be fixed to satisfy that invariant. I'll leave it to you guys to determine whether or not this is a necessary invariant -- feel free to revert as you see fit.
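For context, the invariant under discussion can be checked from user code. Below is a minimal editorial sketch (the parameter shape, learning rate, and weight_decay value are illustrative, not taken from the PR) showing how an in-place weight decay causes `p.grad` to change across `optimizer.step()`:

```python
import torch

p = torch.nn.Parameter(torch.randn(3))
opt = torch.optim.Adam([p], lr=1e-3, weight_decay=0.1)

p.sum().backward()
grad_before = p.grad.clone()

opt.step()

# With out-of-place weight decay, step() leaves p.grad untouched and this prints True;
# with the in-place variant from this PR, p.grad now carries the decay term as well.
print(torch.equal(grad_before, p.grad))
```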
Apply weight decay for Adam in-place instead of via copy.
Synced offline with @soumith, who mentioned that it should be OK. This is also consistent with other optimizers, e.g. pytorch/torch/optim/sgd.py, line 93 at eee0173.
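A rough sketch of the change (not the verbatim diff; the surrounding parameter loop is omitted and the keyword-argument form of `add_` is used here for clarity): the weight decay term is folded into the gradient buffer in place rather than materialized in a freshly allocated tensor, mirroring what the cited sgd.py line does.

```python
import torch

def _decayed_grad(p: torch.nn.Parameter, weight_decay: float) -> torch.Tensor:
    """Fold L2 weight decay into the gradient, in the spirit of this PR (sketch only)."""
    grad = p.grad.data
    if weight_decay != 0:
        # Before: grad = grad.add(p.data, alpha=weight_decay)  # allocates a new tensor
        # After:  mutate the gradient buffer directly, avoiding the copy (this also
        #         means .grad is modified by the optimizer step, per the discussion above)
        grad.add_(p.data, alpha=weight_decay)
    return grad
```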