
[dist_optim] add distributed functional Adadelta optimizer #50623

Closed
wanchaol wants to merge 6 commits into gh/wanchaol/155/base from gh/wanchaol/155/head

Conversation

@wanchaol (Collaborator) commented on Jan 15, 2021

Stack from ghstack:

Add TorchScript compatible Adadelta functional optimizer to distributed optimizer

Differential Revision: D25932772
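
For context on what "functional" means here: the core of the change is a TorchScript-scriptable optimizer class that owns its per-parameter state and receives gradients explicitly through `step()`, instead of reading them from `param.grad` in Python. The sketch below is a minimal reconstruction of that shape using the standard Adadelta update rule, not the exact code from this PR; the class name `_FunctionalAdadelta` follows the convention of the existing functional optimizers, and details such as `Optional` gradient handling are omitted.

```python
from typing import List

import torch


# Minimal sketch of a TorchScript-compatible functional Adadelta.
# State is held on the scripted class; step() takes gradients
# explicitly rather than reading param.grad.
@torch.jit.script
class _FunctionalAdadelta(object):
    def __init__(
        self,
        params: List[torch.Tensor],
        lr: float = 1.0,
        rho: float = 0.9,
        eps: float = 1e-6,
        weight_decay: float = 0.0,
    ):
        self.params = params
        self.lr = lr
        self.rho = rho
        self.eps = eps
        self.weight_decay = weight_decay
        # Running averages of squared gradients and of squared updates.
        self.square_avgs = [torch.zeros_like(p) for p in params]
        self.acc_deltas = [torch.zeros_like(p) for p in params]

    def step(self, gradients: List[torch.Tensor]):
        for i, param in enumerate(self.params):
            grad = gradients[i]
            if self.weight_decay != 0.0:
                grad = grad.add(param, alpha=self.weight_decay)
            square_avg = self.square_avgs[i]
            acc_delta = self.acc_deltas[i]
            # E[g^2] <- rho * E[g^2] + (1 - rho) * g^2
            square_avg.mul_(self.rho).addcmul_(grad, grad, value=1.0 - self.rho)
            std = square_avg.add(self.eps).sqrt_()
            # delta = sqrt(E[dx^2] + eps) / sqrt(E[g^2] + eps) * g
            delta = acc_delta.add(self.eps).sqrt_().div_(std).mul_(grad)
            param.add_(delta, alpha=-self.lr)
            # E[dx^2] <- rho * E[dx^2] + (1 - rho) * delta^2
            acc_delta.mul_(self.rho).addcmul_(delta, delta, value=1.0 - self.rho)
```

Because the whole class compiles to TorchScript, a remote worker in the RPC-based distributed optimizer can run the update without re-entering the Python interpreter, which is the usual motivation for these functional rewrites.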

@rohan-varma (Contributor) left a comment


Looks like there is a potentially related CI failure; could you confirm?

https://app.circleci.com/pipelines/github/pytorch/pytorch/260823/workflows/c7ded86c-e48d-4482-a8eb-a53c54a412a9/jobs/10202460

Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 422, in wrapper
    fn(*args, **kwargs)
  File "test_optim.py", line 380, in test_multi_tensor_optimizers
    self.assertEqual(p1, p2)
  File "/opt/conda/lib/python3.6/site-packages/torch/testing/_internal/common_utils.py", line 1179, in assertEqual
    super().assertTrue(result, msg=self._get_assert_msg(msg, debug_msg=debug_msg))
AssertionError: False is not true : Tensors failed to compare as equal! With rtol=1e-07 and atol=1e-07, found 6 element(s) (out of 6) whose difference(s) exceeded the margin of error (including 0 nan comparisons). The greatest difference was 0.14733898893893732 (0.020583703180005117 vs. 0.16792269211894245), which occurred at index (2, 1).
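
For readers decoding the failure message: the comparison is an element-wise tolerance check in the style of `torch.allclose`, which passes when |a - b| <= atol + rtol * |b|. A small illustration with the two offending values from the traceback (this reproduces the reported margin, though the test harness's comparison has more machinery around it):

```python
import torch

# The pair of values reported at index (2, 1) in the failure above.
p1 = torch.tensor(0.020583703180005117)
p2 = torch.tensor(0.16792269211894245)

# Tolerance check in the torch.allclose style: |p1 - p2| <= atol + rtol * |p2|
print(torch.allclose(p1, p2, rtol=1e-07, atol=1e-07))  # False
print((p1 - p2).abs().item())  # ~0.147339..., far outside the margin
```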

Add TorchScript compatible Adadelta functional optimizer to distributed optimizer

Differential Revision: [D25932772](https://our.internmc.facebook.com/intern/diff/D25932772)

[ghstack-poisoned]
@rohan-varma self-requested a review on January 21, 2021 at 09:24
@rohan-varma (Contributor) left a comment


CI looks good now, LGTM

@facebook-github-bot (Contributor) commented:
@wanchaol merged this pull request in 6c81b4d.

@facebook-github-bot deleted the gh/wanchaol/155/head branch on January 26, 2021 at 15:21
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request on Apr 24, 2026

[dist_optim] add distributed functional Adadelta optimizer (pytorch#50623)

Summary:
Pull Request resolved: pytorch#50623

Add TorchScript compatible Adadelta functional optimizer to distributed optimizer

Test Plan: Imported from OSS

Reviewed By: rohan-varma

Differential Revision: D25932772

Pulled By: wanchaol

fbshipit-source-id: d59b04e5f0b6bab7e0d1c5f68e66249a65958e0b

Labels

cla signed, Merged, oncall: distributed
