
[feat][OSS] elastic and pytorch compatible checkpoints#310

Merged
blefaudeux merged 42 commits into master from oss_elastic_checkpoints on Feb 2, 2021
Conversation

@blefaudeux
Contributor

blefaudeux commented Jan 13, 2021

Before submitting

  • Was this discussed/approved via a Github issue? (no need for typos, doc improvements)
  • Did you read the contributor guideline?
  • Did you make sure to update the docs?
  • Did you write any new necessary tests?

What does this PR do?

Fixes #164, and makes the saved state pytorch-compliant (no extra keyword). The number of ranks can change between save and load; the state is automatically repartitioned at load time to adapt. Also adds a new unit test which checks reproducibility (cc @joshim5)
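The repartitioning behavior can be illustrated with a small sketch (plain Python, not the actual fairscale code): parameters are assigned greedily to the currently least-loaded rank, so a consolidated state can be re-split for any world size at load time.

```python
def partition(param_sizes, world_size):
    """Greedily assign each parameter to the currently least-loaded rank.

    Illustrative sketch only: because the assignment depends just on the
    parameter sizes and the world size, a consolidated state can be
    re-split for any number of ranks at load time.
    """
    buckets = [[] for _ in range(world_size)]
    loads = [0] * world_size
    for index, size in enumerate(param_sizes):
        rank = loads.index(min(loads))  # least-loaded rank wins
        buckets[rank].append(index)
        loads[rank] += size
    return buckets

# Save with 8 ranks, reload with 16: each parameter still lands on exactly one rank.
sizes = [512, 1024, 256, 2048, 128, 768]
for world_size in (8, 16):
    shards = partition(sizes, world_size)
    assert sorted(i for bucket in shards for i in bucket) == list(range(len(sizes)))
```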

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in Github issues there's a high chance it will not be merged.
@joshim5 @stas00 @SeanNaren this breaks compatibility with old checkpoints, is that a big issue? I could add some scaffolding to migrate old checkpoints to the new format.

@mannatsingh I know you mentioned this a long time ago; we're finally there. Not sure how you would rate the complexity of doing this (masking the sharding when rebuilding a pytorch-compatible state dict), but it now works out of the box with this PR.

Did you have fun?

Make sure you had fun coding 🙃

@blefaudeux blefaudeux requested review from joshim5 and min-xu-ai and removed request for min-xu-ai January 13, 2021 22:02
@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Jan 13, 2021
@blefaudeux blefaudeux requested review from anj-s, joshim5, min-xu-ai and msbaines and removed request for joshim5 and msbaines January 13, 2021 22:02
@blefaudeux blefaudeux marked this pull request as draft January 13, 2021 22:11
@joshim5
Contributor

joshim5 commented Jan 13, 2021

Incompatibility with old Fairscale checkpoints is fine with me -- but if we could support checkpoints from any arbitrary ADAM optimizer, that would be a huge win! It doesn't even have to be in this class - a separate translation script could work if that's easier.

@blefaudeux
Contributor Author

> Incompatibility with old Fairscale checkpoints is fine with me -- but if we could support checkpoints from any arbitrary ADAM optimizer, that would be a huge win! It doesn't even have to be in this class - a separate translation script could work if that's easier.

Should be doable within the same PR, I'll have a look at that.
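As a rough idea of what such a translation script could look like (an illustrative sketch only; the function name and partition format are made up, not fairscale's API), it would split a vanilla torch-style checkpoint's per-parameter state according to a rank-to-parameter-indices partition:

```python
def shard_vanilla_checkpoint(checkpoint, partitions):
    """Split a vanilla torch-style optimizer checkpoint into per-rank shards.

    Hypothetical translation helper, for illustration only.
    `partitions` maps rank -> list of parameter indices owned by that rank.
    """
    shards = {}
    for rank, owned in partitions.items():
        shards[rank] = {
            # keep only the state entries this rank owns (some params may
            # have no state yet, e.g. before the first step)
            "state": {i: checkpoint["state"][i] for i in owned
                      if i in checkpoint["state"]},
            "param_groups": checkpoint["param_groups"],
        }
    return shards

ckpt = {"state": {0: {"exp_avg": 0.1}, 1: {"exp_avg": 0.2}},
        "param_groups": [{"lr": 1e-3, "params": [0, 1]}]}
shards = shard_vanilla_checkpoint(ckpt, {0: [0], 1: [1]})
assert shards[0]["state"] == {0: {"exp_avg": 0.1}}
assert shards[1]["state"] == {1: {"exp_avg": 0.2}}
```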

@anj-s
Contributor

anj-s commented Jan 13, 2021

@blefaudeux Let me know when this is ready for review.

@stas00
Contributor

stas00 commented Jan 14, 2021

Thank you for the ping, @blefaudeux!

@sgugger did the fairscale integration, so we should probably check with him. (I only wrote the docs and added some basic tests)

I had a quick look and I don't see us doing anything special for fairscale checkpoint-wise, just straight pytorch. On the other hand, we don't really have any tests at the moment that cover checkpointing w/ fairscale (or deepspeed for that matter), so it's possible that it never worked properly in the first place and you're doing us a favor with this fix. Yay!

@blefaudeux
Contributor Author

> Thank you for the ping, @blefaudeux!
>
> @sgugger did the fairscale integration, so we should probably check with him. (I only wrote the docs and added some basic tests)
>
> I had a quick look and I don't see we are doing anything special for fairscale - checkpoint-wise, just straight pytorch. On the other hand we don't really have any tests at the moment that test checkpointing w/ fairscale (or deepspeed for that matter) so it's possible that it has never worked properly in the first place and you're doing us a favor with this fix. Yay!

Thanks for the feedback! To be clear, it works right now; this is not really a fix but a change to (a) align with pytorch (a pytorch checkpoint should load) and (b) make it possible to change the number of ranks between checkpoints (save with 8 ranks and reload with 16, for instance).
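For point (a), "pytorch-compliant" means the checkpoint keeps the exact layout produced by `torch.optim.Optimizer.state_dict()`: the two top-level keys `state` and `param_groups` and nothing else. A tiny sketch of such a check (the extra `partition` key below is hypothetical, just standing in for sharding-specific metadata):

```python
def is_pytorch_compliant(state_dict):
    """True when the checkpoint follows the vanilla torch.optim layout:
    exactly the two top-level keys 'state' and 'param_groups', with no
    sharding-specific extras."""
    return set(state_dict) == {"state", "param_groups"}

vanilla = {
    "state": {0: {"momentum_buffer": None}},
    "param_groups": [{"lr": 0.1, "params": [0]}],
}
sharded = dict(vanilla, partition=[(0, 1)])  # 'partition' is a made-up extra key
assert is_pytorch_compliant(vanilla)
assert not is_pytorch_compliant(sharded)
```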

@sgugger

sgugger commented Jan 14, 2021

I don't think the compatibility issue is a big matter on our side: for us checkpoints are used to resume training if it was interrupted for some reason, so I don't expect users will have updated their envs in-between.

@blefaudeux
Contributor Author

> I don't think the compatibility issue is a big matter on our side: for us checkpoints are used to resume training if it was interrupted for some reason, so I don't expect users will have updated their envs in-between.

Alright, perfect, and thanks for dropping by! I'll remember to keep you in the loop @sgugger, but no big changes are expected on OSS after this one.

@blefaudeux blefaudeux changed the title [feat][OSS] elastic checkpoints [feat][OSS] elastic and pytorch compatible checkpoints Jan 14, 2021
@blefaudeux
Contributor Author

The optimizer state contents changed in torch 1.6+, and I'm not handling 1.5 well right now (this change was new to me). Updating that, then putting the PR up for review.

@blefaudeux blefaudeux marked this pull request as ready for review January 15, 2021 17:35
@blefaudeux
Contributor Author

Ready for review. Figuring out something not too ugly that works both for torch 1.5 (some map indices are id(params)) and above (just plain geometric indices: 0, 1, 2) was interesting; I think it's mostly ok now.
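The 1.5-vs-1.6 wrinkle can be sketched as a key-normalization pass (illustrative only, not fairscale's actual code: under torch < 1.6 the per-parameter state could be keyed by opaque id(param)-style integers, while 1.6+ uses plain positional indices 0, 1, 2, ...):

```python
def normalize_state_keys(state, param_groups):
    """Map arbitrary state keys (e.g. id(param)-style under torch 1.5)
    to the positional indices (0, 1, 2, ...) used by torch 1.6+.

    `param_groups` lists the original keys in parameter order, so the
    translation table is simply key -> position.
    """
    order = [k for group in param_groups for k in group["params"]]
    key_to_index = {key: idx for idx, key in enumerate(order)}
    new_state = {key_to_index[k]: v for k, v in state.items()}
    new_groups = [dict(g, params=[key_to_index[k] for k in g["params"]])
                  for g in param_groups]
    return new_state, new_groups

# id()-style keys become plain geometric indices.
state = {140230: {"step": 3}, 140466: {"step": 3}}
groups = [{"lr": 0.1, "params": [140230, 140466]}]
new_state, new_groups = normalize_state_keys(state, groups)
assert new_state == {0: {"step": 3}, 1: {"step": 3}}
assert new_groups[0]["params"] == [0, 1]
```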

@blefaudeux
Contributor Author

Ping reviewers: happy to change anything if it means this can land somehow; it's hard to keep old PRs alive.

@blefaudeux
Contributor Author

This fixes the linting that #342 appears to have broken, even though this PR does not touch those files.

@blefaudeux blefaudeux requested a review from anj-s February 1, 2021 17:27
blefaudeux and others added 2 commits February 1, 2021 19:26
* adding fairseq's gha
* adding py3.9, removing submodules
* yaml linting
authored-by: Anjali Sridhar <anj@devfair0443.h2.fair>
@blefaudeux
Contributor Author

> This fixes the linting that #342 is supposed to have broken, even if not touching the files

Solved with #348.

Contributor

@msbaines msbaines left a comment


Can you add a test for multiple param_groups? There was a bug related to that in an earlier iteration of this change. Other than that, LGTM.

Contributor

@msbaines msbaines left a comment


Please add a test for multiple param_groups before submitting.

@blefaudeux
Contributor Author

> Please add a test for multiple param_groups before submitting.

The ddp_parity test now uses two param groups with different learning rates; let me know if you had something else in mind.

@msbaines
Contributor

msbaines commented Feb 2, 2021

> Please add a test for multiple param_groups before submitting.

> the ddp_parity test takes 2 groups now, with different learning rates, let me know if you had something else in mind

global_id_map was incorrect in an earlier iteration of the patch for the case of multiple param_groups, but no tests were failing. It would be nice to have a test that verifies that the new load_state_dict and state_dict logic work when there are multiple param_groups. Not sure if the ddp_parity test handles that.

@blefaudeux
Contributor Author

> global_id_map was incorrect in an earlier iteration of the patch for the case of multiple param_groups but no tests were failing. Would be nice to have a test that verifies that new load_state_dict and state_dict logic work when there are multiple param_groups. Not sure if the ddp_parity test handles that.

yes it does, it checks that:

  • after a couple of steps you get the exact same in both cases
  • if you save dict on both sides, then cross load (ddp loads the oss state dict, and vice versa) nothing breaks
  • if you iterate after that, you still have the same results on both sides

There's also another test which checks that, on OSS alone, stepping, then rolling back (loading an old state dict), then stepping again gives the same result. That one has a single param group; it's always possible to add more, of course.
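The rollback check described above can be mimicked with a toy scalar optimizer (a plain-Python stand-in for OSS, purely for illustration): step, snapshot the state, step again, reload the snapshot, and verify that replaying the step reproduces the same result.

```python
class ToyOptim:
    """Minimal stand-in for an optimizer with save/load, illustrating the
    rollback test: stepping after a reload must reproduce the original run."""

    def __init__(self, param, lr=0.1):
        self.param, self.lr, self.step_count = param, lr, 0

    def step(self, grad):
        self.param -= self.lr * grad
        self.step_count += 1

    def state_dict(self):
        return {"param": self.param, "step_count": self.step_count}

    def load_state_dict(self, sd):
        self.param, self.step_count = sd["param"], sd["step_count"]

opt = ToyOptim(1.0)
opt.step(0.5)
snapshot = opt.state_dict()    # checkpoint after one step
opt.step(0.5)
after_two = opt.param
opt.load_state_dict(snapshot)  # roll back to the checkpoint
opt.step(0.5)                  # replay the second step
assert opt.param == after_two  # deterministic: rollback + replay matches
```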

@blefaudeux blefaudeux merged commit 9e8929e into master Feb 2, 2021
@blefaudeux blefaudeux deleted the oss_elastic_checkpoints branch February 2, 2021 23:41
@blefaudeux blefaudeux mentioned this pull request Feb 12, 2021
myleott pushed a commit that referenced this pull request Feb 22, 2021
* [chore] Fix lint errors that broke master (#348)

authored-by: Anjali Sridhar <anj@devfair0443.h2.fair>

* [fix] ShardedDDP - cpu testfix - remove Gloo/CPU (#350)

* no idea about the root issue, but it proved to be fairly narrow (gloo + cpu + python 3.8 + no cuda installed), so I guess that's out of scope for fairscale

* [feat][OSS] elastic and pytorch compatible checkpoints (#310)

* adding a test to prove the inter operability with upstream pytorch
* updating the changelog
* eager state pruning
* pytorch 1.5 compat

* [fix] ShardedDDP - properly handle post device change (#353)

* adding the .to(device) support + unit testing
* doc update

* [feat] Add AdaScaleWrapper (#347)

* [feat] Add AdaScaleWrapper

- This enables a different API for wrapping an optimizer with AdaScale.
- This also enables AdaScale to be wrapped by OSS.
- However, OSS wrapping AdaScale results in different optimization,
  which future research will be needed to study its effects.

testing: add unit tests.

* addressed comment: typo

* [refactor] Refactor and enable multiprocess nn.Pipe benchmarks. (#319)

* mp cleanup

* round of multiprocess refactoring

* test golden run

* print cuda stats

* fix lint errors

* enable multiprocess pipe benchmarks

* set world size to be available gpus

* more changes

* use synthetic loaders for intermediate pipeline stages

* merged master

* fix for the devices property

* dataloader fix

* modify rank check

* print wps stats

* enable verification

* fix logging

* fix flag name

* fix flag name

* check for rank

* fix indent

* pass args

* pass args

* modify golden data

* remove unused print message

* fix lint errors

* add comments

* fix benchmarks

Co-authored-by: Anjali Sridhar <anj@devfair0443.h2.fair>

* [refactor] pipe: simplify balance and module checks (#346)

* [chore] v0.1.5 (#355)

* [chore] disheartening switch off of a OSS cpu test (#356)

* precise skip, only if agent has only cpu

* [feat][minor] OSS Benchmark - regression test + background testing new optims (#352)

* restoring the regression test, adding a test of the for_each optims
* fix the regression test on circleci
* removing unused flags

* [refactor] multiprocess_pipe: cleanup __init__ (#357)

* [refactor] multiprocess_pipe: remove retain_graph __init__ param (#358)

It is not currently being used so we can simplify the interface
by removing it.

* [refactor] multiprocess_pipe: focus on LazyModule usage (#360)

* [feat] ShardedDDP : Adding a proper DDP parity / AMP unit test, overdue (#361)

* Adding a proper ddp parity / AMP unit test, overdue
* catch non-AMP pytorch

* [perf][OSS] Clip grad norm : minor obvious speedup (#363)

cache this iterator, easy speed up

* [refactor] multiprocess_pipe: remove pipelined_backward (#362)

* [perf] ShardedDDP - small memory use reduction - minor speedup (#366)

* minor

* minor

* [fix] repro+fix (#365)

fix a broken earlier commit, only worked for the first step

* [refactor] OSS only use flat buffers (#371)

* flat params all along, way simpler
* updating the docstring

* [refactor] AsyncPipe: do not sub-class MultiProcessPipe (#370)

* [refactor] remove multiprocess dependency on async (#373)

* [fix] Workaround need for pip --no-build-isolation (#375)

* Add fairscale.nn.misc.checkpoint_activations (#376)

* Add fairscale.utils.containers

Co-authored-by: Min Xu <24926999+min-xu-ai@users.noreply.github.com>

* Add fairscale.nn.misc.checkpoint_activations

Co-authored-by: Sam Shleifer <sshleifer@gmail.com>

Co-authored-by: Min Xu <24926999+min-xu-ai@users.noreply.github.com>
Co-authored-by: Sam Shleifer <sshleifer@gmail.com>

* [chore] v0.1.6 (#377)

* v0.1.6

Co-authored-by: anj-s <32556631+anj-s@users.noreply.github.com>
Co-authored-by: Benjamin Lefaudeux <blefaudeux@users.noreply.github.com>
Co-authored-by: Anjali Sridhar <anj@devfair0443.h2.fair>
Co-authored-by: msbaines <35972327+msbaines@users.noreply.github.com>
Co-authored-by: Leonard Lausen <leonard@lausen.nl>
Co-authored-by: Myle Ott <myleott@fb.com>
Co-authored-by: Sam Shleifer <sshleifer@gmail.com>

Labels

CLA Signed

Projects

None yet

Development

Successfully merging this pull request may close these issues.

OSS: Make the checkpoints partition-agnostic

8 participants