deprecate sharded_ddp training argument #24825
Conversation
Force-pushed from b6b5d63 to 96fe20d
sgugger left a comment
That makes sense, thanks! Note that while removing it entirely from the doc is fine, we can't abruptly remove it from the library like this. We will need to properly deprecate it first, and in two to three minor versions we can fully remove it.
Force-pushed from 96fe20d to d8eb514
@sgugger Thanks for pointing this out. I have rolled back the code changes and added a warning that the `sharded_ddp` training argument is deprecated.
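For readers following along, here is a minimal sketch (not the actual diff in this PR) of the kind of deprecation warning a training argument like `sharded_ddp` could emit; the helper name and the message wording are illustrative only.

```python
import warnings

def _check_sharded_ddp(sharded_ddp_value):
    """Illustrative helper (not the actual PR code): warn when the deprecated
    sharded_ddp argument is still being passed."""
    if sharded_ddp_value:
        warnings.warn(
            "`sharded_ddp` is deprecated and will be removed in a future release; "
            "use the `fsdp` training argument (PyTorch FSDP) instead.",
            FutureWarning,
        )
```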
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
What does this PR do?
This PR deprecates the `sharded_ddp` training argument, since Sharded DDP has been upstreamed to PyTorch, and users can use the `fsdp` training argument instead. According to fairscale (see), PyTorch FSDP is the recommended method for scaling to large NN models. I think Sharded-DDP is dead and it's time to say goodbye to this library.
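To make the migration concrete, here is a rough sketch of switching from the deprecated `sharded_ddp` argument to `fsdp` in `TrainingArguments`; the option strings below are examples, and the exact values supported depend on your transformers version.

```python
from transformers import TrainingArguments

# Before (deprecated): Sharded DDP backed by fairscale
old_args = TrainingArguments(
    output_dir="out",
    sharded_ddp="zero_dp_3",  # example option string
)

# After: the roughly equivalent PyTorch FSDP setting
new_args = TrainingArguments(
    output_dir="out",
    fsdp="full_shard",  # example option string
)
```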
Before submitting
- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.