Distributed prompting/inference utility #1410
Merged
Conversation
sgugger (Collaborator) reviewed May 11, 2023 and left a comment:
Thanks for the PR! I think this should contain an option to pad the splits by looping back to the beginning, since if users try to gather predictions that are not all the same shape, they will get a hang.
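The hang the reviewer describes comes from collective gathers expecting every process to contribute the same number of results. A minimal plain-Python sketch of the suggested fix (padding by looping back to the beginning of the input list) is below; `split_with_padding` is a hypothetical illustration, not the Accelerate API:

```python
def split_with_padding(items, num_processes):
    """Split `items` evenly across processes, padding short splits by
    looping back to the beginning of the list so every process ends up
    with the same number of items (so a later gather cannot hang on
    mismatched shapes). Hypothetical sketch, not the Accelerate API."""
    per_proc = -(-len(items) // num_processes)  # ceil division
    splits = [items[i * per_proc:(i + 1) * per_proc]
              for i in range(num_processes)]
    pad_idx = 0
    for split in splits:
        while len(split) < per_proc:
            split.append(items[pad_idx % len(items)])  # wrap to the start
            pad_idx += 1
    return splits

# Three prompts on two processes: the second split is padded back to length 2.
assert split_with_padding(["a", "b", "c"], 2) == [["a", "b"], ["c", "a"]]
```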
sgugger (Collaborator) approved these changes May 17, 2023 and left a comment:
Some comments on the doc but otherwise LGTM!
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
This PR introduces a new utility in `Accelerator`, `AcceleratorState`, and `PartialState`: `Accelerator.split_between_processes`. When performing distributed inference in applications such as Stable Diffusion, it is often useful to send one prompt to GPU A, another prompt to GPU B, and so forth. This PR introduces a new context manager that lets the user send some data in and splits it evenly across all instances for them to use. An example application might look like this:
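The code example referenced above did not survive extraction. In Accelerate itself the utility is a context manager, but the even split it performs can be sketched in plain Python; `local_split` below is a hypothetical stand-in, not the Accelerate API:

```python
def local_split(inputs, num_processes, process_index):
    """Return the slice of `inputs` assigned to one process: an even
    split across `num_processes`, mirroring the behavior described in
    the PR. Hypothetical stand-in, not the Accelerate API."""
    per_proc = -(-len(inputs) // num_processes)  # ceil division
    return inputs[process_index * per_proc:(process_index + 1) * per_proc]

prompts = ["a dog", "a cat"]
assert local_split(prompts, 2, 0) == ["a dog"]  # what GPU A would see
assert local_split(prompts, 2, 1) == ["a cat"]  # what GPU B would see
```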
On a two-process system, GPU A would receive `"a dog"` and GPU B would receive `"a cat"`. This is also especially useful for cases where using a `DataLoader` to perform the task is too much code, and the user just wants to send in strings or already-preprocessed dictionaries and split them.