
vLLM model handler efficiency improvements #32687

Merged
damccorm merged 5 commits into master from users/damccorm/vllmEfficiency
Oct 15, 2024

Conversation

@damccorm
Contributor

@damccorm damccorm commented Oct 7, 2024

Because of how vLLM does batching, it is much more efficient to push as many requests to it as possible so that it can make the appropriate batching decisions. Right now, each worker thread sends requests serially; this change fixes that inefficiency by sending all data asynchronously to the vLLM server and then awaiting the responses.
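The dispatch pattern described above can be sketched with asyncio. This is a minimal illustration of the idea, not the PR's actual code; `fake_vllm_request` is a hypothetical stand-in for the real request to the vLLM server:

```python
import asyncio

async def fake_vllm_request(prompt: str) -> str:
    # Hypothetical stand-in for an HTTP call to the vLLM server;
    # the sleep simulates network/inference latency.
    await asyncio.sleep(0.05)
    return f"completion for {prompt!r}"

async def run_batch(prompts):
    # Dispatch every request at once so the server sees the whole
    # batch and can make its own batching decisions, then await
    # all responses together (instead of one request at a time).
    tasks = [asyncio.create_task(fake_vllm_request(p)) for p in prompts]
    return await asyncio.gather(*tasks)

results = asyncio.run(run_batch(["a", "b", "c"]))
```

With serial dispatch the server would only ever see one in-flight request per worker thread; issuing them concurrently lets vLLM's continuous batching fill each forward pass.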

Fixes #32528


Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
  • Update CHANGES.md with noteworthy changes.
  • If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

GitHub Actions Tests Status (on master branch)

Build python source distribution and wheels
Python tests
Java tests
Go tests

See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.

@damccorm damccorm marked this pull request as ready for review October 11, 2024 18:27
@github-actions
Contributor

Assigning reviewers. If you would like to opt out of this review, comment "assign to next reviewer":

R: @tvalentyn for label python.
R: @Abacn for label build.

Available commands:

  • stop reviewer notifications - opt out of the automated review tooling
  • remind me after tests pass - tag the comment author after tests pass
  • waiting on author - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).

@damccorm damccorm merged commit 06ecee9 into master Oct 15, 2024
@damccorm damccorm deleted the users/damccorm/vllmEfficiency branch October 15, 2024 18:19
reeba212 pushed a commit to reeba212/beam that referenced this pull request Dec 4, 2024
* vLLM model handler efficiency improvements

* fmt

* Remove bad exceptions

* lint

* lint

Development

Successfully merging this pull request may close these issues.

[Feature Request]: vLLM model handler

2 participants