[benchmark] refactor bench (part 1) #10409
Conversation
Summary of Changes
Hello @XucSh, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request initiates a significant refactoring of the existing bench_serving module to enhance its modularity and extensibility. The changes introduce a clear separation of concerns, organizing the benchmarking logic into dedicated components for backends, datasets, metrics, and utility functions. This foundational work aims to simplify the integration of new inference engines and datasets, making the benchmarking framework more robust and easier to maintain. The initial implementation focuses on integrating the SGLang backend and the ShareGPT dataset, setting the stage for broader support in subsequent updates.
Highlights
- **Benchmark Refactoring:** The `bench_serving` module has been refactored into a more modular architecture, separating concerns into distinct `backends`, `datasets`, `metrics`, and `utils` components.
- **Initial Backend and Dataset Support:** Currently, the refactored benchmark supports the SGLang backend and the ShareGPT dataset, with plans for additional backends and datasets in future pull requests.
- **Command-Line Argument Management:** New modules introduce a structured way to define and manage command-line arguments for dataset configuration, serving parameters, and common benchmark options using the `click` library (see the sketch after this list).
- **Asynchronous Request Handling:** The benchmark runner now leverages asynchronous programming (`asyncio`) for efficient request generation and dispatching to the backend, including warmup requests and concurrency control.
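To make the `click` and `asyncio` patterns above concrete, here is a minimal, self-contained sketch. All names here (`BaseBackend`, `SGLangBackend`, `run_benchmark`, and the option names) are hypothetical illustrations of the pattern, not the PR's actual API:

```python
# Hypothetical sketch of a click-driven, asyncio-based benchmark runner.
# None of these names are taken from the PR; they only illustrate the pattern.
import asyncio
from abc import ABC, abstractmethod

import click


class BaseBackend(ABC):
    """Abstract backend: each inference engine implements send_request."""

    @abstractmethod
    async def send_request(self, prompt: str) -> str: ...


class SGLangBackend(BaseBackend):
    async def send_request(self, prompt: str) -> str:
        # Real code would POST to the serving endpoint; this stub just echoes.
        await asyncio.sleep(0)
        return f"response to: {prompt}"


async def run_benchmark(backend: BaseBackend, prompts: list[str], concurrency: int):
    sem = asyncio.Semaphore(concurrency)  # cap the number of in-flight requests

    async def one(prompt: str) -> str:
        async with sem:
            return await backend.send_request(prompt)

    return await asyncio.gather(*(one(p) for p in prompts))


@click.command()
@click.option("--backend", default="sglang")
@click.option("--num-prompts", default=10, type=int)
@click.option("--max-concurrency", default=8, type=int)
def main(backend: str, num_prompts: int, max_concurrency: int):
    prompts = [f"prompt {i}" for i in range(num_prompts)]
    results = asyncio.run(run_benchmark(SGLangBackend(), prompts, max_concurrency))
    click.echo(f"completed {len(results)} requests")


if __name__ == "__main__":
    main()
```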
Code Review
This PR does a great job of refactoring the benchmarking logic into a more modular and extensible structure. The separation into backends, datasets, metrics, and a runner is clean. I've found a few critical issues related to dataset downloading and incorrect function signatures that will cause runtime errors. I've also pointed out some areas for improvement in terms of code correctness and maintainability, such as using async network calls in async functions and avoiding dynamic attribute assignment to dataclasses. Once these issues are addressed, this will be a solid foundation for future benchmarking work.
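To make the last two review points concrete, here is a generic sketch (not taken from the PR's diff) of the preferred patterns: a non-blocking HTTP call inside an async function, and a dataclass with every field declared up front instead of attributes assigned dynamically:

```python
# Generic sketch of the two review points; illustrative, not the PR's code.
import asyncio
from dataclasses import dataclass, field

import aiohttp


@dataclass
class RequestResult:
    prompt: str
    latency_s: float = 0.0
    # Declare every field up front rather than assigning result.extra = ...
    # after the fact; dynamic attributes bypass dataclass features such as
    # __repr__ and dataclasses.asdict().
    extra: dict = field(default_factory=dict)


async def fetch(session: aiohttp.ClientSession, url: str, payload: dict) -> dict:
    # Inside an async function, use a non-blocking client (aiohttp) rather than
    # a blocking call like requests.post(), which would stall the event loop.
    async with session.post(url, json=payload) as resp:
        return await resp.json()


if __name__ == "__main__":
    r = RequestResult(prompt="hi", latency_s=0.12)
    r.extra["ttft_s"] = 0.03  # stored in the declared dict, not a dynamic attribute
    print(r)
```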
Refactor the bench_serving, split it into multiple parts.

Signed-off-by: Xuchun Shang <xuchun.shang@linux.alibaba.com>
@hnyls2002 @zhyncs @stmatengss Please take a look
QQ @CatherineSue do we still need this if we can make genai-bench great again :)
Great! Let's MGGA!
Yes, the original script should be refactored or replaced...
Refactor the bench_serving, split it into multiple parts.

See issue #10177.
This PR first refactors bench_serving by splitting its architecture into logical components, including backends, datasets, metrics, and utils.
Currently, only the SGLang backend and the ShareGPT dataset are implemented.
You can now run a test using the following command:

```bash
python3 -m sglang.benchmark.serving --backend sglang --num-prompts 10 --dataset-path /root/.cache/modelscope/hub/datasets/gliang1001/ShareGPT_V3_unfiltered_cleaned_split/ShareGPT_V3_unfiltered_cleaned_split.json --host 127.0.0.1 --port 30000
```

Support for other datasets and backends will be added in subsequent PRs.
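For context, the ShareGPT dump referenced above is a JSON list of conversations, each carrying a `conversations` array of `{"from", "value"}` turns. A loader along these lines could sample the human turns as prompts; this is a hedged sketch, not the PR's actual implementation (`load_sharegpt_prompts` is a hypothetical name):

```python
# Sketch of sampling prompts from the ShareGPT JSON dump; illustrative only.
import json
import random


def load_sharegpt_prompts(path: str, num_prompts: int) -> list[str]:
    with open(path) as f:
        data = json.load(f)  # a list of {"id": ..., "conversations": [...]}
    prompts = [
        turn["value"]
        for conv in data
        for turn in conv.get("conversations", [])
        if turn.get("from") == "human"
    ]
    return random.sample(prompts, min(num_prompts, len(prompts)))
```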
Once the refactor is done, the old bench_serving should be deprecated.
Cc @stmatengss @hnyls2002
Checklist