fix: bench_serving ITL calculation when using spec-decoding #12064
Conversation
Code Review
This pull request addresses an issue in the bench_serving.py file related to the calculation of Inter-Token Latency (ITL) when using speculative decoding. The changes introduce an accept_length parameter to the calculate_metrics function and modify the ITL calculation logic to account for this parameter when the backend is 'sglang-oai' or 'sglang-oai-chat'. The code has been reviewed and suggestions have been made to improve the clarity and correctness of the ITL calculation.
Motivation
This PR addresses an inconsistency in the bench_serving results, specifically regarding the Inter-Token Latency (ITL) metric when speculative decoding is enabled for SGLang's OpenAI-compatible backend. Previously, ITL was calculated per speculative decoding verify step (chunk) rather than per token, leading to an inflated ITL value that did not align with the reported output token throughput.
Before this PR, a bench result could look like:
Here the real ITL should be around 1000/292.29 ≈ 3.42 ms (1000 ms divided by the reported output token throughput of 292.29 tok/s).
After this PR, we get:
Which is more reasonable.
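As a rough cross-check (the 292.29 tok/s output throughput comes from the run above; the accept length below is a hypothetical value), the relationship between throughput, chunk-level ITL, and per-token ITL looks like this:

```python
# Per-token ITL in ms should roughly equal 1000 / output token throughput (tok/s).
output_throughput = 292.29                 # tok/s, reported by bench_serving above
implied_itl_ms = 1000 / output_throughput  # ≈ 3.42 ms per token

# With speculative decoding, each verify step streams ~accept_length tokens at once,
# so the measured gap between chunks is ~accept_length times the per-token ITL.
accept_length = 4.0                        # hypothetical average accepted tokens per verify step
chunk_level_itl_ms = implied_itl_ms * accept_length    # what bench_serving reported before this PR
per_token_itl_ms = chunk_level_itl_ms / accept_length  # what it reports after this PR

print(f"implied per-token ITL:  {implied_itl_ms:.2f} ms")
print(f"chunk-level ITL (old):  {chunk_level_itl_ms:.2f} ms")
print(f"per-token ITL (new):    {per_token_itl_ms:.2f} ms")
```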
Modifications
For the `sglang-oai` and `sglang-oai-chat` backends, if `accept_length` is provided (indicating speculative decoding is active), the collected ITL values are divided by this `accept_length`. This converts the chunk-level latencies to estimated per-token latencies, as sketched below.
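A minimal sketch of the adjustment (the helper name below is hypothetical; the backend names and `accept_length` follow the PR description, and the real change lives inside `calculate_metrics` in `bench_serving.py`):

```python
from typing import List, Optional

def _per_token_itls(
    itls: List[float],
    backend: str,
    accept_length: Optional[float],
) -> List[float]:
    """Convert chunk-level inter-token latencies to estimated per-token latencies.

    With speculative decoding, each streamed chunk from the sglang-oai /
    sglang-oai-chat backends carries roughly `accept_length` tokens, so the
    observed gap between chunks overstates the per-token ITL by that factor.
    """
    if backend in ("sglang-oai", "sglang-oai-chat") and accept_length:
        return [itl / accept_length for itl in itls]
    return itls
```

Accuracy Tests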
Benchmarking and Profiling
Checklist