
fix: bench_serving ITL calculation when using spec-decoding #12064

Merged

hnyls2002 merged 1 commit into sgl-project:main from JustinTong0323:fix-benchmark-discrepancies-for-spec-decoding on Oct 24, 2025

Conversation


@JustinTong0323 (Collaborator) commented on Oct 24, 2025

Motivation

This PR addresses an inconsistency in the bench_serving results, specifically regarding the Inter-Token Latency (ITL) metric when speculative decoding is enabled for SGLang's OpenAI-compatible backend. Previously, ITL was calculated per speculative decoding verify step (chunk) rather than per token, leading to an inflated ITL value that did not align with the reported output token throughput.

Before this PR, a bench result could look like:

Output token throughput (tok/s):         292.29
Median ITL (ms):                         8.72

Here the real ITL should be around 1000/292.29 ≈ 3.42 ms.

After this PR we could get:

Output token throughput (tok/s):         293.48
Median ITL (ms):                         3.20

This is much more reasonable, and consistent with the reported throughput.
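As a quick sanity check on these numbers (illustrative arithmetic only; the implied accept length below is an inference from the ratio, not a measured value):

```python
# Illustrative arithmetic only; numbers are copied from the runs above.
throughput_before = 292.29                # tok/s
expected_itl_ms = 1000 / throughput_before
print(f"{expected_itl_ms:.2f} ms")        # ~3.42 ms vs. the reported 8.72 ms

# The pre-fix median ITL is ~2.5x the expected per-token value, which
# would correspond to ~2.5 tokens accepted per verify step on average.
print(f"{8.72 / expected_itl_ms:.1f}x")   # ~2.5x
```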

Modifications

For sglang-oai and sglang-oai-chat backends, if accept_length is provided (indicating speculative decoding is active), the collected ITL values are divided by this accept_length. This converts the chunk-level latencies to estimated per-token latencies.
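A minimal sketch of the idea (not the exact patch: only `accept_length` and the two backend names come from the PR; the helper itself is hypothetical):

```python
# Backends for which chunk-level ITLs are normalized (per the PR).
SPEC_DECODE_BACKENDS = ("sglang-oai", "sglang-oai-chat")

def normalize_itls(itls, backend, accept_length=None):
    """Convert chunk-level inter-token latencies to per-token estimates.

    With speculative decoding, each streamed chunk carries roughly
    `accept_length` tokens on average, so dividing each inter-chunk
    latency by the mean accept length approximates the per-token ITL.
    """
    if backend in SPEC_DECODE_BACKENDS and accept_length is not None and accept_length > 0:
        return [itl / accept_length for itl in itls]
    return itls
```

For instance, with a mean accept length of about 2.7 (an inference from the numbers above), a measured 8.72 ms chunk latency becomes roughly 3.2 ms per token, matching the post-fix median ITL.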


@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @JustinTong0323, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses an issue in the bench_serving utility by refining how Inter-Token Latency (ITL) is calculated, particularly when speculative decoding is employed with specific SGLang backends. The change introduces a mechanism to normalize ITL values by the accepted token length, leading to more accurate and representative benchmark results for these decoding strategies.

Highlights

  • New Parameter for Metric Calculation: Introduced an optional accept_length parameter to the calculate_metrics function in bench_serving.py to support more nuanced ITL calculations.
  • Conditional ITL Normalization: Modified the Inter-Token Latency (ITL) calculation logic to normalize ITL values by accept_length when using the 'sglang-oai' or 'sglang-oai-chat' backends and accept_length is provided and positive. This ensures accurate metrics for speculative decoding scenarios.
  • Parameter Propagation: The accept_length parameter is now passed from limited_request_func to calculate_metrics, allowing the new calculation logic to be used during benchmarking (see the hypothetical call shape after this list).
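Put together, a hypothetical call shape might look like the sketch below (argument names other than accept_length and calculate_metrics are assumptions; the actual signature is in python/sglang/bench_serving.py):

```python
# Hypothetical call shape; the real signature in bench_serving.py may differ.
metrics = calculate_metrics(
    input_requests=input_requests,
    outputs=outputs,
    dur_s=benchmark_duration,
    tokenizer=tokenizer,
    backend="sglang-oai",
    accept_length=mean_accept_length,  # None when spec decoding is off
)
```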

@gemini-code-assist (Bot) left a comment


Code Review

This pull request addresses an issue in bench_serving.py related to the calculation of Inter-Token Latency (ITL) when using speculative decoding. The changes introduce an accept_length parameter to the calculate_metrics function and modify the ITL calculation logic to account for this parameter when the backend is 'sglang-oai' or 'sglang-oai-chat'. The review leaves suggestions to improve the clarity and correctness of the ITL calculation.

Two review comment threads on python/sglang/bench_serving.py
@hnyls2002 hnyls2002 merged commit b9fb74f into sgl-project:main Oct 24, 2025
16 of 103 checks passed
