
feature: adding openai compatible API request to bench_serving #17219

Merged
Kangyan-Zhou merged 5 commits into main from bench_serving_openai on Jan 24, 2026
Conversation

@dougyster (Collaborator) commented Jan 16, 2026

Motivation

Supports JSONL files in the OpenAI-compatible API format for bench_serving, and also supports passing extra params through to the request payload. Note that ignore_eos now defaults to False instead of True.
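For reference, this is roughly what one line of such a JSONL file might contain (a hypothetical example; "messages" and "max_tokens" are the fields the new sampler reads, and whether extras like "temperature" are actually forwarded is discussed in the review comments below):

import json

# Hypothetical single request line for an OpenAI-compatible JSONL dataset.
request = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the termination clause."},
    ],
    "max_tokens": 256,   # used as the row's output length unless a fixed output length is given
    "temperature": 0.2,  # extra param carried in the record
}
print(json.dumps(request))  # one JSON object per line in the .jsonl file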

Modifications

Added openai as a valid dataset type and implemented conversion of its requests into DatasetRow objects for benchmarking.

Accuracy Tests

Successfully benchmarked on nda JSONL data.

Benchmarking and Profiling

n/a

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @dougyster, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly extends the bench_serving utility by introducing native support for datasets formatted according to the OpenAI-compatible API specification. This enhancement enables users to directly utilize JSONL files containing chat completion requests for benchmarking, thereby simplifying the evaluation of model serving performance against a widely adopted industry standard.

Highlights

  • OpenAI API Request Support: Added support for processing OpenAI-compatible API request formats within the bench_serving utility.
  • New Dataset Type: Introduced a new openai dataset type, allowing bench_serving to consume JSONL files structured with OpenAI chat completion requests.
  • Request Parsing Function: Implemented the sample_openai_requests function to parse JSONL files, extract messages, calculate prompt and output lengths, and convert them into DatasetRow objects suitable for benchmarking (a sketch follows this list).
  • Argument Expansion: Extended the command-line argument choices for dataset_name to include 'openai', making the new functionality accessible.
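Pulling the fragments quoted in the review threads below together, a minimal sketch of sample_openai_requests might look like the following. This is an illustration, not the merged implementation: the DatasetRow placeholder, the prompt_len/output_len field names, and the 256-token fallback are assumptions.

import json
from dataclasses import dataclass
from typing import Dict, List, Optional, Union

@dataclass
class DatasetRow:  # placeholder; the real class lives elsewhere in bench_serving.py
    prompt: Union[str, List[Dict[str, str]]]
    prompt_len: int
    output_len: int

def sample_openai_requests(
    dataset_path: str,
    num_requests: int,
    tokenizer,  # a Hugging Face tokenizer with a chat template
    fixed_output_len: Optional[int] = None,
) -> List[DatasetRow]:
    dataset = []
    with open(dataset_path, "r") as f:
        for line in f:
            if line.strip():
                dataset.append(json.loads(line))
    if num_requests > 0:
        dataset = dataset[:num_requests]

    rows: List[DatasetRow] = []
    for record in dataset:
        messages = record["messages"]
        # Render the chat template as text, then encode it to count prompt tokens.
        prompt_text = tokenizer.apply_chat_template(
            messages, tokenize=False, add_generation_prompt=True
        )
        prompt_len = len(tokenizer.encode(prompt_text))
        output_len = fixed_output_len or record.get("max_tokens", 256)  # fallback assumed
        # Pass messages list directly - bench_serving handles List[Dict] prompts
        rows.append(
            DatasetRow(prompt=messages, prompt_len=prompt_len, output_len=output_len)
        )
    return rows

With openai wired into the --dataset-name choices, a run would presumably look something like python -m sglang.bench_serving --backend sglang --dataset-name openai --dataset-path requests.jsonl (flag names taken from the existing bench_serving CLI).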


@gemini-code-assist (Bot) left a comment

Code Review

This pull request adds support for OpenAI-compatible API request format to bench_serving, which is a great feature. The changes are logical and well-contained. I've provided a few suggestions to improve efficiency, robustness, and code clarity in the new sample_openai_requests function. These include optimizing file reading, fixing a misleading docstring, improving token calculation efficiency, and addressing a type hint inconsistency.

Comment thread: python/sglang/bench_serving.py (outdated), lines +1325 to +1331
with open(dataset_path, "r") as f:
    for line in f:
        if line.strip():
            dataset.append(json.loads(line))

if num_requests > 0:
    dataset = dataset[:num_requests]

Severity: high

The file reading logic can be improved in two ways:

  1. Efficiency: It currently reads the entire file into memory, which is inefficient for large datasets, especially when num_requests is small.
  2. Robustness: It doesn't handle malformed JSON lines, which could cause the benchmark to crash.

Here is a suggested change that addresses both points by reading only the necessary lines and safely parsing JSON.

Suggested change

# Before:
with open(dataset_path, "r") as f:
    for line in f:
        if line.strip():
            dataset.append(json.loads(line))
if num_requests > 0:
    dataset = dataset[:num_requests]

# After:
with open(dataset_path, "r") as f:
    for line in f:
        if num_requests > 0 and len(dataset) >= num_requests:
            break
        if line.strip():
            try:
                dataset.append(json.loads(line))
            except json.JSONDecodeError:
                # Consider logging a warning about the invalid line
                continue

Comment thread: python/sglang/bench_serving.py (outdated)
Each line should be a JSON object with:
- "messages": list of {"role": str, "content": str}
- "max_tokens": int (used as output_len if fixed_output_len not set)
- Optional: "tools", "temperature", "top_p" (passed through)

Severity: medium

The docstring mentions that optional parameters like "tools", "temperature", and "top_p" are passed through. However, the current implementation does not use these parameters; they are loaded from the JSON but not used when creating the DatasetRow. This can be misleading for users.

To fix this, you should either implement the pass-through logic (which would require changes to DatasetRow and other parts of the code) or remove this line from the docstring to reflect the actual behavior.
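If the pass-through were implemented instead, a minimal sketch could be the following; collect_extra_params is a hypothetical helper, and DatasetRow carries no such field in this diff:

# Hypothetical helper: gather the optional OpenAI params from a parsed JSONL
# record so they could later be merged into the request payload.
OPTIONAL_KEYS = ("tools", "temperature", "top_p")

def collect_extra_params(record: dict) -> dict:
    return {k: record[k] for k in OPTIONAL_KEYS if k in record}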

Comment thread: python/sglang/bench_serving.py (outdated), lines +1343 to +1346
prompt_text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
prompt_len = len(tokenizer.encode(prompt_text))

Severity: medium

You can calculate the prompt length more efficiently by tokenizing directly within apply_chat_template instead of generating the full string prompt first and then encoding it. This avoids creating a potentially large intermediate string.

Suggested change

# Before:
prompt_text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
prompt_len = len(tokenizer.encode(prompt_text))

# After:
prompt_len = len(
    tokenizer.apply_chat_template(
        messages, tokenize=True, add_generation_prompt=True
    )
)

Comment thread: python/sglang/bench_serving.py (outdated)

# Pass messages list directly - bench_serving handles List[Dict] prompts
filtered_dataset.append(
    DatasetRow(
        prompt=messages,

Severity: medium

You are assigning messages (a List[Dict[str, str]]) to DatasetRow.prompt. However, the type hint for DatasetRow.prompt is str. While this works at runtime due to dynamic typing, it creates a type inconsistency that can be confusing and may hide potential bugs. Other parts of the code also assign different types (e.g., List[str], List[int]) to this field.

To improve maintainability and type safety, consider updating the DatasetRow class definition to reflect the various types it can hold. For example:

from dataclasses import dataclass
from typing import Dict, List, Union

@dataclass
class DatasetRow:
    prompt: Union[str, List[int], List[str], List[Dict[str, str]]]
    ...

Since the definition of DatasetRow is not part of this diff, I am pointing this out here for you to address.

@dougyster dougyster closed this Jan 16, 2026
@dougyster dougyster reopened this Jan 17, 2026
@Kangyan-Zhou Kangyan-Zhou merged commit 4c7136b into main Jan 24, 2026
60 of 67 checks passed
@Kangyan-Zhou Kangyan-Zhou deleted the bench_serving_openai branch January 24, 2026 00:04