feature: adding openai compatible API request to bench_serving #17219

Kangyan-Zhou merged 5 commits into main
Conversation
Code Review
This pull request adds support for the OpenAI-compatible API request format in bench_serving, which is a great feature. The changes are logical and well-contained. I've provided a few suggestions to improve efficiency, robustness, and code clarity in the new `sample_openai_requests` function. These include optimizing file reading, fixing a misleading docstring, improving token-calculation efficiency, and addressing a type-hint inconsistency.
```python
with open(dataset_path, "r") as f:
    for line in f:
        if line.strip():
            dataset.append(json.loads(line))

if num_requests > 0:
    dataset = dataset[:num_requests]
```
The file reading logic can be improved in two ways:

- Efficiency: it currently reads the entire file into memory, which is inefficient for large datasets, especially when `num_requests` is small.
- Robustness: it doesn't handle malformed JSON lines, which could cause the benchmark to crash.

Here is a suggested change that addresses both points by reading only the necessary lines and parsing JSON safely.
```diff
-with open(dataset_path, "r") as f:
-    for line in f:
-        if line.strip():
-            dataset.append(json.loads(line))
-if num_requests > 0:
-    dataset = dataset[:num_requests]
+with open(dataset_path, "r") as f:
+    for line in f:
+        if num_requests > 0 and len(dataset) >= num_requests:
+            break
+        if line.strip():
+            try:
+                dataset.append(json.loads(line))
+            except json.JSONDecodeError:
+                # Consider logging a warning about the invalid line
+                continue
```
```
Each line should be a JSON object with:
- "messages": list of {"role": str, "content": str}
- "max_tokens": int (used as output_len if fixed_output_len not set)
- Optional: "tools", "temperature", "top_p" (passed through)
```
The docstring mentions that optional parameters like "tools", "temperature", and "top_p" are passed through. However, the current implementation does not use these parameters; they are loaded from the JSON but not used when creating the DatasetRow. This can be misleading for users.
To fix this, you should either implement the pass-through logic (which would require changes to DatasetRow and other parts of the code) or remove this line from the docstring to reflect the actual behavior.
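For illustration, here is a minimal sketch of what the pass-through option could look like, assuming a hypothetical `extra_request_body` field on `DatasetRow` that the request builder would later merge into the payload. The field name and the `build_row` helper are assumptions for this sketch, not code from this PR:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Union


@dataclass
class DatasetRow:
    prompt: Union[str, List[Dict[str, str]]]
    prompt_len: int
    output_len: int
    # Hypothetical field: optional request params carried alongside the
    # prompt so the request builder can merge them into the payload.
    extra_request_body: Dict[str, Any] = field(default_factory=dict)


# Optional keys the docstring promises to pass through.
OPTIONAL_KEYS = ("tools", "temperature", "top_p")


def build_row(record: Dict[str, Any], prompt_len: int, output_len: int) -> DatasetRow:
    # Collect only the optional keys that are actually present in the record.
    extra = {k: record[k] for k in OPTIONAL_KEYS if k in record}
    return DatasetRow(
        prompt=record["messages"],
        prompt_len=prompt_len,
        output_len=output_len,
        extra_request_body=extra,
    )
```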
```python
prompt_text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
prompt_len = len(tokenizer.encode(prompt_text))
```
You can calculate the prompt length more efficiently by tokenizing directly within apply_chat_template instead of generating the full string prompt first and then encoding it. This avoids creating a potentially large intermediate string.
```diff
-prompt_text = tokenizer.apply_chat_template(
-    messages, tokenize=False, add_generation_prompt=True
-)
-prompt_len = len(tokenizer.encode(prompt_text))
+prompt_len = len(
+    tokenizer.apply_chat_template(
+        messages, tokenize=True, add_generation_prompt=True
+    )
+)
```
```python
# Pass messages list directly - bench_serving handles List[Dict] prompts
filtered_dataset.append(
    DatasetRow(
        prompt=messages,
```
You are assigning `messages` (a `List[Dict[str, str]]`) to `DatasetRow.prompt`. However, the type hint for `DatasetRow.prompt` is `str`. While this works at runtime due to dynamic typing, it creates a type inconsistency that can be confusing and may hide potential bugs. Other parts of the code also assign different types (e.g., `List[str]`, `List[int]`) to this field.
To improve maintainability and type safety, consider updating the DatasetRow class definition to reflect the various types it can hold. For example:
```python
from typing import Dict, List, Union

@dataclass
class DatasetRow:
    prompt: Union[str, List[int], List[str], List[Dict[str, str]]]
    ...
```

Since the definition of DatasetRow is not part of this diff, I am pointing this out here for you to address.
Motivation
Supports JSONL files in the OpenAI-compatible API format for bench_serving, and also supports passing extra params through to the payload. Note that the default `ignore_eos` is now `False` instead of `True`.
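For illustration, a line in such a JSONL file might look like the following (the content here is made up; the keys follow the docstring quoted above):

```json
{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Summarize this NDA clause."}], "max_tokens": 256, "temperature": 0.7}
```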
Modifications
Added `openai` as a valid dataset type and added conversion of the records to `DatasetRow` objects for benchmarking.
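As a sketch, an invocation might look like the following, assuming the existing dataset-selection flags of bench_serving (flag names should be verified against the actual CLI):

```bash
# Hypothetical invocation; verify flag names against the bench_serving CLI.
python -m sglang.bench_serving \
  --backend sglang \
  --dataset-name openai \
  --dataset-path ./requests.jsonl \
  --num-prompts 100
```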
Accuracy Tests
Successfully benchmarked on NDA JSONL data.
Benchmarking and Profiling
n/a
Checklist
Review Process
`/tag-run-ci-label`, `/rerun-failed-ci`, `/tag-and-rerun-ci`