[Bug Fix] Return HTTP 400 for streaming requests with validation errors (#19996) #20027
Wfd567 wants to merge 2 commits into sgl-project:main from
Conversation
Summary of Changes: Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request resolves an inconsistency in error handling for OpenAI-compatible streaming API requests within SGLang. Previously, validation errors for streaming requests would result in an HTTP 200 status with an error payload, unlike non-streaming requests, which correctly returned HTTP 400. The changes introduce a dedicated validation step for streaming requests, standardizing the error response to HTTP 400 for all validation failures, thereby improving API consistency and user experience.
Code Review
This pull request aims to improve error handling for streaming requests by adding a pre-validation step. However, the implementation of the validate_request method in TokenizerManager introduces critical security vulnerabilities, specifically a LoRA adapter reference leak leading to potential resource exhaustion (DoS) and a KeyError causing all streaming requests to fail with a 500 Internal Server Error. Additionally, the implementation contains an incorrect method call and significant code duplication, which should be addressed for stability and maintainability.
```python
await self.is_pause_cond.wait_for(lambda: not self.is_pause)

async with self.model_update_lock.reader_lock:
    await self._validate_and_resolve_lora(obj)
```
The validate_request method calls _validate_and_resolve_lora(obj) but does not release the acquired LoRA adapter reference. This leads to a reference leak for each streaming request with a LoRA adapter, potentially causing resource exhaustion and Denial of Service. This issue is compounded by the significant code duplication between validate_request and generate_request (lines 485-518), which could be refactored into a shared helper method to improve maintainability and ensure consistent resource management.
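A minimal sketch of one way to plug the leak, assuming a release counterpart exists; `_release_lora_reference` is hypothetical, not a confirmed SGLang API:

```python
# Hypothetical fix sketch: release the LoRA adapter reference on every exit
# path of the validation-only method. `_release_lora_reference` is an
# assumed counterpart to `_validate_and_resolve_lora`, not an existing API.
async def validate_request(self, obj):
    async with self.model_update_lock.reader_lock:
        await self._validate_and_resolve_lora(obj)
        try:
            ...  # validation-only checks; no scheduler dispatch
        finally:
            await self._release_lora_reference(obj)  # assumed helper
```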
```python
if obj.is_single:
    await self._tokenize_one_request(obj)
else:
    await self._batch_tokenize_and_process(obj)
```
The validate_request method calls _tokenize_one_request(obj) or _batch_tokenize_and_process(obj), which leads to a KeyError because self.rid_to_state is not initialized in validate_request. This causes all streaming requests to fail with a 500 Internal Server Error, resulting in a functional Denial of Service. Additionally, there's an incorrect call to _batch_tokenize_and_process where obj is passed as batch_size, which would cause a TypeError at runtime. Both issues need to be addressed to ensure the stability of the streaming endpoint.
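To make the second point concrete, a sketch under the review's assumption about the signature (not verified against the actual SGLang code):

```python
# A self-contained illustration of the mis-binding. The signature below is
# assumed from the review, not verified against SGLang:
async def _batch_tokenize_and_process(self, batch_size, obj):
    ...

# validate_request currently does the equivalent of:
#     await self._batch_tokenize_and_process(obj)
# which binds obj to batch_size and omits the required obj parameter,
# raising TypeError at call time. A correctly shaped call would be
# (obj.batch_size is likewise an assumed attribute):
#     await self._batch_tokenize_and_process(obj.batch_size, obj)
```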
kpham-sgl left a comment:
I don't think simply adding validation to _handle_streaming_request() is viable for the reasons mentioned above. We should explore alternative approaches that make more targeted changes to _handle_streaming_request() and/or generate_request()
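One targeted alternative along those lines (a sketch under assumptions, not a proposal from this PR): consume the first event of the response generator before constructing the StreamingResponse, so errors raised up front surface as HTTP 400. Names are illustrative, and this assumes validation failures raise ValueError, as the non-streaming path's 400 handling suggests.

```python
from fastapi.responses import JSONResponse, StreamingResponse

async def _handle_streaming_request(self, obj, raw_request):
    # `_generate_stream` stands in for the real generator; illustrative only.
    gen = self._generate_stream(obj, raw_request).__aiter__()
    try:
        first_chunk = await gen.__anext__()  # validation errors raise here
    except ValueError as e:
        return JSONResponse({"error": {"message": str(e)}}, status_code=400)
    except StopAsyncIteration:
        first_chunk = None

    async def replay():
        # Re-emit the chunk we pulled eagerly, then stream the rest.
        if first_chunk is not None:
            yield first_chunk
        async for chunk in gen:
            yield chunk

    return StreamingResponse(replay(), media_type="text/event-stream")
```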
```python
async for response in self._handle_batch_request(obj, request):
    yield response

async def validate_request_for_streaming(
```
Tokenization and multimodal processing will happen later in tokenizer_manager.generate_request. Doing it once more beforehand would be very costly (especially mm_data_processor.process(), which downloads images and does some preprocessing).
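If a pre-validation step is kept, one cheap shape it could take (a sketch; all names are assumed) is a pure length check that reuses the token ids from the single tokenization pass instead of tokenizing or processing multimodal data again:

```python
# Sketch of a cheap pre-check against the context window. Assumes input_ids
# are already available from the one tokenization pass; not SGLang's code.
def check_context_length(input_ids, context_len, max_new_tokens=0):
    requested = len(input_ids) + (max_new_tokens or 0)
    if requested > context_len:
        raise ValueError(
            f"Requested {requested} tokens ({len(input_ids)} prompt + "
            f"{max_new_tokens} new), but the context length is {context_len}."
        )
```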
| """ | ||
| self.auto_create_handle_loop() | ||
|
|
||
| obj.normalize_batch_and_arguments() |
obj.normalize_batch_and_arguments() is not idempotent and modifies obj's state. We currently call it once in the generate_request path, so calling it again before that point may lead to incorrectness.
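One way to make a pre-validation call safe would be an idempotency guard; `_normalized` is an assumed new attribute, not an existing SGLang field:

```python
# Sketch of an idempotency guard: a second call becomes a no-op, so a
# pre-validation pass could safely precede generate_request.
def normalize_batch_and_arguments(self):
    if getattr(self, "_normalized", False):
        return
    self._normalized = True
    ...  # existing normalization logic unchanged
```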
@Wfd567 Can you take a look at these comments?
Thank you, I have received these replies and feedback. I will think about how to further address the issues based on the feedback.
For oversized prompts (input tokens exceed the context length), SGLang's OpenAI-compatible /v1/chat/completions behaves inconsistently between streaming and non-streaming: non-streaming requests correctly return HTTP 400, while streaming requests return HTTP 200 with an error payload in the stream.
This is inconsistent with vLLM, which returns HTTP 400 in both cases.
Added a validate_request method to TokenizerManager that performs request validation without sending the request to the scheduler. This method is called in _handle_streaming_request before returning the StreamingResponse, ensuring validation errors return HTTP 400 instead of HTTP 200.
Fixes #19996
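An illustrative end-to-end check of the intended behavior (host, port, and model name are placeholders):

```python
# After this change, an oversized streaming request should fail fast with
# HTTP 400 rather than HTTP 200 plus an in-stream error payload.
import requests

resp = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "my-model",  # placeholder
        "stream": True,
        "messages": [
            {"role": "user", "content": "<prompt longer than the context window>"}
        ],
    },
    stream=True,
)
assert resp.status_code == 400
```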