
[Bug Fix] Return HTTP 400 for streaming requests with validation errors (#19996)#20027

Closed

Wfd567 wants to merge 2 commits into sgl-project:main from Wfd567:fix_19996

Conversation


@Wfd567 commented Mar 6, 2026

For oversized prompts (input tokens exceeding the context length), SGLang's OpenAI-compatible /v1/chat/completions endpoint behaves inconsistently between streaming and non-streaming requests:

  • stream=true: returns HTTP 200 with an error payload (error.code=400)
  • stream=false: returns HTTP 400

This is inconsistent with vLLM, which returns HTTP 400 in both cases.

This PR adds a validate_request method to TokenizerManager that performs request validation without sending the request to the scheduler. The method is called in _handle_streaming_request before the StreamingResponse is returned, so validation errors produce HTTP 400 instead of HTTP 200.
Fixes #19996
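Concretely, the intended flow is roughly the sketch below. This is a minimal illustration only; the helper names (validate_request, _generate_chat_stream) and the exact response shape are assumptions, not the actual code in serving_chat.py. The key point is that validation runs before the StreamingResponse is constructed, so a failure can still be returned as a plain HTTP 400.

```python
# Illustrative sketch only; names are assumptions, not the real SGLang code.
from fastapi import Request
from fastapi.responses import JSONResponse, StreamingResponse


async def _handle_streaming_request(self, adapted_request, raw_request: Request):
    try:
        # New step: validate (e.g. prompt length vs. context length) before the
        # response status code is committed by StreamingResponse.
        await self.tokenizer_manager.validate_request(adapted_request)
    except ValueError as e:
        # A validation failure now surfaces as a plain HTTP 400, matching the
        # non-streaming path (and vLLM's behavior).
        return JSONResponse(
            {"error": {"message": str(e), "type": "BadRequest", "code": 400}},
            status_code=400,
        )

    # Only now is a generator handed to StreamingResponse; from this point on
    # the status code is already 200 and errors can only be reported in-band.
    return StreamingResponse(
        self._generate_chat_stream(adapted_request, raw_request),  # hypothetical helper
        media_type="text/event-stream",
    )
```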

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves an inconsistency in error handling for OpenAI-compatible streaming API requests within SGLang. Previously, validation errors for streaming requests would result in an HTTP 200 status with an error payload, unlike non-streaming requests which correctly returned an HTTP 400. The changes introduce a dedicated validation step for streaming requests, standardizing the error response to HTTP 400 for all validation failures, thereby improving API consistency and user experience.

Highlights

  • Inconsistent Error Handling Fixed: Addressed a bug where streaming OpenAI-compatible /v1/chat/completions requests returned HTTP 200 with an error payload for validation errors (e.g., oversized prompts), while non-streaming requests correctly returned HTTP 400.
  • Unified Validation Logic: Introduced a validate_request method in TokenizerManager to perform early request validation, ensuring that validation errors are caught before processing and return the appropriate HTTP 400 status.
  • Streaming Request Integration: Integrated the new validate_request method into the _handle_streaming_request function, ensuring that streaming requests now consistently return HTTP 400 for validation failures, aligning behavior with non-streaming requests and vLLM.


Changelog
  • python/sglang/srt/entrypoints/openai/serving_chat.py
    • Called self.tokenizer_manager.validate_request within _handle_streaming_request to perform early validation.
  • python/sglang/srt/managers/tokenizer_manager.py
    • Added a new asynchronous method validate_request to encapsulate request validation logic.
    • Implemented checks for batch normalization, priority setting, request ID validation, routed_dp_rank range, and tokenization within validate_request.
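For orientation, here is a condensed sketch of what such a validate_request could look like, following the changelog bullets above. Every helper and attribute name below is an assumption, not the actual SGLang API.

```python
# Hypothetical outline of TokenizerManager.validate_request, mirroring the
# changelog bullets; helper names (_set_or_check_priority, _validate_rid, ...)
# and attributes (dp_size) are assumptions, not the real internals.
async def validate_request(self, obj) -> None:
    """Run early validation and raise ValueError on failure."""
    obj.normalize_batch_and_arguments()        # batch normalization
    self._set_or_check_priority(obj)           # priority setting
    self._validate_rid(obj)                    # request ID validation
    if obj.routed_dp_rank is not None and not (
        0 <= obj.routed_dp_rank < self.dp_size
    ):                                         # routed_dp_rank range check
        raise ValueError("routed_dp_rank out of range")
    await self._tokenize_one_request(obj)      # tokenization / length check
```

Note that the review comments below flag problems with exactly these steps: the LoRA reference acquired during validation is never released, the tokenize path assumes per-request state that validation does not set up, and normalize_batch_and_arguments is not idempotent.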

@gemini-code-assist Bot (Contributor) left a comment


Code Review

This pull request aims to improve error handling for streaming requests by adding a pre-validation step. However, the implementation of the validate_request method in TokenizerManager introduces critical security vulnerabilities, specifically a LoRA adapter reference leak leading to potential resource exhaustion (DoS) and a KeyError causing all streaming requests to fail with a 500 Internal Server Error. Additionally, the implementation contains an incorrect method call and significant code duplication, which should be addressed for stability and maintainability.

await self.is_pause_cond.wait_for(lambda: not self.is_pause)

async with self.model_update_lock.reader_lock:
    await self._validate_and_resolve_lora(obj)
gemini-code-assist (Contributor)
severity: security-high

The validate_request method calls _validate_and_resolve_lora(obj) but does not release the acquired LoRA adapter reference. This leads to a reference leak for each streaming request with a LoRA adapter, potentially causing resource exhaustion and Denial of Service. This issue is compounded by the significant code duplication between validate_request and generate_request (lines 485-518), which could be refactored into a shared helper method to improve maintainability and ensure consistent resource management.
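One shape the suggested fix could take is to pair the acquire with an explicit release inside the validation path (or to factor the shared logic into one helper used by both validate_request and generate_request). The sketch below is illustrative only; lora_registry, lora_id, and the release call are assumptions about the surrounding code, not the actual SGLang API.

```python
# Illustrative only: make validate_request release whatever
# _validate_and_resolve_lora acquired, since generate_request will acquire it
# again when the request is actually submitted. Names are assumptions.
async def validate_request(self, obj) -> None:
    async with self.model_update_lock.reader_lock:
        await self._validate_and_resolve_lora(obj)
        try:
            ...  # remaining validation steps
        finally:
            if getattr(obj, "lora_id", None) is not None:
                await self.lora_registry.release(obj.lora_id)
```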

Comment on lines +557 to +560
if obj.is_single:
    await self._tokenize_one_request(obj)
else:
    await self._batch_tokenize_and_process(obj)
gemini-code-assist (Contributor)
severity: security-high

The validate_request method calls _tokenize_one_request(obj) or _batch_tokenize_and_process(obj), which leads to a KeyError because self.rid_to_state is not initialized in validate_request. This causes all streaming requests to fail with a 500 Internal Server Error, resulting in a functional Denial of Service. Additionally, there's an incorrect call to _batch_tokenize_and_process where obj is passed as batch_size, which would cause a TypeError at runtime. Both issues need to be addressed to ensure the stability of the streaming endpoint.
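A hypothetical way around both problems is to keep validation stateless, i.e. not reuse the tokenize-and-register path at all but only check the input length directly. Everything below (obj.text, self.tokenizer, self.context_len) is an assumed shape, not the actual code.

```python
# Hypothetical, stateless length check: never touches self.rid_to_state and
# never calls the batch-processing helpers. Attribute names are assumptions.
async def validate_request(self, obj) -> None:
    prompt = obj.text or ""
    input_ids = self.tokenizer.encode(prompt)
    if len(input_ids) >= self.context_len:
        raise ValueError(
            f"Input has {len(input_ids)} tokens, which exceeds the model's "
            f"context length of {self.context_len} tokens."
        )
```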

@kpham-sgl (Collaborator) left a comment

I don't think simply adding validation to _handle_streaming_request() is viable, for the reasons mentioned above. We should explore alternative approaches that make more targeted changes to _handle_streaming_request() and/or generate_request().
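One such targeted change, sketched here purely as an illustration (helper names like _to_sse are hypothetical), is to pull the first event out of the existing generate_request generator before building the StreamingResponse, so the validation that already happens there can still surface as a 400:

```python
from fastapi.responses import JSONResponse, StreamingResponse


async def _handle_streaming_request(self, adapted_request, raw_request):
    generator = self.tokenizer_manager.generate_request(adapted_request, raw_request)
    try:
        # Consume the first event eagerly; errors raised by the existing
        # validation inside generate_request land here, before any status
        # code has been sent to the client.
        first_event = await generator.__anext__()
    except ValueError as e:
        return JSONResponse(
            {"error": {"message": str(e), "type": "BadRequest", "code": 400}},
            status_code=400,
        )
    except StopAsyncIteration:
        first_event = None

    async def replay_then_stream():
        if first_event is not None:
            yield self._to_sse(first_event)  # hypothetical SSE formatter
        async for event in generator:
            yield self._to_sse(event)

    return StreamingResponse(replay_then_stream(), media_type="text/event-stream")
```

This shape would avoid duplicating tokenization, multimodal processing, and normalize_batch_and_arguments, at the cost of reshaping the streaming handler slightly.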

async for response in self._handle_batch_request(obj, request):
    yield response

async def validate_request_for_streaming(
Collaborator

Tokenization and multimodal processing will happen later in tokenizer_manager.generate_request. Doing it once more beforehand will be very costly (especially mm_data_processor.process(), which downloads images and does some preprocessing).

"""
self.auto_create_handle_loop()

obj.normalize_batch_and_arguments()
Collaborator

obj.normalize_batch_and_arguments() is not idempotent and will modify obj's state. We currently call it once in the generate_request path, so calling it again before that point may lead to incorrectness.
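As a generic illustration of the hazard (this is not SGLang's actual implementation), a normalization step that wraps or mutates fields produces a different, incorrect shape the second time it runs:

```python
# Toy example (not SGLang code): a non-idempotent normalize that wraps a
# scalar field into a list corrupts the request when it runs twice.
class Req:
    def __init__(self, sampling_params):
        self.sampling_params = sampling_params

    def normalize_batch_and_arguments(self):
        self.sampling_params = [self.sampling_params]  # wraps once per call


req = Req({"max_new_tokens": 16})
req.normalize_batch_and_arguments()  # called in the new validate step
req.normalize_batch_and_arguments()  # called again in generate_request
print(req.sampling_params)           # [[{'max_new_tokens': 16}]] -- doubly nested
```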

@hnyls2002 (Collaborator)

@Wfd567 Can you take a look at these comments?

@Wfd567 (Author) commented Mar 12, 2026

@Wfd567 Can you take a look at these comments?

Thank you, I have received the replies and feedback. I will think about how to address these issues further based on it.



Development

Successfully merging this pull request may close these issues.

[Bug] /v1/chat/completions with stream=true returns HTTP 200 for context overflow error (vLLM returns 400)

4 participants