
[BUGFIX] Replace assert with ValueError for response_format validation in chat completions endpoint#35443

Closed
antonovsergey93 wants to merge 23 commits into vllm-project:main from antonovsergey93:fix-assert-error-json-schema

Conversation

@antonovsergey93

@antonovsergey93 antonovsergey93 commented Feb 26, 2026

Purpose

When the /v1/chat/completions endpoint receives a request with response_format type json_schema but without the required json_schema field, the server crashes with an AssertionError, resulting in a 500 Internal Server Error.

Fixes #35438

This is the same class of issue addressed in #35456 for the /v1/completions endpoint. Replacing the assert with an explicit ValueError lets the endpoint's error handling catch the problem and return a proper 400 Bad Request instead of a 500.
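A minimal sketch of the pattern (hypothetical names, not the exact vLLM source; the real change lives in the chat completions serving path):

```python
# Before: an assert that surfaces as a 500 Internal Server Error
# when the client omits the json_schema field.
#   assert request.response_format.json_schema is not None

# After: an explicit ValueError that the endpoint's error handling
# (create_error_response) converts into a 400 Bad Request.
if (request.response_format is not None
        and request.response_format.type == "json_schema"):
    if request.response_format.json_schema is None:
        raise ValueError(
            "response_format of type 'json_schema' requires the "
            "'json_schema' field to be set.")
```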

Test Plan

pytest tests/entrypoints/openai/test_chat_error.py -v
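For reference, a standalone reproduction of what the test exercises (a hypothetical sketch assuming a vLLM server on localhost:8000; the actual test lives in tests/entrypoints/openai/test_chat_error.py):

```python
import requests  # illustrative reproduction script, not the actual test code

# A chat completion request that declares json_schema as the
# response_format type but omits the required json_schema field.
payload = {
    "model": "some-model",
    "messages": [{"role": "user", "content": "Hello"}],
    "response_format": {"type": "json_schema"},  # json_schema field missing
}

resp = requests.post("http://localhost:8000/v1/chat/completions", json=payload)

# Before the fix: 500 Internal Server Error (AssertionError).
# After the fix: 400 Bad Request with a descriptive error message.
assert resp.status_code == 400, resp.text
```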

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

Signed-off-by: Sergey Antonov <antonovsergey93@gmail.com>
@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they run only fastcheck CI, which executes a small, essential subset of CI tests to catch errors quickly.

You can ask your reviewers to trigger select CI tests on top of fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀

mergify bot added the frontend and bug (Something isn't working) labels on Feb 26, 2026
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request addresses a bug where an assert was used for validating the response_format for json_schema, which could lead to a 500 error. The change correctly replaces this with a ValueError, ensuring a proper 400 Bad Request is returned for invalid requests. The added test case verifies this behavior. I've added one suggestion to further improve the validation logic for json_schema to provide more specific error messages to the user.
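The gist of that suggestion (a hypothetical sketch; the review's exact diff is not reproduced here) is to validate each missing piece separately, so the client learns exactly which part of the request is wrong:

```python
def validate_json_schema_response_format(response_format) -> None:
    """Hypothetical helper illustrating more granular validation."""
    if response_format.json_schema is None:
        raise ValueError(
            "response_format of type 'json_schema' requires the "
            "'json_schema' field to be set.")
    # Hypothetical attribute name: the nested schema describing the
    # expected output structure may also be missing.
    if getattr(response_format.json_schema, "schema", None) is None:
        raise ValueError(
            "'json_schema' must include a 'schema' describing the "
            "expected output structure.")
```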

umut-polat added a commit to umut-polat/vllm that referenced this pull request Feb 26, 2026
…letions endpoint

When the completions endpoint receives a request with
response_format type 'json_schema' but without the required
json_schema field, the server crashes with an AssertionError
resulting in a 500 Internal Server Error. This is the same
issue fixed for chat completions in vllm-project#35443, but for the
/v1/completions endpoint.

Replace assert statements with explicit ValueError raises so
that the error is caught by create_error_response and returned
as a proper 400 Bad Request.

Signed-off-by: umut-polat <52835619+umut-polat@users.noreply.github.com>
antonovsergey93 changed the title from "[BUGFIX] Change assert to ValueError in response_format json_schema validation" to "[BUGFIX] Replace assert with ValueError for response_format validation in chat completions endpoint" on Feb 27, 2026
askliar and others added 13 commits February 27, 2026 14:54
…ion (vllm-project#34687)

Signed-off-by: Andrii <askliar@nvidia.com>
Co-authored-by: Andrii <askliar@nvidia.com>
Signed-off-by: angelayi <yiangela7@gmail.com>
Signed-off-by: yewentao256 <zhyanwentao@126.com>
Signed-off-by: Daniel Huang <daniel1.huang@intel.com>
…5369)

Signed-off-by: Zhu, Zufang <zufang.zhu@intel.com>
Co-authored-by: Kunshang Ji <kunshang.ji@intel.com>
umut-polat and others added 5 commits February 27, 2026 14:54
…n in completions endpoint (vllm-project#35456)

Signed-off-by: umut-polat <52835619+umut-polat@users.noreply.github.com>
Signed-off-by: Max Hu <maxhu@nvidia.com>
Signed-off-by: Max Hu <hyoung2991@gmail.com>
Co-authored-by: Max Hu <maxhu@nvidia.com>
Co-authored-by: Shang Wang <shangw@nvidia.com>
)

Signed-off-by: tibG <naps@qubes.milou>
Co-authored-by: tibG <naps@qubes.milou>
…parallelism (vllm-project#35410)

Signed-off-by: jasonlizhengjian <jasonlizhengjian@gmail.com>
Signed-off-by: Sergey Antonov <antonovsergey93@gmail.com>
@mergify

mergify bot commented Feb 27, 2026

Documentation preview: https://vllm--35443.org.readthedocs.build/en/35443/

mergify bot added the documentation (Improvements or additions to documentation), ci/build, deepseek (Related to DeepSeek models), qwen (Related to Qwen models), nvidia, and v1 labels on Feb 27, 2026
@antonovsergey93
Author

antonovsergey93 commented Feb 27, 2026

Closing PR in favor of #35514


Labels

bug (Something isn't working), ci/build, deepseek (Related to DeepSeek models), documentation (Improvements or additions to documentation), frontend, nvidia, qwen (Related to Qwen models), v1

Projects

Status: Done

Development

Successfully merging this pull request may close these issues.

[Bug]: Invalid response_format leads to 500 errors