
vlm: Refactor engine vlm params and support processor output as input #14091

Merged
mickqian merged 51 commits into sgl-project:main from minleminzui:refactor-engine-vlm-params
Dec 20, 2025
Merged

vlm: Refactor engine vlm params and support precessor output as input#14091
mickqian merged 51 commits intosgl-project:mainfrom
minleminzui:refactor-engine-vlm-params

Conversation

@minleminzui
Collaborator

@minleminzui minleminzui commented Nov 28, 2025

Summary

This PR refactors how the offline Engine handles multimodal / VLM inputs and exposes a consistent image_data API across models.


Motivation

Previously, each VLM had its own ad-hoc way to pass images (raw pixel values, custom dicts, etc.), and multi-image requests could easily break the engine path.

We want a single, well-defined image_data contract that works for:

  • raw images (e.g., PIL, numpy, torch)
  • HuggingFace processor outputs
  • precomputed visual embeddings

What’s Changed

Engine API

  • Clarified and unified the supported formats for image_data in Engine.generate / async_generate, including:

    • Plain images:
      image_data=[image] or image_data=[[image1, image2, ...]]
    • HuggingFace processor outputs:
      image_data=[dict(processor_output, format="processor_output")]
    • Precomputed embeddings:
      image_data=[dict(processor_output, format="precomputed_embedding", feature=precomputed_embeddings)]
  • Centralized multimodal validation / normalization in the scheduler (mm_utils, schedule_batch)
    so batched and multi-image requests follow the same path.
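The three `image_data` shapes above can be sketched as plain payload construction. In this sketch, `image`, `processor_output`, and `embeddings` are placeholders standing in for a PIL image, a real HuggingFace processor result, and a feature tensor:

```python
# Placeholder values: a real call would use a PIL.Image, a HuggingFace
# processor's output dict (with tensors), and a torch embedding tensor.
image = "taxi.png"
processor_output = {"pixel_values": [[0.1, 0.2]], "image_grid_thw": [[1, 2, 2]]}
embeddings = [[0.3, 0.4, 0.5]]

# 1) Plain images
plain = [image]  # or [[image1, image2, ...]] for multi-image requests

# 2) HuggingFace processor output, tagged with an explicit format key
proc = [dict(processor_output, format="processor_output")]

# 3) Precomputed embeddings: processor metadata plus the feature tensor
pre = [dict(processor_output, format="precomputed_embedding", feature=embeddings)]
```

Each list would then be passed as `engine.generate(prompt, image_data=...)` per the contract above.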


Model / Processor Side

  • Updated Qwen2.5-VL model wrapper and multimodal processors to use the new MultimodalInputFormat helper instead of model-specific dicts.
  • Fixed the engine multi-image bug reproduced in refactor-engine-vlm-params — the engine now correctly handles multiple images per request.
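A minimal sketch of what a `MultimodalInputFormat`-style helper could look like. The enum member names and the one-format-per-request rule are assumptions for illustration, not the actual sglang implementation:

```python
from enum import Enum

class MultimodalInputFormat(Enum):
    # Member names are guesses; the real helper may differ.
    RAW_IMAGES = "raw_images"
    PROCESSOR_OUTPUT = "processor_output"
    PRECOMPUTED_EMBEDDING = "precomputed_embedding"

def detect_format(entries):
    """Classify image_data entries; reject mixed formats in one request."""
    formats = set()
    for entry in entries:
        if isinstance(entry, dict):
            # Dict entries declare their format explicitly via the
            # "format" key; an unknown value raises ValueError here.
            formats.add(MultimodalInputFormat(entry.get("format")))
        else:
            # PIL images, numpy arrays, tensors, paths: raw input.
            formats.add(MultimodalInputFormat.RAW_IMAGES)
    if len(formats) != 1:
        raise ValueError(f"mixed image_data formats in one request: {formats}")
    return formats.pop()
```

A single shared classifier like this is what lets per-model processors drop their model-specific dicts.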

Tests

Added test_vlm_input_format.py, which verifies that both Qwen2.5-VL and Gemma-3-VLM work with:

  • direct image_data=[PIL.Image]
  • processor output as image_data
  • precomputed embeddings passed via format="precomputed_embedding"

It also checks that the model can correctly understand a 2-image input (e.g., taxi + SGL logo) for all three code paths.
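The multi-image check can be exercised against a stub in the style below; `StubEngine` and the image names are placeholders, not the real sglang Engine or the actual test fixtures:

```python
class StubEngine:
    """Stand-in for sglang's offline Engine; only counts images."""

    def generate(self, prompt, image_data=None):
        # Mirror the two raw shapes from the PR description:
        # image_data=[image] and image_data=[[image1, image2, ...]].
        entries = image_data[0] if isinstance(image_data[0], list) else image_data
        return {"text": f"saw {len(entries)} image(s)"}

engine = StubEngine()
# Two-image request (taxi + SGL logo in the real test).
out = engine.generate("Describe both images.",
                      image_data=[["taxi.png", "sgl_logo.png"]])
print(out)  # → {'text': 'saw 2 image(s)'}
```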


Docs

Updated docs/advanced_features/vlm_query.ipynb to:

  • describe the three supported calling patterns (Basic / Processor Output / Precomputed Embeddings)
  • show concrete examples for Qwen2.5-VL using the offline Engine API

Testing

pytest test/srt/test_vlm_input_format.py

✅ All 6 tests pass on Qwen2.5-VL-3B-Instruct and Gemma-3-4B-IT with multimodal enabled.

@gemini-code-assist
Contributor

Summary of Changes

Hello @minleminzui, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the engine's VLM parameter handling to introduce more flexible and performant multimodal input options. By supporting raw images, processor outputs, and precomputed embeddings, it allows users to optimize VLM queries based on their specific needs, from quick prototyping to high-throughput serving. The changes are thoroughly documented and tested, ensuring robust integration with existing VLM models.

Highlights

  • Enhanced Multimodal Input Formats: The engine now supports three distinct ways to provide image data to Vision Language Models (VLMs): raw images (for simplicity), processor output (for custom preprocessing), and precomputed embeddings (for performance optimization).
  • Updated VLM Query Documentation: The vlm_query.ipynb tutorial has been significantly expanded to explain and demonstrate the usage of these new input formats with Qwen2.5-VL and Llama 4 models, including detailed code examples.
  • API and Internal Logic Refinement: The Engine.generate and async_generate methods' docstrings have been updated, and internal multimodal data handling logic in mm_utils.py, schedule_batch.py, and base_processor.py has been refactored to properly process and validate the new input formats.
  • Comprehensive Unit Testing: New and updated unit tests in test_vlm_input_format.py ensure the correct functionality of the new input formats across different VLM models, including a new test class for Llava.

@minleminzui
Collaborator Author

@zhaochenyang20
rebase main for #10532

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request refactors the VLM parameters to support three different input formats for image data: raw images, processor outputs, and precomputed embeddings. The changes are well-implemented across the documentation, core logic, and tests. The new Jupyter notebook tutorial is a great addition, providing clear examples for the new features. My review includes a few suggestions to improve the consistency of the documentation and enhance the user-friendliness of error messages in the processor logic.

Comment thread docs/advanced_features/vlm_query.ipynb Outdated
Comment thread docs/advanced_features/vlm_query.ipynb Outdated
Comment thread python/sglang/srt/multimodal/processors/base_processor.py Outdated
Comment thread python/sglang/srt/multimodal/processors/base_processor.py Outdated
Comment thread python/sglang/srt/multimodal/processors/base_processor.py
@minleminzui minleminzui force-pushed the refactor-engine-vlm-params branch from 5049fc2 to dd1dbf9 on November 28, 2025 08:13
@zhaochenyang20 zhaochenyang20 changed the title from "Refactor engine vlm params and rebase main for #14080" to "vlm: Refactor engine vlm params and support processor output as input" on Dec 1, 2025
@zhaochenyang20
Collaborator

@BenYao21 Please take a look and rewrite the description.

@zhaochenyang20
Collaborator

move this #10532 to here

@BenYao21 BenYao21 requested a review from yuan-luo as a code owner December 8, 2025 21:40
@minleminzui minleminzui force-pushed the refactor-engine-vlm-params branch 4 times, most recently from 9e4c798 to ba327ab on December 11, 2025 03:14
@zhaochenyang20
Collaborator

/rerun-failed-ci


@BenYao21
Contributor

The LLaVA models on HF (lmms-lab/LLaVA-OneVision-1.5-8B-Instruct, liuhaotian/llava-v1.5-7b) currently encounter weight loading issues (in config.json), which cause failures in test_vision_openai_server_a and test_chunked_prefill. We are skipping these tests until the weight information is updated in the upstream repositories. @zhaochenyang20

minleminzui and others added 2 commits December 18, 2025 15:04
Summary:
Replace implicit boolean check using `or` with explicit `None` check to prevent RuntimeError when `second_per_grid_ts` is a multi-element tensor.

Details:
- In `process_mm_data_async`, `getattr(ret, "second_per_grid_ts", None)` can return a tensor.
- Using `or` triggers a boolean evaluation of the tensor, causing "RuntimeError: Boolean value of Tensor with more than one value is ambiguous".
- Fixed by explicitly checking if the value is `None` before falling back to `video_second_per_grid`.
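The pitfall in this commit can be reproduced with numpy, which raises an analogous error (torch raises RuntimeError, numpy ValueError) when a multi-element array is truth-tested. The function names here are illustrative, not the actual sglang code:

```python
import numpy as np

def pick_fallback_buggy(value, fallback):
    # `or` truth-tests `value`; multi-element arrays/tensors raise here.
    return value or fallback

def pick_fallback_fixed(value, fallback):
    # Explicit None check: no boolean evaluation of the array itself.
    return fallback if value is None else value

second_per_grid_ts = np.array([0.5, 0.5])
try:
    pick_fallback_buggy(second_per_grid_ts, 1.0)
except ValueError as e:
    print("buggy path raised:", e)

print(pick_fallback_fixed(second_per_grid_ts, 1.0))  # array passes through
print(pick_fallback_fixed(None, 1.0))                # falls back
```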
@minleminzui minleminzui force-pushed the refactor-engine-vlm-params branch from 83b6aa4 to 54e28e8 on December 18, 2025 07:27
@zhaochenyang20
Collaborator

zhaochenyang20 commented Dec 18, 2025

/rerun-failed-ci try again


@zhaochenyang20
Collaborator

zhaochenyang20 commented Dec 19, 2025

/rerun-failed-ci


@minleminzui minleminzui force-pushed the refactor-engine-vlm-params branch from 628554f to 5740488 on December 19, 2025 12:26
@minleminzui
Collaborator Author

/rerun-failed-ci


@mickqian mickqian merged commit 1f1f05a into sgl-project:main Dec 20, 2025
311 of 352 checks passed
@zhaochenyang20
Collaborator

Great job!

@BenYao21 BenYao21 mentioned this pull request Dec 22, 2025
Prozac614 pushed a commit to Prozac614/sglang that referenced this pull request Dec 23, 2025
…sgl-project#14091)

Co-authored-by: Mick <mickjagger19@icloud.com>
Co-authored-by: zhaochenyang20 <zhaochenyang20@gmail.com>
Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>
Co-authored-by: BenYao21 <cyao22@asu.edu>
Co-authored-by: minleminzui <minleminzui@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: 赵晨阳 <zhaochen20@outlook.com>
jiaming1130 pushed a commit to zhuyijie88/sglang that referenced this pull request Dec 25, 2025
YChange01 pushed a commit to YChange01/sglang that referenced this pull request Jan 13, 2026

Labels

documentation (Improvements or additions to documentation), Multi-modal (multi-modal language model), npu, run-ci
