vlm: Refactor engine vlm params and support processor output as input #10532

Closed

zhaochenyang20 wants to merge 18 commits into main from refactor-engine-vlm-params

Conversation

@zhaochenyang20 (Collaborator) commented Sep 16, 2025

Summary

This PR refactors how the offline Engine handles multimodal / VLM inputs and exposes a consistent image_data API across models.


Motivation

Previously, each VLM had its own ad-hoc way to pass images (raw pixel values, custom dicts, etc.), and multi-image requests could easily break the engine path.

We want a single, well-defined image_data contract that works for:

  • raw images (e.g., PIL, numpy, torch)
  • HuggingFace processor outputs
  • precomputed visual embeddings

What’s Changed

Engine API

  • Clarified and unified the supported formats for image_data in Engine.generate / async_generate (see the sketch after this list), including:

    • Plain images:
      image_data=[image] or image_data=[[image1, image2, ...]]
    • HuggingFace processor outputs:
      image_data=[dict(processor_output, format="processor_output")]
    • Precomputed embeddings:
      image_data=[dict(processor_output, format="precomputed_embedding", feature=precomputed_embeddings)]
  • Centralized multimodal validation / normalization in the scheduler (mm_utils, schedule_batch)
    so batched and multi-image requests follow the same path.
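
A minimal sketch of these calling patterns against the offline Engine (the model path, prompt text, image files, and the omitted chat-template handling are illustrative assumptions, not code taken from this PR):

```python
# Sketch only: model path, prompt, and image files are placeholders.
from PIL import Image
from transformers import AutoProcessor
import sglang as sgl

engine = sgl.Engine(model_path="Qwen/Qwen2.5-VL-3B-Instruct")
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct")
image = Image.open("taxi.png")  # any PIL image
prompt = "Describe the image."  # chat-template / image-token details omitted for brevity

# 1) Plain image(s): one image per request, or a nested list for multi-image requests.
out = engine.generate(prompt=prompt, image_data=[image])

# 2) HuggingFace processor output: pass the processor's dict through,
#    tagged with format="processor_output".
processor_output = processor(images=[image], text=prompt, return_tensors="pt")
out = engine.generate(
    prompt=prompt,
    image_data=[dict(processor_output, format="processor_output")],
)

# 3) Precomputed visual embeddings (shape only; computing `precomputed_embeddings`
#    from the model's vision tower is model-specific and omitted here):
# out = engine.generate(
#     prompt=prompt,
#     image_data=[dict(processor_output, format="precomputed_embedding",
#                      feature=precomputed_embeddings)],
# )

engine.shutdown()
```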


Model / Processor Side

  • Updated Qwen2.5-VL model wrapper and multimodal processors to use the new MultimodalInputFormat helper instead of model-specific dicts (a rough illustration follows this list).
  • Fixed the engine multi-image bug reproduced in refactor-engine-vlm-params — the engine now correctly handles multiple images per request.
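
Conceptually, the helper reduces to an input-format tag plus a tagged data item. The sketch below only illustrates that categorization; the actual member names and fields in sglang may differ:

```python
# Illustrative only: real enum members and item fields in sglang may differ.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Any, Optional

class MultimodalInputFormat(Enum):
    RAW_IMAGES = auto()             # PIL / numpy / torch images
    PROCESSOR_OUTPUT = auto()       # dict produced by a HuggingFace processor
    PRECOMPUTED_EMBEDDING = auto()  # visual features computed ahead of time

@dataclass
class MultimodalDataItem:
    modality: str                                   # e.g. "image"
    format: Optional[MultimodalInputFormat] = None  # how `feature` should be interpreted
    feature: Any = None                             # pixel values or precomputed embeddings
```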

Tests

Added test_vlm_input_format.py, which verifies that both Qwen2.5-VL and Gemma-3-VLM work with:

  • direct image_data=[PIL.Image]
  • processor output as image_data
  • precomputed embeddings passed via format="precomputed_embedding"

It also checks that the model can correctly understand a 2-image input (e.g., taxi + SGL logo) for all three code paths.
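
For reference, a stripped-down version of what such a two-image check might look like (file names, prompt, and the assertion are assumptions; the real assertions live in test_vlm_input_format.py):

```python
# Illustrative test sketch; file names, prompt, and assertion are assumptions.
from PIL import Image
import sglang as sgl

def test_two_image_plain_input():
    engine = sgl.Engine(model_path="Qwen/Qwen2.5-VL-3B-Instruct")
    taxi = Image.open("taxi.png")
    logo = Image.open("sgl_logo.png")
    # A nested list means "multiple images for this single request".
    out = engine.generate(
        prompt="What do these two images show?",
        image_data=[[taxi, logo]],
    )
    answer = out["text"].lower()
    # Loose semantic check rather than an exact-match assertion.
    assert "taxi" in answer or "cab" in answer
    engine.shutdown()
```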


Docs

Updated docs/advanced_features/vlm_query.ipynb to:

  • describe the three supported calling patterns (Basic / Processor Output / Precomputed Embeddings)
  • show concrete examples for Qwen2.5-VL using the offline Engine API

Testing

pytest test/srt/test_vlm_input_format.py

✅ All 6 tests pass on Qwen2.5-VL-3B-Instruct and Gemma-3-4B-IT with multimodal enabled.

Contributor

@gemini-code-assist Bot left a comment


Summary of Changes

Hello @zhaochenyang20, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces significant refactoring to the engine's Vision-Language Model (VLM) parameter handling. The primary goal is to enhance flexibility and efficiency by allowing users to provide multimodal data in more advanced, pre-processed forms, such as direct processor outputs or precomputed embeddings. This change streamlines the VLM inference pipeline, reduces redundant processing, and provides clearer interfaces for multimodal input, as reflected in updated API docstrings and expanded test coverage.

Highlights

  • Expanded VLM Input Formats: The engine now explicitly supports passing pre-processed multimodal data, including raw processor outputs and precomputed embeddings, directly to the generate API, enhancing flexibility for VLM tasks.
  • Refactored Multimodal Data Handling: Core logic for processing and embedding multimodal data has been updated to accommodate these new input formats, including the introduction of a MultimodalInputFormat enum and a format field in MultimodalDataItem for clearer categorization.
  • Improved Documentation and Testing: Docstrings for generate functions are updated to reflect the new input capabilities, and VLM input format tests are expanded to cover multiple image inputs and the new processor output and precomputed embedding formats.

Contributor

@gemini-code-assist Bot left a comment


Code Review

This pull request refactors how Vision Language Model (VLM) parameters are handled, introducing new formats for image_data like processor_output and precomputed_embedding. The changes are extensive, touching documentation, engine entrypoints, multimodal utilities, and tests. My review focuses on ensuring the correctness and robustness of these changes. I've identified a few issues: some debugging print statements left in the code, a case of poor exception handling that could hide bugs, and a misleading comment and docstring. Overall, the refactoring seems to be going in the right direction, but these points should be addressed to improve code quality.

Comment thread python/sglang/srt/managers/mm_utils.py Outdated
Comment thread docs/advanced_features/vlm_query.ipynb Outdated
Comment thread python/sglang/srt/entrypoints/engine.py Outdated
# - Single image for a single request
# - List of images (one per request in a batch)
# - List of lists of images (multiple images per request)
# - List of preprocessed pixel values, each as a dict containing field `format`: 'processor_output' and `feature`: the preprocessed pixel values
Contributor


Severity: medium

The documentation for the processor_output format is misleading. It states that the dictionary should contain a feature key, but the implementation and examples show that the entire processor_output dictionary is passed directly (e.g., image_data=[dict(processor_output, format="processor_output")]). Please update the docstring to accurately reflect this usage.

Suggested change
# - List of preprocessed pixel values, each as a dict containing field `format`: 'processor_output' and `feature`: the preprocessed pixel values
# - List of preprocessed outputs from a Huggingface processor, each as a dict containing `format`: 'processor_output' and other data.

Comment thread python/sglang/srt/models/qwen2_5_vl.py Outdated
Comment thread python/sglang/srt/multimodal/processors/base_processor.py
Comment thread python/sglang/srt/multimodal/processors/base_processor.py Outdated
mickqian and others added 3 commits September 17, 2025 10:39
Co-authored-by: Xinyuan Tong <115166877+JustinTong0323@users.noreply.github.com>
@mickqian force-pushed the refactor-engine-vlm-params branch from 3049d02 to 8459da7 on September 17, 2025 14:41
@zhaochenyang20 (Collaborator, Author)

@JustinTong0323 @mickqian what's left for this PR 🤔

@mickqian changed the title from "[WIP] Refactor engine vlm params" to "vlm: Refactor engine vlm params and support processor output as input" on Sep 21, 2025
@zhyncs closed this Nov 4, 2025
@zhyncs deleted the refactor-engine-vlm-params branch November 4, 2025 23:49
@zhaochenyang20 restored the refactor-engine-vlm-params branch November 5, 2025 00:55
@zhaochenyang20 reopened this Nov 5, 2025
@zhaochenyang20 (Collaborator, Author)

Yep. This PR is still valid. I will find someone to see it through.

@minleminzui (Collaborator) commented Nov 6, 2025

I opened a follow-up PR #12755 to relax the test assertions in test_vlm_input_format.py.
It makes the VLM image understanding tests less brittle while keeping the semantic checks.

@zhaochenyang20 please review it

@zhaochenyang20 (Collaborator, Author)


Nicely done, I see your comment!

@github-actions Bot added the documentation (Improvements or additions to documentation) and Multi-modal (multi-modal language model) labels on Nov 6, 2025
@zhaochenyang20 (Collaborator, Author)

@minleminzui I think the change is almost correct? Could you please check whether we should modify the docs? (I updated them two months ago, so I suspect further changes are needed now.)

Also, please rebase it. Once CI passes, let me merge it! Thanks so much!

@minleminzui (Collaborator)

I opened a follow-up PR #12831 to update vlm_query.ipynb to include a Qwen2.5-VL example that passes HuggingFace processor_output into Engine.generate, aligning the docs with the three supported VLM input formats (basic, processor_output, precomputed_embedding).

@zhaochenyang20 please review it

@zhaochenyang20 (Collaborator, Author)

Also, please review this one: #12831

@minleminzui (Collaborator)

@zhaochenyang20 Previously, the CI job unit-test-backend-1-gpu (0), which runs pytest test/srt/test_vision_openai_server_a.py, failed.
I have fixed the related issues in this patch. See #13069 for details.

@minleminzui (Collaborator)

@zhaochenyang20 The issue where unit-test-backend-1-gpu (0) failed has been fixed: #14080

…-test-backend-1-gpu (0), (#14080)

Co-authored-by: BenYao21 <cyao22@asu.edu>

Labels

documentation (Improvements or additions to documentation), Multi-modal (multi-modal language model), npu, run-ci

7 participants