
[Model] Add Ernie4.5 VL model support#15679

Merged
Kangyan-Zhou merged 17 commits into sgl-project:main from CSWYF3634076:ernie-vl
Jan 26, 2026
Conversation


CSWYF3634076 (Contributor) commented Dec 23, 2025

Motivation

Add Baidu Ernie4.5 VL model support

Modifications

  • ernie45_moe_vl.py: the text backbone
  • ernie45_vl.py: the vision transformer (ViT)
  • processors/ernie45_vl.py: the multimodal processor
  • rotary_embedding.py::Ernie4_5_VLRotaryEmbedding: the 3D RoPE (channel layout hwhwhw...ttt...)
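For intuition, the channel layout named above ("hwhwhw...ttt...") can be sketched as follows. This is an illustrative reconstruction, not the PR's actual code; the function name and parameters are assumptions:

```python
# Illustrative sketch (not the PR's code): assign each rotary frequency
# channel to an axis. Unlike a contiguous t/h/w split, the Ernie4.5 VL
# layout interleaves the h and w channels and places the temporal
# channels at the end: h, w, h, w, h, ..., t, t, t, ...
def rope_channel_axes(num_channels: int, num_t_channels: int) -> list:
    """Return the axis ('h', 'w', or 't') each rotary channel encodes."""
    num_spatial = num_channels - num_t_channels
    axes = ["h" if i % 2 == 0 else "w" for i in range(num_spatial)]
    axes += ["t"] * num_t_channels
    return axes
```

For example, `rope_channel_axes(8, 3)` yields `["h", "w", "h", "w", "h", "t", "t", "t"]`, matching the hwhwh...ttt pattern described above.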

Accuracy Tests

python3 -m sglang.launch_server --model-path baidu/ERNIE-4.5-VL-28B-A3B-PT \
  --served-model-name ERNIE-45-VL-28B \
  --port 8301 \
  --trust-remote-code
  • ocrbench 88.60
  • countbench 87.37
  • docvqa 93.69
  • realworldqa 68.89

test case

curl --location --request POST 'http://127.0.0.1:8301/v1/chat/completions' \
--header "Authorization: Bearer $OPENAI_API_KEY" \
--header 'Content-Type: application/json' \
--data-raw '{
  "model": "ERNIE-45-VL-28B",
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Describe the content of the image"
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "https://paddlenlp.bj.bcebos.com/datasets/paddlemix/demo_images/example1.jpg"
          }
        }
      ]
    }
  ],
  "temperature": 1,
  "top_p": 1,
  "max_tokens": 1024,
  "skip_special_tokens": false,
  "chat_template_kwargs": {
    "enable_thinking": false
  }
}'
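The same request can be issued from Python. The sketch below mirrors the curl call above using only the standard library; the base URL, port, and model name come from the launch command, while the helper names and the "EMPTY" API key default are made up here:

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:8301"  # matches the --port in the launch command


def build_payload() -> dict:
    """Mirror of the JSON body in the curl example above."""
    return {
        "model": "ERNIE-45-VL-28B",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe the content of the image"},
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": "https://paddlenlp.bj.bcebos.com/datasets/paddlemix/demo_images/example1.jpg"
                        },
                    },
                ],
            }
        ],
        "temperature": 1,
        "top_p": 1,
        "max_tokens": 1024,
        "skip_special_tokens": False,
        "chat_template_kwargs": {"enable_thinking": False},
    }


def describe_image(api_key: str = "EMPTY") -> str:
    """POST the chat completion request and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(build_payload()).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Calling `describe_image()` while the server above is running should return the model's description of the example image.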

Benchmarking and Profiling

Checklist

Others

I think the function process_mm_data should be overridden by each Processor class, similar to _call_hf_processor in vLLM. I don't know why the current code doesn't do this.
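For illustration only, the refactor being suggested here (moving model-specific logic out of a central if/else into per-class overrides) would look roughly like this. The class and field names are hypothetical, not sglang's actual API:

```python
class BaseMMProcessor:
    """Hypothetical base class: shared default behaviour lives here."""

    def process_mm_data(self, data: dict) -> dict:
        # Default path used by most models; no model-name branching.
        return dict(data)


class Ernie45VLProcessor(BaseMMProcessor):
    """Hypothetical subclass: model-specific post-processing happens in an
    override, instead of an `if model_name == ...` branch in the base."""

    def process_mm_data(self, data: dict) -> dict:
        processed = super().process_mm_data(data)
        # e.g. Ernie-VL shares one token id for images and videos
        # (illustrative field name, not the real output schema)
        processed["shared_image_video_token"] = True
        return processed
```

The trade-off discussed later in this thread applies: each override removes a branch from the shared code path, but duplicates whatever common steps the models share.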

Signed-off-by: CSWYF3634076 <wangyafeng@baidu.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello @CSWYF3634076, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces comprehensive support for the Baidu Ernie4.5 VL model. It includes the necessary architectural components for both the text backbone (leveraging a Mixture-of-Experts approach) and the vision transformer, along with a dedicated multimodal processor. The changes enable the system to handle and process visual and textual inputs for the Ernie4.5 VL model, utilizing a specialized 3D rotary embedding for advanced multimodal positional encoding.

Highlights

  • Ernie4.5 VL Model Support: Added comprehensive support for the Baidu Ernie4.5 VL model, encompassing its text backbone, vision transformer, and multimodal processor.
  • New Chat Template: Introduced and registered a new chat template specifically designed for Ernie4.5 VL models, ensuring proper formatting of multimodal conversations.
  • 3D Rotary Positional Embedding: Implemented a specialized 3D Rotary Positional Embedding (Ernie4_5_VLRotaryEmbedding) to effectively handle the complex positional encoding required for multimodal inputs (image/video and text).
  • Multimodal Architecture Integration: Integrated Ernie4_5_VLMoeForConditionalGeneration into the list of supported multimodal model architectures, allowing the system to recognize and utilize this new model.


@gemini-code-assist gemini-code-assist Bot left a comment

Code Review

This pull request adds support for the Ernie4.5 VL model. The changes include new model definition files, a vision processor, and modifications to existing chat templates and model configurations. Overall, the implementation looks solid, but I've identified a few areas for improvement, including potential division-by-zero errors in the processor, inefficient tensor operations, and some code clarity issues. Addressing these points will enhance the robustness and performance of the new model integration.

Comment thread python/sglang/lang/chat_template.py Outdated
Comment thread python/sglang/srt/models/ernie45_moe_vl.py
min_pixels: int = MIN_PIXELS,
max_pixels: int = MAX_PIXELS,
):
if max(height, width) / min(height, width) > MAX_RATIO:
Severity: high

There is a potential ZeroDivisionError here if min(height, width) is 0. An image could theoretically have a height or width of 0. It's safer to add a check to prevent this.

    min_dim = min(height, width)
    if min_dim == 0:
        return 0, 0
    if max(height, width) / min_dim > MAX_RATIO:

max_frames = floor_by_factor(
ele.get("max_frames", min(FPS_MAX_FRAMES, total_frames)), FRAME_FACTOR
)
nframes = total_frames / video_fps * fps
Severity: high

A ZeroDivisionError can occur here if video_fps is 0. It's important to add a check to handle this case to prevent the program from crashing.

        if video_fps == 0:
            nframes = 0
        else:
            nframes = total_frames / video_fps * fps

device=input_ids.device,
)
image_index, video_index = 0, 0
for i, input_ids in enumerate(total_input_ids):
Severity: medium

The loop variable input_ids shadows the function parameter with the same name defined on line 2196. This can be confusing and lead to bugs. It's a good practice to use a different name for the loop variable to improve code clarity.

Suggested change
for i, input_ids in enumerate(total_input_ids):
for i, current_input_ids in enumerate(total_input_ids):

input_type_group.append((key, start_index, end_index))

llm_pos_ids_list = []
video_frame_num = 1
Severity: medium

The variable video_frame_num is initialized here and updated in the following loop (lines 2294, 2334, 2342), but it is never used. This appears to be dead code and should be removed to improve clarity.

Comment on lines +60 to +61
rope_scaling: Optional[Dict[str, Any]] = None,
rope_is_neox_style: bool = True,
Severity: medium

The parameters rope_scaling and rope_is_neox_style are defined in the __init__ method's signature but are not used within the method. When creating Ernie4_5_VLRotaryEmbedding, is_neox_style is hardcoded to False and rope_scaling is not passed. This can be misleading and may cause issues if the model's configuration changes. Please either use these parameters or remove them if they are not needed.

Signed-off-by: CSWYF3634076 <wangyafeng@baidu.com>

yuan-luo commented Jan 2, 2026

"I think the function process_mm_data should be rewritten by each Processor class, similar to the _call_cf_processor in vllm. I don't know why the current code doesn't do this"

I think it's because most VLMs are using process_mm_data_async which has been rewritten by each Processor class.

@yuan-luo yuan-luo self-requested a review January 2, 2026 09:11
@CSWYF3634076
Contributor Author

CSWYF3634076 commented Jan 9, 2026

"I think the function process_mm_data should be rewritten by each Processor class, similar to the _call_cf_processor in vllm. I don't know why the current code doesn't do this"

I think it's because most VLMs are using process_mm_data_async which has been rewritten by each Processor class.

In the process_mm_data_async function, the process_and_combine_mm_data method ultimately invokes the process_mm_data function, which contains extensive if-else logic pertaining to the model name

@CSWYF3634076
Contributor Author

@yuan-luo May I ask: besides resolving conflicts, is there anything else that needs to be updated in this PR?

Signed-off-by: CSWYF3634076 <wangyafeng@baidu.com>
@github-actions github-actions Bot added the documentation (Improvements or additions to documentation) and Multi-modal (multi-modal language model) labels Jan 12, 2026
Signed-off-by: CSWYF3634076 <wangyafeng@baidu.com>
@CSWYF3634076
Contributor Author

@yuan-luo Hi, could you please review this PR?

@yuan-luo
Collaborator

@yuan-luo Hi, could you please review this PR?

Sure.

@yuan-luo
Collaborator

yuan-luo commented Jan 14, 2026

"I think the function process_mm_data should be rewritten by each Processor class, similar to the _call_cf_processor in vllm. I don't know why the current code doesn't do this"
I think it's because most VLMs are using process_mm_data_async which has been rewritten by each Processor class.

In the process_mm_data_async function, the process_and_combine_mm_data method ultimately invokes the process_mm_data function, which contains extensive if-else logic pertaining to the model name

I guess you mean _call_hf_processor in vllm. Writing a _call_hf_processor in each model is a huge refactor that would introduce massive code duplication. The current mechanism is probably a concise and reusable implementation.

Signed-off-by: wangyafeng <wangyafeng@baidu.com>
@CSWYF3634076
Contributor Author

CSWYF3634076 commented Jan 16, 2026

@yuan-luo Hi, could you please review this PR?

Sure.

@yuan-luo
These are some implementation descriptions for easy review.

  1. The ViT component is largely similar to the Qwen-VL ViT. The main difference is the use of self.resampler_model, which plays a role similar to self.merger in Qwen-VL. There are also some differences in weight naming.
  2. In Ernie4_5_VLMoeForConditionalGeneration, the primary difference is that self.visual_token_mask is passed to the text model, allowing it to distinguish between experts during the forward pass.
  3. In Ernie4_5_VLMoeModel, the main difference lies in the MoE stage, where routing decisions are made based on the token distribution (visual_token_mask) to select either text experts or visual experts. In addition, the Attention module differs by adopting 3D positional encoding (H, W, H, W, H, W …, T, T, T …).
  4. In the processor component, the main difference is the reimplementation of process_mm_data, which includes additional post-processing steps to adapt to the fields required by the framework. In Ernie-VL, both images and videos share the same token_id, and MRotaryEmbedding.get_rope_index_ernie45 is used to generate positions.
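A minimal sketch of the mask-based expert routing described in point 3; the names here are illustrative, and the real implementation operates on batched tensors inside the MoE layer rather than on Python lists:

```python
def route_by_modality(hidden_states, visual_token_mask, text_expert, visual_expert):
    """Dispatch each token to the visual or text experts based on a
    per-token boolean mask (illustrative sketch, not the PR's code)."""
    return [
        visual_expert(h) if is_visual else text_expert(h)
        for h, is_visual in zip(hidden_states, visual_token_mask)
    ]
```

In the actual model the mask (visual_token_mask) is derived from the image/video token positions in the input, so the same MoE layer can hold two disjoint expert sets and select between them per token.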

@yuan-luo
Collaborator


I actually did exactly the same task on my own side, implementing this model, but encountered some correctness errors. So I can appreciate the obstacles you worked through, which is awesome. I'll review it ASAP.

@yuan-luo yuan-luo left a comment

LGTM

@CSWYF3634076
Contributor Author

@JustinTong0323 Hello, the review has been approved. Could you help trigger CI and the subsequent merge?

Comment thread python/sglang/lang/chat_template.py
Signed-off-by: wangyafeng <wangyafeng@baidu.com>
@JustinTong0323
Collaborator

/tag-and-rerun-ci

@Kangyan-Zhou Kangyan-Zhou merged commit 1a19b39 into sgl-project:main Jan 26, 2026
156 of 172 checks passed
Chen-0210 pushed a commit to Chen-0210/sglang that referenced this pull request Jan 30, 2026
Signed-off-by: CSWYF3634076 <wangyafeng@baidu.com>
Signed-off-by: wangyafeng <wangyafeng@baidu.com>
Johnsonms pushed a commit to Johnsonms/sglang that referenced this pull request Feb 14, 2026
Signed-off-by: CSWYF3634076 <wangyafeng@baidu.com>
Signed-off-by: wangyafeng <wangyafeng@baidu.com>

Labels

documentation (Improvements or additions to documentation), Multi-modal (multi-modal language model), run-ci


4 participants