
Bugfix for Llama4 #15929

Merged

iforgetmyname merged 1 commit into sgl-project:ifmn/eagle-dp-attn from khalil2ji3mp6:ifmn/eagle-dp-attn on Dec 27, 2025

Conversation

@khalil2ji3mp6
Contributor

Motivation

Modifications

Accuracy Tests

Benchmarking and Profiling

Checklist

@gemini-code-assist
Contributor

Summary of Changes

Hello @khalil2ji3mp6, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a bug within the Llama4 model implementation. The core change ensures that a critical parameter, reduce_results, is properly propagated during the model's internal component setup, which is essential for its correct configuration and operation.

Highlights

  • Llama Model Initialization: The reduce_results parameter is now correctly passed through during initialization of a component of the Llama model's architecture in python/sglang/srt/models/llama.py.



@iforgetmyname iforgetmyname merged commit ebb5b6d into sgl-project:ifmn/eagle-dp-attn Dec 27, 2025
1 check passed

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces a reduce_results parameter to the LlamaMLP class, which is then passed to the RowParallelLinear layer. This change allows for more granular control over the reduction of results, which is particularly useful for optimizing communication in tensor-parallel setups. The implementation is correct, and the use of a default value ensures backward compatibility. The change is a good step towards more flexible and efficient model implementations.
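The pattern described above can be illustrated with a minimal, self-contained sketch. This is not the actual sglang code: the class bodies, constructor signatures, and sizes below are simplified stand-ins, and only the names LlamaMLP, RowParallelLinear, and reduce_results come from the review comment. The point is the plumbing: the MLP accepts the flag and forwards it, rather than letting the projection layer silently use its default.

```python
class RowParallelLinear:
    """Stand-in for a tensor-parallel linear layer. In a real TP setup,
    reduce_results=True triggers an all-reduce across TP ranks after the
    matmul; reduce_results=False lets the caller defer or fuse that
    communication (useful e.g. under DP attention)."""

    def __init__(self, in_features, out_features, reduce_results=True):
        self.in_features = in_features
        self.out_features = out_features
        self.reduce_results = reduce_results


class LlamaMLP:
    """Sketch of an MLP block that forwards reduce_results to its
    down-projection. Before the fix, the flag was not plumbed through,
    so the projection always used the layer default."""

    def __init__(self, hidden_size, intermediate_size, reduce_results=True):
        self.down_proj = RowParallelLinear(
            intermediate_size,
            hidden_size,
            reduce_results=reduce_results,  # the propagated parameter
        )


# A caller can now disable the per-layer all-reduce; omitting the
# argument keeps the old behavior, preserving backward compatibility.
mlp = LlamaMLP(hidden_size=4096, intermediate_size=14336, reduce_results=False)
print(mlp.down_proj.reduce_results)  # False
```

Defaulting the new parameter to True is what makes the change backward compatible: existing call sites that never pass reduce_results get exactly the previous all-reduce behavior.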

Liwansi added a commit to iforgetmyname/sglang that referenced this pull request Dec 29, 2025
…glang into eagle-sche

* 'ifmn/eagle-dp-attn' of https://github.com/sgl-project/sglang: (22 commits)
  dp scheduler enhance support with chunked prefill (sgl-project#16071)
  modify suffix decoding
  CI dependency update (sgl-project#16063)
  fix rotary_embedding init npu (sgl-project#16011)
  feat: bugfix and accuracy fix for stablelm2_1_6b (sgl-project#15932)
  Update model and feature support for Ascend NPU (sgl-project#16005)
  Bugfix for Llama4 (sgl-project#15929)
  Bugfix for ds-vl2 (sgl-project#15894)
  gme qwen vl runners fix (sgl-project#15899)
  add profiling in scheduler (sgl-project#15876)
  llama use triton rope op (sgl-project#15855)
  suffix decoding adapt npu
  suffix decoding adapt npu
  Add suffix decoding speculative algorithm from feature 13553
  cherry sgl-project#15434: qwen3 vl performance update
  cherry sgl-project#15597: fix Qwen3-VL-30B-A3B-Instruct accuracy loss
  [Schedule] bug fix for schedule enhancer (sgl-project#15834)
  minilb support roundrobin (sgl-project#15824)
  fix torchair compile issue
  cherry sgl-project#15187: lora fix
  ...

# Conflicts:
#	python/sglang/srt/managers/scheduler.py
#	python/sglang/srt/managers/scheduler_enhancer.py
@khalil2ji3mp6 khalil2ji3mp6 deleted the ifmn/eagle-dp-attn branch February 10, 2026 02:59
