
[Diffusion] Fix Sana corrupted output by removing spurious QK norm layers #20656

Merged
mickqian merged 7 commits into sgl-project:main from ChangyiYang:sana-fix-remove-spurious-qk-norm on Mar 22, 2026

Conversation

@ChangyiYang (Contributor) commented on Mar 16, 2026

Problem

The native SGLang Sana pipeline produced corrupted/garbled images. The output looked like a mess of random shapes with no semantic content.

Root Cause

SanaLinearAttention and SanaCrossAttention both had norm_q / norm_k RMSNorm layers that do not exist in the Sana checkpoint. These layers were initialized with random weights and applied to query/key tensors before attention computation, completely corrupting the attention outputs.
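
To see why this corrupts the output, here is a toy illustration (not SGLang code; the shapes and the random initialization are assumptions based on the description above): an RMSNorm whose weight tensor is never overwritten by the checkpoint load rescales every channel of the query/key projections arbitrarily, and attention then amplifies that error downstream.

```python
# Toy illustration, not SGLang code: an RMSNorm whose learnable weight is
# never loaded from the checkpoint applies an arbitrary per-channel scale.
import torch

def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Standard RMSNorm: normalize by root-mean-square, then scale per channel.
    return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + eps) * weight

torch.manual_seed(0)
q = torch.randn(1, 8, 64)        # (batch, tokens, head_dim); shapes are illustrative
random_weight = torch.randn(64)  # stays random because no checkpoint key matches it

q_corrupted = rms_norm(q, random_weight)
print((q_corrupted - q).abs().mean())  # large: q no longer resembles the trained projection
```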

Verified by inspecting the checkpoint: no norm_q or norm_k keys exist in diffusion_pytorch_model.safetensors for Efficient-Large-Model/Sana_600M_1024px_diffusers.
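
A quick way to reproduce that check (a sketch, not code from this PR; the local file path is an assumption, e.g. after downloading the checkpoint via huggingface_hub):

```python
# Hypothetical verification script: list every tensor key in the checkpoint
# and confirm that none of them contain norm_q or norm_k.
from safetensors import safe_open

with safe_open("diffusion_pytorch_model.safetensors", framework="pt") as f:
    spurious = [k for k in f.keys() if "norm_q" in k or "norm_k" in k]

print(spurious)  # [] for Efficient-Large-Model/Sana_600M_1024px_diffusers
```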

Fix

Remove norm_q and norm_k from SanaLinearAttention and SanaCrossAttention to match the actual Sana model architecture.
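
Roughly, the change has this shape in both classes (a sketch reconstructed from the changelog below, not the verbatim diff; the surrounding code and exact signatures are assumptions):

```diff
 class SanaLinearAttention(nn.Module):
-    def __init__(self, dim, num_heads, qk_norm_dim):
+    def __init__(self, dim, num_heads):
         super().__init__()
         ...
-        self.norm_q = RMSNorm(qk_norm_dim)
-        self.norm_k = RMSNorm(qk_norm_dim)

     def forward(self, hidden_states):
         query, key, value = ...
-        query = self.norm_q(query)
-        key = self.norm_k(key)
         # attention now consumes the raw query/key projections,
         # matching the checkpoint's architecture
```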

Test

Ran pytest python/sglang/multimodal_gen/test/server/test_server_a.py -k sana_image_t2i — PASSED.

🤖 Generated with Claude Code

Commit: Fix Sana corrupted output by removing spurious QK norm layers

SanaLinearAttention and SanaCrossAttention had norm_q/norm_k RMSNorm
layers that are absent from the Sana checkpoint weights. These layers
ran with random initialization, corrupting attention outputs and
producing garbled images.

Remove norm_q/norm_k from both attention classes to match the actual
Sana model architecture.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@gemini-code-assist (Contributor) commented

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical bug in the SGLang Sana pipeline that caused corrupted image generation. The problem was traced to the erroneous inclusion of RMSNorm layers (norm_q and norm_k) within the SanaLinearAttention and SanaCrossAttention modules, which were not part of the official Sana model checkpoint. By removing these superfluous normalization layers, the model now correctly produces high-quality images, ensuring the proper functionality of the Sana pipeline.

Highlights

  • Corrupted Sana Output: The native SGLang Sana pipeline produced garbled images, appearing as a mess of random shapes with no semantic content.
  • Root Cause Identified: The issue stemmed from SanaLinearAttention and SanaCrossAttention incorrectly including norm_q and norm_k RMSNorm layers that do not exist in the actual Sana checkpoint, leading to corrupted attention outputs.
  • Fix Implemented: The norm_q and norm_k layers were removed from SanaLinearAttention and SanaCrossAttention to align with the correct Sana model architecture.
  • Verification and Testing: The fix was verified by running pytest python/sglang/multimodal_gen/test/server/test_server_a.py -k sana_image_t2i, which now passes with correct, high-quality image output (e.g., Doraemon eating dorayaki), contrasting with the garbled images produced before the fix.


Changelog
  • python/sglang/multimodal_gen/runtime/models/dits/sana.py
    • Removed the qk_norm_dim parameter from the SanaLinearAttention constructor.
    • Removed the initialization of self.norm_q and self.norm_k in SanaLinearAttention.
    • Removed the application of self.norm_q and self.norm_k within the forward method of SanaLinearAttention.
    • Removed the initialization of self.norm_q and self.norm_k in SanaCrossAttention.
    • Removed the application of self.norm_q and self.norm_k within the forward method of SanaCrossAttention.
    • Removed the qk_norm_dim argument during the instantiation of SanaLinearAttention.
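
The last changelog item corresponds to a call-site change of roughly this form (again a sketch; the caller context and the other constructor arguments are assumptions):

```diff
-        self.attn = SanaLinearAttention(dim, num_heads, qk_norm_dim=head_dim)
+        self.attn = SanaLinearAttention(dim, num_heads)
```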

@gemini-code-assist (Bot) left a comment


Code Review

This pull request correctly addresses the issue of corrupted image output from the Sana model by removing the spurious norm_q and norm_k RMSNorm layers from SanaLinearAttention and SanaCrossAttention. The changes are consistent with the pull request description, which states these layers do not exist in the model checkpoint. The modifications are confined to deleting the incorrect layer initializations and their applications, which is the right approach to fix this bug. The code is now aligned with the actual model architecture, and I find no further issues with the changes.

@github-actions (Bot) added the documentation label on Mar 16, 2026
@mickqian (Collaborator) left a comment


...amazed by this modern art

(Comment thread on docs/sana-after.jpg, now outdated)
@mickqian (Collaborator) commented

/tag-and-rerun-ci

@yhyang201 (Collaborator) commented

/rerun-failed-ci

(5 similar /rerun-failed-ci comments from @yhyang201 followed.)

@mickqian mickqian merged commit c1794e2 into sgl-project:main Mar 22, 2026
70 of 73 checks passed
Commits referencing this pull request ("Fix Sana corrupted output by removing spurious QK norm layers (sgl-project#20656)"):

  • OrangeRedeng pushed a commit to OrangeRedeng/sglang on Mar 22, 2026
  • 0-693 pushed a commit to 0-693/sglang on Mar 25, 2026
  • dutsc pushed a commit to dutsc/sglang on Mar 30, 2026
  • JustinTong0323 pushed a commit to JustinTong0323/sglang on Apr 7, 2026
  • yhyang201 pushed a commit to yhyang201/sglang on Apr 22, 2026

Labels

diffusion (SGLang Diffusion), documentation (Improvements or additions to documentation), run-ci