Fix wrong weight reference in dynamic EPLB #6818
Conversation
Hello @fzyzcjy, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
Hello! Gemini or gemini-code-assist here, providing a summary of this pull request.
This PR aims to address a potential issue or improve efficiency related to the handling of expert weights in the dynamic Expert Parallel Load Balancing (EPLB) mechanism, specifically for the DeepseekV2 model. The core change introduces a lazy loading mechanism for the routed_experts_weights_of_layer attribute by utilizing a new LazyValue utility class. Previously, these weights were computed eagerly during the post_load_weights phase. This change defers the computation until the weights are actually accessed, which could improve initialization time or memory usage.
The PR also includes some cleanup in the Qwen3 MoE model file, removing unused imports and adding a comment indicating that a similar lazy loading approach might be beneficial there in the future.
Highlights
- Lazy Loading: Introduced a `LazyValue` class to defer the computation or loading of values until they are first accessed.
- DeepseekV2 MoE Weights: Applied the lazy loading mechanism to `routed_experts_weights_of_layer` in the DeepseekV2 model, moving the computation out of the `post_load_weights` method (a runnable sketch of the pattern follows this list).
- Qwen3 MoE Cleanup: Removed several unused imports from the Qwen3 MoE model file and added a TODO comment about potentially adopting a similar lazy loading pattern.
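To make the shape of the change concrete, here is a minimal runnable sketch of the pattern (illustrative only: `_MoE`, `_Layer`, and `ModelSketch` are toy stand-ins, not the real classes; the actual diffs are quoted later in this thread):

```python
from typing import Any, Callable, Dict, List


class LazyValue:
    """Same shape as the utility this PR adds (quoted in full below)."""

    def __init__(self, creator: Callable):
        self._creator = creator
        self._value = None

    @property
    def value(self):
        if self._creator is not None:
            self._value = self._creator()
            self._creator = None
        return self._value


class _MoE:
    """Toy stand-in for a MoE layer such as DeepseekV2MoE."""

    def get_moe_weights(self) -> Any:
        return "weights"


class _Layer:
    def __init__(self) -> None:
        self.mlp = _MoE()


class ModelSketch:
    """Toy model: the eager build in post_load_weights becomes a LazyValue."""

    def __init__(self) -> None:
        self.layers: List[_Layer] = [_Layer(), _Layer()]
        self._routed_experts_weights_of_layer = LazyValue(
            lambda: {
                layer_id: layer.mlp.get_moe_weights()
                for layer_id, layer in enumerate(self.layers)
                if isinstance(layer.mlp, _MoE)
            }
        )

    @property
    def routed_experts_weights_of_layer(self) -> Dict[int, Any]:
        return self._routed_experts_weights_of_layer.value


model = ModelSketch()                         # nothing computed yet
print(model.routed_experts_weights_of_layer)  # built and cached on first access
```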
Changelog
- python/sglang/srt/models/deepseek_v2.py
  - Imported the `LazyValue` utility.
  - Replaced eager computation of `routed_experts_weights_of_layer` in `post_load_weights` with a lazy initialization using `LazyValue`.
  - Added a `@property` getter for `routed_experts_weights_of_layer` to access the lazily computed value.
- python/sglang/srt/models/qwen3_moe.py
  - Removed unused imports: `dataclass`, `Enum`, `auto`, `partial`, `F`, `PretrainedConfig`.
  - Added a TODO comment suggesting future implementation of lazy loading for expert weights, similar to Deepseek.
- python/sglang/srt/utils.py
  - Added a new class `LazyValue` that takes a creator function and computes the value only upon first access via the `.value` property.
Weights wait in the code,
Loaded only when they're called,
Lazy, fast, and smart.
Code Review
This pull request introduces a `LazyValue` utility to defer the computation of `routed_experts_weights_of_layer` in the DeepSeekV2 model, likely to address an issue with weight referencing in dynamic EPLB. The changes also include minor import cleanups in the Qwen3MoE model and a TODO to align its MoE weight handling with the DeepSeekV2 changes.
The introduction of `LazyValue` is a good pattern for deferred initialization. However, there are a few points to consider regarding the timing of this lazy computation and the handling of the `is_nextn` flag in `deepseek_v2.py`.
The PR description is minimal, and the checklist items (like unit tests and documentation updates) are unchecked. It would be beneficial to expand on the motivation and ensure tests cover the changes, especially concerning the dynamic EPLB and nextn scenarios.
Summary of Findings
- Correctness of Lazy Initialization in DeepSeekV2: The timing of the lazy computation for `routed_experts_weights_of_layer` and the unconditional initialization (removal of the `is_nextn` guard) need careful review to ensure correctness, especially concerning when model weights are loaded and how `nextn` scenarios are handled.
- Utility Class Enhancements: The new `LazyValue` class in `utils.py` is a good addition but could be improved with a docstring and more specific type hints for better maintainability and understanding.
- Consistency in Qwen3MoE: A TODO comment in `qwen3_moe.py` suggests mimicking the DeepSeekV2 changes. Applying the `LazyValue` pattern there could improve consistency and potentially defer computation beneficially.
- Code Cleanup: Unused imports were removed from `python/sglang/srt/models/qwen3_moe.py`, which improves code hygiene. (Severity: Low; not added as a comment due to review settings.)
Merge Readiness
The pull request introduces a potentially beneficial lazy initialization pattern. However, there are significant concerns regarding the correctness of the `routed_experts_weights_of_layer` initialization in `deepseek_v2.py`, particularly around the timing of the lazy computation relative to weight loading and the implications of removing the `is_nextn` conditional. These high-severity concerns should be addressed before merging.
Additionally, minor improvements to the `LazyValue` utility and applying the pattern consistently to `qwen3_moe.py` (as per the TODO) would enhance the codebase. I am unable to approve this pull request; please ensure these points are discussed and resolved, and have other reviewers approve the changes before merging.
```python
self._routed_experts_weights_of_layer = LazyValue(
    lambda: {
        layer_id: layer.mlp.get_moe_weights()
        for layer_id, layer in enumerate(self.model.layers)
        if isinstance(layer.mlp, DeepseekV2MoE)
    }
)
```
The initialization of `_routed_experts_weights_of_layer` has been moved from `post_load_weights` (where it was conditional on `not is_nextn`) to `__init__` (unconditional, using `LazyValue`). This change raises a couple of questions:

1. Timing of Lazy Computation: The `get_moe_weights()` method, called by the `LazyValue`'s creator, likely depends on the model's weights being fully loaded. `__init__` is typically called before weights are loaded, while the original initialization was in `post_load_weights`. Although the lambda for `LazyValue` executes only on first access, is it guaranteed that `self.routed_experts_weights_of_layer` will always be accessed after `self.model.layers` is fully populated and `layer.mlp.get_moe_weights()` can correctly access loaded weights?

2. Handling of `is_nextn`: The original code explicitly skipped setting `self.routed_experts_weights_of_layer` if `is_nextn` was true. Now, with the unconditional `LazyValue` initialization, this property will always be computable.
   - What is the expected behavior for `routed_experts_weights_of_layer` when `is_nextn` is true?
   - If `self.model.layers` is empty or `layer.mlp` is not `DeepseekV2MoE` in the `nextn` case, the lambda will produce an empty dictionary `{}`. Is this the desired outcome, or should there be specific logic or an error if accessed when `is_nextn` is true (aligning with the previous behavior of the attribute not being set)?
   - The `TODO support nextn later` comment (line 1876 in the old code) is still relevant. How does this change affect `nextn` support?

Could you clarify if these scenarios have been considered and tested, especially in the context of "dynamic EPLB" mentioned in the PR title?
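One hedged way to make the ordering assumption explicit, sketched below under the assumption that a `_weights_loaded` flag (hypothetical, not part of this PR) is set by `post_load_weights`: the creator asserts the flag, so a premature access fails loudly instead of silently capturing pre-load state.

```python
from typing import Any, Callable, Dict


class LazyValue:  # same shape as the utility in the diff quoted just below
    def __init__(self, creator: Callable):
        self._creator = creator
        self._value = None

    @property
    def value(self):
        if self._creator is not None:
            self._value = self._creator()
            self._creator = None
        return self._value


class GuardedModelSketch:
    """Toy model; `_weights_loaded` is a hypothetical flag, not in this PR."""

    def __init__(self) -> None:
        self._weights_loaded = False

        def _create() -> Dict[int, Any]:
            # Fail loudly instead of silently capturing pre-load tensors.
            assert self._weights_loaded, (
                "routed_experts_weights_of_layer accessed before "
                "post_load_weights"
            )
            return {0: "moe-weights-for-layer-0"}

        self._routed_experts_weights_of_layer = LazyValue(_create)

    def post_load_weights(self) -> None:
        self._weights_loaded = True

    @property
    def routed_experts_weights_of_layer(self):
        return self._routed_experts_weights_of_layer.value


m = GuardedModelSketch()
m.post_load_weights()  # omitting this call would trip the assertion below
print(m.routed_experts_weights_of_layer)
```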
```python
# LazyValue as added in python/sglang/srt/utils.py
from typing import Callable  # assumed already imported at the top of utils.py


class LazyValue:
    def __init__(self, creator: Callable):
        self._creator = creator
        self._value = None

    @property
    def value(self):
        if self._creator is not None:
            self._value = self._creator()
            self._creator = None
        return self._value
```
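For reference, a quick usage sketch of the class above: first access runs the creator and caches the result, and clearing `self._creator` doubles as the "already computed" flag, so a creator that legitimately returns `None` is still invoked only once.

```python
lv = LazyValue(lambda: sum(range(10)))  # creator is not called yet
assert lv.value == 45  # first access: creator runs and the result is cached
assert lv.value == 45  # creator already cleared; cached value is returned
```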
The `LazyValue` class is a useful utility for deferred initialization. To improve its maintainability and clarity, could you consider adding a docstring explaining its purpose and usage? Also, type hinting for `self._value` (e.g., `Optional[Any]`, or making the class generic as `LazyValue[T]`) could be beneficial.
For example:
```python
from typing import Callable, Generic, Optional, TypeVar

T = TypeVar("T")


class LazyValue(Generic[T]):
    """A utility class for lazy initialization of a value.

    The value is computed by the creator function only when accessed
    for the first time and then cached.
    """

    def __init__(self, creator: Callable[[], T]):
        self._creator: Optional[Callable[[], T]] = creator
        self._value: Optional[T] = None

    @property
    def value(self) -> T:
        if self._creator is not None:
            self._value = self._creator()
            self._creator = None
        return self._value  # type: ignore
```

(The `# type: ignore` might be needed because `_value` remains `Optional[T]` while the property guarantees it is set after creation; adjust the types accordingly if preferred.)
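As a quick illustration of what the generic version buys (hypothetical values, assuming the class above): the annotation lets type checkers infer the payload type of `.value`.

```python
from typing import Dict

prices: LazyValue[Dict[str, float]] = LazyValue(lambda: {"gpu_hour": 2.5})
cost: float = prices.value["gpu_hour"]  # .value is inferred as Dict[str, float]
```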
```python
# excerpt from python/sglang/srt/models/qwen3_moe.py
else:
    logger.warning(f"Parameter {name} not found in params_dict")

# TODO mimic deepseek
```
Regarding the `TODO mimic deepseek`: if the goal is to apply a similar lazy initialization pattern for `routed_experts_weights_of_layer` as done in `deepseek_v2.py`, you could consider the following:

- Import `LazyValue` from `sglang.srt.utils`.
- In the `Qwen3MoeForCausalLM.__init__` method, initialize a private attribute:

```python
from sglang.srt.utils import LazyValue  # add this import

# ... other imports and class definition ...

class Qwen3MoeForCausalLM(nn.Module):
    def __init__(
        self,
        config: Qwen3MoeConfig,
        quant_config: Optional[QuantizationConfig] = None,
        prefix: str = "",
    ):
        super().__init__()
        # ... other initializations ...
        self._routed_experts_weights_of_layer = LazyValue(
            lambda: {
                layer_id: self.model.layers[layer_id].mlp.get_moe_weights()
                for layer_id in range(self.start_layer, self.end_layer)
                if isinstance(
                    self.model.layers[layer_id].mlp, Qwen3MoeSparseMoeBlock
                )
            }
        )
        # ... rest of __init__ ...
```

- Add a property to access the value:

```python
@property
def routed_experts_weights_of_layer(self):
    return self._routed_experts_weights_of_layer.value
```

- Then you can remove the direct assignment of `self.routed_experts_weights_of_layer` from the `load_weights` method (lines 810-814).

This would make the initialization consistent with the `deepseek_v2.py` changes and defer the computation until first access. Ensure that `self.model.layers`, `self.start_layer`, and `self.end_layer` are correctly initialized before the first access to this property.
Motivation
Will need to test locally later.
Modifications
Checklist