
[Refactor] Split rotary_embedding.py into a modular package#19144

Merged
Fridge003 merged 5 commits into main from refactor_rope
Feb 23, 2026

Conversation

@BBuf
Collaborator

@BBuf BBuf commented Feb 22, 2026

Motivation

Follow #19064

rotary_embedding.py had grown to ~3,800 lines, mixing utilities, base classes, a dozen RoPE variants, Triton kernels, multimodal indexing logic, and factory functions all in a single file. This made it hard to navigate, review, and extend.

I mainly used Claude 4.6 to do the refactoring and provided some structural guidance for it. Thanks to @DarkSharpness for the advice.

Changes

Replaced rotary_embedding.py with a rotary_embedding/ package split into
focused modules:

  • utils.py: Low-level helpers including rotate_neox, rotate_gptj,
    apply_rotary_emb, and apply_rotary_pos_emb variants.
  • base.py: Core RotaryEmbedding class and LinearScalingRotaryEmbedding.
  • yarn.py: YaRN helper functions and YaRNScalingRotaryEmbedding.
  • rope_variant.py: Scaling variants including Phi3LongRoPEScaledRotaryEmbedding,
    FourierRotaryEmbedding, DeepseekScalingRotaryEmbedding, Llama3RotaryEmbedding,
    Llama4VisionRotaryEmbedding, DynamicNTKScalingRotaryEmbedding,
    DynamicNTKAlphaRotaryEmbedding, and DualChunkRotaryEmbedding.
  • triton_kernels.py: Triton JIT kernels and their Python wrappers for multimodal
    RoPE (triton_mrope_fused, triton_ernie45_rope_fused_inplace).
  • mrope.py: Multimodal RoPE classes (MRotaryEmbedding,
    YaRNScalingMRotaryEmbedding, Ernie4_5_VLRotaryEmbedding) and
    apply_interleaved_rope.
  • mrope_rope_index.py: Position index generation for multimodal models,
    including get_rope_index, get_rope_index_qwen3_omni, get_rope_index_glm4v,
    and get_rope_index_ernie45.
  • factory.py: Factory functions get_rope, get_rope_cpu, and
    get_rope_wrapper for constructing RoPE instances.
  • __init__.py: Public API re-exports as a drop-in replacement for the old module.
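
For reviewers unfamiliar with the two rotation conventions that utils.py separates, here is a pure-Python sketch of the block-wise (NeoX-style) versus interleaved (GPT-J-style) helpers. The real helpers operate on torch tensors, so the names and bodies below are illustrative, not copied from the repo:

```python
def rotate_neox(x):
    # GPT-NeoX convention: split the head dim into two halves and
    # rotate block-wise: (x1, x2) -> (-x2, x1).
    half = len(x) // 2
    x1, x2 = x[:half], x[half:]
    return [-v for v in x2] + list(x1)

def rotate_gptj(x):
    # GPT-J convention: treat consecutive pairs (x[2i], x[2i+1]) as
    # complex numbers and rotate each pair in place: (a, b) -> (-b, a).
    out = []
    for i in range(0, len(x), 2):
        out.extend([-x[i + 1], x[i]])
    return out
```

The two conventions differ only in how dimensions are paired, which is why they can share the rest of the RoPE machinery.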

Modifications

Accuracy Tests

Benchmarking and Profiling

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

@gemini-code-assist
Contributor

Summary of Changes

Hello @BBuf, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the architecture of the rotary embedding implementation by decomposing a single, large file into a well-structured Python package. This reorganization aims to streamline development, making it simpler to manage and expand the diverse range of rotary positional embedding techniques and their multimodal extensions.

Highlights

  • Modular Refactoring: The monolithic rotary_embedding.py file, which had grown to ~3,800 lines, has been split into a modular rotary_embedding/ package. This improves navigation, reviewability, and extensibility.
  • New Module Structure: The new package is organized into focused modules: _utils.py for low-level helpers, _base.py for core classes and common scaling variants, _yarn.py for YaRN-specific logic, _special.py for specialized RoPE variants, _mrope.py for multimodal RoPE and Triton kernels, _mrope_rope_index.py for multimodal position index generation, and _factory.py for factory functions. The __init__.py re-exports public APIs for backward compatibility.
  • Improved Maintainability: By separating concerns into distinct files, the codebase becomes easier to understand, debug, and extend with new rotary embedding variants or optimizations.
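
The core update that all of these RoPE variants share can be sketched in a few lines. This is a pure-Python illustration on a single (q0, q1) pair; the actual RotaryEmbedding class precomputes cos/sin caches and applies the rotation over whole torch tensors, so the function below is a conceptual sketch only:

```python
import math

def apply_rotary_pair(q0, q1, pos, inv_freq):
    # Rotate the 2-D pair (q0, q1) by the angle pos * inv_freq.
    # Positions are thereby encoded as rotations, so relative position
    # falls out of the dot product between rotated queries and keys.
    theta = pos * inv_freq
    c, s = math.cos(theta), math.sin(theta)
    return (q0 * c - q1 * s, q1 * c + q0 * s)
```

Because this is a pure rotation, it preserves the norm of each pair, which is one reason RoPE composes cleanly with attention.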


Changelog
  • python/sglang/srt/layers/rotary_embedding/__init__.py
    • Defined the public API for the new rotary_embedding package, re-exporting key classes and functions from submodules.
  • python/sglang/srt/layers/rotary_embedding/_base.py
    • Implemented the core RotaryEmbedding class, along with LinearScalingRotaryEmbedding and DynamicNTKScalingRotaryEmbedding.
    • Included platform-specific forward methods for native, NPU, CPU, and CUDA environments.
  • python/sglang/srt/layers/rotary_embedding/_factory.py
    • Provided factory functions get_rope, get_rope_cpu, and get_rope_wrapper to instantiate various RotaryEmbedding types based on configuration parameters.
  • python/sglang/srt/layers/rotary_embedding/_mrope.py
    • Implemented MRotaryEmbedding for multimodal scenarios, YaRNScalingMRotaryEmbedding, and Ernie4_5_VLRotaryEmbedding.
    • Included Triton kernels (_triton_mrope_forward_fused, _triton_ernie45_rope_qk_fused) for fused multimodal RoPE operations.
  • python/sglang/srt/layers/rotary_embedding/_mrope_rope_index.py
    • Added functions for generating position indices for multimodal models, including get_rope_index, get_rope_index_qwen3_omni, get_rope_index_glm4v, and get_rope_index_ernie45.
  • python/sglang/srt/layers/rotary_embedding/_special.py
    • Introduced specialized Rotary Embedding variants: Phi3LongRoPEScaledRotaryEmbedding, FourierRotaryEmbedding, DeepseekScalingRotaryEmbedding, Llama3RotaryEmbedding, Llama4VisionRotaryEmbedding, DynamicNTKAlphaRotaryEmbedding, and DualChunkRotaryEmbedding.
  • python/sglang/srt/layers/rotary_embedding/_utils.py
    • Provided low-level utility functions for rotary embedding operations, such as _rotate_neox, _rotate_gptj, and _apply_rotary_emb.
    • Implemented platform-specific apply_rotary_pos_emb for native and NPU environments.
  • python/sglang/srt/layers/rotary_embedding/_yarn.py
    • Defined helper functions for YaRN (Yet another RoPE N-dimensional) scaling, including _yarn_find_correction_dim, _yarn_find_correction_range, _yarn_linear_ramp_mask, and _yarn_get_mscale.
    • Implemented the YaRNScalingRotaryEmbedding class.
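
The YaRN helpers listed above follow the formulas from the published YaRN reference implementation; the sketch below uses simplified names and is not copied from this repo's exact code:

```python
import math

def yarn_find_correction_dim(num_rotations, dim, base=10000, max_pos=2048):
    # Head dimension at which the rotary wavelength completes
    # `num_rotations` full turns over the original context length.
    return (dim * math.log(max_pos / (num_rotations * 2 * math.pi))) / (
        2 * math.log(base)
    )

def yarn_linear_ramp(low, high, dim):
    # Per-dimension blend factor in [0, 1] between interpolated and
    # extrapolated frequencies.
    if low == high:
        high += 1e-3  # avoid division by zero
    return [min(max((i - low) / (high - low), 0.0), 1.0) for i in range(dim)]

def yarn_get_mscale(scale=1.0):
    # Attention temperature correction; identity when not scaling.
    if scale <= 1.0:
        return 1.0
    return 0.1 * math.log(scale) + 1.0
```
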
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature | Command | Description
--- | --- | ---
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@BBuf
Collaborator Author

BBuf commented Feb 22, 2026

/tag-and-rerun-ci

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

The refactor successfully modularizes the large rotary_embedding.py file into a package, which significantly improves the maintainability and readability of the codebase. The separation into base, factory, multimodal, and specialized modules is a great step forward. However, there are a few issues introduced by the split, most notably a leaky abstraction in the base RotaryEmbedding class which now depends on multimodal-specific attributes defined in subclasses. There are also some performance concerns regarding buffer management and device movements inside forward passes, and some inconsistencies in the factory functions.

Comment threads:
  • python/sglang/srt/layers/rotary_embedding/_base.py (outdated)
  • python/sglang/srt/layers/rotary_embedding/mrope.py
  • python/sglang/srt/layers/rotary_embedding/rope_variant.py (×2)
  • python/sglang/srt/layers/rotary_embedding/factory.py
@Fridge003 Fridge003 merged commit 2717393 into main Feb 23, 2026
280 of 306 checks passed
@Fridge003 Fridge003 deleted the refactor_rope branch February 23, 2026 12:05
michaelzhang-ai added a commit to michaelzhang-ai/sglang that referenced this pull request Feb 23, 2026
…project#19144)

PR sgl-project#19144 split rotary_embedding.py into a package and renamed the yarn
helper functions (removed leading underscores, renamed _yarn_get_mscale
to yarn_get_mscale_simple). The Grok model (grok.py) imports the old
names, causing an ImportError that prevents Grok1ForCausalLM from being
registered, which crashes both Grok1 and Grok2 tests.

Re-export the old underscore-prefixed names as aliases in __init__.py
for backward compatibility.
xiaobaicxy added a commit to xiaobaicxy/sglang that referenced this pull request Feb 24, 2026
…o xverse_moe

* 'xverse_moe' of https://github.com/xiaobaicxy/sglang: (275 commits)
  fix: add missing blank line after docstring in serving_transcription.py (sgl-project#19206)
  Whisper model support & `/v1/audio/transcriptions` endpoint & benchmark (sgl-project#16983)
  fix: patch docker image fixes (sgl-project#19100)
  [PD-Disagg] Unify prefill info data transition flow, all with `PrefillServerInfo` (sgl-project#19195)
  [CI] Tiny enhance the dp attention load blance benchmark (sgl-project#19194)
  add new ci user (sgl-project#19133)
  [CI] fix the teardown output of disaggregation test (sgl-project#19193)
  [PD-Disagg] Support query dp rank from bootstrap server. (sgl-project#19168)
  [Kernel Slimming] Migrate AWQ marlin repack kernel to JIT (sgl-project#18949)
  [Diffusion] Match rotary_embedding module name style (sgl-project#19179)
  [Refactor] Split rotary_embedding.py into a modular package (sgl-project#19144)
  [NPU] bump sgl-kernel-npu to 2026.02.01.post2 (sgl-project#19178)
  Use single mma warp group for short q_len in FA to optimize decoding performance (sgl-project#18985)
  Reorganize topk logic to clean up code and expose logical experts (sgl-project#16945)
  [ROCm] Use unreg path for custom all-reduce during CUDA graph capture (sgl-project#19162)
  [diffusion] feat: detect Flux2 custom VAE path from component_paths (sgl-project#19170)
  [AMD] ENV flags tuning and cleanup (sgl-project#19176)
  Fix bench_one_batch_server by moving the print statements (sgl-project#19175)
  Update rocm7.2 Dockerfile to install amdsmi for QuickReduce Initialization (sgl-project#19091)
  Revert "Refactor graph input buffers (sgl-project#18991)" (sgl-project#19173)
  ...
magicYang1573 pushed a commit to magicYang1573/sglang that referenced this pull request Mar 9, 2026
JustinTong0323 pushed a commit to JustinTong0323/sglang that referenced this pull request Apr 7, 2026