Fix issue: ENABLE_BELOW_SM90 cannot be enabled on aarch64 CPU#12967

Merged
FlamingoPg merged 3 commits into sgl-project:main from MarcoDWei:fix-arm-sm89
Dec 18, 2025
Conversation

@MarcoDWei
Contributor

@MarcoDWei MarcoDWei commented Nov 10, 2025

Motivation

ENABLE_BELOW_SM90 enables gencode for GPUs with a compute capability lower than sm_90. It was disabled on aarch64 CPUs by #6380. The intention was to disable it by default to speed up compilation, but this made the option always disabled, even when the user enables it explicitly:

python -m uv build --wheel -Cbuild-dir=build -Ccmake.define.ENABLE_BELOW_SM90=ON -color=always --no-build-isolation

The issue occurs on an aarch64 CPU with a 4090D GPU (SM89). The following error is reported even though ENABLE_BELOW_SM90 is enabled:

RuntimeError: RMSNorm failed with error code no kernel image is available for execution on the device
[2025-10-22 06:53:14] SIGQUIT received. signum=None, frame=None. It usually means one child failed.
Killed

Modifications

Select the proper DEFAULT_ENABLE_BELOW_SM90 based on the CPU architecture (aarch64 vs. others). The default value does not affect an option explicitly configured by the user.
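The fix above can be sketched in CMake roughly as follows (a minimal sketch; variable names follow the PR, but the surrounding sgl-kernel/CMakeLists.txt context is assumed). The key point is that option() only applies its default when the cache variable is not already set, so a -DENABLE_BELOW_SM90=ON on the command line still wins:

```cmake
# Pick a platform-dependent default: compiling gencode below SM90 is slow
# on aarch64 hosts, so it is off by default there.
if(CMAKE_SYSTEM_PROCESSOR STREQUAL "aarch64")
  set(DEFAULT_ENABLE_BELOW_SM90 OFF)
  message(STATUS "For aarch64, disable gencode below SM90 by default")
else()
  set(DEFAULT_ENABLE_BELOW_SM90 ON)
endif()

# option() respects an existing cache entry, so a user-supplied
# -DENABLE_BELOW_SM90=ON (or -Ccmake.define.ENABLE_BELOW_SM90=ON via
# scikit-build) overrides this default.
option(ENABLE_BELOW_SM90 "Enable gencode for GPUs below SM90"
       ${DEFAULT_ENABLE_BELOW_SM90})
```

This contrasts with the pre-fix behavior, where an unconditional set() on aarch64 clobbered the user's cache value.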


@gemini-code-assist
Contributor

Summary of Changes

Hello @MarcoDWei, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a bug in the build system where a specific GPU gencode compilation option (ENABLE_BELOW_SM90) could not be overridden by users on aarch64 architectures. It refactors the CMake logic to correctly establish a default value for this option based on the system's CPU, thereby restoring the ability for users to explicitly enable or disable it as needed.

Highlights

  • Fix ENABLE_BELOW_SM90 on aarch64: Corrected an issue where the ENABLE_BELOW_SM90 CMake option was always disabled on aarch64 systems, preventing users from explicitly enabling it.
  • Refactored default gencode configuration: Modified the CMake logic to properly set the default value for ENABLE_BELOW_SM90 based on the CPU architecture (aarch64 vs. others), while ensuring user-defined overrides are respected.

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment

Code Review

This pull request correctly fixes an issue where the ENABLE_BELOW_SM90 option was unconditionally disabled on aarch64 systems, ignoring user-provided values. The change introduces a platform-dependent default value for the option, which can now be correctly overridden by the user. My review includes a suggestion to improve the conciseness and consistency of the new CMake logic.

Comment thread sgl-kernel/CMakeLists.txt
Comment on lines +137 to 142
if(CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64")
set(DEFAULT_ENABLE_BELOW_SM90 OFF)
message(STATUS "For aarch64, disable gencode below SM90 by default")
else()
set(DEFAULT_ENABLE_BELOW_SM90 ON)
endif()
Contributor

medium

This logic can be made more concise by setting a default value and then overriding it for the specific aarch64 case. This removes the need for an else block. Also, for consistency with other checks in this file, it's better to use STREQUAL for an exact string comparison instead of MATCHES.

set(DEFAULT_ENABLE_BELOW_SM90 ON)
if(CMAKE_SYSTEM_PROCESSOR STREQUAL "aarch64")
    set(DEFAULT_ENABLE_BELOW_SM90 OFF)
    message(STATUS "For aarch64, disable gencode below SM90 by default")
endif()

@FlamingoPg
Collaborator

@MarcoDWei Looks great. Could you share the error and the fix details?

@MarcoDWei
Contributor Author

@MarcoDWei Looks great. Could you share the error and the fix details?

It happens on an aarch64 CPU with a 4090D GPU (SM89). The following error is reported even though ENABLE_BELOW_SM90 is enabled:

RuntimeError: RMSNorm failed with error code no kernel image is available for execution on the device
[2025-10-22 06:53:14] SIGQUIT received. signum=None, frame=None. It usually means one child failed.
Killed

After applying the patch, the issue is resolved.

@MarcoDWei
Contributor Author

@FlamingoPg The description has been updated with the detailed information.

@FlamingoPg FlamingoPg merged commit ef7c29a into sgl-project:main Dec 18, 2025
140 of 146 checks passed
Liwansi added a commit to iforgetmyname/sglang that referenced this pull request Dec 19, 2025
…n3_pp

* 'main' of https://github.com/sgl-project/sglang: (74 commits)
  [bug fix][pp] fix inconsistent latency between tp (sgl-project#15379)
  Fix warp illegal instruction in kimi k2 thinking PCG (sgl-project#15306)
  Fix gpt-oss yarn with `truncate` argument (sgl-project#14270)
  Monkey patch deepseek-ocr's `v_head_dim` (sgl-project#15384)
  [model-gateway] Replace PolicyRegistry RwLock with DashMap for lock-free policy lookups (sgl-project#15361)
  [PP] Fix dynamic chunking strategy for PP (sgl-project#15372)
  Fix issue: ENABLE_BELOW_SM90 cannot be enabled on aarch64 CPU (sgl-project#12967)
  Split test_piecewise_cuda_graph.py to optimize CI resource usage (sgl-project#15290)
  unified management of environment variables for vlm cuda ipc transport  (sgl-project#14501)
  Mistral Large 3 NVFP4 TRTLLM MoE support (sgl-project#15049)
  fix: adjust time for test_epd_disaggregation.py (sgl-project#15354)
  Add doc for qwen3 next (sgl-project#15337)
  feat: DeepSeek-V3.2 Streaming tool call output (sgl-project#15278)
  Feature/trtllm mha workspace size configurable sgl-project#15089 (sgl-project#15131)
  [VLM] Support cos sin cache for Qwen3-VL & GLM-4.1V (sgl-project#15205)
  [Deepseek V3.2] Support Overlap Spec + NSA (sgl-project#15307)
  Add request-level timestamp for when prefill finishes (sgl-project#14860)
  [CI] Migrate LoRA tests to test/registered/lora/ (sgl-project#15176)
  Reserve more memory for DeepSeekOCR model and adjust server start timeout for DeepGEMM to reduce flakiness (sgl-project#15277)
  Fix condition check for require_gathered_buffer (sgl-project#15328)
  ...