[feature] Ascend NPU graph support (#9399)
Conversation
[feature] accelerate ACLGraph with parallel processing of input task
Summary of Changes
Hello @VDV1985, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces comprehensive support for Ascend NPU graph execution, aiming to significantly enhance performance by reducing operator call overhead. It achieves this by adding a dedicated NPU graph runner that leverages existing CUDA graph infrastructure, generalizing device-specific operations across the codebase, and integrating NPU graph capabilities into the attention backend. The changes allow the system to dynamically utilize NPU graphs for optimized inference on Ascend hardware, building upon a previous attempt to enable this functionality.
Highlights
- **NPU Graph Runner Implementation**: Introduced a new `NPUGraphRunner` class, specifically designed for Huawei Ascend NPUs, which inherits from the existing `CudaGraphRunner` to maintain a consistent interface while providing NPU-specific optimizations.
- **Device-Agnostic Abstraction**: Generalized the underlying device-specific operations (e.g., stream management, graph capture, memory pooling) within `parallel_state.py` and `cuda_graph_runner.py` to support both CUDA and NPU devices dynamically.
- **Attention Backend Integration**: Modified the attention backend (`ascend_backend.py`) to integrate NPU graph capture and replay mechanisms, including new methods for managing graph-specific forward metadata and utilizing NPU-optimized fused attention operations.
- **Dynamic Graph Runner Selection**: Updated the `ModelRunner` to select and initialize either the `CudaGraphRunner` or the new `NPUGraphRunner` based on the detected hardware (see the sketch after this list), ensuring optimal performance for the respective device.
- **New NPU Graph Tests**: Added new unit tests (`test_ascend_graph_tp1_bf16.py`, `test_ascend_graph_tp2_bf16.py`) to validate the functionality and performance of NPU graph support across different tensor parallelism configurations and precision settings.
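The selection logic can be pictured with a minimal sketch. The class names follow the PR description, but the constructor signatures and the `device` attribute used here are illustrative assumptions, not the actual sglang code:

```python
class CudaGraphRunner:
    """Stand-in for the existing CUDA graph runner."""

    def __init__(self, model_runner):
        self.model_runner = model_runner


class NPUGraphRunner(CudaGraphRunner):
    """Inherits the capture/replay interface from CudaGraphRunner;
    only device-specific details (streams, graph capture, memory
    pool) would differ in the real implementation."""


class ModelRunner:
    def __init__(self, device: str):
        self.device = device
        # Pick the runner class based on the detected hardware.
        runner_cls = NPUGraphRunner if device == "npu" else CudaGraphRunner
        self.graph_runner = runner_cls(self)


runner = ModelRunner(device="npu")
assert isinstance(runner.graph_runner, NPUGraphRunner)
```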
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize your Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Code Review
This pull request introduces support for Ascend NPU graphs, which is a significant feature for performance improvement. The changes primarily involve generalizing the existing CUDA graph runner to be device-agnostic and implementing the NPU-specific logic in a new NPUGraphRunner class that inherits from CudaGraphRunner. The overall approach is sound. However, I've identified a critical issue with a duplicated method definition that needs to be resolved. Additionally, there are a few opportunities for refactoring to reduce code duplication and improve maintainability, as well as a missing import that could cause a runtime error.
Alcanderian left a comment:
This commit satisfies the requirements of minimizing code changes and not affecting history or downstream forks; it can now be approved.
@wangkeya could you please share the "npu-smi info" output and exact test parameters, i.e., the sglang.launch_server params and sglang.bench_serving params (or Engine params if you use another type of testing)?
@wangkeya could you please try setting the torch_npu environment variable "export STREAMS_PER_DEVICE=32" before running the server? Also, if you experience issues with memory allocation, you can try increasing mem-fraction-static to 0.8 or 0.9.
Does this MR's merged code support running the DeepSeek model in ACLGraph mode?
I'm sorry to tell you that it's not supported yet. However, we will submit a new PR by 30/08. Thanks for your attention.
Thank you very much, it works!
@VDV1985 hello, when I test qwen3-235b-a22b, this error occurs.
@wangkeya could you please send the error log and the server/bench command lines? I suspect you have something like tensor.to("cpu"), which causes a stream synchronize command that is not supported by graph capturing. Also, we have limited support for MoE models with graph at the moment and are working on full support.
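For readers who hit the same failure, here is a minimal illustration (hypothetical code, not from this PR) of the host-synchronizing pattern described above:

```python
import torch


def bad_forward(x: torch.Tensor) -> torch.Tensor:
    # .item() (like .to("cpu")) forces a device->host synchronization,
    # which is not allowed while a graph is being captured.
    scale = x.max().item()
    return x * scale


def good_forward(x: torch.Tensor) -> torch.Tensor:
    # Keeping the value on the device avoids the sync, so the op
    # sequence remains capturable.
    scale = x.max()
    return x * scale
```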
Co-authored-by: ronnie_zheng <zl19940307@163.com>
Co-authored-by: yezhifeng (D) <y00897525@china.huawei.com>
Co-authored-by: anon189Ty <Stari_Falcon@outlook.com>
Co-authored-by: Maksim <makcum888e@mail.ru>
Co-authored-by: ssshinigami <44640852+ssshinigami@users.noreply.github.com>













Motivation
NPU graph can give a significant performance improvement by reducing the overhead of operator calls.
Closes #8030
This is the second attempt to enable NPU graph functionality; the first one (#8027) was reverted.
Modifications
Added an NPU graph runner that inherits from CudaGraphRunner. It is the customers' request to preserve CudaGraphRunner rather than extract a common base class.
The cuda_graph_runner.py file history is preserved.
Added an attention operation that is supported by NPUGraph.update().
The NPU graph runner uses the same SGLang server options as the CUDA graph runner in combination with device="npu" (e.g., "disable_cuda_graph"), and it is enabled by default.
Tests for the graph runner were added.
NOTICE: for now, you are supposed to set "export STREAMS_PER_DEVICE=32" to enable the NPU graph; you won't need it after we release the new version of torch_npu.
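Putting the notes above together, a hedged usage sketch (the model path is a placeholder; `--device` and `--disable-cuda-graph` are the standard sglang server flags referenced above):

```python
import os
import subprocess

# Required for now, per the NOTICE above; not needed after the next
# torch_npu release.
os.environ["STREAMS_PER_DEVICE"] = "32"

subprocess.run([
    "python", "-m", "sglang.launch_server",
    "--model-path", "Qwen/Qwen2-7B-Instruct",  # placeholder model
    "--device", "npu",
    # The NPU graph is enabled by default; add "--disable-cuda-graph"
    # to turn it off, exactly as for the CUDA graph runner.
])
```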
Accuracy Tests
Benchmarking and Profiling
Checklist