Motivation
Hi, SGLang folks! This is Mingfei from the Intel PyTorch team; our team helps optimize PyTorch performance on CPU, and I am also the PyTorch module maintainer for CPU performance. We would like to contribute to SGLang for CPU enabling and performance optimization.
Targets
Our primary target is to optimize SGLang performance on Intel Xeon Scalable Processors (x86 server CPUs).
- Optimization will focus on Xeon processors with Intel® Advanced Matrix Extensions (AMX) support, including Sapphire Rapids (4th gen), Emerald Rapids (5th gen), and Granite Rapids (6th gen).
- Native implementations or fallbacks will be provided for CPUs with other ISAs to keep them functional.
- Provide good performance per dollar.
Limitations
- Kernels are written in AVX512 and AMX-BF16, which requires GCC 11 or above.
- BFloat16 and Float16 will be enabled at the same time on CPU, but we are only focusing on BFloat16 performance optimization at the current stage; Float16 optimization will be added later on.
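Since the optimized kernels assume AVX512 and AMX-BF16, a quick way to check whether a machine qualifies is to inspect the feature flags the Linux kernel reports in /proc/cpuinfo. A minimal sketch (the helper name is ours; `avx512f` and `amx_bf16` are the standard Linux flag names):

```python
# Hypothetical helper: check whether a Linux CPU reports the ISA features
# these kernels rely on, by parsing the "flags" line of /proc/cpuinfo.
def has_isa_flags(cpuinfo_text: str, required=("avx512f", "amx_bf16")) -> bool:
    """Return True if every required flag appears in the cpuinfo flags line."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            return all(f in flags for f in required)
    return False

if __name__ == "__main__":
    with open("/proc/cpuinfo") as f:
        print(has_isa_flags(f.read()))
```

On CPUs where this returns False, the native fallback paths mentioned above would be taken instead.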
Schedule for 25Q1
We will focus on the DeepSeek series at the moment to align with our internal development requirements, and extend model coverage later on.
Generic enabling/optimizations for SGLang: rms_norm, silu_and_mul, sampling, and so on.
DeepSeek performance optimizations
(we are currently mapping the work from DeepSeek Multi-head Latent Attention (MLA) Throughput Optimizations)
Tensor Parallel
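As an illustration of the generic op enabling, here are reference (unoptimized) semantics for two common kernels in this category, rms_norm and silu_and_mul, written in plain Python over lists. This is our own illustrative sketch, not the SGLang CPU implementation, which would operate on tensors through AVX512/AMX code paths:

```python
import math

def rms_norm(x, weight, eps=1e-6):
    """RMSNorm: y_i = x_i / sqrt(mean(x^2) + eps) * w_i."""
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [v / rms * w for v, w in zip(x, weight)]

def silu_and_mul(x, y):
    """Fused gate activation: SiLU(x_i) * y_i, where SiLU(v) = v * sigmoid(v)."""
    return [(xi / (1.0 + math.exp(-xi))) * yi for xi, yi in zip(x, y)]
```

The CPU kernels would fuse these elementwise loops and vectorize them in BFloat16, which is where the AMX/AVX512 work pays off.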
We hope to help more customers build a better user experience when deploying SGLang on CPU devices. Any feedback is welcome, thanks!