[Kernel] fp4 marlin kernel #17687
Conversation
Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
FYI @tms the wheel size only grows by 1 MB
  template <>
- __device__ inline void dequant<nv_bfloat162, vllm::kU4.id()>(
+ __device__ inline void dequant<nv_bfloat162, vllm::kU4.id(), true>(
Nice! We can also start using these functions for the fp4 scaled_mm tests!
The fused marlin moe test is failing: https://buildkite.com/vllm/ci/builds/19608/steps?jid=0196b131-b028-4fad-95bd-bba7cdaf133d
mgoin left a comment:
Excellent work here! I need to run another smoke test since the scales changed to fp8, but I think this is all good to go.
This PR adds NVFP4 support to the Marlin kernel, for both the dense and MoE paths.
In addition to standard FP4 support, I fuse the floating-point operations in the dequantization process with the subsequent zero-point subtraction and scaling steps, reducing the arithmetic in the kernel's inner loop. This currently yields significant speedups for FP4/FP8 and a modest speedup for AWQ-INT4.
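To illustrate the fusion idea: the actual kernel does this with packed bit tricks on `half2`/`nv_bfloat162` registers, but the essence is that the per-group scale (and, for formats with one, the zero point) gets folded into the dequantization step itself, so no separate subtract/multiply remains per element. Below is a rough CPU-side C++ sketch under that interpretation — the helper names are hypothetical and this lookup-table form is not the PR's register-level implementation:

```cpp
#include <array>
#include <cstdint>

// E2M1 (NVFP4) decode: 1 sign bit, 2 exponent bits, 1 mantissa bit.
// The eight non-negative representable magnitudes; bit 3 is the sign.
static constexpr std::array<float, 8> kE2M1 = {
    0.0f, 0.5f, 1.0f, 1.5f, 2.0f, 3.0f, 4.0f, 6.0f};

// Unfused reference: decode the nibble, then scale in a separate step.
float dequant_reference(uint8_t nibble, float scale) {
  float v = kE2M1[nibble & 0x7];
  if (nibble & 0x8) v = -v;
  return v * scale;  // separate multiply after decode
}

// Fused variant (in the spirit of the PR): bake the per-group scale into
// the decode table once per group, so the per-element work is a single
// lookup with no trailing multiply. In the real kernel the equivalent
// folding happens via FMA constants during the bit-level conversion.
struct FusedDecoder {
  std::array<float, 16> table;
  explicit FusedDecoder(float scale) {
    for (int i = 0; i < 16; ++i) {
      float v = kE2M1[i & 0x7] * scale;
      table[i] = (i & 0x8) ? -v : v;
    }
  }
  float operator()(uint8_t nibble) const { return table[nibble & 0xF]; }
};
```

For AWQ-INT4, the same folding works with an affine decode, `(q - zp) * scale`, precomputed as `q * scale + (-zp * scale)` so the zero-point subtraction also disappears from the inner loop.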