
[Feature] Faster Custom Paged Attention kernels #385

Closed
tjtanaa wants to merge 18 commits into ROCm:llama_fp8_12062024 from EmbeddedLLM:pg_attn_to_llama_fp8

Conversation


tjtanaa commented Jan 24, 2025

Description

This PR implements a faster Custom Paged Attention (CPA) kernel built on mfma16x16x16 matrix instructions. The feature is ported from ROCm/vllm (#372).
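For context on what the mfma16x16x16 building block does, below is a minimal standalone HIP sketch of a single `v_mfma_f32_16x16x16f16` instruction multiplying a 16x16 fp16 tile pair with fp32 accumulation on a CDNA GPU. This is not the PR's paged-attention kernel; the kernel name `mfma_16x16x16_demo` and the row-major layout are illustrative assumptions, while the per-lane fragment layout follows the public MFMA specification.

```cpp
#include <hip/hip_runtime.h>

// Per-wavefront fragment types: each of the 64 lanes holds 4 fp16 elements
// of A and B, and 4 fp32 elements of the accumulator.
typedef _Float16 half4   __attribute__((ext_vector_type(4)));
typedef float    floatx4 __attribute__((ext_vector_type(4)));

// One 64-lane wavefront computes D = A * B for 16x16 fp16 inputs with fp32
// accumulation, using a single v_mfma_f32_16x16x16f16 instruction.
// Requires a CDNA GPU (gfx908/gfx90a/gfx94x); launch one block of 64 threads.
__global__ void mfma_16x16x16_demo(const _Float16* A,  // 16x16, row-major
                                   const _Float16* B,  // 16x16, row-major
                                   float* D) {         // 16x16, row-major
  const int lane = threadIdx.x;      // 0..63
  const int mn   = lane % 16;        // row of A / column of B this lane feeds
  const int k0   = (lane / 16) * 4;  // each lane supplies 4 consecutive K elements

  half4 a, b;
  for (int i = 0; i < 4; ++i) {
    a[i] = A[mn * 16 + (k0 + i)];    // A[mn][k0+i]
    b[i] = B[(k0 + i) * 16 + mn];    // B[k0+i][mn]
  }

  floatx4 acc = {0.f, 0.f, 0.f, 0.f};
  // cbsz/abid/blgp modifiers are 0 for a plain, un-broadcast MFMA.
  acc = __builtin_amdgcn_mfma_f32_16x16x16f16(a, b, acc, 0, 0, 0);

  // Each lane ends up owning 4 rows of one output column.
  for (int i = 0; i < 4; ++i) {
    D[(k0 + i) * 16 + mn] = acc[i];
  }
}
```

A real attention kernel would presumably tile many such MFMA ops over the paged KV cache; the point here is only the per-wavefront data layout that the 16x16x16 variant imposes.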
