
vulkan: scalar flash attention implementation #13324

Merged May 10, 2025
0cc4m merged 10 commits into ggml-org:master from jeffbolznv:scalar_fa_3

Conversation

@jeffbolznv (Collaborator)

With so many issues like #13217 stemming from the lack of FA support, I ported the FA shader to use scalar math. Perf is pretty good for cases with few rows (e.g. during token gen), but it will still be slower than -fa 0 for cases where -fa 0 uses KHR_coopmat.
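
For readers who want the idea behind the shader: flash attention walks the K/V cache block by block and keeps an online softmax, so the full score matrix is never stored. Below is a minimal single-query-row sketch of that recurrence in plain C++ (no masking, GQA, or tiling; the function name is illustrative and does not correspond to the shader):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Illustrative sketch of the online-softmax recurrence a flash attention
// kernel uses: one query row, K/V visited in order, O(head_dim) state
// instead of an O(n_kv) score row.
std::vector<float> flash_attn_row(const std::vector<float>& q,                 // [d]
                                  const std::vector<std::vector<float>>& K,    // [n_kv][d]
                                  const std::vector<std::vector<float>>& V,    // [n_kv][d]
                                  float scale) {
    const size_t d = q.size();
    float m = -INFINITY;              // running max of the scores
    float l = 0.0f;                   // running sum of exp(score - m)
    std::vector<float> acc(d, 0.0f);  // running weighted sum of V rows

    for (size_t j = 0; j < K.size(); ++j) {
        float s = 0.0f;
        for (size_t k = 0; k < d; ++k) s += q[k] * K[j][k];
        s *= scale;

        const float m_new = std::max(m, s);
        const float corr  = std::exp(m - m_new);   // rescale previously accumulated state
        const float p     = std::exp(s - m_new);

        l = l * corr + p;
        for (size_t k = 0; k < d; ++k) acc[k] = acc[k] * corr + p * V[j][k];
        m = m_new;
    }
    for (size_t k = 0; k < d; ++k) acc[k] /= l;    // final normalization
    return acc;
}
```

Roughly speaking, the scalar shader applies this same recurrence per row with plain GLSL arithmetic, where the coopmat2 variant instead uses cooperative-matrix multiplies for the score and value products.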

I'd appreciate some help testing (including perf testing) on non-NVIDIA GPUs. And if anybody knows a good placeholder value for shader_core_count for Intel or how to query it, that would be good too.
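
As a hedged sketch of one possible approach to the shader_core_count question (not existing backend code): read the NVIDIA/AMD shader-core property extensions where available and fall back to a placeholder on vendors, such as Intel, that expose neither. The helper name and the fallback value below are assumptions.

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>

// Sketch: query a per-vendor "shader core" count, falling back to a guess.
// Assumes the caller already verified that the respective extension
// (VK_NV_shader_sm_builtins / VK_AMD_shader_core_properties) is supported.
static uint32_t query_shader_core_count(VkPhysicalDevice dev, bool has_nv_sm_builtins, bool has_amd_core_props) {
    if (has_nv_sm_builtins) {
        VkPhysicalDeviceShaderSMBuiltinsPropertiesNV sm = {};
        sm.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SHADER_SM_BUILTINS_PROPERTIES_NV;
        VkPhysicalDeviceProperties2 props2 = {};
        props2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2;
        props2.pNext = &sm;
        vkGetPhysicalDeviceProperties2(dev, &props2);
        return sm.shaderSMCount;                   // number of SMs on NVIDIA
    }
    if (has_amd_core_props) {
        VkPhysicalDeviceShaderCorePropertiesAMD cu = {};
        cu.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SHADER_CORE_PROPERTIES_AMD;
        VkPhysicalDeviceProperties2 props2 = {};
        props2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2;
        props2.pNext = &cu;
        vkGetPhysicalDeviceProperties2(dev, &props2);
        // CUs = engines * arrays per engine * CUs per array on AMD
        return cu.shaderEngineCount * cu.shaderArraysPerEngineCount * cu.computeUnitsPerShaderArray;
    }
    return 16; // arbitrary placeholder for vendors without a core-count query
}
```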

@jeffbolznv jeffbolznv requested a review from 0cc4m May 6, 2025 00:53
@github-actions github-actions bot added the Vulkan (issues specific to the Vulkan backend) and ggml (changes relating to the ggml tensor library for machine learning) labels May 6, 2025
@nalf3in commented May 6, 2025

Hmm, unfortunately it doesn't look like there's an improvement for my setup, even for the RTX 2070 GPU.

Short version:

With Flash Attention (-fa)

| GPU | Prompt Eval Time (ms) | ms/token | Tokens/sec | Eval Time (ms) | ms/token | Tokens/sec |
| --- | --- | --- | --- | --- | --- | --- |
| RX 480 | 7402 | 5.5 | 182 | 19191 | 48 | 21 |
| RTX 2070 | 4953 | 3.7 | 272 | 11552 | 31 | 33 |

Without Flash Attention

| GPU | Prompt Eval Time (ms) | ms/token | Tokens/sec | Eval Time (ms) | ms/token | Tokens/sec |
| --- | --- | --- | --- | --- | --- | --- |
| RX 480 | 4892 | 3.64 | 275 | 17451 | 48 | 21 |
| RTX 2070 | 4907 | 3.65 | 274 | 8579 | 29 | 34 |
Long version: (click to show)

Hardware Configuration

  • CPU: Intel Xeon E5-2620 v3 (6 cores, 12 threads, Haswell)
  • Memory: Quad-channel DDR4 @ 1866 MHz
  • GPUs:
    • CUDA0: NVIDIA RTX 2070 (CUDA backend)
    • VULKAN0: AMD RX 480 8GB (Vulkan backend)
    • VULKAN1: NVIDIA RTX 2070 (Vulkan backend)

Repository Status (Sanity Check)

git status
# On branch master
# Your branch is ahead of 'origin/master' by 1 commit.

git log -1
# commit 6c7443cbcfc34c9247166a3f9ed9cfe762441a43 (HEAD -> master)
# vulkan: scalar flash attention implementation

Test Prompt

  • Length: 1984 tokens
  • Source: Default sillytavern conversation prompt

Performance Results


1. Normal Setup

1.1 With Flash Attention (-fa)

Command GPU Prompt Eval Time Tokens ms/token Tokens/sec Eval Time Tokens ms/token Tokens/sec
./build/bin/llama-server -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -dev Vulkan0 -ngl 99 -c 8192 --host -fa :: RX 480 (VULKAN0) 7402.49 ms 1345 5.50 181.70 19191.17 ms 400 47.98 20.84
./build/bin/llama-server -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -dev Vulkan0 -ngl 99 -c 8192 --host :: -fa -ctk q4_0 -ctv q4_0 RX 480 (VULKAN0) 37511.76 ms 1345 27.89 35.86 45465.21 ms 391 116.28 8.60
./build/bin/llama-server -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -dev VULKAN1 -ngl 99 -c 8192 --host :: -fa RTX 2070 (VULKAN1) 4952.98 ms 1345 3.68 271.55 11552.00 ms 377 30.64 32.64
./build/bin/llama-server -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -dev VULKAN1 -ngl 99 -c 8192 --host :: -fa -ctk q4_0 -ctv q4_0 RTX 2070 (VULKAN1) 41505.56 ms 1345 30.86 32.41 58276.78 ms 400 145.69 6.86

1.2 Without Flash Attention

Command GPU Prompt Eval Time Tokens ms/token Tokens/sec Eval Time Tokens ms/token Tokens/sec
./build/bin/llama-server -m /share/Qwen -dev Vulkan0 -ngl 99 -c 8192 --host :: RX 480 (VULKAN0) 4891.92 ms 1345 3.64 274.94 17451.06 ms 362 48.21 20.74
./build/bin/llama-server -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -dev VULKAN1 -ngl 99 -c 8192 --host :: RTX 2070 (VULKAN1) 4906.50 ms 1345 3.65 274.13 8579.09 ms 292 29.38 34.04

2. Experimental Setup (Patch for Issue #13164)

  • Patch applied to increase matrix multiplication size limit from 3072 to 8192
--- a/ggml/src/ggml-vulkan/ggml-vulkan.cpp
+++ b/ggml/src/ggml-vulkan/ggml-vulkan.cpp
-    GGML_ASSERT(nei0 * nei1 <= 3072);
+    GGML_ASSERT(nei0 * nei1 <= 8192);
--- a/ggml/src/ggml-vulkan/vulkan-shaders/mul_mm.comp
+++ b/ggml/src/ggml-vulkan/vulkan-shaders/mul_mm.comp
- shared u16vec2 row_ids[3072];
+ shared u16vec2 row_ids[8192];

2.1 With Flash Attention (-fa)

Command GPU Prompt Eval Time Tokens ms/token Tokens/sec Eval Time Tokens ms/token Tokens/sec
./build/bin/llama-server -dev CUDA0,Vulkan0 -ngl 99 -c 8192 -m /share/Qwen3-30B-A3B-UD-Q3_K_XL.gguf -fa --batch-size 1200 --host :: RTX 2070 (CUDA0) + RX 480 (VULKAN0) 47275.80 ms 1306 36.20 27.63 19982.30 ms 400 49.96 20.02
./build/bin/llama-server -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -dev VULKAN0 -ngl 99 -c 8192 --host :: -fa RX 480 (VULKAN0) 7390.74 ms 1345 5.49 181.98 18178.16 ms 380 47.84 20.90

2.2 Without Flash Attention

Command GPU Prompt Eval Time Tokens ms/token Tokens/sec Eval Time Tokens ms/token Tokens/sec
./build/bin/llama-server -dev CUDA0,Vulkan0 -ngl 99 -c 8192 -m /share/Qwen3-30B-A3B-UD-Q3_K_XL.gguf --batch-size 1200 --host :: RTX 2070 (CUDA0) + RX 480 (VULKAN0) 46707.90 ms 1345 34.73 28.80 17597.43 ms 400 43.99 22.73
./build/bin/llama-server -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -dev VULKAN0 -ngl 99 -c 8192 --host :: RX 480 (VULKAN0) 4915.57 ms 1345 3.65 273.62 11767.99 ms 244 48.23 20.73

@jeffbolznv (Collaborator, Author)

On the 2070 system, does it report coopmat1 or coopmat2 support? If it's coopmat2 then FA is already accelerated.

-ctk q4_0 -ctv q4_0

I didn't add support for quantized KV yet (it's probably not a ton of work, just didn't think it was critical for the first version), so these tests will continue to fall back to CPU.
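
For context on the fallback: the backend reports per-op support, and an unsupported K/V type makes the scheduler run that op on the CPU instead. A purely illustrative sketch of such a gate (the enum and function are hypothetical, not ggml code):

```cpp
// Illustrative sketch of an op-support check that would make quantized-KV
// flash attention fall back to the CPU backend. The real backend logic and
// type enums live in ggml; these names are hypothetical.
enum class kv_type { f32, f16, q8_0, q4_0 };

static bool device_supports_flash_attn(kv_type type_k, kv_type type_v) {
    // Only unquantized K/V is assumed to be handled by the shader in this first version.
    const bool k_ok = (type_k == kv_type::f16) || (type_k == kv_type::f32);
    const bool v_ok = (type_v == kv_type::f16) || (type_v == kv_type::f32);
    return k_ok && v_ok; // anything else -> reported unsupported -> CPU fallback
}
```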

@nalf3in commented May 6, 2025

does it report coopmat1 or coopmat2 support?

Not sure how to confirm this. Coopmat is not mentioned in stdout when running llama-server. llama-cpp was built with
cmake -B build -DGGML_VULKAN=ON -DGGML_CUDA=ON

I didn't add support for quantized KV

Ah I see, good idea

@jeffbolznv (Collaborator, Author)

llama-server should print something like this when using the vulkan backend:

ggml_vulkan: 0 = NVIDIA GeForce RTX 4070 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: NV_coopmat2

What does yours say for matrix cores?
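
That field just reflects which cooperative-matrix extension was detected on the device (in addition to build-time glslc support). A hedged sketch of the device-side check, with an illustrative helper name:

```cpp
#include <vulkan/vulkan.h>
#include <cstring>
#include <string>
#include <vector>

// Sketch: report which cooperative-matrix extension a device exposes.
// This only checks extension availability; the real backend also needs
// the corresponding glslc support when the shaders are built.
static std::string matrix_core_support(VkPhysicalDevice dev) {
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(dev, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> exts(count);
    vkEnumerateDeviceExtensionProperties(dev, nullptr, &count, exts.data());

    bool khr = false, nv2 = false;
    for (const auto& e : exts) {
        if (std::strcmp(e.extensionName, "VK_KHR_cooperative_matrix") == 0) khr = true;
        if (std::strcmp(e.extensionName, "VK_NV_cooperative_matrix2") == 0) nv2 = true;
    }
    if (nv2) return "NV_coopmat2";   // matches the "matrix cores" log string in this thread
    if (khr) return "KHR_coopmat";
    return "none";
}
```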

@daniandtheweb (Contributor) commented May 6, 2025

I've just tested it on both the Radeon RX 7800 XT and the Radeon RX 5700 XT and the performance is pretty close to non FA.

RX 7800 XT

ggml_vulkan: 0 = AMD Radeon RX 7800 XT (RADV NAVI32) (radv) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | pp512 | 1241.50 ± 15.37 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | tg128 | 111.19 ± 0.59 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | pp512 | 1248.76 ± 5.05 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | tg128 | 110.08 ± 0.12 |

ggml_vulkan: 0 = AMD Radeon RX 7800 XT (AMD open-source driver) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 32768 | int dot: 1 | matrix cores: KHR_coopmat

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | pp512 | 2090.51 ± 7.00 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | tg128 | 96.95 ± 2.41 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | pp512 | 1729.53 ± 5.07 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | tg128 | 76.70 ± 0.11 |

ggml_vulkan: 0 = AMD Radeon RX 7800 XT (AMD proprietary driver) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 32768 | int dot: 1 | matrix cores: KHR_coopmat

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | pp512 | 2074.83 ± 6.27 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | tg128 | 97.45 ± 0.36 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | pp512 | 1727.79 ± 8.97 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | tg128 | 76.40 ± 0.16 |

RX 5700 XT

ggml_vulkan: 0 = AMD Radeon RX 5700 XT (RADV NAVI10) (radv) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 65536 | int dot: 0 | matrix cores: none

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | pp512 | 470.39 ± 0.35 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | tg128 | 65.21 ± 0.21 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | pp512 | 422.59 ± 0.26 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | tg128 | 64.23 ± 0.06 |

ggml_vulkan: 0 = AMD Radeon RX 5700 XT (AMD open-source driver) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 32768 | int dot: 0 | matrix cores: none

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | pp512 | 441.33 ± 0.45 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | tg128 | 64.52 ± 0.74 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | pp512 | 377.49 ± 0.14 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | tg128 | 52.63 ± 0.01 |

ggml_vulkan: 0 = AMD Radeon RX 5700 XT (AMD proprietary driver) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 32768 | int dot: 0 | matrix cores: none

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | pp512 | 565.87 ± 0.53 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | tg128 | 71.49 ± 0.04 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | pp512 | 543.53 ± 0.31 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | tg128 | 66.61 ± 0.02 |

I'm not sure if there are specific FA tests in test-backend-ops (haven't used it in a while), but if you need more performance data from it I can run the tests too.

@nalf3in commented May 6, 2025

llama-server should print something like this when using the vulkan backend

It looks like it's not enabled:

ggml_vulkan: Found 2 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon RX 480 Graphics (RADV POLARIS10) (radv) | uma: 0 | fp16: 0 | warp size: 64 | shared memory: 65536 | int dot: 0 | matrix cores: none
ggml_vulkan: 1 = NVIDIA GeForce RTX 2070 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 0 | matrix cores: none

build: 5288 (6c7443cb) with cc (Debian 12.2.0-14) 12.2.0 for x86_64-linux-gnu

Looking a bit closer, I think that's coming from the build config. I'm using a standard Debian server installation. From what I understand, the libvulkan version shipped with Debian bookworm (1.3.239) is probably too old to support these "new" extensions.

cmake -B build -DGGML_VULKAN=ON ...
-- Found Vulkan: /usr/lib/x86_64-linux-gnu/libvulkan.so (found version "1.3.239") found components: glslc glslangValidator 
-- Vulkan found
-- GL_KHR_cooperative_matrix not supported by glslc
-- GL_NV_cooperative_matrix2 not supported by glslc
-- GL_EXT_integer_dot_product not supported by glslc
-- GL_EXT_bfloat16 not supported by glslc
-- Including Vulkan backend

In any case, I see that it's working on my desktop with Arch Linux and a 3080 Ti:

-- Found Vulkan: /lib/libvulkan.so (found version "1.4.309") found components: glslc glslangValidator
-- Vulkan found
-- GL_KHR_cooperative_matrix supported by glslc
-- GL_NV_cooperative_matrix2 supported by glslc
-- GL_EXT_integer_dot_product supported by glslc
-- GL_EXT_bfloat16 not supported by glslc
-- Including Vulkan backend
Full debian server cmake -B build -DGGML_VULKAN=ON output cmake -B build -DGGML_VULKAN=ON -- The C compiler identification is GNU 12.2.0 -- The CXX compiler identification is GNU 12.2.0 -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: /usr/bin/cc - skipped -- Detecting C compile features -- Detecting C compile features - done -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /usr/bin/c++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Found Git: /usr/bin/git (found version "2.39.5") -- Performing Test CMAKE_HAVE_LIBC_PTHREAD -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success -- Found Threads: TRUE -- ccache found, compilation results will be cached. Disable with GGML_CCACHE=OFF. -- CMAKE_SYSTEM_PROCESSOR: x86_64 -- Including CPU backend -- Found OpenMP_C: -fopenmp (found version "4.5") -- Found OpenMP_CXX: -fopenmp (found version "4.5") -- Found OpenMP: TRUE (found version "4.5") -- x86 detected -- Adding CPU backend variant ggml-cpu: -march=native -- Found Vulkan: /usr/lib/x86_64-linux-gnu/libvulkan.so (found version "1.3.239") found components: glslc glslangValidator -- Vulkan found -- GL_KHR_cooperative_matrix not supported by glslc -- GL_NV_cooperative_matrix2 not supported by glslc -- GL_EXT_integer_dot_product not supported by glslc -- GL_EXT_bfloat16 not supported by glslc -- Including Vulkan backend -- Found CURL: /usr/lib/x86_64-linux-gnu/libcurl.so (found version "7.88.1") -- Configuring done -- Generating done -- Build files have been written to: /home/joe/ai/temp/llama.cpp/build
Full debian server llama-server output ./build/bin/llama-server -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -dev VULKAN1 -ngl 99 -c 8192 ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 1 CUDA devices: Device 0: NVIDIA GeForce RTX 2070, compute capability 7.5, VMM: yes ggml_vulkan: Found 2 Vulkan devices: ggml_vulkan: 0 = AMD Radeon RX 480 Graphics (RADV POLARIS10) (radv) | uma: 0 | fp16: 0 | warp size: 64 | shared memory: 65536 | int dot: 0 | matrix cores: none ggml_vulkan: 1 = NVIDIA GeForce RTX 2070 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 0 | matrix cores: none build: 5288 (6c7443cb) with cc (Debian 12.2.0-14) 12.2.0 for x86_64-linux-gnu system info: n_threads = 6, n_threads_batch = 6, total_threads = 12

system_info: n_threads = 6 (n_threads_batch = 6) / 12 | CUDA : ARCHS = 750 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | LLAMAFILE = 1 | OPENMP = 1 | AARCH64_REPACK = 1 |

main: binding port with default address family
main: HTTP server is listening, hostname: 127.0.0.1, port: 8080, http threads: 11
main: loading model
srv load_model: loading model '/share/Qwen3-4B-UD-Q4_K_XL.gguf'
llama_model_load_from_file_impl: using device Vulkan1 (NVIDIA GeForce RTX 2070) - 8192 MiB free
llama_model_loader: loaded meta data with 32 key-value pairs and 398 tensors from /share/Qwen3-4B-UD-Q4_K_XL.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen3
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Qwen3-4B
llama_model_loader: - kv 3: general.basename str = Qwen3-4B
llama_model_loader: - kv 4: general.quantized_by str = Unsloth
llama_model_loader: - kv 5: general.size_label str = 4B
llama_model_loader: - kv 6: general.repo_url str = https://huggingface.co/unsloth
llama_model_loader: - kv 7: qwen3.block_count u32 = 36
llama_model_loader: - kv 8: qwen3.context_length u32 = 40960
llama_model_loader: - kv 9: qwen3.embedding_length u32 = 2560
llama_model_loader: - kv 10: qwen3.feed_forward_length u32 = 9728
llama_model_loader: - kv 11: qwen3.attention.head_count u32 = 32
llama_model_loader: - kv 12: qwen3.attention.head_count_kv u32 = 8
llama_model_loader: - kv 13: qwen3.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 14: qwen3.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 15: qwen3.attention.key_length u32 = 128
llama_model_loader: - kv 16: qwen3.attention.value_length u32 = 128
llama_model_loader: - kv 17: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 18: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 19: tokenizer.ggml.tokens arr[str,151936] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 20: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 21: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 22: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 23: tokenizer.ggml.padding_token_id u32 = 151654
llama_model_loader: - kv 24: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 25: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 26: general.quantization_version u32 = 2
llama_model_loader: - kv 27: general.file_type u32 = 15
llama_model_loader: - kv 28: quantize.imatrix.file str = Qwen3-4B-GGUF/imatrix_unsloth.dat
llama_model_loader: - kv 29: quantize.imatrix.dataset str = unsloth_calibration_Qwen3-4B.txt
llama_model_loader: - kv 30: quantize.imatrix.entries_count i32 = 252
llama_model_loader: - kv 31: quantize.imatrix.chunks_count i32 = 32
llama_model_loader: - type f32: 145 tensors
llama_model_loader: - type q4_K: 154 tensors
llama_model_loader: - type q5_K: 30 tensors
llama_model_loader: - type q6_K: 49 tensors
llama_model_loader: - type iq4_xs: 20 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_K - Medium
print_info: file size = 2.37 GiB (5.05 BPW)
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch = qwen3
print_info: vocab_only = 0
print_info: n_ctx_train = 40960
print_info: n_embd = 2560
print_info: n_layer = 36
print_info: n_head = 32
print_info: n_head_kv = 8
print_info: n_rot = 128
print_info: n_swa = 0
print_info: n_swa_pattern = 1
print_info: n_embd_head_k = 128
print_info: n_embd_head_v = 128
print_info: n_gqa = 4
print_info: n_embd_k_gqa = 1024
print_info: n_embd_v_gqa = 1024
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-06
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 9728
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 2
print_info: rope scaling = linear
print_info: freq_base_train = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 40960
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = 4B
print_info: model params = 4.02 B
print_info: general.name = Qwen3-4B
print_info: vocab type = BPE
print_info: n_vocab = 151936
print_info: n_merges = 151387
print_info: BOS token = 11 ','
print_info: EOS token = 151645 '<|im_end|>'
print_info: EOT token = 151645 '<|im_end|>'
print_info: PAD token = 151654 '<|vision_pad|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|endoftext|>'
print_info: EOG token = 151645 '<|im_end|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 36 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 37/37 layers to GPU
load_tensors: Vulkan1 model buffer size = 2422.70 MiB
load_tensors: CPU_Mapped model buffer size = 304.28 MiB
...............................................................................
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 8192
llama_context: n_ctx_per_seq = 8192
llama_context: n_batch = 2048
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = 0
llama_context: freq_base = 1000000.0
llama_context: freq_scale = 1
llama_context: n_ctx_per_seq (8192) < n_ctx_train (40960) -- the full capacity of the model will not be utilized
llama_context: Vulkan_Host output buffer size = 0.58 MiB
llama_kv_cache_unified: kv_size = 8192, type_k = 'f16', type_v = 'f16', n_layer = 36, can_shift = 1, padding = 32
llama_kv_cache_unified: Vulkan1 KV buffer size = 1152.00 MiB
llama_kv_cache_unified: KV self size = 1152.00 MiB, K (f16): 576.00 MiB, V (f16): 576.00 MiB
llama_context: Vulkan1 compute buffer size = 554.00 MiB
llama_context: Vulkan_Host compute buffer size = 21.01 MiB
llama_context: graph nodes = 1374
llama_context: graph splits = 2
common_init_from_params: setting dry_penalty_last_n to ctx_size = 8192
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
srv init: initializing slots, n_slots = 1
slot init: id 0 | task -1 | new slot n_ctx_slot = 8192
main: model loaded
main: chat template, chat_template: {%- if tools %}
{{- '<|im_start|>system\n' }}
{%- if messages[0].role == 'system' %}
{{- messages[0].content + '\n\n' }}
{%- endif %}
{{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within XML tags:\n" }}
{%- for tool in tools %}
{{- "\n" }}
{{- tool | tojson }}
{%- endfor %}
{{- "\n\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{"name": , "arguments": }\n</tool_call><|im_end|>\n" }}
{%- else %}
{%- if messages[0].role == 'system' %}
{{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
{%- for forward_message in messages %}
{%- set index = (messages|length - 1) - loop.index0 %}
{%- set message = messages[index] %}
{%- set tool_start = '<tool_response>' %}
{%- set tool_start_length = tool_start|length %}
{%- set start_of_message = message.content[:tool_start_length] %}
{%- set tool_end = '</tool_response>' %}
{%- set tool_end_length = tool_end|length %}
{%- set start_pos = (message.content|length) - tool_end_length %}
{%- if start_pos < 0 %}
{%- set start_pos = 0 %}
{%- endif %}
{%- set end_of_message = message.content[start_pos:] %}
{%- if ns.multi_step_tool and message.role == "user" and not(start_of_message == tool_start and end_of_message == tool_end) %}
{%- set ns.multi_step_tool = false %}
{%- set ns.last_query_index = index %}
{%- endif %}
{%- endfor %}
{%- for message in messages %}
{%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
{{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" %}
{%- set content = message.content %}
{%- set reasoning_content = '' %}
{%- if message.reasoning_content is defined and message.reasoning_content is not none %}
{%- set reasoning_content = message.reasoning_content %}
{%- else %}
{%- if '' in message.content %}
{%- set content = (message.content.split('')|last).lstrip('\n') %}
{%- set reasoning_content = (message.content.split('')|first).rstrip('\n') %}
{%- set reasoning_content = (reasoning_content.split('')|last).lstrip('\n') %}
{%- endif %}
{%- endif %}
{%- if loop.index0 > ns.last_query_index %}
{%- if loop.last or (not loop.last and reasoning_content) %}
{{- '<|im_start|>' + message.role + '\n\n' + reasoning_content.strip('\n') + '\n\n\n' + content.lstrip('\n') }}
{%- else %}
{{- '<|im_start|>' + message.role + '\n' + content }}
{%- endif %}
{%- else %}
{{- '<|im_start|>' + message.role + '\n' + content }}
{%- endif %}
{%- if message.tool_calls %}
{%- for tool_call in message.tool_calls %}
{%- if (loop.first and content) or (not loop.first) %}
{{- '\n' }}
{%- endif %}
{%- if tool_call.function %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '<tool_call>\n{"name": "' }}
{{- tool_call.name }}
{{- '", "arguments": ' }}
{%- if tool_call.arguments is string %}
{{- tool_call.arguments }}
{%- else %}
{{- tool_call.arguments | tojson }}
{%- endif %}
{{- '}\n</tool_call>' }}
{%- endfor %}
{%- endif %}
{{- '<|im_end|>\n' }}
{%- elif message.role == "tool" %}
{%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|im_start|>user' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- message.content }}
{{- '\n</tool_response>' }}
{%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- if enable_thinking is defined and enable_thinking is false %}
{{- '\n\n\n\n' }}
{%- endif %}
{%- endif %}, example_format: '<|im_start|>system
You are a helpful assistant<|im_end|>
<|im_start|>user
Hello<|im_end|>
<|im_start|>assistant
Hi there<|im_end|>
<|im_start|>user
How are you?<|im_end|>
<|im_start|>assistant
'
main: server is listening on http://127.0.0.1:8080 - starting the main loop
srv update_slots: all slots are idle

@jeffbolznv (Collaborator, Author)

OK, so your RTX 2070 should have been using flash attention. Were all your test results with this change? Did you try using flash attention without this change? It would have fallen back to the CPU.

@nalf3in commented May 6, 2025

Yes, all tests were made with this change, more specifically with this commit cherry-picked on top of the latest commit at the time of writing (9070365):

git log
commit 6c7443cbcfc34c9247166a3f9ed9cfe762441a43 (HEAD -> master)
Author: Jeff Bolz <jbolz@nvidia.com>
Date:   Mon May 5 19:34:23 2025 -0500

    vulkan: scalar flash attention implementation

commit 907036502070ba608bdb2aaebf802092d4cfba07 (tag: b5287, origin/master, origin/HEAD)
Author: Johannes Gäßler <johannesg@5d6.de>
Date:   Mon May 5 22:32:13 2025 +0200

    CUDA: fix logic for clearing padding with -ngl 0 (#13320)

Just did the same test again without this commit, and indeed it falls back to the CPU and is very slow:

9070365

./build/bin/llama-server -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -dev Vulkan0 -ngl 99 -c 8192 --host :: -fa

prompt eval time =   38053.10 ms /  1930 tokens (   19.72 ms per token,    50.72 tokens per second)
       eval time =   45318.10 ms /   345 tokens (  131.36 ms per token,     7.61 tokens per second)

359a92f691ff74f7fc89cf12cac744bb18ab98df (this pr commit)

./build/bin/llama-server -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -dev Vulkan0 -ngl 99 -c 8192 --host :: -fa

prompt eval time =   11606.94 ms /  1930 tokens (    6.01 ms per token,   166.28 tokens per second)
       eval time =   17867.61 ms /   365 tokens (   48.95 ms per token,    20.43 tokens per second)

@netrunnereve (Collaborator) commented May 6, 2025

@nalf3in I think there's something wrong with your setup, as your numbers already don't make sense for the non-FA case. First of all, even if you have no DP4A and no matrix cores, the 2070 should easily beat the 480 in prompt processing. Your inference speeds are also really low for a Q4 4B model.

Can you run a regular llama-bench on each GPU separately using the same model? The server and SillyTavern might be messing things up.

@nalf3in commented May 7, 2025

I didn't use llama-bench previously because it doesn't support the -dev option, which allows specifying which GPU I want to use. From what I understand it isn't possible using llama-bench command-line arguments, but I was able to do it anyway using bwrap (see below for the full command-line args).

Short version of the results:

| Commit | GPU | fa | pp512 t/s | tg128 t/s |
| --- | --- | --- | --- | --- |
| 141a908 | AMD RX 480 | 0 | 294.5 ± 0.7 | 37.8 ± 0.1 |
| 141a908 | AMD RX 480 | 1 | 140.6 ± 0.6 | 26.5 ± 0.1 |
| 141a908 | NVIDIA RTX2070 | 0 | 461.6 ± 2.7 | 63.0 ± 1.5 |
| 141a908 | NVIDIA RTX2070 | 1 | 140.0 ± 1.1 | 35.9 ± 1.0 |
| 005756a | AMD RX 480 | 0 | 294.4 ± 0.7 | 37.9 ± 0.5 |
| 005756a | AMD RX 480 | 1 | 230.3 ± 0.3 | 31.3 ± 0.1 |
| 005756a | NVIDIA RTX2070 | 0 | 461.0 ± 1.6 | 63.1 ± 2.2 |
| 005756a | NVIDIA RTX2070 | 1 | 444.6 ± 1.6 | 59.7 ± 0.7 |

It seems that the RTX 2070 is still around 1.6x faster than the RX 480 using Vulkan. CUDA is much faster for prompt ingestion (though Vulkan doesn't use KHR_coopmat there).

CUDA without flash attention, for reference:

| model | size | params | backend | ngl | test | t/s |
| --- | --- | --- | --- | --- | --- | --- |
| qwen3 4B Q4_K - Medium | 2.37 GiB | 4.02 B | CUDA | 99 | pp512 | 806.22 ± 5.90 |
| qwen3 4B Q4_K - Medium | 2.37 GiB | 4.02 B | CUDA | 99 | tg128 | 71.77 ± 4.07 |

Long version:

Commit 141a908

~/ai/temp/llama.cpp$ bwrap
--ro-bind / /
--dev-bind /dev/dri/card0 /dev/dri/card0
--dev-bind /dev/dri/renderD128 /dev/dri/renderD128
--dev-bind /dev/null /dev/null
--dev-bind /dev/urandom /dev/urandom
--dev-bind /dev/zero /dev/zero
--dir /tmp
./build/bin/llama-bench -m /share/Qwen3-4B-UD-Q4_K_XL.gguf
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon RX 480 Graphics (RADV POLARIS10) (radv) | uma: 0 | fp16: 0 | warp size: 64 | shared memory: 65536 | int dot: 0 | matrix cores: none

model size params backend ngl test t/s
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 pp512 294.50 ± 0.66
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 tg128 37.79 ± 0.11

build: 141a908 (5298)

~/ai/temp/llama.cpp$ bwrap
--ro-bind / /
--dev-bind /dev/dri/card0 /dev/dri/card0
--dev-bind /dev/dri/renderD128 /dev/dri/renderD128
--dev-bind /dev/null /dev/null
--dev-bind /dev/urandom /dev/urandom
--dev-bind /dev/zero /dev/zero
--dir /tmp
./build/bin/llama-bench -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -fa 1
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon RX 480 Graphics (RADV POLARIS10) (radv) | uma: 0 | fp16: 0 | warp size: 64 | shared memory: 65536 | int dot: 0 | matrix cores: none

model size params backend ngl fa test t/s
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 1 pp512 140.58 ± 0.57
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 1 tg128 26.48 ± 0.07

build: 141a908 (5298)

~/ai/temp/llama.cpp$ bwrap
--ro-bind / /
--dev-bind /dev/nvidia0 /dev/nvidia0
--dev-bind /dev/nvidiactl /dev/nvidiactl
--dev-bind /dev/nvidia-uvm /dev/nvidia-uvm
--dev-bind /dev/nvidia-modeset /dev/nvidia-modeset
--dev-bind /dev/null /dev/null
--dev-bind /dev/urandom /dev/urandom
--dev-bind /dev/zero /dev/zero
--dir /tmp
./build/bin/llama-bench -m /share/Qwen3-4B-UD-Q4_K_XL.gguf
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = NVIDIA GeForce RTX 2070 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 0 | matrix cores: none

model size params backend ngl test t/s
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 pp512 461.58 ± 2.73
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 tg128 63.01 ± 1.47

build: 141a908 (5298)

~/ai/temp/llama.cpp$ bwrap
--ro-bind / /
--dev-bind /dev/nvidia0 /dev/nvidia0
--dev-bind /dev/nvidiactl /dev/nvidiactl
--dev-bind /dev/nvidia-uvm /dev/nvidia-uvm
--dev-bind /dev/nvidia-modeset /dev/nvidia-modeset
--dev-bind /dev/null /dev/null
--dev-bind /dev/urandom /dev/urandom
--dev-bind /dev/zero /dev/zero
--dir /tmp
./build/bin/llama-bench -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -fa 1
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = NVIDIA GeForce RTX 2070 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 0 | matrix cores: none

model size params backend ngl fa test t/s
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 1 pp512 139.97 ± 1.06
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 1 tg128 35.87 ± 1.01

build: 141a908 (5298)


Commit 005756a

~/ai/temp/llama.cpp$ bwrap
--ro-bind / /
--dev-bind /dev/dri/card0 /dev/dri/card0
--dev-bind /dev/dri/renderD128 /dev/dri/renderD128
--dev-bind /dev/null /dev/null
--dev-bind /dev/urandom /dev/urandom
--dev-bind /dev/zero /dev/zero
--dir /tmp
./build/bin/llama-bench -m /share/Qwen3-4B-UD-Q4_K_XL.gguf
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon RX 480 Graphics (RADV POLARIS10) (radv) | uma: 0 | fp16: 0 | warp size: 64 | shared memory: 65536 | int dot: 0 | matrix cores: none

model size params backend ngl test t/s
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 pp512 294.41 ± 0.74
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 tg128 37.87 ± 0.49

build: bd417ee8 (5299)

~/ai/temp/llama.cpp$ bwrap
--ro-bind / /
--dev-bind /dev/dri/card0 /dev/dri/card0
--dev-bind /dev/dri/renderD128 /dev/dri/renderD128
--dev-bind /dev/null /dev/null
--dev-bind /dev/urandom /dev/urandom
--dev-bind /dev/zero /dev/zero
--dir /tmp
./build/bin/llama-bench -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -fa 1
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon RX 480 Graphics (RADV POLARIS10) (radv) | uma: 0 | fp16: 0 | warp size: 64 | shared memory: 65536 | int dot: 0 | matrix cores: none

model size params backend ngl fa test t/s
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 1 pp512 230.25 ± 0.30
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 1 tg128 31.31 ± 0.07

build: bd417ee8 (5299)

~/ai/temp/llama.cpp$ bwrap
--ro-bind / /
--dev-bind /dev/nvidia0 /dev/nvidia0
--dev-bind /dev/nvidiactl /dev/nvidiactl
--dev-bind /dev/nvidia-uvm /dev/nvidia-uvm
--dev-bind /dev/nvidia-modeset /dev/nvidia-modeset
--dev-bind /dev/null /dev/null
--dev-bind /dev/urandom /dev/urandom
--dev-bind /dev/zero /dev/zero
--dir /tmp
./build/bin/llama-bench -m /share/Qwen3-4B-UD-Q4_K_XL.gguf
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = NVIDIA GeForce RTX 2070 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 0 | matrix cores: none

model size params backend ngl test t/s
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 pp512 460.96 ± 1.61
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 tg128 63.14 ± 2.21

build: bd417ee8 (5299)

~/ai/temp/llama.cpp$ bwrap
--ro-bind / /
--dev-bind /dev/nvidia0 /dev/nvidia0
--dev-bind /dev/nvidiactl /dev/nvidiactl
--dev-bind /dev/nvidia-uvm /dev/nvidia-uvm
--dev-bind /dev/nvidia-modeset /dev/nvidia-modeset
--dev-bind /dev/null /dev/null
--dev-bind /dev/urandom /dev/urandom
--dev-bind /dev/zero /dev/zero
--dir /tmp
./build/bin/llama-bench -m /share/Qwen3-4B-UD-Q4_K_XL.gguf -fa 1
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = NVIDIA GeForce RTX 2070 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 0 | matrix cores: none

model size params backend ngl fa test t/s
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 1 pp512 444.55 ± 1.55
qwen3 4B Q4_K - Medium 2.37 GiB 4.02 B Vulkan 99 1 tg128 59.65 ± 0.71

build: bd417ee8 (5299)

@netrunnereve (Collaborator) commented May 7, 2025

Anyway, I went and tried this out on my RX 470. With FA turned on, prompt processing becomes slower and inference becomes faster when I make it generate a lot of text. I guess there's a tradeoff here, and this should be quite useful for those new thinking models.

| model | size | params | backend | ngl | threads | main_gpu | sm | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 8 | 1 | none | 0 | pp512 | 183.57 ± 1.11 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 8 | 1 | none | 1 | pp512 | 174.75 ± 1.23 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 8 | 1 | none | 0 | tg128 | 33.85 ± 0.06 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 8 | 1 | none | 1 | tg128 | 33.52 ± 0.03 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 8 | 1 | none | 0 | pp2000 | 177.40 ± 0.20 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 8 | 1 | none | 1 | pp2000 | 85.46 ± 0.12 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 8 | 1 | none | 0 | tg2000 | 18.45 ± 0.00 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 8 | 1 | none | 1 | tg2000 | 30.28 ± 0.39 |

The FA tests are passing on my RX 470, but they're failing on my W8100 when prec=def, as the shaders are trying to do FP16 math on a chip that doesn't support it. The prec=f32 tests are passing on the W8100. In this case we'll either need to disable these FA shaders if the GPU doesn't support FP16, or have two sets of shaders, like how it's done for mul mat and mat vec.

@netrunnereve (Collaborator)

I didn't use llama-bench previously because it doesn't support the -dev option which allows to specify which gpu I want to use.

Oh you can just use the -mg option to set your GPU number and then set -sm none to make it only run on a single GPU.

It seems that the rtx 2070 is still around 1.6x faster than then rx 480 using vulkan. Cuda is much faster for prompt ingestion (vulkan doesn't use KHR_coopmat there though)

Yeah those numbers make more sense now 😉. If you get coopmat2 working it should be much closer to CUDA but I think it's still going to be a bit slower.

@jeffbolznv (Collaborator, Author)

I didn't use llama-bench previously because it doesn't support the -dev option which allows to specify which gpu I want to use.

Oh you can just use the -mg option to set your GPU number and then set -sm none to make it only run on a single GPU.

You can also set the env var GGML_VK_VISIBLE_DEVICES=0 or 1 to hide the other device.
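
A hedged sketch of how such a device-visibility filter can work (illustrative helper, not the actual ggml-vulkan parsing code):

```cpp
#include <cstdint>
#include <cstdlib>
#include <sstream>
#include <string>
#include <unordered_set>
#include <vector>

// Sketch: filter device indices by a comma-separated env var such as
// GGML_VK_VISIBLE_DEVICES=0 or GGML_VK_VISIBLE_DEVICES=0,1.
// If the variable is unset, all devices stay visible.
static std::vector<uint32_t> visible_devices(uint32_t device_count) {
    std::vector<uint32_t> result;
    const char* env = std::getenv("GGML_VK_VISIBLE_DEVICES");
    if (env == nullptr) {
        for (uint32_t i = 0; i < device_count; ++i) result.push_back(i);
        return result;
    }
    std::unordered_set<uint32_t> allowed;
    std::stringstream ss(env);
    std::string tok;
    while (std::getline(ss, tok, ',')) {
        if (!tok.empty()) allowed.insert(static_cast<uint32_t>(std::stoul(tok)));
    }
    for (uint32_t i = 0; i < device_count; ++i) {
        if (allowed.count(i)) result.push_back(i);
    }
    return result;
}
```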

@jeffbolznv (Collaborator, Author)

The FA tests are passing on my RX 470 but they're failing on my W8100 when prec=def as the shaders are trying to do FP16 math on a chip that doesn't support it.

I hadn't realized this was happening; it's the leftover ACC_TYPE in the shader, which is barely used. I've changed the logic to always select the f32 variant for scalar.
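
A minimal sketch of the selection rule this describes, with hypothetical names (each FA pipeline is assumed to exist in f16- and f32-accumulator flavours):

```cpp
// Hypothetical selection logic: which accumulator precision to use for the
// flash attention pipeline. "scalar" here means the non-coopmat path added
// in this PR; the enums and function are illustrative, not the real code.
enum class fa_path { scalar, coopmat1, coopmat2 };
enum class fa_acc  { f16, f32 };

static fa_acc select_fa_accumulator(fa_path path, bool device_supports_fp16, bool prec_f32_requested) {
    if (path == fa_path::scalar) {
        return fa_acc::f32;       // always f32 for the scalar shader
    }
    if (!device_supports_fp16 || prec_f32_requested) {
        return fa_acc::f32;       // no usable FP16 hardware, or f32 precision forced
    }
    return fa_acc::f16;
}
```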

@jeffbolznv jeffbolznv changed the title from "vulkan: scalar flash attention implementation" to "draft: vulkan: scalar flash attention implementation" May 7, 2025
@jeffbolznv (Collaborator, Author)

Set to draft, I have a bit more perf tuning I want to try.

@0cc4m (Collaborator) commented May 7, 2025

This is very exciting. I'll test it across my devices over the next few days.

@Mushoz commented May 7, 2025

I have some 7900 XTX results to share, using the radv Vulkan driver with the Qwen3 32B Q4_K_S model. I am seeing very nice speedups for token generation at longer context depths, but unfortunately prompt processing drops off a cliff:

ggml_vulkan: 0 = AMD Radeon RX 7900 XTX (RADV NAVI31) (radv) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 0 | pp512 | 331.03 ± 0.71 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 0 | tg128 | 35.68 ± 0.04 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 0 | pp512 @ d128 | 326.19 ± 0.23 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 0 | tg128 @ d128 | 35.46 ± 0.01 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 0 | pp512 @ d256 | 322.68 ± 0.42 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 0 | tg128 @ d256 | 35.34 ± 0.01 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 0 | pp512 @ d512 | 311.54 ± 0.32 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 0 | tg128 @ d512 | 34.89 ± 0.01 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 0 | pp512 @ d1024 | 294.65 ± 7.01 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 0 | tg128 @ d1024 | 34.56 ± 0.01 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 0 | pp512 @ d2048 | 296.63 ± 0.19 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 0 | tg128 @ d2048 | 33.10 ± 0.01 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 0 | pp512 @ d4096 | 269.95 ± 0.39 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 0 | tg128 @ d4096 | 29.74 ± 0.01 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 0 | pp512 @ d8192 | 236.21 ± 0.25 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 0 | tg128 @ d8192 | 24.72 ± 0.01 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 0 | pp512 @ d16384 | 182.46 ± 0.17 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 0 | tg128 @ d16384 | 18.59 ± 0.00 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | pp512 | 313.83 ± 0.34 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | tg128 | 35.60 ± 0.01 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | pp512 @ d128 | 300.44 ± 0.44 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | tg128 @ d128 | 35.68 ± 0.01 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | pp512 @ d256 | 300.73 ± 0.30 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | tg128 @ d256 | 35.10 ± 0.01 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | pp512 @ d512 | 282.28 ± 0.27 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | tg128 @ d512 | 34.89 ± 0.00 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | pp512 @ d1024 | 260.11 ± 0.39 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | tg128 @ d1024 | 34.27 ± 0.01 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | pp512 @ d2048 | 222.80 ± 0.15 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | tg128 @ d2048 | 33.24 ± 0.01 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | pp512 @ d4096 | 157.15 ± 0.30 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | tg128 @ d4096 | 31.13 ± 0.01 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | pp512 @ d8192 | 96.81 ± 0.06 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | tg128 @ d8192 | 27.96 ± 0.01 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | pp512 @ d16384 | 53.02 ± 0.05 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | tg128 @ d16384 | 23.11 ± 0.02 |

@0cc4m (Collaborator) commented May 7, 2025

I have some 7900XTX results to share with the radv vulkan driver with the Qwen3 32B Q4_K_S model. I am seeing very nice speedups for token generation at longer context depths, but unfortunately prompt processing drops off a cliff:

That is expected for you: the new flash attention shader doesn't use coopmat1 for matrix core acceleration, which your GPU supports and uses for non-FA prompt processing; that's why it's slower.

I'll look into a coopmat1 version that would fix this at some point, if nobody else gets to it first.

@Mushoz commented May 7, 2025

ROCm numbers for reference, in case they are useful:

ggml_cuda_init: found 1 ROCm devices:
Device 0: AMD Radeon RX 7900 XTX, gfx1100 (0x1100), VMM: no, Wave Size: 32

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | ROCm | 99 | 1 | pp512 | 744.73 ± 1.10 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | ROCm | 99 | 1 | tg128 | 26.07 ± 0.03 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | ROCm | 99 | 1 | pp512 @ d128 | 725.08 ± 14.51 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | ROCm | 99 | 1 | tg128 @ d128 | 25.98 ± 0.02 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | ROCm | 99 | 1 | pp512 @ d256 | 724.20 ± 12.35 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | ROCm | 99 | 1 | tg128 @ d256 | 25.69 ± 0.00 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | ROCm | 99 | 1 | pp512 @ d512 | 713.54 ± 0.97 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | ROCm | 99 | 1 | tg128 @ d512 | 24.93 ± 0.07 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | ROCm | 99 | 1 | pp512 @ d1024 | 689.74 ± 2.81 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | ROCm | 99 | 1 | tg128 @ d1024 | 24.21 ± 0.05 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | ROCm | 99 | 1 | pp512 @ d2048 | 648.16 ± 0.61 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | ROCm | 99 | 1 | tg128 @ d2048 | 24.39 ± 0.03 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | ROCm | 99 | 1 | pp512 @ d4096 | 579.65 ± 1.06 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | ROCm | 99 | 1 | tg128 @ d4096 | 22.81 ± 0.01 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | ROCm | 99 | 1 | pp512 @ d8192 | 472.54 ± 0.75 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | ROCm | 99 | 1 | tg128 @ d8192 | 20.25 ± 0.01 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | ROCm | 99 | 1 | pp512 @ d16384 | 338.52 ± 0.47 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | ROCm | 99 | 1 | tg128 @ d16384 | 16.57 ± 0.00 |

@Mushoz commented May 7, 2025

I'll look into a coopmat1 version that would fix this at some point, if nobody else gets to it first.

That is honestly great to hear, thank you! I think once KV cache quantization is in place and prompt processing performance has been resolved, there really isn't much reason left to use ROCm over Vulkan. Vulkan has shown great token generation performance compared to ROCm.

@netrunnereve (Collaborator)

I hadn't realized this was happening, it's the leftover ACC_TYPE in the shader that's barely used. I've changed the logic to always select the f32 variant for scalar.

Thanks it's passing now!

@jeffbolznv (Collaborator, Author)

It was probably the tile size change, going from 4 to 8 rows when tg only needs one row. I've pushed a change that only uses 1 row when that's all that's needed. I verified this fixed a small regression when running llama-2-7b.Q4_0.gguf. When I ran Qwen3 I hit #13164; were you working around that in your tests?

I also fixed an issue where the last round of optimizations had reintroduced usage of Float16.
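
A hedged sketch of the row-tile choice being described (illustrative constants, not the real dispatch code): token generation has a single query row per head, so an 8-row tile wastes most of its work there, while prompt processing keeps the larger tile.

```cpp
#include <cstdint>

// Illustrative only: choose how many query rows one workgroup processes.
// Small-batch decode (n_q_rows == 1) should not pay for an 8-row tile.
static uint32_t select_fa_rows_per_workgroup(uint32_t n_q_rows) {
    const uint32_t large_tile_rows = 8; // hypothetical tile size used for prompt processing
    if (n_q_rows == 1) {
        return 1;                       // token generation: one row is all that's needed
    }
    return large_tile_rows;
}
```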

@Mushoz commented May 8, 2025

I've pushed a change that only uses 1 row when that's all that's needed. I verified this fixed a small regression when running llama-2-7b.Q4_0.gguf.

Perfect! Recompiling now to retest on my 7900XTX as well. Will let you know as soon as I have the results.

When I ran qwen3 I hit #13164, were you working around that in your tests?

I am testing with the dense 32B model, which is unaffected by that bug. It only impacts the 30B MOE model.

@Mushoz commented May 8, 2025

The benchmark has only just started running and will take a while to complete, but the initial tests show worse performance for both prompt processing and token generation compared to the previous build. So the token generation regression seems to have gotten worse, and the prompt processing improvements have been reduced. I will edit this post with the full results as soon as it's done, but wanted to share some initial numbers:

ggml_vulkan: 0 = AMD Radeon RX 7900 XTX (RADV NAVI31) (radv) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | pp512 | 315.60 ± 0.89 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | tg128 | 33.89 ± 0.03 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | pp512 @ d128 | 303.15 ± 0.14 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | tg128 @ d128 | 33.86 ± 0.02 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | pp512 @ d256 | 305.06 ± 0.26 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | tg128 @ d256 | 33.13 ± 0.02 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | pp512 @ d512 | 298.26 ± 0.47 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | tg128 @ d512 | 32.25 ± 0.01 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | pp512 @ d1024 | 283.61 ± 0.36 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | tg128 @ d1024 | 32.16 ± 0.01 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | pp512 @ d2048 | 254.35 ± 0.52 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | tg128 @ d2048 | 31.11 ± 0.01 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | pp512 @ d4092 | 207.06 ± 0.10 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | tg128 @ d4092 | 29.81 ± 0.01 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | pp512 @ d8192 | 143.00 ± 0.17 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | tg128 @ d8192 | 27.33 ± 0.00 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | pp512 @ d16384 | 85.52 ± 0.18 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | tg128 @ d16384 | 23.68 ± 0.00 |

@jeffbolznv (Collaborator, Author)

Hmm, I don't know what's going on there. I tried Qwen3-14B-Q4_K_M.gguf on my RTX 4070 using the KHR_coopmat path, and I see an improvement vs yesterday with both pp512 @ d1024 and tg128 @ d1024

@wbruna (Contributor) commented May 8, 2025

On my Ryzen 5 3400G iGPU, most tests get a little bit slower, a few improve slightly; the difference seems to be less than the variation between consecutive runs.

ggml_vulkan: 0 = AMD Radeon Vega 11 Graphics (RADV RAVEN) (radv) | uma: 1 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 0 | matrix cores: none

Qwen3-30B-A3B-UD-Q4_K_XL at e660942 :

| type_k | type_v | fa | test | t/s |
| --- | --- | --- | --- | --- |
| q8_0 | q8_0 | 1 | pp512 | 24.57 ± 0.28 |
| q8_0 | q8_0 | 1 | tg128 | 15.39 ± 0.08 |
| q8_0 | q8_0 | 1 | pp2048 | 22.65 ± 0.04 |
| q8_0 | q8_0 | 1 | tg128 | 15.70 ± 0.13 |
| f16 | f16 | 1 | pp512 | 22.81 ± 0.17 |
| f16 | f16 | 1 | tg128 | 15.08 ± 0.10 |
| f16 | f16 | 1 | pp2048 | 22.56 ± 0.08 |
| f16 | f16 | 1 | tg128 | 15.72 ± 0.03 |
| f16 | f16 | 0 | pp512 | 22.97 ± 0.17 |
| f16 | f16 | 0 | tg128 | 15.72 ± 0.07 |
| f16 | f16 | 0 | pp2048 | 23.48 ± 0.09 |
| f16 | f16 | 0 | tg128 | 16.27 ± 0.04 |

llama-3.2-1b-instruct-q8_0 at 20a6246 :

| type_k | type_v | fa | test | t/s |
| --- | --- | --- | --- | --- |
| q8_0 | q8_0 | 1 | pp8192 | 232.88 ± 5.47 |
| q8_0 | q8_0 | 1 | tg128 | 30.29 ± 0.21 |
| f16 | f16 | 1 | pp8192 | 239.87 ± 3.24 |
| f16 | f16 | 1 | tg128 | 29.98 ± 0.08 |
| f16 | f16 | 0 | pp8192 | 268.65 ± 3.35 |
| f16 | f16 | 0 | tg128 | 30.01 ± 0.09 |

@Mushoz commented May 8, 2025

Updated my table above with the full results. Observations:

  1. Token generation has regressed at all (most?) depths versus yesterday's version unfortunately. Especially noticeable at 16k
  2. Prompt processing has regressed at the lower depths versus yesterday's version
  3. Prompt processing has improved at high depths versus yesterday's version

The differences of my setup versus yours:

  1. I am using Q4_K_S versus your Q4_K_M
  2. I am using the 32B model versus your 14B
  3. I am using AMD versus your Nvidia

@daniandtheweb (Contributor) commented May 8, 2025

I've just retested the latest changes on my cards:

Radeon RX 5700 XT

ggml_vulkan: 0 = AMD Radeon RX 5700 XT (RADV NAVI10) (radv) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 65536 | int dot: 0 | matrix cores: none

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | pp512 | 468.06 ± 0.65 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | tg128 | 65.19 ± 0.02 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | pp512 | 444.67 ± 0.34 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | tg128 | 65.28 ± 0.04 |

ggml_vulkan: 0 = AMD Radeon RX 5700 XT (AMD open-source driver) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 32768 | int dot: 0 | matrix cores: none

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | pp512 | 437.86 ± 0.49 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | tg128 | 64.57 ± 0.03 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | pp512 | 257.83 ± 0.33 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | tg128 | 63.34 ± 0.01 |

ggml_vulkan: 0 = AMD Radeon RX 5700 XT (AMD proprietary driver) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 32768 | int dot: 0 | matrix cores: none

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | pp512 | 561.11 ± 0.40 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | tg128 | 71.29 ± 0.03 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | pp512 | 400.18 ± 0.14 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | tg128 | 70.71 ± 0.05 |

Radeon RX 7800 XT

ggml_vulkan: 0 = AMD Radeon RX 7800 XT (RADV NAVI32) (radv) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | pp512 | 1236.76 ± 19.36 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | tg128 | 111.39 ± 0.08 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | pp512 | 1187.56 ± 4.26 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | tg128 | 113.31 ± 0.03 |

ggml_vulkan: 0 = AMD Radeon RX 7800 XT (AMD open-source driver) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 32768 | int dot: 1 | matrix cores: KHR_coopmat

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | pp512 | 2062.42 ± 14.64 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | tg128 | 98.06 ± 0.31 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | pp512 | 1210.71 ± 1.74 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | tg128 | 95.14 ± 0.39 |

ggml_vulkan: 0 = AMD Radeon RX 7800 XT (AMD proprietary driver) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 32768 | int dot: 1 | matrix cores: KHR_coopmat

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | pp512 | 2047.26 ± 20.53 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | tg128 | 97.43 ± 0.32 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | pp512 | 1209.53 ± 1.19 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | tg128 | 94.67 ± 0.28 |

Token generation has improved in almost every scenario; however, there seems to be a constant performance penalty in prompt processing on amdvlk and vulkan_pro. This hardly affects Linux, but since those drivers are based on the Windows ones, the regression may be present there as well.

@jeffbolznv (Collaborator, Author)

Token generation has improved on almost any scenario, however there seems to be a constant performance penalty in prompt processing

Do you just mean that the scalar FA is slower than the KHR_coopmat alternative? This is expected.

I'm going to have very limited availability over the next week, and I don't think anybody has reported a serious performance problem. So I suggest we merge this as-is (after any review fixes) and further tuning can happen later.

@daniandtheweb (Contributor) commented May 9, 2025

What I mean is that I compared my first results with today's, and prompt processing performance on both amdvlk and vulkan_pro on Linux got worse. I'm just pointing this out since these drivers behave almost identically to the AMD driver on Windows (Linux uses radv by default, so it's not an issue there).

This is the result from 005756a:

ggml_vulkan: 0 = AMD Radeon RX 7800 XT (AMD proprietary driver) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 32768 | int dot: 1 | matrix cores: KHR_coopmat

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | pp512 | 2074.83 ± 6.27 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | tg128 | 97.45 ± 0.36 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | pp512 | 1727.79 ± 8.97 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | tg128 | 76.40 ± 0.16 |

And this is from 20a6246:

ggml_vulkan: 0 = AMD Radeon RX 7800 XT (AMD proprietary driver) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 32768 | int dot: 1 | matrix cores: KHR_coopmat

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | pp512 | 2047.26 ± 20.53 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 0 | tg128 | 97.43 ± 0.32 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | pp512 | 1209.53 ± 1.19 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 1 | tg128 | 94.67 ± 0.28 |

The performance hit seems to have started with a6c940b and got worse with further commits. Overall it's amazing that we finally have a flash attention implementation on Vulkan for non-coopmat2 hardware. I'm just commenting on this so there's initial data for some future tuning.

@0cc4m (Collaborator) commented May 9, 2025

ggml_vulkan: 0 = AMD Radeon (TM) Pro VII (RADV VEGA20) (radv) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: none

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | pp512 | 661.44 ± 1.31 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | tg128 | 64.35 ± 0.14 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | pp512 | 605.52 ± 0.75 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | tg128 | 58.28 ± 0.12 |

ggml_vulkan: 0 = Intel(R) Arc(tm) A770 Graphics (DG2) (Intel open-source Mesa driver) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 65536 | int dot: 1 | matrix cores: none

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | pp512 | 707.30 ± 4.01 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | tg128 | 31.12 ± 0.02 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | pp512 | 230.01 ± 0.13 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | tg128 | 22.36 ± 0.01 |

ggml_vulkan: 0 = NVIDIA GeForce RTX 3090 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: NV_coopmat2

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | pp512 | 4280.88 ± 76.19 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | tg128 | 103.67 ± 5.79 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | pp512 | 4581.71 ± 17.68 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | tg128 | 105.89 ± 0.15 |

ggml_vulkan: 0 = NVIDIA GeForce RTX 3090 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: KHR_coopmat

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | pp512 | 3133.75 ± 26.72 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | tg128 | 103.28 ± 5.72 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | pp512 | 2992.94 ± 3.63 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | tg128 | 101.36 ± 0.18 |

ggml_vulkan: 0 = NVIDIA GeForce RTX 3090 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: none

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | pp512 | 1932.35 ± 4.55 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | tg128 | 103.88 ± 4.29 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | pp512 | 1907.39 ± 5.47 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | tg128 | 100.69 ± 0.15 |

Performance is good in my tests.

@0cc4m (Collaborator) commented May 9, 2025

It would be cool if we could figure out the performance regression on AMD non-mesa drivers, but I wouldn't hold up the PR with it. They constantly cause issues. At least performance with them seems pretty good at this point, apart from this problem.

@Mushoz commented May 9, 2025

ggml_vulkan: 0 = AMD Radeon (TM) Pro VII (RADV VEGA20) (radv) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: none
| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | pp512 | 661.44 ± 1.31 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | tg128 | 64.35 ± 0.14 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | pp512 | 605.52 ± 0.75 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | tg128 | 58.28 ± 0.12 |

Your AMD tests are also showing a token generation performance drop after enabling FA. I see the same with the latest build, but that wasn't the case in the earlier version; see here:

ggml_vulkan: 0 = AMD Radeon RX 7900 XTX (RADV NAVI31) (radv) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat
| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 0 | pp512 | 331.03 ± 0.71 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 0 | tg128 | 35.68 ± 0.04 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | pp512 | 313.83 ± 0.34 |
| qwen3 32B Q4_K - Small | 17.48 GiB | 32.76 B | Vulkan | 99 | 1 | tg128 | 35.60 ± 0.01 |

Latest build with regressions included:

ggml_vulkan: 0 = AMD Radeon RX 7900 XTX (RADV NAVI31) (radv) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat
model size params backend ngl fa test t/s
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 pp512 315.60 ± 0.89
qwen3 32B Q4_K - Small 17.48 GiB 32.76 B Vulkan 99 1 tg128 33.89 ± 0.03

@0cc4m
Copy link
Collaborator

0cc4m commented May 9, 2025

Yes, but my initial concern is just that there are no issues with the output and performance is roughly in line with expected numbers. Performance tuning can happen in follow-up PRs.

@Mushoz
Copy link

Mushoz commented May 9, 2025

Yes, but my initial concern is just that there are no issues with the output and performance is roughly in line with expected numbers. Performance tuning can happen in follow-up PRs.

Fair enough. It does seem better not to hold this up any further: even with this regression it's a massive improvement, and it finally lets me drop ROCm completely, since KV cache quantization was the only thing keeping me from moving over to Vulkan.

Just hoping we can get back to the same token generation performance as the earlier version of this PR in a follow-up PR :)

Great job on this massive step forward for the Vulkan backend!

@LostRuins
Copy link
Collaborator

Seems to be working very well. Thanks!

@ross-rosario
Copy link

Can't wait to test this out once merged!

@netrunnereve
Copy link
Collaborator

Compared to my last run, pp2000 is around 30% faster with these new changes, while everything else is pretty close to before. As the others mentioned, optimizations will come eventually, and I think this is good enough to merge.

| model | size | params | backend | ngl | threads | main_gpu | sm | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 8 | 1 | none | 1 | pp512 | 173.22 ± 0.47 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 8 | 1 | none | 1 | tg128 | 34.46 ± 0.03 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 8 | 1 | none | 1 | pp2000 | 113.37 ± 0.19 |
| llama 7B Q4_0 | 3.56 GiB | 6.74 B | Vulkan | 100 | 8 | 1 | none | 1 | tg2000 | 30.54 ± 0.47 |

@0cc4m
Copy link
Collaborator

0cc4m commented May 10, 2025

ggml_vulkan: 0 = AMD Radeon RX 6800 XT (RADV NAVI21) (radv) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 65536 | int dot: 1 | matrix cores: none

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | pp512 | 1455.98 ± 2.36 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 0 | tg128 | 87.02 ± 0.08 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | pp512 | 1385.49 ± 0.29 |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | 1 | tg128 | 81.38 ± 0.12 |

@0cc4m 0cc4m merged commit dc1d2ad into ggml-org:master May 10, 2025
44 checks passed
@Nindaleth
Copy link
Contributor

Nindaleth commented May 10, 2025

Thanks for fixing #12526! Tests with Qwen2.5-Coder-1.5B-Instruct-Q8_0.gguf and Qwen2.5-Coder-14B-Instruct-Q4_K_L.gguf.

ggml_vulkan: 0 = AMD Radeon RX 6700 XT (RADV NAVI22) (radv) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 65536 | int dot: 1 | matrix cores: none

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| qwen2 1.5B Q8_0 | 1.53 GiB | 1.54 B | ROCm | 99 | 0 | pp512 | 4693.34 ± 4.48 |
| qwen2 1.5B Q8_0 | 1.53 GiB | 1.54 B | ROCm | 99 | 0 | tg128 | 114.12 ± 0.07 |
| qwen2 1.5B Q8_0 | 1.53 GiB | 1.54 B | Vulkan | 99 | 0 | pp512 | 1991.92 ± 1.50 |
| qwen2 1.5B Q8_0 | 1.53 GiB | 1.54 B | Vulkan | 99 | 0 | tg128 | 98.12 ± 0.64 |
| qwen2 1.5B Q8_0 | 1.53 GiB | 1.54 B | Vulkan main | 99 | 1 | pp512 | 632.78 ± 24.03 |
| qwen2 1.5B Q8_0 | 1.53 GiB | 1.54 B | Vulkan main | 99 | 1 | tg128 | 9.96 ± 0.07 |
| qwen2 1.5B Q8_0 | 1.53 GiB | 1.54 B | Vulkan PR | 99 | 1 | pp512 | 1875.01 ± 1.59 |
| qwen2 1.5B Q8_0 | 1.53 GiB | 1.54 B | Vulkan PR | 99 | 1 | tg128 | 89.87 ± 0.15 |
| qwen2 1.5B Q8_0 | 1.53 GiB | 1.54 B | ROCm | 99 | 1 | pp512 | 3467.94 ± 2.05 |
| qwen2 1.5B Q8_0 | 1.53 GiB | 1.54 B | ROCm | 99 | 1 | tg128 | 101.82 ± 0.89 |

| model | size | params | backend | ngl | fa | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| qwen2 14B Q4_K - Medium | 8.90 GiB | 14.77 B | ROCm | 99 | 0 | pp512 | 409.84 ± 0.56 |
| qwen2 14B Q4_K - Medium | 8.90 GiB | 14.77 B | ROCm | 99 | 0 | tg128 | 27.27 ± 0.11 |
| qwen2 14B Q4_K - Medium | 8.90 GiB | 14.77 B | Vulkan | 99 | 0 | pp512 | 248.96 ± 0.52 |
| qwen2 14B Q4_K - Medium | 8.90 GiB | 14.77 B | Vulkan | 99 | 0 | tg128 | 29.09 ± 0.17 |
| qwen2 14B Q4_K - Medium | 8.90 GiB | 14.77 B | Vulkan main | 99 | 1 | pp512 | 111.39 ± 2.66 |
| qwen2 14B Q4_K - Medium | 8.90 GiB | 14.77 B | Vulkan main | 99 | 1 | tg128 | 12.33 ± 0.08 |
| qwen2 14B Q4_K - Medium | 8.90 GiB | 14.77 B | Vulkan PR | 99 | 1 | pp512 | 238.57 ± 1.41 |
| qwen2 14B Q4_K - Medium | 8.90 GiB | 14.77 B | Vulkan PR | 99 | 1 | tg128 | 27.16 ± 0.13 |
| qwen2 14B Q4_K - Medium | 8.90 GiB | 14.77 B | ROCm | 99 | 1 | pp512 | 355.37 ± 2.31 |
| qwen2 14B Q4_K - Medium | 8.90 GiB | 14.77 B | ROCm | 99 | 1 | tg128 | 26.23 ± 0.08 |

@soerenkampschroer
Copy link

Just wanted to let you know that on macOS (Intel CPU/AMD GPU) this doesn't seem to work. I tried using flash attention and I'm getting the following error:

common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
ggml_vulkan: Compute pipeline creation failed for flash_attn_f32_f16_D128_aligned_f32accf16
ggml_vulkan: vk::Device::createComputePipeline: ErrorInitializationFailed
libc++abi: terminating due to uncaught exception of type std::out_of_range: unordered_map::at: key not found
[1]    44288 abort      ./llama-server --port 2108 -m  --n-gpu-layers 200 -fa

I'm using MoltenVK v1.3.0 and Vulkan SDK v1.4.313 with an RX 6800, macOS 15.4.1.

./test-backend-ops also crashes here:

FLASH_ATTN_EXT(hsk=64,hsv=64,nh=4,nr=1,kv=512,nb=1,mask=1,max_bias=0.000000,logit_softcap=0.000000,prec=f32,type_KV=f16,permute=[0,1,2,3]): ggml_vulkan: Compute pipeline creation failed for flash_attn_f32_f16_D64_aligned_f32acc_smallrowsf16
ggml_vulkan: vk::Device::createComputePipeline: ErrorInitializationFailed
libc++abi: terminating due to uncaught exception of type std::out_of_range: unordered_map::at: key not found
[1]    44482 abort      ./test-backend-ops

I can provide more logs here or open a separate issue if you want me to.

@jeffbolznv
Copy link
Collaborator Author

Yeah, please file a new issue to track this. Do the validation layers report anything?

SamuelOliveirads pushed a commit to SamuelOliveirads/llama.cpp that referenced this pull request Dec 29, 2025
* Merge vulkan code from mainline up to commit of 6/28/2025

* Vulkan Optimizations and Fixes (ggml-org#8959)

* Optimize Vulkan REPEAT performance

* Use Vulkan GLSL fused multiply-add instruction where possible

* Add GGML_VULKAN_PERF option to output performance data per operator

* Rework and fix Vulkan descriptor set and descriptor pool handling

* Fix float32 concat f16 shader validation error

* Add Vulkan GROUP_NORM eps parameter

* Fix validation error with transfer queue memory barrier flags

* Remove trailing whitespaces

vulkan : do not use tensor->extra (ggml-org#9407)

* vulkan : do not use tensor->extra

This patch allows using the Vulkan backend with the RPC backend as
tensor->extra is no longer used.

Ref: ggml-org#8536

* Adapt GGML_VULKAN_CHECK_RESULTS to extra removal (F1LM1#2)

---------

Co-authored-by: 0cc4m <picard12@live.de>
# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan : fix build (#0)

ggml-ci

Improve Vulkan shader build system (ggml-org#9239)

* Improve Vulkan shader builds system

- Add dependency to vulkan-shaders-gen to rebuild shaders when changing the shader compilation utility.
- Add option to generate debug info for Vulkan shaders to provide shader source to Vulkan shader profiling tools

* remove not required self dependency

ggml : fix build break for the vulkan-debug (ggml-org#9265)

- windows build : Ok.
- linux build : Ok.

Signed-off-by: Changyeon Kim <cyzero.kim@samsung.com>

vulkan: correctly report support for OP_CONT (ggml/946)

test-backend-ops fails because ggml_cont aborts
when invoked passing an unsupported type.

This commit makes ggml_cont tests pass

Signed-off-by: Salvatore Mesoraca <s.mesoraca16@gmail.com>

vulkan: add dryrun support to sin and cos ops (ggml/947)

sin and cos failed test-backend-ops because they
tried to dereference a context pointer that is null
on dry runs.

This commit prevents that segfault.

Signed-off-by: Salvatore Mesoraca <s.mesoraca16@gmail.com>

# Conflicts:
#	ggml/src/ggml-vulkan.cpp

Overlap cmdbuffer creation and cmdbuffer execution in Vulkan backend by submitting smaller cmdbuffers early. (ggml-org#9118)

* Overlap cmdbuffer creation and cmdbuffer execution in Vulkan backend by submitting smaller cmdbuffers early.

* fix compile issues

* Fix issues where the last submit wasn't executed or handled properly.

* remove trailing whitespace

* Repair GGML_VULKAN_CHECK_RESULTS

* Increase submit counter only if actual work has been submitted and increase submit count to 100.

* Fix some nodes are not checked with GGML_VULKAN_CHECK_RESULTS enabled.
# Conflicts:
#	ggml/src/ggml-vulkan.cpp

Enable use to the rebar feature to upload buffers to the device. (ggml-org#9251)

vulkan : argsort barriers must be under uniform control flow (ggml/951)

a return before a barrier (that happens only in some threads in
a workgroup) leads to UB.
While the old code actually works on some devices,
it fails on some others (i.e. "smaller" GPUs).

BTW, I think it would be better to set specialization constants
when the graph is built, in that way the local workgroup
could be sized appropriately.
But it would take a lot of work.

Signed-off-by: Salvatore Mesoraca <s.mesoraca16@gmail.com>

vulkan : fix build for GGML_VULKAN_RUN_TESTS, add TFLOPS to log (ggml/961)

vulkan : multithread pipeline creation (ggml/963)

vulkan : mul_mat: fix UB with small warps (ggml/952)

When the device's warp size is less than 16,
it is possible for loadstride_a (mul_mm.comp:114)
and loadstride_b (mul_mm.comp:115) to be set to 0.
Because they are calculated as: the workgroup size,
multiplied by LOAD_VEC_* (which can be 1) and divided by 16.
And the workgroup size is set to be the same as the
warp/subgroup size.

The loadstride_* variables are used as increments in the
loops that populate the buffers used for the multiplication.

When they are 0 they cause an infinite loop.
But infinite loops without side-effects are UB and the
values of loadstride_* are known at compile time.
So, the compiler quietly optimizes all the loops away.
As a consequence, the buffers are not populated and
the multiplication result is just a matrix with all elements
set to 0.

We prevent the UB by making sure that the workgroup size
will never be less than 16, even if our device has a
smaller warp size (e.g. 8).

Signed-off-by: Salvatore Mesoraca <s.mesoraca16@gmail.com>
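
A minimal sketch of the integer arithmetic described in this commit message (plain C++ standing in for the GLSL; the variable names are illustrative, not the actual shader code):

```cpp
#include <cstdio>
#include <initializer_list>

// With the workgroup sized to the warp and LOAD_VEC == 1, the stride
// workgroup_size * LOAD_VEC / 16 truncates to 0 for warps smaller than 16,
// which is what produced the never-advancing (UB) fill loops.
int main() {
    for (unsigned warp_size : {8u, 16u, 32u}) {
        unsigned workgroup_size = warp_size;  // workgroup matched to the warp size
        unsigned load_vec       = 1;
        unsigned loadstride     = workgroup_size * load_vec / 16;
        printf("warp=%u -> loadstride=%u%s\n", warp_size, loadstride,
               loadstride == 0 ? "  (loop never advances)" : "");
    }
    return 0;
}
```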

vulkan : retry allocation with fallback flags (whisper/2451)

Co-authored-by: Samuel Morris <samuel.morris@artlist.io>

vulkan : improve ggml_vk_create_buffer error handling (ggml-org#9898)

vulkan: Fix newly added tests for permuted mul_mat and 1D im2col (ggml-org#10226)

vulkan: Throttle the number of shader compiles during the build step. (ggml-org#10222)

Fixes ggml-org#9582

Spawning too many concurrent copies of glslc leads to "Failed to create pipes"
errors on Linux. This change applies the same throttling we use for
multithreaded pipeline creation.
# Conflicts:
#	ggml/src/vulkan-shaders/vulkan-shaders-gen.cpp

vulkan: Optimize contiguous copies (ggml-org#10254)

* tests: Fix memory bandwidth calculation for perf tests

Add a flops calculation for flash attention.

Add one GGML_OP_CPY perf test.

* vulkan: Optimize contiguous copies

Add a variant of the copy shader for when the tensors are contiguous. Avoid
the complex addressing calculations, and do four elements per invocation
to hide some other overhead.

Apply similar changes to the scale shader, since scale is always contiguous.

Add a "progress bar" for shader compiles.
# Conflicts:
#	tests/test-backend-ops.cpp

vulkan: Use macros to make the mat mul pipeline creation more concise (ggml-org#10259)

Also add vk_matmul_pipeline2 to hold f16/f32 accumulator versions of a
pipeline. This isn't really used yet.

vulkan: Optimize binary ops (ggml-org#10270)

Reuse the index calculations across all of src0/src1/dst. Add a shader
variant for when src0/src1 are the same dimensions and additional modulus
for src1 aren't needed. Div/mod are slow, so add "fast" div/mod that
have a fast path when the calculation isn't needed or can be done more
cheaply.
# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/vulkan-shaders/acc.comp

ggml : vulkan logs (whisper/2547)

vulkan: Optimize some mat-vec mul quant shaders (ggml-org#10296)

Compute two result elements per workgroup (for Q{4,5}_{0,1}). This reuses
the B loads across the rows and also reuses some addressing calculations.
This required manually partially unrolling the loop, since the compiler
is less willing to unroll outer loops.

Add bounds-checking on the last iteration of the loop. I think this was at
least partly broken before.

Optimize the Q4_K shader to vectorize most loads and reduce the number of
bit twiddling instructions.

Vulkan: Fix device info output format specifiers (ggml-org#10366)

* Vulkan: Fix device info output format specifiers

* Vulkan: Use zu printf specifier for size_t instead of ld

vulkan: remove use of null initializer (ggml-org#10372)

Seems like this isn't working for vulkan-over-metal when the array is sized
by a spec constant. Maybe a spirv-cross limitation?

vulkan: Optimize soft_max (ggml-org#10301)

* vulkan: Optimize soft_max

Large soft_max could already saturate memory, but small/medium sizes were
pretty slow. The bulk of the gains for them comes from using a smaller
workgroup size, and making the workgroup size match the subgroup size also
makes the barriers much cheaper.

Cache some values in locals to avoid refetching/recomputing. And stamp
out a few "template instantiations" so smaller cases will fully unroll.

Add a missing early return for OOB rows. This happens when there are more
than 512 rows and the dispatch is 512 x H.

* vulkan: Further soft_max optimizations

Restore the workgroup size of 512 case, use it for >1024.

Use unrollable loops for more iteration counts.

vulkan: further optimize mul_mat_vec using larger loads (ggml-org#10387)

* vulkan: Use pipeline_robustness to disable robustness in mul_mat_vec.

Add some early returns for nonexistent rows in mul_mat_vec shaders. These
can only be hit when dispatching a 2D grid of workgroups. Fix the logic
for the 2D grid of workgroups to round up.

Enable the pipeline robustness extension if it's available, and use it to
disable robustness for these pipelines. The instructions to do the bounds
checking contend for the same ALU resources as the bit twiddling dequant
instructions.

* vulkan: Add GLSL structure aliases for quant types to allow larger loads

In Vulkan it's not possible to cast pointer types, so instead you have to
declare an aliased binding for the memory with a different type. This
commit adds aliases for the quant formats using 16b ints, and in a few
places where the struct size is a multiple of 4 also using 32b ints.
Currently only q4_k's aliases are used, but others will be used in
subsequent commits.

* vulkan: use larger loads in q5_k and q6_k shaders.

Similar to the optimization I did in q4_k recently, this vectorizes some loads
and reduces the number of bit twiddling instructions.

* vulkan: use larger K step per iteration in mul_mat_vec.

Add vec4 dequantization functions, and use them to do K=8 per iteration in
mul_mat_vec. This uses 16b loads for the quant values and 128b loads for B
which helps reduce the load on the memory system.

The K_PER_ITER==2 logic is still there, just for F16/F32, and really only
because they support unaligned sizes.

Tweak the num_iters/unrolling logic to be simpler and catch a couple missed
unrolling opportunities.

vulkan: copy iq4_nl LUT into shared memory (ggml-org#10409)

vulkan: predicate max operation in soft_max shaders/soft_max (ggml-org#10437)

Fixes ggml-org#10434

vulkan: Fix a vulkan-shaders-gen argument parsing error (ggml-org#10484)

vulkan-shaders-gen was not parsing the --no-clean argument correctly: the
previous code only handled arguments that take a value, and --no-clean does
not take one. This commit adds handling for arguments without values.

vulkan: fix group_norm (ggml-org#10496)

Fix bad calculation of the end of the range. Add a backend test that
covers the bad case (taken from stable diffusion).

Fixes leejet/stable-diffusion.cpp#439.
# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: optimize Q2_K and Q3_K mul_mat_vec (ggml-org#10459)

vulkan: skip integer div/mod in get_offsets for batch_idx==0 (ggml-org#10506)

vulkan: further optimize q5_k mul_mat_vec (ggml-org#10479)

vulkan: Handle GPUs with less shared memory (ggml-org#10468)

There have been reports of failure to compile on systems with <= 32KB
of shared memory (e.g. ggml-org#10037). This change makes the large tile size
fall back to a smaller size if necessary, and makes mul_mat_id fall
back to CPU if there's only 16KB of shared memory.
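
A hypothetical sketch of that fallback policy; the tile names and the medium threshold are illustrative, only the 32 KB and 16 KB figures come from the message above:

```cpp
#include <cstdint>
#include <cstdio>

enum class tile_size { large, medium, small };

// Pick a mul_mat tile size that fits the device's shared memory.
tile_size pick_mul_mat_tile(uint32_t shared_mem_bytes) {
    if (shared_mem_bytes > 32 * 1024) return tile_size::large;
    if (shared_mem_bytes > 16 * 1024) return tile_size::medium;  // illustrative middle step
    return tile_size::small;
}

// With only 16 KB of shared memory, mul_mat_id falls back to the CPU.
bool mul_mat_id_supported_on_gpu(uint32_t shared_mem_bytes) {
    return shared_mem_bytes > 16 * 1024;
}

int main() {
    printf("64 KiB shared mem -> large tile: %d\n",
           pick_mul_mat_tile(64 * 1024) == tile_size::large);
    printf("16 KiB shared mem -> mul_mat_id on GPU: %d\n",
           mul_mat_id_supported_on_gpu(16 * 1024));
    return 0;
}
```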

vulkan: define all quant data structures in types.comp (ggml-org#10440)

vulkan: get the first command buffer submitted sooner (ggml-org#10499)

This is an incremental improvement over ggml-org#9118 to get work to the GPU a bit
sooner. The first part is to start with a smaller number of nodes before
the first submit, and ramp it up to the current 100 nodes/submit. The
second part is to reduce the dryrun overhead for all the nodes that just
need to request descriptor space.

With these changes I get around 1-2% speedup on RTX 4070 combined with my
old Haswell-era CPU.

vulkan: Dynamic subgroup size support for Q6_K mat_vec (ggml-org#10536)

* subgroup 64 version with subgroup add. 15% faster

scalable version

tested for subgroup sizes 16-128

* check for subgroup multiple of 16 and greater than 16

* subgroup sizes are always a power of 2 (KhronosGroup/GLSL#45)

* force 16 sequential threads per block

* make 16 subgroup size a constant

vulkan: optimize and reenable split_k (ggml-org#10637)

Use vector loads when possible in mul_mat_split_k_reduce. Use split_k
when there aren't enough workgroups to fill the shaders.

vulkan: Implement "fast divide" (mul+shift) for unary ops like copy (ggml-org#10642)
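
A minimal sketch of the mul+shift idea (not the backend's actual implementation): precompute an approximate reciprocal once, then each division becomes a multiply and a shift. This particular form is exact when both the divisor and the dividend fit in 16 bits, which covers typical tensor-index arithmetic:

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>

// Fast division by a fixed divisor d: mult = floor(2^32 / d) + 1, then
// x / d == (x * mult) >> 32 for all x < 2^16 when d < 2^16.
struct fastdiv {
    uint64_t mult;
    explicit fastdiv(uint32_t d) : mult((UINT64_C(1) << 32) / d + 1) {}
    uint32_t operator()(uint32_t x) const { return (uint32_t)((x * mult) >> 32); }
};

int main() {
    fastdiv div7(7);
    for (uint32_t x = 0; x < (1u << 16); ++x) {
        assert(div7(x) == x / 7);  // matches integer division over the valid range
    }
    printf("fastdiv matches integer division\n");
    return 0;
}
```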

vulkan: Add VK_NV_cooperative_matrix2 support for mul_mat and flash attention (ggml-org#10206)

# Conflicts:
#	ggml/src/vulkan-shaders/dequant_funcs_cm2.comp
#	ggml/src/vulkan-shaders/flash_attn_cm2.comp
#	ggml/src/vulkan-shaders/mul_mm_cm2.comp

Vulkan: VK_KHR_cooperative_matrix support to speed up prompt processing (ggml-org#10597)

* Vulkan: Implement VK_KHR_cooperative_matrix support in the matrix matrix multiplication shader

* Improve performance with better q4_k and q5_k dequant and store unrolling

* Add Vulkan MUL_MAT and MUL_MAT_ID accumulator precision selection

* Rework mulmat shader selection and compilation logic, avoid compiling shaders that won't get used by device

* Vulkan: Implement accumulator switch for specific mul mat mat shaders

* Vulkan: Unroll more loops for more mul mat mat performance

* Vulkan: Add VK_AMD_shader_core_properties2 support to read Compute Unit count for split_k logic

* Disable coopmat support on AMD proprietary driver

* Remove redundant checks

* Add environment variable GGML_VK_DISABLE_COOPMAT to disable VK_KHR_cooperative_matrix support

* Fix rebase typo

* Fix coopmat2 MUL_MAT_ID pipeline selection
# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: compile a test shader in cmake to check for coopmat2 support (ggml-org#10713)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/ggml-vulkan/CMakeLists.txt
#	ggml/src/vulkan-shaders/test_coopmat2_support.comp

Vulkan: fix NaN in tanh.comp with AMD proprietary driver on Windows (ggml-org#10723)

* Vulkan: fix NaN in tanh.comp

* Faster NaN-free tanh

vulkan: fix compile warnings (ggml-org#10731)

vulkan: disable spirv-opt for coopmat shaders (ggml-org#10763)

There are some bugs in the 1.3.296 SDK, so disable this. It isn't strictly
necessary anyway.

Add missing dependency on vulkan-shaders-gen, so shaders get recompiled when it
changes.

Fix coopmat support reporting when glslc doesn't support NV_coopmat2.

vulkan: dynamic subgroup size for the remaining k quants (ggml-org#10745)

* q5_k

q4_k

q3_k

q2_k

q6_k multi row example

* revert as multi row isn't faster for k quants

vulkan: request round-to-even for fp16 in im2col/rope_head (ggml-org#10767)

Vulkan doesn't mandate a specific rounding mode, but the shader_float_controls
feature allows rounding mode to be requested if the implementation supports it.

Vulkan: Add VK_EXT_subgroup_size_control support to ensure full subgroups for coopmats (ggml-org#10721)

* Vulkan: Add VK_EXT_subgroup_size_control support to ensure full subgroups for coopmats

* Fix subgroup size control extension support check

Add accf32 and accf16 checks for coopmats

* Also disable coopmats on amdvlk

Vulkan: Use improved q4_k and q5_k dequant code in dequant shaders (ggml-org#10798)

vulkan: small mul_mat_vec optimizations (ggml-org#10665)

* double the number of rows per workgroup

* Update ggml-vulkan.cpp

* Vulkan: Add VK_EXT_subgroup_size_control support to ensure full subgroups for coopmats

* only increase the number of rows for amd and subgroup size 64

* fix missing NUM_ROWS for mul_mat_vec_iq4_nl_f16_f32, untested

* use subgroup min and max to check for gcn (requires ggml-org#10721)

* manual merge ggml-vulkan.cpp

* set min and max subgroup size in any case

* Also double the number of rows for Intel GPUs

Change Debug print name

add GGML_ROPE_TYPE_MROPE

rwkv6: add wkv6 support for Vulkan backend (ggml-org#10829)

* rwkv_wkv6 vulkan shader

* RWKV_WKV6 Vulkan op tests passed

Signed-off-by: Molly Sophia <mollysophia379@gmail.com>

* Apply code format changes

Signed-off-by: Molly Sophia <mollysophia379@gmail.com>

* add [[unroll]] and remove unnecessary conditions

* add uma support

* fix errors in EditorConfig Checker

---------

Signed-off-by: Molly Sophia <mollysophia379@gmail.com>
Co-authored-by: Molly Sophia <mollysophia379@gmail.com>
# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/vulkan-shaders/wkv6.comp

vulkan: bugfixes for small subgroup size systems + llvmpipe test (ggml-org#10809)

* ensure mul mat shaders work on systems with subgroup size less than 32

more fixes

add test

* only s_warptile_mmq needs to be run with 32 threads or more
# Conflicts:
#	.github/workflows/build.yml

vulkan : fix soft_max.comp division by zero (whisper/2633)

This change prevents a division by zero error when p.KY is 0.

vulkan: optimize coopmat2 dequant functions (ggml-org#10855)

Change the code to do 16b loads when possible and extract the appropriate
component late, so the code is effectively decoding a pair of elements and
then selecting one. This can allow more commoning to happen in the compiler
when neighboring elements are loaded.

vulkan: build fixes for 32b (ggml-org#10927)

* vulkan: build fixes for 32b

Should fix ggml-org#10923

* vulkan: initialize some buffer/offset variables

examples, ggml : fix GCC compiler warnings (ggml-org#10983)

Warning types fixed (observed under MSYS2 GCC 14.2.0):
* format '%ld' expects argument of type 'long int', but argument has type 'size_t'
* llama.cpp/ggml/src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp:81:46: warning: missing initializer for member '_STARTUPINFOA::lpDesktop' [-Wmissing-field-initializers]  (emitted for all struct field except first)
# Conflicts:
#	examples/export-lora/export-lora.cpp

vulkan: multi-row k quants (ggml-org#10846)

* multi row k quant shaders!

* better row selection

* more row choices

* readjust row selection

* rm_kq=2 by default

vulkan: Use push constant offset to handle misaligned descriptors (ggml-org#10987)

vulkan: im2col and matmul optimizations for stable diffusion (ggml-org#10942)

* tests: Add im2col perf tests

* vulkan: optimize im2col, more elements per thread

* vulkan: increase small tile size for NV_coopmat2

* vulkan: change im2col to 512 elements per workgroup

vulkan: optimize mul_mat for small values of N (ggml-org#10991)

Make the mul_mat_vec shaders support N>1 (as a spec constant, NUM_COLS) where
the batch_strides are overloaded to hold the row strides. Put the loads from the
B matrix in the innermost loop because it should cache better.

Share some code for reducing the result values to memory in mul_mat_vec_base.
# Conflicts:
#	tests/test-backend-ops.cpp

fix: Vulkan shader gen binary path (ggml-org#11037)

Vulkan: Add device-specific blacklist for coopmat for the AMD proprietary driver (ggml-org#11074)

* Vulkan: Add device-specific blacklist for coopmat for the AMD proprietary driver

* Add (TM) to AMD name check

fix lora print

Disable GL_KHR_cooperative_matrix Vulkan extension if not available. (ggml-org#11117)

* Disable GL_KHR_cooperative_matrix Vulkan extension if not available.

* Perform Vulkan extensions checks in a more sensible order

* Remove unnecessary #ifdef directive
# Conflicts:
#	ggml/src/vulkan-shaders/test_coopmat_support.comp

llama: add support for QRWKV6 model architecture (ggml-org#11001)

Vulkan: Fix float16 use on devices without float16 support + fix subgroup_size_control validation error (ggml-org#11161)

* Vulkan: Remove float16 use in shaders

* Fix validation error about subgroup_size_control extension

fix: ggml: fix vulkan-shaders-gen build (ggml-org#10448)

* fix: ggml: fix vulkan-shaders-gen build

The vulkan-shaders-gen target was not being built correctly
in case of cross-compilation.
Other outputs need to be built for the cross compile target,
but vulkan-shaders-gen needs to be built for the host.

* refactor: ggml: Improve vulkan-shaders-gen toolchain setup

- Add GGML_SHADERS_GEN_TOOLCHAIN CMake option.
- Auto-detect host toolchain if not set.

* refactor: ggml: Improve vulkan-shaders-gen toolchain setup

Use configure_file to generate host_toolchain.cmake from template

* fix: ggml: Fix compile error

Fix compile error not finding vulkan-shaders-gen

* fix: vulkan-shaders-gen build and path handling

Fix build issues with vulkan-shaders-gen:
- Add target dependency for correct build order
- Use CMAKE_HOST_SYSTEM_NAME for executable suffix
- Fix MSVC output directory in host toolchain
- Normalize path handling for cross-compilation

* fix: improve host compiler detection in vulkan shader build

Improve host compiler detection for vulkan shader generation:
- Add NO_CMAKE_FIND_ROOT_PATH to all compiler searches
- Consolidate compiler detection logic
- Fix Windows-specific MSVC detection
- Ensure correct compiler search in cross-compilation

* refactor: Simplify CMake function for detecting host compiler

Simplified the CMake function to improve the process of detecting the host compiler.

* fix: Remove unnecessary Vulkan library linkage in CMakeLists.txt

Since `vulkan-shader-gen.cpp` only requires the `glslc` executable
and not the Vulkan headers or libraries, CMakeLists.txt needs to
be corrected.
(See: ecc93d0)

* refactor: Rename host_toolchain.cmake.in

- Rename host_toolchain.cmake.in to cmake/host-toolchain.cmake.in

* refactor: GGML_VULKAN_SHADERS_GEN_TOOLCHAIN

Rename the macro GGML_SHADERS_GEN_TOOLCHAIN to GGML_VULKAN_SHADERS_GEN_TOOLCHAIN
# Conflicts:
#	ggml/src/ggml-vulkan/CMakeLists.txt

vulkan: scale caching for k quants + misc fixes (ggml-org#11081)

* q6_k scale caching

* 16 bit unpack

* q4_k test (slow)

* revert it

* q3_k

* q2_k

* little stuff

* try precalculating products of a and q2_k scales

* Revert "try precalculating products of a and q2_k scales"

This reverts commit 65110b81f23f66331a50c6e889a7c1ab9470a86b.

* unpack should be u16, add vim swap to gitignore (about time)

* better q4_k scales

* q5_k

* better q6_k with separate paths for all threads and partial threads in use, plus some more optimizations

* q2_k better dequant

* q3_k optimizations

* q3_k use hmask simd from cpu avx version

* make the caches happy

* q3_k separate out calculation

* q2_k separate out

* little stuff

* use calc_superblock everywhere

* q2_k optimize scale calculation

* more barriers

vulkan: optimize coopmat2 q2_k dequant function (ggml-org#11130)

vulkan: optimize coopmat2 q4_k/q5_k dequant functions. (ggml-org#11206)

Do masking on whole dwords, fetch all scales at once.

vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl (ggml-org#11166)

* vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl

Shaders are based on cpy.cu.

* vulkan: support copy from q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl to f32

* ggml: copy q->f32 assumes some contiguity in the destination
# Conflicts:
#	ggml/src/ggml-cpu/ggml-cpu.c
#	ggml/src/vulkan-shaders/copy_from_quant.comp
#	ggml/src/vulkan-shaders/copy_to_quant.comp

vulkan: fix coopmat2 flash attention for non-contiguous inputs (ggml-org#11281)

Add code similar to mul_mm_cm2 to force alignment of strides, to avoid
a performance regression.

Add noncontiguous FA tests in test-backend-ops.

Fixes ggml-org#11268.
# Conflicts:
#	tests/test-backend-ops.cpp

vulkan: fix coopmat2 validation failures (ggml-org#11284)

mul mat and flash attention shaders were loading f32 types directly into
A/B matrices, which happens to work but is technically invalid usage.
For FA, we can load it as an Accumulator matrix and convert and this
is not in the inner loop and is cheap enough. For mul mat, it's more
efficient to do this conversion in a separate pass and have the input(s)
be f16.

coopmat2 requires SPIR-V 1.6 (related using to LocalSizeId). LocalSizeId
requires maintenance4 be enabled, and SPIR-V 1.6 requires Vulkan 1.3.

vulkan: fix diag_mask_inf (ggml-org#11323)

With robustBufferAccess disabled, this shader was showing OOB stores. There
is a bounds check in the code, but the workgroup dimensions were reversed vs
CUDA and it was running the wrong number of threads. So fix the workgroup
dimensions and disable robustness for this pipeline.

vulkan: sort shaders for more deterministic binary (ggml-org#11315)

Fixes ggml-org#11306.

Vulkan-run-test: fix mmq_wg_denoms (ggml-org#11343)

There seems to be a copy-and-paste error here.

*mmq_wg_denoms should be used together with *warptile_mmq, instead of
wg_denoms.

vulkan: compile shaders on-demand (ggml-org#11406)

Reduce first-run startup time and memory consumption.

Should fix ggml-org#11339.
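
A hypothetical sketch of the on-demand idea (the pipeline type and compile_pipeline helper are stand-ins, not the backend's real API): compile a pipeline the first time an op needs it and cache it, instead of compiling everything at startup.

```cpp
#include <string>
#include <unordered_map>

struct pipeline {
    // compiled shader module, layout, pipeline handle, ... would live here
};

static pipeline compile_pipeline(const std::string & name) {
    // stand-in for the expensive vkCreateComputePipelines work
    (void) name;
    return pipeline{};
}

static pipeline & get_pipeline(const std::string & name) {
    static std::unordered_map<std::string, pipeline> cache;
    auto it = cache.find(name);
    if (it == cache.end()) {
        it = cache.emplace(name, compile_pipeline(name)).first;
    }
    return it->second;
}

int main() {
    get_pipeline("flash_attn_f32_f16_D128"); // compiled on first use
    get_pipeline("flash_attn_f32_f16_D128"); // returned from the cache
    return 0;
}
```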

vulkan: Catch pipeline creation failure and print an error message (ggml-org#11436)

* vulkan: Catch pipeline creation failure and print an error message

Also, fix some warnings from my on-demand compile change.

* vulkan: fix pipeline creation logging

vulkan: implement initial support for IQ2 and IQ3 quantizations (ggml-org#11360)

* vulkan: initial support for IQ3_S

* vulkan: initial support for IQ3_XXS

* vulkan: initial support for IQ2_XXS

* vulkan: initial support for IQ2_XS

* vulkan: optimize Q3_K by removing branches

* vulkan: implement dequantize variants for coopmat2

* vulkan: initial support for IQ2_S

* vulkan: vertically realign code

* port failing dequant callbacks from mul_mm

* Fix array length mismatches

* vulkan: avoid using workgroup size before it is referenced

* tests: increase timeout for Vulkan llvmpipe backend

---------

Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
# Conflicts:
#	ggml/src/vulkan-shaders/dequant_iq2_s.comp
#	ggml/src/vulkan-shaders/dequant_iq2_xs.comp
#	ggml/src/vulkan-shaders/dequant_iq2_xxs.comp
#	ggml/src/vulkan-shaders/dequant_iq3_s.comp
#	ggml/src/vulkan-shaders/dequant_iq3_xxs.comp

CUDA: non-contiguous (RMS) norm support (ggml-org#11659)

vulkan: use smaller combined allocations to avoid fragmentation (ggml-org#11551)

# Conflicts:
#	ggml/src/ggml-alloc.c

vulkan: initial support for IQ4_XS quantization (ggml-org#11501)

# Conflicts:
#	ggml/src/vulkan-shaders/dequant_iq4_xs.comp

vulkan: optimize coopmat2 iq2/iq3 callbacks (ggml-org#11521)

* vulkan: optimize coopmat2 iq2/iq3 callbacks

* build: trigger CI on GLSL compute shader changes

vulkan: print shared memory size (ggml-org#11719)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: account for lookup tables when checking shared memory size (ggml-org#11502)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: add environment variable GGML_VK_PREFER_HOST_MEMORY to avoid VRAM allocation (ggml-org#11592)

vulkan: linux builds + small subgroup size fixes (ggml-org#11767)

* mm subgroup size

* upload vulkan x86 builds

vulkan: initial support for IQ1_S and IQ1_M quantizations (ggml-org#11528)

* vulkan: initial support for IQ1_S and IQ1_M quantizations

* vulkan: define MMV kernels for IQ1 quantizations

* devops: increase timeout of Vulkan tests again

* vulkan: simplify ifdef for init_iq_shmem
# Conflicts:
#	ggml/src/vulkan-shaders/dequant_iq1_m.comp
#	ggml/src/vulkan-shaders/dequant_iq1_s.comp
#	ggml/src/vulkan-shaders/mul_mat_vec_iq1_m.comp
#	ggml/src/vulkan-shaders/mul_mat_vec_iq1_s.comp

vulkan: support multi/vision rope, and noncontiguous rope (ggml-org#11902)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/vulkan-shaders/rope_multi.comp
#	ggml/src/vulkan-shaders/rope_vision.comp

vulkan: implement several ops relevant for ggml_opt (ggml-org#11769)

* vulkan: support memset_tensor

* vulkan: support GGML_OP_SUM

* vulkan: implement GGML_OP_ARGMAX

* vulkan: implement GGML_OP_SUB

* vulkan: implement GGML_OP_COUNT_EQUAL

* vulkan: implement GGML_OP_OPT_STEP_ADAMW

* vulkan: fix check_results RWKV_WKV6 crash and memory leaks

* vulkan: implement GGML_OP_REPEAT_BACK

* tests: remove invalid test-backend-ops REPEAT_BACK tests

* vulkan: fix COUNT_EQUAL memset using a fillBuffer command
# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/vulkan-shaders/argmax.comp
#	ggml/src/vulkan-shaders/count_equal.comp
#	ggml/src/vulkan-shaders/opt_step_adamw.comp
#	ggml/src/vulkan-shaders/repeat_back.comp
#	ggml/src/vulkan-shaders/sub.comp
#	tests/test-backend-ops.cpp

vulkan: implement more backpropagation operators (ggml-org#11914)

* vulkan: implement GGML_OP_ROPE_BACK

* vulkan: implement GGML_OP_RMS_NORM_BACK

* vulkan: implement GGML_OP_SILU_BACK

* vulkan: implement GGML_OP_SOFTMAX_BACK
# Conflicts:
#	ggml/src/vulkan-shaders/rms_norm_back.comp
#	ggml/src/vulkan-shaders/silu_back.comp
#	ggml/src/vulkan-shaders/soft_max_back.comp

Add memset tensor in all backend interface

SYCL: implement memset ggml backend buffer interface (ggml-org#12580)

* SYCL: implement memset ggml backend buffer interface

* use GGML_ABORT macro

* Do not wait for all queues to finish for memset operation
# Conflicts:
#	ggml/src/ggml-sycl.cpp

add OP sigmoid (ggml-org#12056)

Co-authored-by: Judd <foldl@boxvest.com>
# Conflicts:
#	ggml/src/vulkan-shaders/sigmoid.comp

vulkan: fix assertion when qy_needs_dequant (ggml-org#12068)

Looks like a copy/paste bug from qx_needs_dequant.

vulkan: improve im2col (ggml-org#11826)

* vulkan: improve im2col performance

vulkan: matmul dequantization improvements (ggml-org#12015)

* faster dequant for old quants

* dont use unpack for iq4_nl

* vec2 unpack for q8

vulkan: add specific MMV kernels for IQ2 and IQ3 quants + optimizations (ggml-org#11595)

* vulkan: implement specialized MMV kernels for IQ2 quantizations

* vulkan: add MMV kernels for IQ3 quants

* vulkan: Increase MMV batch size and unroll IQ LUT setup

* vulkan: fix init_iq_shmem for WG sizes larger than tables

* vulkan: common batch size for all I-quants
# Conflicts:
#	ggml/src/vulkan-shaders/mul_mat_vec_iq2_s.comp
#	ggml/src/vulkan-shaders/mul_mat_vec_iq2_xs.comp
#	ggml/src/vulkan-shaders/mul_mat_vec_iq2_xxs.comp
#	ggml/src/vulkan-shaders/mul_mat_vec_iq3_s.comp
#	ggml/src/vulkan-shaders/mul_mat_vec_iq3_xxs.comp

cuda/vulkan: specify fp32-only support for some operations in supports_op (ggml/1129)

ggml-ci

# Conflicts:
#	ggml/src/ggml-cuda.cu
#	tests/test-backend-ops.cpp

mat vec double buffer (ggml-org#12188)

vulkan: fix bug in coopmat1 mul_mat_id (ggml-org#12316)

* tests: run mul_mat_id with a larger N

* vulkan: fix bug in coopmat1 mul_mat_id

Update build.yml for Windows Vulkan builder to use Vulkan 1.4.304 SDK for VK_NV_cooperative_matrix2 support (ggml-org#12301)

vulkan: Adjust coopmat2 tile sizes and selection heuristic (ggml-org#12258)

vulkan: Pad N dimension of B matrix for coopmat2 perf, to avoid bounds checking (ggml-org#12273)

* vulkan: Pad N dimension of B matrix for coopmat2 perf, to avoid bounds checking

vulkan: use fp32 in coopmat2 q4_k dequant function (ggml-org#12309)

vulkan: subgroup size tuning (ggml-org#12087)

* vulkan: subgroup size test

* Vulkan: Add device architecture enum and logic to recognize AMD generations

* vulkan: use new architecture logic to specify subgroup size

* Initial vulkan subgroup size tuning for RDNA3

* vulkan: commonize RDNA subgroup tuning

* vulkan: override subgroup size if required_subgroup_size = 0

* vulkan: disable warp 32 for RDNA3

* vulkan: fine tuned RDNA1 subgroup sizes

* vulkan: adjusted subgroup size map

* vulkan: fixed RDNA2 subgroup map

---------

Co-authored-by: 0cc4m <picard12@live.de>

vulkan: Add N/2 and N/4 optimized paths in coopmat2 shader (ggml-org#12312)

ggml-vulkan: remove unused find_program(glslc) (ggml-org#12416)

It's already found by FindVulkan.cmake in the parent CMakeLists

Vulkan: Default to 1GB allocations instead of 4GB to avoid fragmentation and driver issues (ggml-org#12434)

vulkan: Submit once enough matmul work has been recorded (ggml-org#12406)

I've been seeing significantly worse performance for tg with flash attention
enabled vs disabled, and it seems to be related to the submit heuristic.
Change the heuristic to check how many bytes worth of weight matrix are
used and flush every 100MB, and ramp up after the first few submits.
This seems to resolve the issue, and also increases perf for non-FA a bit.
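
An illustrative sketch of such a byte-based submit heuristic; the ramp-up counts and thresholds here are placeholders, not the values used in the backend:

```cpp
#include <cstdint>
#include <cstdio>

struct submit_tracker {
    uint64_t bytes_since_submit = 0;
    int      submits_done       = 0;

    // Record the weight-matrix bytes a node reads and decide whether to
    // flush the command buffer now.
    bool should_submit(uint64_t weight_bytes) {
        bytes_since_submit += weight_bytes;
        // Placeholder ramp-up: small flushes first so the GPU starts early,
        // then settle on a larger threshold (~100 MB as described above).
        const uint64_t threshold = (submits_done < 3) ? 16ull * 1024 * 1024
                                                      : 100ull * 1024 * 1024;
        if (bytes_since_submit >= threshold) {
            bytes_since_submit = 0;
            ++submits_done;
            return true;
        }
        return false;
    }
};

int main() {
    submit_tracker tracker;
    for (int node = 0; node < 64; ++node) {
        // pretend every node reads 8 MB worth of weight matrices
        if (tracker.should_submit(8ull * 1024 * 1024)) {
            printf("submit after node %d\n", node);
        }
    }
    return 0;
}
```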

vulkan: optimize iq1 coopmat2 dequant functions (ggml-org#12427)

vulkan: workaround for AMD Windows driver 16 bit unpack8 bug (ggml-org#12472)

Vulkan: RTE rounding for cpy to quant (ggml-org#12480)

* Vulkan: RTE rounding for cpy to quant

Co-Authored-By: Jeff Bolz <jbolz@nvidia.com>

* remove trailing whitespace

* avoid duplicating pipeline_cpy_f32_quant

* fix copypasting issue

* remove duplicated code

---------

Co-authored-by: Jeff Bolz <jbolz@nvidia.com>

vulkan: Optimize mul_mat_vec p021 and nc shaders (ggml-org#12505)

* tests: add mul_mat perf/functional tests for p021/nc vulkan shaders

* vulkan: Optimize mul_mat_vec p021 and nc shaders.

These shaders are used in attention calculations, and when the KV cache grows
large they start to dominate the run time. For the nc shader (which is called
with large 'k' dimension), use unrolling and vector loads. For the p021 shader
(which is called with large 'm' and small 'k' dimensions), take advantage of
grouped query attention to reuse loads from the A matrix for the whole group,
and reduce the number of workgroups (too much overhead from tiny dispatches).

Using subgroupAdd in the p021 shader also helps, use that conditionally.
# Conflicts:
#	tests/test-backend-ops.cpp

vulkan: fix mul_mat_vec failure in backend tests (ggml-org#12529)

The OOB calculation could be wrong if the last iteration was during one of
the unrolled loops. Adjust the unrolling counts to avoid this. Add a couple
new backend tests that hit this failure on NVIDIA GPUs.

vulkan: fix coopmat shader generation when cross-compiling (ggml-org#12272)

* vulkan: fix coopmat shader generation when cross-compiling

Previously the status of coopmat{,2} support isn't passed to the
vulkan-shaders-gen project building on the host, which leads to build
failure because of the cross-compiling code expecting coopmat{,2}
shaders that didn't get generated.

Fix this by passing the coopmat{,2} support status to vulkan-shaders
subproject.

Signed-off-by: Icenowy Zheng <uwu@icenowy.me>

* Only call coop-mat shaders once

* Fix whitespace

---------

Signed-off-by: Icenowy Zheng <uwu@icenowy.me>
Co-authored-by: bandoti <141645996+bandoti@users.noreply.github.com>

cmake: improve Vulkan cooperative matrix support checks (whisper/2966)

Co-authored-by: Sandro Hanea <me@sandro.rocks>

cmake : fix whitespace (#0)

Vulkan: Add DP4A MMQ and Q8_1 quantization shader (ggml-org#12135)

* Vulkan: Add DP4A MMQ and Q8_1 quantization shader

* Add q4_0 x q8_1 matrix matrix multiplication support

* Vulkan: Add int8 coopmat MMQ support

* Vulkan: Add q4_1, q5_0 and q5_1 quants, improve integer dot code

* Add GL_EXT_integer_dot_product check

* Remove ggml changes, fix mmq pipeline picker

* Remove ggml changes, restore Intel coopmat behaviour

* Fix glsl compile attempt when integer vec dot is not supported

* Remove redundant code, use non-saturating integer dot, enable all matmul sizes for mmq

* Remove redundant comment

* Fix integer dot check

* Fix compile issue with unsupported int dot glslc

* Update Windows build Vulkan SDK version
# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/vulkan-shaders/mul_mmq.comp
#	ggml/src/vulkan-shaders/mul_mmq_funcs.comp
#	ggml/src/vulkan-shaders/quantize_q8_1.comp
#	ggml/src/vulkan-shaders/test_integer_dot_support.comp

vulkan: fix build when glslc doesn't support coopmat (ggml-org#12683)

Vulkan: Fix mmq int dot float cache size (ggml-org#12722)

vulkan: Implement grouped query attention in the coopmat2 FA shader (ggml-org#12559)

When adjacent batches of Q share the same batches of K/V, batch them into
the same workgroup. For example, when:

dst(128,32,1,1) = FA(q(128,1,32,1), k(128,16640,8,1), v(128,16640,8,1))

previously we would run 32 workgroups computing 1 result each, now we will
run 8 workgroups computing 4 results each.

This doesn't directly translate to better performance (at least when you have
>=32 SMs), but in a subsequent change I'll enable split_k which will scale much
better with 4x fewer workgroups.
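
Working through the arithmetic quoted above as a quick sanity check (a sketch, using only the shapes from the example):

```cpp
#include <cstdio>

int main() {
    // shapes from the example: q(128,1,32,1), k/v(128,16640,8,1)
    const int n_q_heads  = 32;
    const int n_kv_heads = 8;

    const int gqa_ratio  = n_q_heads / n_kv_heads;  // 4 Q heads share each K/V head
    const int workgroups = n_q_heads / gqa_ratio;   // 8 workgroups, 4 results each

    printf("gqa_ratio = %d, workgroups = %d (previously %d)\n",
           gqa_ratio, workgroups, n_q_heads);
    return 0;
}
```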

cmake: remove caching from vulkan coopmat checks (ggml-org#12719)

vulkan: Implement split_k for coopmat2 flash attention. (ggml-org#12627)

When using group query attention, we have one workgroup per KV batch and this
can be very few workgroups (e.g. just 8 in some models). Enable split_k to
spread the work across SMs. This helps a lot when the KV cache is large.
# Conflicts:
#	ggml/src/vulkan-shaders/flash_attn_split_k_reduce.comp

vulkan: Fix missing cmake logic for dot product extension (ggml-org#12721)

vulkan: set cmake minimum and project name in vulkan-shaders (ggml-org#12744)

vulkan: Hybrid waitForFences/getFenceStatus to reduce fence latency (ggml-org#12630)

There seems to be a bubble waking up from waitForFences, which costs a few
percent performance and also increased variance in performance. This change
inserts an "almost_ready" fence when the graph is about 80% complete and we
waitForFences for the almost_ready fence and then spin (with _mm_pauses) waiting
for the final fence to be signaled.
# Conflicts:
#	ggml/src/ggml-vulkan.cpp
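
A hedged sketch of the hybrid wait pattern described in this commit message, using standard Vulkan calls; the helper and fence names are hypothetical, not the backend's code:

```cpp
#include <cstdint>
#include <immintrin.h>
#include <vulkan/vulkan.h>

// Hypothetical helper: block on the early "almost_ready" fence, which is
// signaled when roughly 80% of the graph has executed, then spin-poll the
// final fence to avoid the wake-up bubble of a second blocking wait.
static void wait_hybrid(VkDevice device, VkFence almost_ready, VkFence final_fence) {
    vkWaitForFences(device, 1, &almost_ready, VK_TRUE, UINT64_MAX);
    while (vkGetFenceStatus(device, final_fence) == VK_NOT_READY) {
        _mm_pause(); // x86 spin hint, as in the description above
    }
}
```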

cmake: fix ggml-shaders-gen compiler paths containing spaces (ggml-org#12747)

fixes error for compiler paths with spaces

Vulkan: Tune Vulkan mmq int dot shader for performance (ggml-org#12767)

vulkan: Use unclamped loads for flash attention mask (ggml-org#12720)

nem1 must be a multiple of GGML_KQ_MASK_PAD, and GGML_KQ_MASK_PAD is a multiple
of the number of rows in the matrix. The KV dim is a multiple of the number of
columns for the aligned shader.

vulkan: fix NaN issue in flash attention shader (ggml-org#12776)

Use -FLT_MAX/2 rather than -inf as the initial value for computing the maximum.
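
A small illustration of the failure mode this guards against (assuming masked logits are -inf, which is how the NaN arises): with a fully-masked row the running maximum stays -inf, and exp(x - max) evaluates exp(-inf - (-inf)) = exp(NaN).

```cpp
#include <cfloat>
#include <cmath>
#include <cstdio>

int main() {
    const float masked_logit = -INFINITY;   // a fully-masked position

    const float bad_max  = -INFINITY;       // old initial value for the running max
    const float good_max = -FLT_MAX / 2;    // the fix: large but finite

    printf("exp with -inf init:       %f\n", expf(masked_logit - bad_max));  // nan
    printf("exp with -FLT_MAX/2 init: %f\n", expf(masked_logit - good_max)); // 0
    return 0;
}
```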

vulkan: Use fp16 for the flash attention P*V multiplication (ggml-org#12783)

This is consistent with the ggml-cuda behavior and the mul_mat fallback.

vulkan: In coopmat2 mmq, load q4_k/q5_k scales through shared memory (ggml-org#12833)

q4_k and q5_k had a lot of redundant global loads where the same 16B of
scale information is repeatedly loaded and decoded during each loop iteration.
This change restructures the loops to more explicitly iterate over whole
blocks in the outer loop (with unrolled inner loop) and to copy/decode the
scale data into shared memory once at the start of each outer loop. The copy
is pipelined so the scale load from global memory is relatively cheap.

This improves q4_k/q5_k model prompt processing performance by around 5-7%.
I briefly tried applying this to q6_k and q4_0, and it didn't help for q6_k
and hurt for q4_0.

The big "else" path in mul_mm_cm2.comp that had all the clamped/unclamped
variants isn't used as often as it originally was (e.g. due to the padded_N
change), so I trimmed it down to offset some of the new complexity of the
semi-manual loop unrolling.

vulkan: use aligned loads for flash attention mask (ggml-org#12853)

Rewrite the stride logic for the mask tensor in the FA shader to force the
stride to be aligned, to allow using more efficient loads.

vulkan: enable coopmat2 FA gqa and split_k optimizations more often (ggml-org#12931)

The grouped query attention optimization doesn't require a power-of-two ratio,
the only thing relying on it was the modulo operation written as bitwise &.

split_k need not depend on gqa_ratio - enable it any time there's only one
workgroup in the X dimension. The shader gets the split index from the x coord,
and multiple workgroups in the X dimension (pre-split) indicates a larger
FA operation that wouldn't need splitting.
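
For reference, the identity mentioned above that tied the GQA path to power-of-two ratios is i % n == (i & (n - 1)), which only holds when n is a power of two; a tiny check:

```cpp
#include <cassert>
#include <cstdint>

int main() {
    // n = 8 is a power of two, so i % 8 == (i & 7) for every i.
    for (uint32_t i = 0; i < 1000; ++i) {
        assert(i % 8 == (i & 7u));
    }
    // n = 6 is not: 9 % 6 == 3, but 9 & (6 - 1) == 1.
    assert(9u % 6u == 3u && (9u & 5u) == 1u);
    return 0;
}
```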

vulkan: support noncontiguous rms_norm (ggml-org#13031)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: matmul gcn tuning (ggml-org#13016)

* tune matmul for gcn

* this one is more power efficient

* Update ggml/src/ggml-vulkan/ggml-vulkan.cpp

Co-authored-by: 0cc4m <picard12@live.de>

* disable this tune for the proprietary driver

---------

Co-authored-by: 0cc4m <picard12@live.de>

vulkan: use uint array index to avoid glslang bug (ggml-org#13193)

vulkan: Handle src1 batch dimension in non-contiguous mat-vec-mul shader (ggml-org#13191)

* vulkan: Handle src1 batch dimension in non-contiguous mat-vec-mul shader

vulkan: Add bfloat16 support (ggml-org#12554)

* vulkan: Add bfloat16 support

This adds bfloat16 matrix multiply support based on VK_KHR_shader_bfloat16.
The extension is required for coopmat multiply support, but matrix-vector
multiply trivially promotes bf16 to fp32 and doesn't require the extension.
The copy/get_rows shaders also don't require the extension.

It's probably possible to fall back to non-coopmat and promote to fp32 when
the extension isn't supported, but this change doesn't do that.

The coopmat support also requires a glslc that supports the extension, which
currently requires a custom build.

* vulkan: Support bf16 tensors without the bf16 extension or coopmat support

Compile a variant of the scalar mul_mm shader that will promote the bf16
values to float, and use that when either the bf16 extension or the coopmat
extensions aren't available.

* vulkan: bfloat16 fixes (really works without bfloat16 support now)

* vulkan: fix spirv-val failure and reenable -O
# Conflicts:
#	ggml/src/vulkan-shaders/test_bfloat16_support.comp

vulkan: Additional type support for unary, binary, and copy (ggml-org#13266)

Support f16->f32 copy.
Support f16->f16 and f32->f32 unary ops.
Support all combinations of f16/f32 for src0/src1/dst for add/sub/mul/div.
# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: Allow up to 4096 elements for mul_mat_id row_ids (ggml-org#13326)

This assert fired running Qwen_Qwen3-30B-A3B-Q2_K.gguf:

GGML_ASSERT(nei0 * nei1 <= 3072);

The tensor is 8 x 512. Increase this array size to accommodate.

vulkan: scalar flash attention implementation (ggml-org#13324)

* vulkan: scalar flash attention implementation

* vulkan: always use fp32 for scalar flash attention

* vulkan: use vector loads in scalar flash attention shader

* vulkan: remove PV matrix, helps with register usage

* vulkan: reduce register usage in scalar FA, but perf may be slightly worse

* vulkan: load each Q value once. optimize O reduction. more tuning

* vulkan: support q4_0/q8_0 KV in scalar FA

* CI: increase timeout to accommodate newly-supported tests

* vulkan: for scalar FA, select between 1 and 8 rows

* vulkan: avoid using Float16 capability in scalar FA
# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/vulkan-shaders/flash_attn.comp

vulkan: workaround FA compile failures on macos (ggml-org#13517)

vulkan: KHR_coopmat flash attention (ggml-org#13506)

This shader uses coopmat1 to do the Q*K^T multiply. The P*V multiply is more
difficult for various reasons so I haven't done it. Performance for this
shader is around 2.5x better than for the scalar shader when doing prompt
processing. Some of the benefit may be from other optimizations like staging
through shared memory, or splitting by rows.
# Conflicts:
#	ggml/src/vulkan-shaders/flash_attn_cm1.comp

cmake: simplify vulkan shader test logic (ggml-org#13263)

vulkan: use scalar FA rather than coopmat2 when N==1 (ggml-org#13554)

Add pipeline_acc_f32

vulkan: move common FA code to flash_attn_base.comp (ggml-org#13556)

* vulkan: move common FA code to flash_attn_base.comp

* vulkan: move common FA index/stride setup code to flash_attn_base.comp

* build fix
# Conflicts:
#	ggml/src/vulkan-shaders/flash_attn_base.comp

cmake: use the current build config for vulkan-shaders-gen (ggml-org#13595)

* fix: use the current build config for `vulkan-shaders-gen`

* fix: only pass a valid build type to `--config`

Vulkan: Add f32 accumulator support to quantized mul mat to fix GLM4 32B incoherence (ggml-org#13607)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: fix warnings (ggml-org#13626)

* small fixes

* remove ifdef

use LOG_WARN to replace `std::cerr` (ggml-org#13657)

vulkan: Disable coopmat/coopmat2/bfloat extensions if glslc doesn't support it (ggml-org#13696)

vulkan: support CPY from any type to itself (ggml-org#13695)

Reuse the f16/f32 copy shaders, and just scale the number of elements
according to the type size.

add GGML_LOG_WARN

vulkan: mark IM2COL as supporting non-contig (ggml-org#13783)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: use timestamp queries for GGML_VULKAN_PERF (ggml-org#13817)

Also change it to be controlled by an env var rather than cmake flag

vulkan : Remove unexpected ; (ggml/1253)

vulkan: fix warnings in perf logger querypool code (ggml-org#13937)

ggml-vulkan: adds support for op CONV_TRANSPOSE_1D (ggml-org#13813)

* * ggml-vulkan: adds op CONV_TRANSPOSE_1D

* test-backend-ops: adds more sophisticated tests for CONV_TRANSPOSE_1D

* Missing barrier added to shader.
Number of additional tests reduced to 108.

* * Fixes typo in variable name.

* Removes extra whitespaces.

* Adds int64->int32 casts to prevent possible warnings.

* Problem size reduced in tests to pass tests with llvmpipe.

* supports_op condition moved from unintended position
# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/vulkan-shaders/conv_transpose_1d.comp

vulkan: Enable VK_KHR_cooperative_matrix extension for Intel Xe2 GPUs (ggml-org#14001)

* allowing B580 and U9-288V

* experimenting code to detect Xe2

* allowing coopmat only for Xe2 GPUs

* fixed comment wording

* fixed comment wording

* removed unnecessary driver check

Vulkan: Don't default to CPU device (like llvmpipe), even if no other device is available, to allow fallback to CPU backend (ggml-org#14099)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: force device 0 in CI (ggml-org#14106)

Add GGML_LOG_INFO

vulkan: Track descriptor pools/sets per-context (ggml-org#14109)

Use the same descriptor set layout for all pipelines (MAX_PARAMETER_COUNT == 8)
and move it to the vk_device. Move all the descriptor pool and set tracking to
the context - none of it is specific to pipelines anymore. It has a single vector
of pools and vector of sets, and a single counter to track requests and a single
counter to track use.

vulkan: Better thread-safety for command pools/buffers (ggml-org#14116)

This change moves the command pool/buffer tracking into a vk_command_pool
structure. There are two instances per context (for compute+transfer) and
two instances per device for operations that don't go through a context.
This should prevent separate contexts from stomping on each other.
# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: mutex around vkQueueSubmit (ggml-org#14127)

This fixes the remaining crash in test-thread-safety on my system.

cmake: clean up external project logic for vulkan-shaders-gen (ggml-org#14179)

* Remove install step for vulkan-shaders-gen

* Add install step to normalize msvc with make

* Regenerate modified shaders at build-time
# Conflicts:
#	.github/workflows/build.yml

cmake: remove shader-gen step-targets from ggml-vulkan (ggml-org#14226)

* Remove step-targets from vulkan-shaders-gen

* Unset DESTDIR when building vulkan-shaders-gen

Vulkan: Set device max size for host memory to avoid OOM warning and fallback to CPU buffer (ggml-org#14249)

Add support for VK_EXT_debug_utils to add labels to Vulkan objects. (ggml-org#13792)

* Add support for VK_EXT_debug_utils to add labels to Vulkan objects. In step 1 compute pipelines are getting labeled.

* remove #ifdef for debug utils and add queue marker.
# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: update windows SDK in CI (ggml-org#14334)

vulkan: update windows SDK in release.yml (ggml-org#14344)

# Conflicts:
#	.github/workflows/release.yml

cmake: regen vulkan shaders when shaders-gen sources change (ggml-org#14398)

* Add shaders-gen sources as target deps

vulkan: Fix GGML_VULKAN_SHADER_DEBUG_INFO (ggml-org#14427)

This setting needs to be passed through to vulkan-shaders-gen

vulkan: lock accesses of pinned_memory vector (ggml-org#14333)

vulkan: handle noncontig in the final case of ggml_vk_get_cpy_pipeline (ggml-org#14378)

Fix cuda build error

test

* remove  new cpu backend and yml files

* remove new op and GGML_ROPE_TYPE_NEOX

* fix build error

* change cmake file to add matrix operation

* remove coopmat2 check in flash attention

* print gpu info for vulkan

* disable fuse to recover vulkan performance

---------

Co-authored-by: 0cc4m <picard12@live.de>
Co-authored-by: firecoperana <firecoperana>