
vulkan: scale caching for k quants + misc fixes #11081

Merged
netrunnereve merged 25 commits into ggml-org:master from netrunnereve:vulkan on Jan 15, 2025

Conversation

@netrunnereve
Collaborator

We can make inference run a bit faster by extracting the scales in parallel and saving them to shared memory, where they'll be used by all the threads working on the superblock. This came out of the experiments in #10999.

This was not done for Q4_K and Q5_K as their scales are packed in a complicated way which makes this method even slower.

PR:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5112 runs -   232.89 us/run - 117.44 MFLOP/run - 504.27 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   359.69 us/run - 117.44 MFLOP/run - 326.50 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5112 runs -   234.78 us/run - 117.44 MFLOP/run - 500.22 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   313.31 us/run - 117.44 MFLOP/run - 374.84 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   333.78 us/run - 117.44 MFLOP/run - 351.85 GFLOPS
| model | size | params | backend | ngl | threads | main_gpu | sm | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q2_K - Medium | 2.95 GiB | 8.03 B | Vulkan | 100 | 8 | 1 | none | tg128 | 24.78 ± 0.03 |
| llama 8B Q3_K - Medium | 3.74 GiB | 8.03 B | Vulkan | 100 | 8 | 1 | none | tg128 | 21.98 ± 0.02 |
| llama 7B Q6_K | 5.53 GiB | 7.24 B | Vulkan | 100 | 8 | 1 | none | tg128 | 22.27 ± 0.01 |

Master:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   241.10 us/run - 117.44 MFLOP/run - 487.09 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   449.01 us/run - 117.44 MFLOP/run - 261.56 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   235.58 us/run - 117.44 MFLOP/run - 498.51 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   315.21 us/run - 117.44 MFLOP/run - 372.58 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   365.79 us/run - 117.44 MFLOP/run - 321.06 GFLOPS
| model | size | params | backend | ngl | threads | main_gpu | sm | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q2_K - Medium | 2.95 GiB | 8.03 B | Vulkan | 100 | 8 | 1 | none | tg128 | 22.15 ± 0.01 |
| llama 8B Q3_K - Medium | 3.74 GiB | 8.03 B | Vulkan | 100 | 8 | 1 | none | tg128 | 18.97 ± 0.00 |
| llama 7B Q6_K | 5.53 GiB | 7.24 B | Vulkan | 100 | 8 | 1 | none | tg128 | 20.38 ± 0.00 |

@github-actions github-actions bot added Vulkan Issues specific to the Vulkan backend ggml changes relating to the ggml tensor library for machine learning labels Jan 5, 2025
@netrunnereve netrunnereve requested a review from 0cc4m January 5, 2025 02:26
@github-actions github-actions bot added script Script related python python script changes Apple Metal https://en.wikipedia.org/wiki/Metal_(API) labels Jan 5, 2025
@netrunnereve netrunnereve removed script Script related python python script changes Apple Metal https://en.wikipedia.org/wiki/Metal_(API) labels Jan 5, 2025
@jeffbolznv jeffbolznv self-requested a review January 5, 2025 05:36
@jeffbolznv
Collaborator

RTX 4070 results. Keep in mind there's a lot of variability in the results, but at first glance it seems like an improvement for Q3_K but worse for the others:

after:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  17040 runs -    60.51 us/run - 117.44 MFLOP/run -   1.94 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10650 runs -    95.48 us/run - 234.88 MFLOP/run -   2.46 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7952 runs -   126.38 us/run - 352.32 MFLOP/run -   2.79 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6603 runs -   153.03 us/run - 469.76 MFLOP/run -   3.07 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6498 runs -   157.50 us/run - 587.20 MFLOP/run -   3.73 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2568 runs -   402.20 us/run - 939.52 MFLOP/run -   2.34 TFLOPS

  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  12780 runs -    79.41 us/run - 117.44 MFLOP/run -   1.48 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9798 runs -   105.30 us/run - 234.88 MFLOP/run -   2.23 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6816 runs -   148.98 us/run - 352.32 MFLOP/run -   2.36 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5964 runs -   172.48 us/run - 469.76 MFLOP/run -   2.72 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4617 runs -   220.06 us/run - 587.20 MFLOP/run -   2.67 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2675 runs -   384.43 us/run - 939.52 MFLOP/run -   2.44 TFLOPS

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7668 runs -   131.17 us/run - 117.44 MFLOP/run - 895.32 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7668 runs -   133.59 us/run - 234.88 MFLOP/run -   1.76 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6248 runs -   163.28 us/run - 352.32 MFLOP/run -   2.16 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6603 runs -   153.27 us/run - 469.76 MFLOP/run -   3.06 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6156 runs -   164.57 us/run - 587.20 MFLOP/run -   3.57 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3959 runs -   253.39 us/run - 939.52 MFLOP/run -   3.71 TFLOPS

before:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  18744 runs -    54.18 us/run - 117.44 MFLOP/run -   2.17 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  13632 runs -    73.36 us/run - 234.88 MFLOP/run -   3.20 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10508 runs -    95.71 us/run - 352.32 MFLOP/run -   3.68 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8307 runs -   122.29 us/run - 469.76 MFLOP/run -   3.84 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7011 runs -   145.50 us/run - 587.20 MFLOP/run -   4.04 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3531 runs -   284.81 us/run - 939.52 MFLOP/run -   3.30 TFLOPS

  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10224 runs -   104.74 us/run - 117.44 MFLOP/run -   1.12 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   123.00 us/run - 234.88 MFLOP/run -   1.91 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6816 runs -   148.62 us/run - 352.32 MFLOP/run -   2.37 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6177 runs -   162.05 us/run - 469.76 MFLOP/run -   2.90 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4959 runs -   203.90 us/run - 587.20 MFLOP/run -   2.88 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3317 runs -   309.30 us/run - 939.52 MFLOP/run -   3.04 TFLOPS

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10224 runs -   105.69 us/run - 117.44 MFLOP/run -   1.11 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   110.35 us/run - 234.88 MFLOP/run -   2.13 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8236 runs -   122.10 us/run - 352.32 MFLOP/run -   2.89 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7029 runs -   142.71 us/run - 469.76 MFLOP/run -   3.29 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6156 runs -   166.92 us/run - 587.20 MFLOP/run -   3.52 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4601 runs -   217.50 us/run - 939.52 MFLOP/run -   4.32 TFLOPS

@netrunnereve
Collaborator Author

For multiple values of n I'm seeing clear improvements with Q3_K and Q6_K, but Q2_K is much less consistent and in some cases slower than master.

PR:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2982 runs -   337.74 us/run - 234.88 MFLOP/run - 695.45 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   441.88 us/run - 352.32 MFLOP/run - 797.33 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   566.21 us/run - 469.76 MFLOP/run - 829.66 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1368 runs -   740.15 us/run - 587.20 MFLOP/run - 793.36 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1064.79 us/run - 939.52 MFLOP/run - 882.36 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   454.94 us/run - 234.88 MFLOP/run - 516.29 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1988 runs -   539.48 us/run - 352.32 MFLOP/run - 653.08 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1491 runs -   754.86 us/run - 469.76 MFLOP/run - 622.32 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1197 runs -   862.33 us/run - 587.20 MFLOP/run - 680.95 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    856 runs -  1182.12 us/run - 939.52 MFLOP/run - 794.78 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2982 runs -   388.88 us/run - 234.88 MFLOP/run - 603.99 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   464.96 us/run - 352.32 MFLOP/run - 757.74 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   550.29 us/run - 469.76 MFLOP/run - 853.67 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   675.49 us/run - 587.20 MFLOP/run - 869.30 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1070 runs -   966.01 us/run - 939.52 MFLOP/run - 972.59 GFLOPS

Master:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   336.28 us/run - 234.88 MFLOP/run - 698.47 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   458.81 us/run - 352.32 MFLOP/run - 767.90 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   573.96 us/run - 469.76 MFLOP/run - 818.45 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   727.08 us/run - 587.20 MFLOP/run - 807.62 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1067.80 us/run - 939.52 MFLOP/run - 879.87 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   543.67 us/run - 234.88 MFLOP/run - 432.03 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   642.54 us/run - 352.32 MFLOP/run - 548.33 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1278 runs -   885.94 us/run - 469.76 MFLOP/run - 530.24 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1026 runs -  1004.95 us/run - 587.20 MFLOP/run - 584.31 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    856 runs -  1270.78 us/run - 939.52 MFLOP/run - 739.33 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   425.50 us/run - 234.88 MFLOP/run - 552.01 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1988 runs -   537.97 us/run - 352.32 MFLOP/run - 654.91 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   625.29 us/run - 469.76 MFLOP/run - 751.28 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1368 runs -   771.12 us/run - 587.20 MFLOP/run - 761.49 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1076.21 us/run - 939.52 MFLOP/run - 872.99 GFLOPS

I tried computing the A * scale products ahead of time for Q2_K, but it didn't help much. That should also reduce the number of shared memory reads, since the products are stored in registers.

A * scale multiplication cached in registers:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   332.67 us/run - 234.88 MFLOP/run - 706.06 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   443.91 us/run - 352.32 MFLOP/run - 793.69 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   565.84 us/run - 469.76 MFLOP/run - 830.20 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1368 runs -   741.69 us/run - 587.20 MFLOP/run - 791.71 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1071.39 us/run - 939.52 MFLOP/run - 876.92 GFLOPS

@0cc4m
Collaborator

0cc4m commented Jan 7, 2025

I'll post benchmarks at a later point, but this reduces performance on RTX 3090 for q2_k and q6_k. I see small improvements on Radeon Pro VII. Intel still crashes, but only in test-backend-ops -o MUL_MAT. I don't know what's going on there, since test-backend-ops -o MUL_MAT perf passes just fine. Looking at the perf results, it's a small improvement on A770, too.

@jeffbolznv
Collaborator

IMO the crash is still very likely related to the barriers in nonuniform control flow. It really needs to be fixed if we're going to use shared memory here. If the additional branches are causing too many problems then maybe we could change how the work is spread across a workgroup so that the number of iterations is uniform, but that could also affect perf (likely making it worse, I'd guess).

@netrunnereve
Collaborator Author

> If the additional branches are causing too many problems then maybe we could change how the work is spread across a workgroup so that the number of iterations is uniform, but that could also affect perf

To get rid of the branches we could just have the main i loop run with no checks as long as we have enough blocks remaining to use all threads, and then switch to a separate code path for the final multiplications. There's no need to redo the algorithm.

@netrunnereve
Collaborator Author

netrunnereve commented Jan 8, 2025

Okay I've fixed up Q6_K to handle the early return case, and it's now running at 23.3 t/s with a few extra tweaks. @0cc4m can you try this on Intel to see if it prevents the crash?

@jeffbolznv
Collaborator

I tested the latest Q6_K changes on RTX 4070. For llama-bench with llama-2-7b.Q6_K, the perf is basically unchanged, which is not surprising since it's just memory bandwidth-limited. The directed perf results are more interesting:

before:
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  46860 runs -   107.44 us/run - 117.44 MFLOP/run -   1.09 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  45582 runs -   110.08 us/run - 234.88 MFLOP/run -   2.13 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  39760 runs -   126.70 us/run - 352.32 MFLOP/run -   2.78 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  33654 runs -   149.37 us/run - 469.76 MFLOP/run -   3.15 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  30438 runs -   164.95 us/run - 587.20 MFLOP/run -   3.56 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  22684 runs -   221.28 us/run - 939.52 MFLOP/run -   4.25 TFLOPS

after:
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  45156 runs -   112.21 us/run - 117.44 MFLOP/run -   1.05 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  46860 runs -   106.90 us/run - 234.88 MFLOP/run -   2.20 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  43168 runs -   116.55 us/run - 352.32 MFLOP/run -   3.02 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  44304 runs -   113.42 us/run - 469.76 MFLOP/run -   4.14 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  37962 runs -   132.16 us/run - 587.20 MFLOP/run -   4.44 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9202 runs -   544.83 us/run - 939.52 MFLOP/run -   1.72 TFLOPS

So there's a nice boost for larger n, but it just falls off a cliff for n=8. I looked into this, and what's happening is the barriers are causing all the loads of the B matrix to be bunched together, and it's using too many registers. I tried moving all the B loads to the start of the function and saving them in local arrays, and that seems to resolve the issue:

with loads at the top:

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  48564 runs -   104.69 us/run - 117.44 MFLOP/run -   1.12 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  47286 runs -   106.60 us/run - 234.88 MFLOP/run -   2.20 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  40328 runs -   124.44 us/run - 352.32 MFLOP/run -   2.83 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  44091 runs -   113.45 us/run - 469.76 MFLOP/run -   4.14 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  39159 runs -   127.93 us/run - 587.20 MFLOP/run -   4.59 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  22791 runs -   220.12 us/run - 939.52 MFLOP/run -   4.27 TFLOPS

@netrunnereve
Collaborator Author

netrunnereve commented Jan 8, 2025

> So there's a nice boost for larger n, but it just falls off a cliff for n=8.

Hmm, this looks like an Nvidia-only issue; I didn't see this on my end when testing my changes. AMD's tools report that 82/256 vector registers are used in the case with 64 block size, 4 rows, and 8 columns.

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   312.90 us/run - 117.44 MFLOP/run - 375.33 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2982 runs -   372.20 us/run - 234.88 MFLOP/run - 631.06 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   454.42 us/run - 352.32 MFLOP/run - 775.32 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   550.29 us/run - 469.76 MFLOP/run - 853.66 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   669.48 us/run - 587.20 MFLOP/run - 877.11 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1070 runs -   979.36 us/run - 939.52 MFLOP/run - 959.33 GFLOPS

I checked the assembly and at least for me the compiler is interleaving the B loads and the sum FMAs rather than doing them all at once. Also if I do a quick estimation:

temp: 8*4 = 32 registers
B: 4*4 = 16 registers
sum: 4 registers
scales: 4 registers
qs: 4*4 = 16 registers

That's 72 vector registers, and I'd guess we can go up to 100 or so once we include the indexes and so forth. If the compiler kept all the B columns live at once, that would be 16*8 = 128 registers, bringing the total over 200. However, the compiler doesn't need to load all the B values into registers in one go, and it should be smart enough not to spill.

BTW I can definitely make this change to fix the n=8 performance, and I'll do these tweaks in one go once I get confirmation that Intel is working. It's just strange that the compiler runs out of registers in this case, since spilling hurts performance more than smaller loads would.

@github-actions github-actions bot added the script Script related label Jan 9, 2025
@netrunnereve
Collaborator Author

New numbers after fixing the early returns and making some more changes:

| model | size | params | backend | ngl | threads | main_gpu | sm | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q2_K - Medium | 2.95 GiB | 8.03 B | Vulkan | 100 | 8 | 1 | none | tg128 | 28.47 ± 0.05 |
| llama 8B Q3_K - Medium | 3.74 GiB | 8.03 B | Vulkan | 100 | 8 | 1 | none | tg128 | 25.06 ± 0.05 |
| llama 7B Q6_K | 5.53 GiB | 7.24 B | Vulkan | 100 | 8 | 1 | none | tg128 | 23.42 ± 0.03 |
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5112 runs -   210.87 us/run - 117.44 MFLOP/run - 556.94 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   275.80 us/run - 117.44 MFLOP/run - 425.81 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5112 runs -   224.08 us/run - 117.44 MFLOP/run - 524.09 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   300.91 us/run - 117.44 MFLOP/run - 390.29 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   312.18 us/run - 117.44 MFLOP/run - 376.20 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   321.61 us/run - 234.88 MFLOP/run - 730.32 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2982 runs -   367.27 us/run - 234.88 MFLOP/run - 639.53 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   314.11 us/run - 234.88 MFLOP/run - 747.77 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   393.84 us/run - 234.88 MFLOP/run - 596.39 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2982 runs -   364.45 us/run - 234.88 MFLOP/run - 644.48 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   420.31 us/run - 352.32 MFLOP/run - 838.23 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   463.35 us/run - 352.32 MFLOP/run - 760.38 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2556 runs -   423.65 us/run - 352.32 MFLOP/run - 831.63 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   482.50 us/run - 352.32 MFLOP/run - 730.20 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2272 runs -   444.87 us/run - 352.32 MFLOP/run - 791.97 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   609.99 us/run - 469.76 MFLOP/run - 770.11 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   662.96 us/run - 469.76 MFLOP/run - 708.59 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   538.73 us/run - 469.76 MFLOP/run - 871.98 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   627.62 us/run - 469.76 MFLOP/run - 748.48 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1917 runs -   538.51 us/run - 469.76 MFLOP/run - 872.34 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   717.78 us/run - 587.20 MFLOP/run - 818.08 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1368 runs -   765.61 us/run - 587.20 MFLOP/run - 766.97 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   683.26 us/run - 587.20 MFLOP/run - 859.41 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1368 runs -   737.19 us/run - 587.20 MFLOP/run - 796.54 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   664.04 us/run - 587.20 MFLOP/run - 884.28 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1049.30 us/run - 939.52 MFLOP/run - 895.38 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1077.66 us/run - 939.52 MFLOP/run - 871.82 GFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1060.65 us/run - 939.52 MFLOP/run - 885.80 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    963 runs -  1086.90 us/run - 939.52 MFLOP/run - 864.40 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1070 runs -   974.52 us/run - 939.52 MFLOP/run - 964.09 GFLOPS

@netrunnereve
Collaborator Author

Okay this should be everything I think.

@0cc4m
Collaborator

0cc4m commented Jan 12, 2025

Looks good overall now, the biggest positive effect I saw was on q2_k and q3_k on RX 6800 XT and A770. RTX 3090 has some regressions on q2_k, but the model benchmark didn't show a big difference. I think it's fine.

RTX 3090

llama-bench

| model | size | params | backend | ngl | test | t/s (Master) | t/s (PR) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 1B Q2_K - Medium | 411.41 MiB | 1.10 B | Vulkan | 99 | tg128 | 240.59 ± 1.69 | 238.62 ± 9.33 |
| llama 1B Q3_K - Medium | 523.67 MiB | 1.10 B | Vulkan | 99 | tg128 | 241.33 ± 0.78 | 241.57 ± 1.29 |
| llama 1B Q4_K - Medium | 636.18 MiB | 1.10 B | Vulkan | 99 | tg128 | 259.55 ± 1.39 | 255.27 ± 1.66 |
| llama 1B Q5_K - Medium | 745.11 MiB | 1.10 B | Vulkan | 99 | tg128 | 253.01 ± 0.91 | 256.02 ± 11.17 |
| llama 1B Q6_K | 860.86 MiB | 1.10 B | Vulkan | 99 | tg128 | 247.65 ± 1.30 | 244.02 ± 0.84 |
test-backend-ops perf

q2_K

Master:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  17892 runs -    56.22 us/run - 117.44 MFLOP/run -   2.09 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  14058 runs -    71.63 us/run - 234.88 MFLOP/run -   3.28 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11644 runs -    87.65 us/run - 352.32 MFLOP/run -   4.02 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9159 runs -   110.74 us/run - 469.76 MFLOP/run -   4.24 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7353 runs -   136.55 us/run - 587.20 MFLOP/run -   4.30 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5136 runs -   198.87 us/run - 939.52 MFLOP/run -   4.72 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  736 runs -  1361.04 us/run -  60.13 GFLOP/run -  44.18 TFLOPS

PR:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  13632 runs -    76.75 us/run - 117.44 MFLOP/run -   1.53 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11076 runs -    93.39 us/run - 234.88 MFLOP/run -   2.52 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   120.35 us/run - 352.32 MFLOP/run -   2.93 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6603 runs -   153.05 us/run - 469.76 MFLOP/run -   3.07 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5643 runs -   182.61 us/run - 587.20 MFLOP/run -   3.22 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2889 runs -   354.30 us/run - 939.52 MFLOP/run -   2.65 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  722 runs -  1386.19 us/run -  60.13 GFLOP/run -  43.38 TFLOPS

q3_K

Master:

  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10224 runs -    98.18 us/run - 117.44 MFLOP/run -   1.20 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8946 runs -   113.49 us/run - 234.88 MFLOP/run -   2.07 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7668 runs -   131.34 us/run - 352.32 MFLOP/run -   2.68 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7029 runs -   145.76 us/run - 469.76 MFLOP/run -   3.22 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5472 runs -   187.01 us/run - 587.20 MFLOP/run -   3.14 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2996 runs -   337.69 us/run - 939.52 MFLOP/run -   2.78 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  288 runs -  3493.50 us/run -  60.13 GFLOP/run -  17.21 TFLOPS

PR:

  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  12780 runs -    80.31 us/run - 117.44 MFLOP/run -   1.46 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9798 runs -   102.13 us/run - 234.88 MFLOP/run -   2.30 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6816 runs -   152.26 us/run - 352.32 MFLOP/run -   2.31 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5112 runs -   203.74 us/run - 469.76 MFLOP/run -   2.31 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4104 runs -   252.48 us/run - 587.20 MFLOP/run -   2.33 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2889 runs -   359.58 us/run - 939.52 MFLOP/run -   2.61 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  288 runs -  3476.28 us/run -  60.13 GFLOP/run -  17.30 TFLOPS

q6_K

Master:

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  14484 runs -    72.46 us/run - 117.44 MFLOP/run -   1.62 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11928 runs -    86.31 us/run - 234.88 MFLOP/run -   2.72 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9656 runs -   103.78 us/run - 352.32 MFLOP/run -   3.39 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   118.89 us/run - 469.76 MFLOP/run -   3.95 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7695 runs -   132.78 us/run - 587.20 MFLOP/run -   4.42 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5243 runs -   193.19 us/run - 939.52 MFLOP/run -   4.86 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  582 runs -  1720.06 us/run -  60.13 GFLOP/run -  34.96 TFLOPS

PR:

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  14484 runs -    72.02 us/run - 117.44 MFLOP/run -   1.63 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11928 runs -    84.35 us/run - 234.88 MFLOP/run -   2.78 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10508 runs -    96.18 us/run - 352.32 MFLOP/run -   3.66 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8946 runs -   114.39 us/run - 469.76 MFLOP/run -   4.11 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7524 runs -   133.18 us/run - 587.20 MFLOP/run -   4.41 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4708 runs -   217.04 us/run - 939.52 MFLOP/run -   4.33 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  574 runs -  1747.00 us/run -  60.13 GFLOP/run -  34.42 TFLOPS

AMD Radeon RX 6800 XT

llama-bench

| model | size | params | backend | ngl | test | t/s Master | t/s PR |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 1B Q2_K - Medium | 411.41 MiB | 1.10 B | Vulkan | 99 | tg128 | 360.12 ± 0.60 | 412.16 ± 0.40 |
| llama 1B Q3_K - Medium | 523.67 MiB | 1.10 B | Vulkan | 99 | tg128 | 326.16 ± 0.20 | 360.38 ± 1.04 |
| llama 1B Q4_K - Medium | 636.18 MiB | 1.10 B | Vulkan | 99 | tg128 | 335.30 ± 1.09 | 338.49 ± 1.86 |
| llama 1B Q5_K - Medium | 745.11 MiB | 1.10 B | Vulkan | 99 | tg128 | 296.91 ± 0.99 | 296.82 ± 1.70 |
| llama 1B Q6_K | 860.86 MiB | 1.10 B | Vulkan | 99 | tg128 | 281.05 ± 0.59 | 280.95 ± 1.12 |
test-backend-ops perf

q2_K

Master:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  19596 runs -    52.78 us/run - 117.44 MFLOP/run -   2.22 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  14484 runs -    69.55 us/run - 234.88 MFLOP/run -   3.38 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11644 runs -    87.84 us/run - 352.32 MFLOP/run -   4.01 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   107.01 us/run - 469.76 MFLOP/run -   4.39 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7866 runs -   128.39 us/run - 587.20 MFLOP/run -   4.57 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4922 runs -   203.35 us/run - 939.52 MFLOP/run -   4.62 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  170 runs -  5904.59 us/run -  60.13 GFLOP/run -  10.18 TFLOPS

PR:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  22152 runs -    45.51 us/run - 117.44 MFLOP/run -   2.58 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  15336 runs -    65.29 us/run - 234.88 MFLOP/run -   3.60 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11928 runs -    84.27 us/run - 352.32 MFLOP/run -   4.18 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9585 runs -   105.70 us/run - 469.76 MFLOP/run -   4.44 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7695 runs -   131.98 us/run - 587.20 MFLOP/run -   4.45 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4494 runs -   222.98 us/run - 939.52 MFLOP/run -   4.21 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  170 runs -  5896.34 us/run -  60.13 GFLOP/run -  10.20 TFLOPS

q3_K

Master:

  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11076 runs -    93.01 us/run - 117.44 MFLOP/run -   1.26 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   110.52 us/run - 234.88 MFLOP/run -   2.13 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7952 runs -   127.70 us/run - 352.32 MFLOP/run -   2.76 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7029 runs -   145.58 us/run - 469.76 MFLOP/run -   3.23 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6156 runs -   165.48 us/run - 587.20 MFLOP/run -   3.55 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4387 runs -   230.45 us/run - 939.52 MFLOP/run -   4.08 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  140 runs -  7176.04 us/run -  60.13 GFLOP/run -   8.38 TFLOPS

PR:

  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  17040 runs -    61.66 us/run - 117.44 MFLOP/run -   1.90 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  12780 runs -    79.17 us/run - 234.88 MFLOP/run -   2.97 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10508 runs -    96.90 us/run - 352.32 MFLOP/run -   3.64 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   119.89 us/run - 469.76 MFLOP/run -   3.92 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6840 runs -   148.39 us/run - 587.20 MFLOP/run -   3.96 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4387 runs -   230.15 us/run - 939.52 MFLOP/run -   4.08 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  140 runs -  7168.31 us/run -  60.13 GFLOP/run -   8.39 TFLOPS

q6_K

Master:

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  19596 runs -    53.32 us/run - 117.44 MFLOP/run -   2.20 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  14484 runs -    69.96 us/run - 234.88 MFLOP/run -   3.36 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11928 runs -    85.78 us/run - 352.32 MFLOP/run -   4.11 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9798 runs -   103.90 us/run - 469.76 MFLOP/run -   4.52 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8208 runs -   123.18 us/run - 587.20 MFLOP/run -   4.77 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5457 runs -   184.79 us/run - 939.52 MFLOP/run -   5.08 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  158 runs -  6363.64 us/run -  60.13 GFLOP/run -   9.45 TFLOPS

PR:

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  22152 runs -    46.43 us/run - 117.44 MFLOP/run -   2.53 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  17040 runs -    59.73 us/run - 234.88 MFLOP/run -   3.93 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  14200 runs -    71.68 us/run - 352.32 MFLOP/run -   4.92 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10650 runs -    95.09 us/run - 469.76 MFLOP/run -   4.94 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8379 runs -   120.81 us/run - 587.20 MFLOP/run -   4.86 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4922 runs -   204.12 us/run - 939.52 MFLOP/run -   4.60 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  158 runs -  6353.53 us/run -  60.13 GFLOP/run -   9.46 TFLOPS

AMD Radeon Pro VII

llama-bench

| model | size | params | backend | ngl | test | t/s Master | t/s PR |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 1B Q2_K - Medium | 411.41 MiB | 1.10 B | Vulkan | 99 | tg128 | 173.38 ± 0.35 | 188.96 ± 0.67 |
| llama 1B Q3_K - Medium | 523.67 MiB | 1.10 B | Vulkan | 99 | tg128 | 187.53 ± 0.80 | 185.90 ± 0.48 |
| llama 1B Q4_K - Medium | 636.18 MiB | 1.10 B | Vulkan | 99 | tg128 | 189.13 ± 0.65 | 194.97 ± 0.86 |
| llama 1B Q5_K - Medium | 745.11 MiB | 1.10 B | Vulkan | 99 | tg128 | 180.68 ± 0.54 | 185.56 ± 0.55 |
| llama 1B Q6_K | 860.86 MiB | 1.10 B | Vulkan | 99 | tg128 | 177.98 ± 2.14 | 177.90 ± 0.55 |
test-backend-ops perf

q2_K

Master:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10224 runs -   101.66 us/run - 117.44 MFLOP/run -   1.16 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7668 runs -   136.25 us/run - 234.88 MFLOP/run -   1.72 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5964 runs -   174.24 us/run - 352.32 MFLOP/run -   2.02 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4686 runs -   215.95 us/run - 469.76 MFLOP/run -   2.18 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3591 runs -   281.98 us/run - 587.20 MFLOP/run -   2.08 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2354 runs -   432.78 us/run - 939.52 MFLOP/run -   2.17 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   66 runs - 15289.68 us/run -  60.13 GFLOP/run -   3.93 TFLOPS

PR:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11928 runs -    88.16 us/run - 117.44 MFLOP/run -   1.33 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8094 runs -   124.83 us/run - 234.88 MFLOP/run -   1.88 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6248 runs -   161.34 us/run - 352.32 MFLOP/run -   2.18 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4686 runs -   220.51 us/run - 469.76 MFLOP/run -   2.13 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3249 runs -   321.18 us/run - 587.20 MFLOP/run -   1.83 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2140 runs -   484.22 us/run - 939.52 MFLOP/run -   1.94 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   66 runs - 15289.14 us/run -  60.13 GFLOP/run -   3.93 TFLOPS

q3_K

Master:

  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5964 runs -   177.18 us/run - 117.44 MFLOP/run - 662.83 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5112 runs -   211.21 us/run - 234.88 MFLOP/run -   1.11 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   246.23 us/run - 352.32 MFLOP/run -   1.43 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3195 runs -   329.12 us/run - 469.76 MFLOP/run -   1.43 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2736 runs -   385.56 us/run - 587.20 MFLOP/run -   1.52 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2033 runs -   516.76 us/run - 939.52 MFLOP/run -   1.82 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   56 runs - 18338.54 us/run -  60.13 GFLOP/run -   3.28 TFLOPS

PR:

  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   119.03 us/run - 117.44 MFLOP/run - 986.63 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6816 runs -   155.41 us/run - 234.88 MFLOP/run -   1.51 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5112 runs -   202.12 us/run - 352.32 MFLOP/run -   1.74 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3621 runs -   276.86 us/run - 469.76 MFLOP/run -   1.70 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2736 runs -   367.02 us/run - 587.20 MFLOP/run -   1.60 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2140 runs -   467.44 us/run - 939.52 MFLOP/run -   2.01 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   56 runs - 18452.18 us/run -  60.13 GFLOP/run -   3.26 TFLOPS

q6_K

Master:

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6816 runs -   153.24 us/run - 117.44 MFLOP/run - 766.41 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5964 runs -   177.88 us/run - 234.88 MFLOP/run -   1.32 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5112 runs -   206.07 us/run - 352.32 MFLOP/run -   1.71 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3834 runs -   261.47 us/run - 469.76 MFLOP/run -   1.80 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3249 runs -   310.98 us/run - 587.20 MFLOP/run -   1.89 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2247 runs -   449.79 us/run - 939.52 MFLOP/run -   2.09 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   62 runs - 16564.45 us/run -  60.13 GFLOP/run -   3.63 TFLOPS

PR:

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7668 runs -   142.09 us/run - 117.44 MFLOP/run - 826.51 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6390 runs -   165.11 us/run - 234.88 MFLOP/run -   1.42 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5396 runs -   188.69 us/run - 352.32 MFLOP/run -   1.87 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   243.86 us/run - 469.76 MFLOP/run -   1.93 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3591 runs -   282.84 us/run - 587.20 MFLOP/run -   2.08 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2461 runs -   409.23 us/run - 939.52 MFLOP/run -   2.30 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   62 runs - 16584.63 us/run -  60.13 GFLOP/run -   3.63 TFLOPS

Intel A770

llama-bench

| model | size | params | backend | ngl | test | t/s Master | t/s PR |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 1B Q2_K - Medium | 411.41 MiB | 1.10 B | Vulkan | 99 | tg128 | 89.81 ± 0.06 | 102.47 ± 0.13 |
| llama 1B Q3_K - Medium | 523.67 MiB | 1.10 B | Vulkan | 99 | tg128 | 82.08 ± 0.05 | 97.08 ± 0.11 |
| llama 1B Q4_K - Medium | 636.18 MiB | 1.10 B | Vulkan | 99 | tg128 | 88.36 ± 0.04 | 87.97 ± 0.03 |
| llama 1B Q5_K - Medium | 745.11 MiB | 1.10 B | Vulkan | 99 | tg128 | 77.66 ± 0.25 | 91.44 ± 0.26 |
| llama 1B Q6_K | 860.86 MiB | 1.10 B | Vulkan | 99 | tg128 | 62.35 ± 0.08 | 63.25 ± 0.08 |
test-backend-ops perf

q2_K

Master:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   286.83 us/run - 117.44 MFLOP/run - 409.45 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6390 runs -   165.47 us/run - 234.88 MFLOP/run -   1.42 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6532 runs -   157.65 us/run - 352.32 MFLOP/run -   2.23 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5325 runs -   190.01 us/run - 469.76 MFLOP/run -   2.47 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3591 runs -   279.26 us/run - 587.20 MFLOP/run -   2.10 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    535 runs -  2034.61 us/run - 939.52 MFLOP/run - 461.77 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   22 runs - 47439.59 us/run -  60.13 GFLOP/run -   1.27 TFLOPS

PR:

  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   245.55 us/run - 117.44 MFLOP/run - 478.27 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6390 runs -   158.57 us/run - 234.88 MFLOP/run -   1.48 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5396 runs -   188.83 us/run - 352.32 MFLOP/run -   1.87 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4473 runs -   226.73 us/run - 469.76 MFLOP/run -   2.07 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2907 runs -   347.94 us/run - 587.20 MFLOP/run -   1.69 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  3652.90 us/run - 939.52 MFLOP/run - 257.20 GFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   22 runs - 46193.82 us/run -  60.13 GFLOP/run -   1.30 TFLOPS

q3_K

Master:

  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3408 runs -   347.65 us/run - 117.44 MFLOP/run - 337.82 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2982 runs -   367.95 us/run - 234.88 MFLOP/run - 638.35 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2840 runs -   365.56 us/run - 352.32 MFLOP/run - 963.79 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2343 runs -   439.27 us/run - 469.76 MFLOP/run -   1.07 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1368 runs -   782.66 us/run - 587.20 MFLOP/run - 750.27 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    428 runs -  2421.19 us/run - 939.52 MFLOP/run - 388.04 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   22 runs - 47517.68 us/run -  60.13 GFLOP/run -   1.27 TFLOPS

PR:

  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   238.87 us/run - 117.44 MFLOP/run - 491.66 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5964 runs -   178.86 us/run - 234.88 MFLOP/run -   1.31 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4260 runs -   241.10 us/run - 352.32 MFLOP/run -   1.46 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3195 runs -   325.11 us/run - 469.76 MFLOP/run -   1.44 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1710 runs -   605.50 us/run - 587.20 MFLOP/run - 969.77 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    321 runs -  3766.98 us/run - 939.52 MFLOP/run - 249.41 GFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   22 runs - 46954.27 us/run -  60.13 GFLOP/run -   1.28 TFLOPS

q6_K

Master:

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   670.19 us/run - 117.44 MFLOP/run - 175.23 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   2130 runs -   510.59 us/run - 234.88 MFLOP/run - 460.02 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   640.81 us/run - 352.32 MFLOP/run - 549.81 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   661.87 us/run - 469.76 MFLOP/run - 709.75 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1539 runs -   691.92 us/run - 587.20 MFLOP/run - 848.66 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    642 runs -  1702.87 us/run - 939.52 MFLOP/run - 551.73 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   22 runs - 46237.27 us/run -  60.13 GFLOP/run -   1.30 TFLOPS

PR:

  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   745.08 us/run - 117.44 MFLOP/run - 157.62 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   630.85 us/run - 234.88 MFLOP/run - 372.33 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1704 runs -   665.55 us/run - 352.32 MFLOP/run - 529.37 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1491 runs -   750.69 us/run - 469.76 MFLOP/run - 625.77 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1368 runs -   805.75 us/run - 587.20 MFLOP/run - 728.77 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                    642 runs -  1771.64 us/run - 939.52 MFLOP/run - 530.31 GFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=512,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   22 runs - 46139.23 us/run -  60.13 GFLOP/run -   1.30 TFLOPS

@netrunnereve
Collaborator Author

I generally expect this to benefit compute-limited hardware the most, but the drop in performance on the 3090 is really strange. My guess is that the GPU runs so fast that the bit operations and four unpacks end up taking less time than the barrier overhead. At least on AMD I'm seeing that the sccache values are kept in registers for the FMA loop, so it's not like the GPU is reading from shared memory every time it does a calculation.
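For reference, the caching pattern being discussed looks roughly like this (a hedged sketch, not the exact PR shader; `sccache`, `BLOCK_SIZE`, `NUM_ITERS`, and `scale_idx` are illustrative names). Each invocation unpacks one scale into shared memory, a barrier makes the values visible workgroup-wide, and the FMA loop then reads the cached scales instead of every thread re-extracting all of them:

```glsl
// Illustrative sketch only: names and layout are assumptions, not the PR's code.
shared FLOAT_TYPE sccache[BLOCK_SIZE];

void calc_superblock(const uint ib, const uint tid) {
    // Parallel extraction: each thread unpacks one scale (q6_K-style int8)
    // into shared memory, instead of every thread unpacking all 16.
    sccache[tid] = FLOAT_TYPE(data_a[ib].scales[tid]);
    barrier(); // make the cached scales visible to the whole workgroup

    // FMA loop: all threads reuse the cached scales. The compiler may keep
    // them in registers, so the barrier is the main added cost.
    FLOAT_TYPE sum = FLOAT_TYPE(0.0);
    [[unroll]] for (uint i = 0; i < NUM_ITERS; ++i) {
        sum = fma(b_vals[i], sccache[scale_idx(i)] * d, sum);
    }
    temp[tid] += sum;
}
```

On a fast GPU the barrier can cost more than the redundant per-thread unpacking it removes, which would explain the regression seen on the 3090.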

@jeffbolznv
Collaborator

Results with the latest change on RTX 4070. I tried the phi3 Q4_K model (includes Q5_K and Q6_K) and perf was basically unchanged, which is expected. For the directed tests:

before
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  16188 runs -    62.13 us/run - 117.44 MFLOP/run -   1.89 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10650 runs -    96.44 us/run - 234.88 MFLOP/run -   2.44 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10792 runs -    95.13 us/run - 352.32 MFLOP/run -   3.70 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7242 runs -   140.11 us/run - 469.76 MFLOP/run -   3.35 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5301 runs -   191.16 us/run - 587.20 MFLOP/run -   3.07 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3317 runs -   302.70 us/run - 939.52 MFLOP/run -   3.10 TFLOPS
  
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10224 runs -   104.41 us/run - 117.44 MFLOP/run -   1.12 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8094 runs -   123.99 us/run - 234.88 MFLOP/run -   1.89 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6816 runs -   148.84 us/run - 352.32 MFLOP/run -   2.37 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6390 runs -   161.33 us/run - 469.76 MFLOP/run -   2.91 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4959 runs -   203.25 us/run - 587.20 MFLOP/run -   2.89 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3210 runs -   313.03 us/run - 939.52 MFLOP/run -   3.00 TFLOPS
  
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  20448 runs -    50.53 us/run - 117.44 MFLOP/run -   2.32 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  14910 runs -    68.59 us/run - 234.88 MFLOP/run -   3.42 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11076 runs -    92.24 us/run - 352.32 MFLOP/run -   3.82 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7455 runs -   134.65 us/run - 469.76 MFLOP/run -   3.49 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7524 runs -   133.64 us/run - 587.20 MFLOP/run -   4.39 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5029 runs -   202.53 us/run - 939.52 MFLOP/run -   4.64 TFLOPS
  
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   118.81 us/run - 117.44 MFLOP/run - 988.48 GFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11502 runs -    87.61 us/run - 234.88 MFLOP/run -   2.68 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   109.53 us/run - 352.32 MFLOP/run -   3.22 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7668 runs -   132.13 us/run - 469.76 MFLOP/run -   3.56 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5301 runs -   190.75 us/run - 587.20 MFLOP/run -   3.08 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3531 runs -   284.79 us/run - 939.52 MFLOP/run -   3.30 TFLOPS
  
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   107.66 us/run - 117.44 MFLOP/run -   1.09 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   109.77 us/run - 234.88 MFLOP/run -   2.14 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   121.36 us/run - 352.32 MFLOP/run -   2.90 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7029 runs -   142.58 us/run - 469.76 MFLOP/run -   3.29 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6327 runs -   161.89 us/run - 587.20 MFLOP/run -   3.63 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4601 runs -   221.37 us/run - 939.52 MFLOP/run -   4.24 TFLOPS
  
after
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  14484 runs -    69.36 us/run - 117.44 MFLOP/run -   1.69 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  12354 runs -    81.76 us/run - 234.88 MFLOP/run -   2.87 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7952 runs -   129.14 us/run - 352.32 MFLOP/run -   2.73 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4899 runs -   204.16 us/run - 469.76 MFLOP/run -   2.30 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4617 runs -   220.59 us/run - 587.20 MFLOP/run -   2.66 TFLOPS
  MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3103 runs -   331.47 us/run - 939.52 MFLOP/run -   2.83 TFLOPS
  
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  12780 runs -    78.82 us/run - 117.44 MFLOP/run -   1.49 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10224 runs -   102.02 us/run - 234.88 MFLOP/run -   2.30 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7668 runs -   134.63 us/run - 352.32 MFLOP/run -   2.62 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6390 runs -   158.03 us/run - 469.76 MFLOP/run -   2.97 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4959 runs -   205.47 us/run - 587.20 MFLOP/run -   2.86 TFLOPS
  MUL_MAT(type_a=q3_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3103 runs -   324.29 us/run - 939.52 MFLOP/run -   2.90 TFLOPS
  
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  17892 runs -    58.00 us/run - 117.44 MFLOP/run -   2.02 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  12354 runs -    82.57 us/run - 234.88 MFLOP/run -   2.84 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11360 runs -    89.97 us/run - 352.32 MFLOP/run -   3.92 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8946 runs -   112.37 us/run - 469.76 MFLOP/run -   4.18 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6156 runs -   163.33 us/run - 587.20 MFLOP/run -   3.60 TFLOPS
  MUL_MAT(type_a=q4_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   4173 runs -   239.70 us/run - 939.52 MFLOP/run -   3.92 TFLOPS
  
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   107.72 us/run - 117.44 MFLOP/run -   1.09 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  11502 runs -    87.66 us/run - 234.88 MFLOP/run -   2.68 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   6532 runs -   156.18 us/run - 352.32 MFLOP/run -   2.26 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   7881 runs -   126.93 us/run - 469.76 MFLOP/run -   3.70 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   5130 runs -   195.14 us/run - 587.20 MFLOP/run -   3.01 TFLOPS
  MUL_MAT(type_a=q5_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   3531 runs -   289.89 us/run - 939.52 MFLOP/run -   3.24 TFLOPS
  
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                  10224 runs -   105.40 us/run - 117.44 MFLOP/run -   1.11 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   9372 runs -   106.70 us/run - 234.88 MFLOP/run -   2.20 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8520 runs -   117.63 us/run - 352.32 MFLOP/run -   3.00 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8946 runs -   113.02 us/run - 469.76 MFLOP/run -   4.16 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   8037 runs -   126.20 us/run - 587.20 MFLOP/run -   4.65 TFLOPS
  MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]):                   1819 runs -   560.48 us/run - 939.52 MFLOP/run -   1.68 TFLOPS

Still a regression for NUM_COLS==8 with Q6_K, maybe also to a lesser extent for Q4_K? I guess it doesn't need to block this change.

@netrunnereve
Collaborator Author

Still a regression for NUM_COLS==8 with Q6_K, maybe also to a lesser extent for Q4_K?

The Q6_K case is getting stranger, especially since the 3090 handles the 8 column case fine. I still think it's a compiler issue though.

For Q4_K I have no idea why this is happening when we do the same types of loads and literally reduce the instruction count for the scale calculation.

@netrunnereve
Collaborator Author

I don't know why the full CI isn't starting up after the approval, so I've run it in my fork and it's passing there.

@netrunnereve netrunnereve merged commit adc5dd9 into ggml-org:master Jan 15, 2025
2 checks passed
@netrunnereve netrunnereve deleted the vulkan branch January 15, 2025 19:52
@slaren
Member

slaren commented Jan 15, 2025

The CI only runs when a source file changes; it should be extended to include the Vulkan shaders.
https://github.com/ggerganov/llama.cpp/blob/adc5dd92e8aea98f5e7ac84f6e1bc15de35130b5/.github/workflows/build.yml#L14-L16
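A minimal sketch of the kind of change slaren suggests, extending the `paths` filter in `.github/workflows/build.yml` so shader-only changes also trigger the build job. The surrounding filter entries and the shader directory shown here are illustrative assumptions, not the file's actual contents:

```yaml
on:
  push:
    branches: [master]
    paths:
      # existing-style entries (illustrative):
      - '**/*.cpp'
      - '**/*.h'
      # hypothetical addition so Vulkan shader edits run CI too:
      - 'ggml/src/vulkan-shaders/*.comp'
```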

@neilmehta24

neilmehta24 commented Jan 31, 2025

hey @netrunnereve, I noticed that some Qwen2.5 models on Vulkan experienced an inference speed slowdown after this was merged in. If you have a moment, I would appreciate it if you could take a look at this, in case you know what might be the root cause. I filed an issue here #11559 describing how to reproduce.

Edit:
I am seeing a slowdown for a few different model architectures, not just Qwen2.5

@jeffliu517

@netrunnereve I noticed some performance drop with this commit on an RTX 4080:

build: f11cfdf (4489)
MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]): 29820 runs - 33.68 us/run - 117.44 MFLOP/run - 3.49 TFLOPS
MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]): 21726 runs - 46.80 us/run - 234.88 MFLOP/run - 5.02 TFLOPS
MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]): 17324 runs - 57.76 us/run - 352.32 MFLOP/run - 6.10 TFLOPS
MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]): 14271 runs - 70.80 us/run - 469.76 MFLOP/run - 6.64 TFLOPS
MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]): 11286 runs - 89.33 us/run - 587.20 MFLOP/run - 6.57 TFLOPS
MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]): 7704 runs - 130.67 us/run - 939.52 MFLOP/run - 7.19 TFLOPS
MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]): 8453 runs - 119.15 us/run - 939.52 MFLOP/run - 7.89 TFLOPS

build: adc5dd9 (4490)
MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=1,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]): 23856 runs - 42.53 us/run - 117.44 MFLOP/run - 2.76 TFLOPS
MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=2,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]): 12780 runs - 78.88 us/run - 234.88 MFLOP/run - 2.98 TFLOPS
MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=3,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]): 9940 runs - 103.51 us/run - 352.32 MFLOP/run - 3.40 TFLOPS
MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=4,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]): 7881 runs - 127.77 us/run - 469.76 MFLOP/run - 3.68 TFLOPS
MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=5,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]): 8037 runs - 126.27 us/run - 587.20 MFLOP/run - 4.65 TFLOPS
MUL_MAT(type_a=q2_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]): 4066 runs - 249.75 us/run - 939.52 MFLOP/run - 3.76 TFLOPS
MUL_MAT(type_a=q6_K,type_b=f32,m=4096,n=8,k=14336,bs=[1,1],nr=[1,1],per=[0,1,2,3]): 6313 runs - 158.59 us/run - 939.52 MFLOP/run - 5.92 TFLOPS

@jeffliu517

Running llama-bench with real models, I can see llama 8B Q3_K gains 6.3% while llama 8B Q2_K drops 9% and llama 7B Q6_K drops 3.5%. It seems this commit has a different impact on different ASICs and can introduce side effects on some of them; could you please help confirm the cause?

build: f11cfdf (4489)
ggml_vulkan: 0 = NVIDIA GeForce RTX 4080 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | matrix cores: KHR_coopmat

| model | size | params | backend | ngl | test | t/s |
| --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q2_K - Medium | 2.95 GiB | 8.03 B | Vulkan | 99 | tg128 | 110.30 ± 0.89 |
| llama 8B Q3_K - Medium | 3.74 GiB | 8.03 B | Vulkan | 99 | tg128 | 87.85 ± 0.53 |
| llama 7B Q6_K | 5.15 GiB | 6.74 B | Vulkan | 99 | tg128 | 81.96 ± 0.63 |

build: adc5dd9 (4490)
ggml_vulkan: 0 = NVIDIA GeForce RTX 4080 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | matrix cores: KHR_coopmat

| model | size | params | backend | ngl | test | t/s |
| --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q2_K - Medium | 2.95 GiB | 8.03 B | Vulkan | 99 | tg128 | 100.42 ± 0.96 |
| llama 8B Q3_K - Medium | 3.74 GiB | 8.03 B | Vulkan | 99 | tg128 | 93.04 ± 0.47 |
| llama 7B Q6_K | 5.15 GiB | 6.74 B | Vulkan | 99 | tg128 | 79.05 ± 0.82 |

@netrunnereve
Collaborator Author

It seems this commit has a different impact on different ASICs and can introduce side effects on some of them; could you please help confirm the cause?

Yes, this is expected and similar to what we saw in our tests, where there were good improvements for some GPUs and small regressions for others. See this comment for a bunch of benchmarks on different hardware. The delta for Q2_K on your 4080 is worse than what we saw, but we can't really do anything about that unless we write multiple shaders optimized for different chips.

For the Q6_K n=8 case you're probably seeing the same compiler issue as on the 4070, where you're running out of registers.

arthw pushed a commit to arthw/llama.cpp that referenced this pull request Feb 26, 2025
* q6_k scale caching

* 16 bit unpack

* q4_k test (slow)

* revert it

* q3_k

* q2_k

* little stuff

* try precalculating products of a and q2_k scales

* Revert "try precalculating products of a and q2_k scales"

This reverts commit 65110b81f23f66331a50c6e889a7c1ab9470a86b.

* unpack should be u16, add vim swap to gitignore (about time)

* better q4_k scales

* q5_k

* better q6_k with separate paths for all threads and partial threads in use, plus some more optimizations

* q2_k better dequant

* q3_k optimizations

* q3_k use hmask simd from cpu avx version

* make the caches happy

* q3_k separate out calculation

* q2_k separate out

* little stuff

* use calc_superblock everywhere

* q2_k optimize scale calculation

* more barriers
SamuelOliveirads pushed a commit to SamuelOliveirads/llama.cpp that referenced this pull request Dec 29, 2025
* Merge vulkan code from mainline up to commit of 6/28/2025

* Vulkan Optimizations and Fixes (ggml-org#8959)

* Optimize Vulkan REPEAT performance

* Use Vulkan GLSL fused multiply-add instruction where possible

* Add GGML_VULKAN_PERF option to output performance data per operator

* Rework and fix Vulkan descriptor set and descriptor pool handling

* Fix float32 concat f16 shader validation error

* Add Vulkan GROUP_NORM eps parameter

* Fix validation error with transfer queue memory barrier flags

* Remove trailing whitespaces

vulkan : do not use tensor->extra (ggml-org#9407)

* vulkan : do not use tensor->extra

This patch allows using the Vulkan backend with the RPC backend as
tensor->extra is no longer used.

Ref: ggml-org#8536

* Adapt GGML_VULKAN_CHECK_RESULTS to extra removal (F1LM1#2)

---------

Co-authored-by: 0cc4m <picard12@live.de>
# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan : fix build (#0)

ggml-ci

Improve Vulkan shader build system (ggml-org#9239)

* Improve Vulkan shader builds system

- Add dependency to vulkan-shaders-gen to rebuild shaders when changing the shader compilation utility.
- Add option to generate debug info for Vulkan shaders to provide shader source to Vulkan shader profiling tools

* remove not required self dependency

ggml : fix build break for the vulkan-debug (ggml-org#9265)

- windows build : Ok.
- linux build : Ok.

Signed-off-by: Changyeon Kim <cyzero.kim@samsung.com>

vulkan: correctly report support for OP_CONT (ggml/946)

test-backend-ops fails because ggml_cont aborts
when invoked passing an unsupported type.

This commit makes ggml_cont tests pass

Signed-off-by: Salvatore Mesoraca <s.mesoraca16@gmail.com>

vulkan: add dryrun support to sin and cos ops (ggml/947)

sin and cos failed test-backend-ops because they
tried to dereference a context pointer that is null
on dry runs.

This commit prevents that segfault.

Signed-off-by: Salvatore Mesoraca <s.mesoraca16@gmail.com>

# Conflicts:
#	ggml/src/ggml-vulkan.cpp

Overlap cmdbuffer creation and cmdbuffer execution in Vulkan backend by submitting smaller cmdbuffers early. (ggml-org#9118)

* Overlap cmdbuffer creation and cmdbuffer execution in Vulkan backend by submitting smaller cmdbuffers early.

* fix compile issues

* Fix issues where the last submit wasn't executed or handled properly.

* remove trailing whitespace

* Repair GGML_VULKAN_CHECK_RESULTS

* Increase submit counter only if actual work has been submitted and increase submit count to 100.

* Fix some nodes are not checked with GGML_VULKAN_CHECK_RESULTS enabled.
# Conflicts:
#	ggml/src/ggml-vulkan.cpp

Enable use to the rebar feature to upload buffers to the device. (ggml-org#9251)

vulkan : argsort barriers must be under uniform control flow (ggml/951)

a return before a barrier (that happens only in some threads in
a workgroup) leads to UB.
While the old code actually works on some devices,
it fails on some others (i.e. "smaller" GPUs).

BTW, I think it would be better to set specialization constants
when the graph is built, in that way the local workgroup
could be sized appropriately.
But it would take a lot of work.

Signed-off-by: Salvatore Mesoraca <s.mesoraca16@gmail.com>

vulkan : fix build for GGML_VULKAN_RUN_TESTS, add TFLOPS to log (ggml/961)

vulkan : multithread pipeline creation (ggml/963)

vulkan : mul_mat: fix UB with small warps (ggml/952)

When the device's warp size is less than 16,
it is possible for loadstride_a (mul_mm.comp:114)
and loadstride_b (mul_mm.comp:115) to be set to 0.
Because they are calculated as: the workgroup size,
multiplied by LOAD_VEC_* (which can be 1) and divided by 16.
And the workgroup size is set to be the same as the
warp/subgroup size.

The loadstride_* variables are used as increments in the
loops that populate the buffers used for the multiplication.

When they are 0 they cause an infinite loop.
But infinite loops without side-effects are UB and the
values of loadstride_* are known at compile time.
So, the compiler quietly optimizes all the loops away.
As a consequence, the buffers are not populated and
the multiplication result is just a matrix with all elements
set to 0.

We prevent the UB by making sure that the workgroup size
will never be less than 16, even if our device has a
smaller warp size (e.g. 8).

Signed-off-by: Salvatore Mesoraca <s.mesoraca16@gmail.com>
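The failure mode this commit message describes comes down to one line of integer arithmetic; a sketch of it (the constants are taken from the message above, the function name is mine):

```python
def loadstride(workgroup_size: int, load_vec: int) -> int:
    # mul_mm.comp computes the stride as workgroup size * LOAD_VEC / 16.
    # GLSL integer division truncates, so small warps can yield stride 0,
    # which turns the buffer-fill loops into (UB) infinite loops.
    return workgroup_size * load_vec // 16

print(loadstride(32, 1))  # warp size 32: stride 2, loops terminate
print(loadstride(8, 1))   # warp size 8, LOAD_VEC 1: stride 0, loop never advances
```

The fix clamps the workgroup size to at least 16 so the stride is always nonzero.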

vulkan : retry allocation with fallback flags (whisper/2451)

Co-authored-by: Samuel Morris <samuel.morris@artlist.io>

vulkan : improve ggml_vk_create_buffer error handling (ggml-org#9898)

vulkan: Fix newly added tests for permuted mul_mat and 1D im2col (ggml-org#10226)

vulkan: Throttle the number of shader compiles during the build step. (ggml-org#10222)

Fixes ggml-org#9582

Spawning too many concurrent copies of glslc leads to "Failed to create pipes"
errors on Linux. This change applies the same throttling we use for
multithreaded pipeline creation.
# Conflicts:
#	ggml/src/vulkan-shaders/vulkan-shaders-gen.cpp

vulkan: Optimize contiguous copies (ggml-org#10254)

* tests: Fix memory bandwidth calculation for perf tests

Add a flops calculation for flash attention.

Add one GGML_OP_CPY perf test.

* vulkan: Optimize contiguous copies

Add a variant of the copy shader for when the tensors are contiguous. Avoid
the complex addressing calculations, and do four elements per invocation
to hide some other overhead.

Apply similar changes to the scale shader, since scale is always contiguous.

Add a "progress bar" for shader compiles.
# Conflicts:
#	tests/test-backend-ops.cpp

vulkan: Use macros to make the mat mul pipeline creation more concise (ggml-org#10259)

Also add vk_matmul_pipeline2 to hold f16/f32 accumulator versions of a
pipeline. This isn't really used yet.

vulkan: Optimize binary ops (ggml-org#10270)

Reuse the index calculations across all of src0/src1/dst. Add a shader
variant for when src0/src1 are the same dimensions and additional modulus
for src1 aren't needed. Div/mod are slow, so add "fast" div/mod that
have a fast path when the calculation isn't needed or can be done more
cheaply.
# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/vulkan-shaders/acc.comp

ggml : vulkan logs (whisper/2547)

vulkan: Optimize some mat-vec mul quant shaders (ggml-org#10296)

Compute two result elements per workgroup (for Q{4,5}_{0,1}). This reuses
the B loads across the rows and also reuses some addressing calculations.
This required manually partially unrolling the loop, since the compiler
is less willing to unroll outer loops.

Add bounds-checking on the last iteration of the loop. I think this was at
least partly broken before.

Optimize the Q4_K shader to vectorize most loads and reduce the number of
bit twiddling instructions.

Vulkan: Fix device info output format specifiers (ggml-org#10366)

* Vulkan: Fix device info output format specifiers

* Vulkan: Use zu printf specifier for size_t instead of ld

vulkan: remove use of null initializer (ggml-org#10372)

Seems like this isn't working for vulkan-over-metal when the array is sized
by a spec constant. Maybe a spirv-cross limitation?

vulkan: Optimize soft_max (ggml-org#10301)

* vulkan: Optimize soft_max

Large soft_max could already saturate memory, but small/medium sizes were
pretty slow. The bulk of the gains for them comes from using a smaller
workgroup size, and making the workgroup size match the subgroup size also
makes the barriers much cheaper.

Cache some values in locals to avoid refetching/recomputing. And stamp
out a few "template instantiations" so smaller cases will fully unroll.

Add a missing early return for OOB rows. This happens when there are more
than 512 rows and the dispatch is 512 x H.

* vulkan: Further soft_max optimizations

Restore the workgroup size of 512 case, use it for >1024.

Use unrollable loops for more iteration counts.

vulkan: further optimize mul_mat_vec using larger loads (ggml-org#10387)

* vulkan: Use pipeline_robustness to disable robustness in mul_mat_vec.

Add some early returns for nonexistent rows in mul_mat_vec shaders. These
can only be hit when dispatching a 2D grid of workgroups. Fix the logic
for the 2D grid of workgroups to round up.

Enable the pipeline robustness extension if it's available, and use it to
disable robustness for these pipelines. The instructions to do the bounds
checking contend for the same ALU resources as the bit twiddling dequant
instructions.

* vulkan: Add GLSL structure aliases for quant types to allow larger loads

In Vulkan it's not possible to cast pointer types, so instead you have to
declare an aliased binding for the memory with a different type. This
commit adds aliases for the quant formats using 16b ints, and in a few
places where the struct size is a multiple of 4 also using 32b ints.
Currently only q4_k's aliases are used, but others will be used in
subsequent commits.

* vulkan: use larger loads in q5_k and q6_k shaders.

Similar to the optimization I did in q4_k recently, this vectorizes some loads
and reduces the number of bit twiddling instructions.

* vulkan: use larger K step per iteration in mul_mat_vec.

Add vec4 dequantization functions, and use them to do K=8 per iteration in
mul_mat_vec. This uses 16b loads for the quant values and 128b loads for B
which helps reduce the load on the memory system.

The K_PER_ITER==2 logic is still there, just for F16/F32, and really only
because they support unaligned sizes.

Tweak the num_iters/unrolling logic to be simpler and catch a couple missed
unrolling opportunities.

vulkan: copy iq4_nl LUT into shared memory (ggml-org#10409)

vulkan: predicate max operation in soft_max shaders/soft_max (ggml-org#10437)

Fixes ggml-org#10434

vulkan: Fix a vulkan-shaders-gen argument parsing error (ggml-org#10484)

The vulkan-shaders-gen was not parsing the --no-clean argument correctly.
Because the previous code was parsing the arguments which have a value only
and the --no-clean argument does not have a value, it was not being parsed
correctly. This commit can now correctly parse arguments that don't have values.

vulkan: fix group_norm (ggml-org#10496)

Fix bad calculation of the end of the range. Add a backend test that
covers the bad case (taken from stable diffusion).

Fixes leejet/stable-diffusion.cpp#439.
# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: optimize Q2_K and Q3_K mul_mat_vec (ggml-org#10459)

vulkan: skip integer div/mod in get_offsets for batch_idx==0 (ggml-org#10506)

vulkan: further optimize q5_k mul_mat_vec (ggml-org#10479)

vulkan: Handle GPUs with less shared memory (ggml-org#10468)

There have been reports of failure to compile on systems with <= 32KB
of shared memory (e.g. ggml-org#10037). This change makes the large tile size
fall back to a smaller size if necessary, and makes mul_mat_id fall
back to CPU if there's only 16KB of shared memory.

vulkan: define all quant data structures in types.comp (ggml-org#10440)

vulkan: get the first command buffer submitted sooner (ggml-org#10499)

This is an incremental improvement over ggml-org#9118 to get work to the GPU a bit
sooner. The first part is to start with a smaller number of nodes before
the first submit, and ramp it up to the current 100 nodes/submit. The
second part is to reduce the dryrun overhead for all the nodes that just
need to request descriptor space.

With these changes I get around 1-2% speedup on RTX 4070 combined with my
old Haswell-era CPU.

vulkan: Dynamic subgroup size support for Q6_K mat_vec (ggml-org#10536)

* subgroup 64 version with subgroup add. 15% faster

scalable version

tested for subgroup sizes 16-128

* check for subgroup multiple of 16 and greater than 16

* subgroup sizes are always a power of 2 (KhronosGroup/GLSL#45)

* force 16 sequential threads per block

* make 16 subgroup size a constant

vulkan: optimize and reenable split_k (ggml-org#10637)

Use vector loads when possible in mul_mat_split_k_reduce. Use split_k
when there aren't enough workgroups to fill the shaders.

vulkan: Implement "fast divide" (mul+shift) for unary ops like copy (ggml-org#10642)

vulkan: Add VK_NV_cooperative_matrix2 support for mul_mat and flash attention (ggml-org#10206)

# Conflicts:
#	ggml/src/vulkan-shaders/dequant_funcs_cm2.comp
#	ggml/src/vulkan-shaders/flash_attn_cm2.comp
#	ggml/src/vulkan-shaders/mul_mm_cm2.comp

Vulkan: VK_KHR_cooperative_matrix support to speed up prompt processing (ggml-org#10597)

* Vulkan: Implement VK_KHR_cooperative_matrix support in the matrix matrix multiplication shader

* Improve performance with better q4_k and q5_k dequant and store unrolling

* Add Vulkan MUL_MAT and MUL_MAT_ID accumulator precision selection

* Rework mulmat shader selection and compilation logic, avoid compiling shaders that won't get used by device

* Vulkan: Implement accumulator switch for specific mul mat mat shaders

* Vulkan: Unroll more loops for more mul mat mat performance

* Vulkan: Add VK_AMD_shader_core_properties2 support to read Compute Unit count for split_k logic

* Disable coopmat support on AMD proprietary driver

* Remove redundant checks

* Add environment variable GGML_VK_DISABLE_COOPMAT to disable VK_KHR_cooperative_matrix support

* Fix rebase typo

* Fix coopmat2 MUL_MAT_ID pipeline selection
# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: compile a test shader in cmake to check for coopmat2 support (ggml-org#10713)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/ggml-vulkan/CMakeLists.txt
#	ggml/src/vulkan-shaders/test_coopmat2_support.comp

Vulkan: fix NaN in tanh.comp with AMD proprietary driver on Windows (ggml-org#10723)

* Vulkan: fix NaN in tanh.comp

* Faster NaN-free tanh

vulkan: fix compile warnings (ggml-org#10731)

vulkan: disable spirv-opt for coopmat shaders (ggml-org#10763)

There are some bugs in the 1.3.296 SDK, so disable this. It isn't strictly
necessary anyway.

Add missing dependency on vulkan-shaders-gen, so shaders get recompiled when it
changes.

Fix coopmat support reporting when glslc doesn't support NV_coopmat2.

vulkan: dynamic subgroup size for the remaining k quants (ggml-org#10745)

* q5_k

q4_k

q3_k

q2_k

q6_k multi row example

* revert as multi row isnt faster for k quants

vulkan: request round-to-even for fp16 in im2col/rope_head (ggml-org#10767)

Vulkan doesn't mandate a specific rounding mode, but the shader_float_controls
feature allows rounding mode to be requested if the implementation supports it.

Vulkan: Add VK_EXT_subgroup_size_control support to ensure full subgroups for coopmats (ggml-org#10721)

* Vulkan: Add VK_EXT_subgroup_size_control support to ensure full subgroups for coopmats

* Fix subgroup size control extension support check

Add accf32 and accf16 checks for coopmats

* Also disable coopmats on amdvlk

Vulkan: Use improved q4_k and q5_k dequant code in dequant shaders (ggml-org#10798)

vulkan: small mul_mat_vec optimizations (ggml-org#10665)

* double the number of rows per workgroup

* Update ggml-vulkan.cpp

* Vulkan: Add VK_EXT_subgroup_size_control support to ensure full subgroups for coopmats

* only increase the number of rows for amd and subgroup size 64

* fix missing NUM_ROWS for mul_mat_vec_iq4_nl_f16_f32, untested

* use subgroup min and max to check for gcn (requires ggml-org#10721)

* manual merge ggml-vulkan.cpp

* set min and max subgroup size in any case

* Also double the number of rows for Intel GPUs

Change Debug print name

add GGML_ROPE_TYPE_MROPE

rwkv6: add wkv6 support for Vulkan backend (ggml-org#10829)

* rwkv_wkv6 vulkan shader

* RWKV_WKV6 Vulkan op tests passed

Signed-off-by: Molly Sophia <mollysophia379@gmail.com>

* Apply code format changes

Signed-off-by: Molly Sophia <mollysophia379@gmail.com>

* add [[unroll]] and remove unnecessary conditions

* add uma support

* fix errors in EditorConfig Checker

---------

Signed-off-by: Molly Sophia <mollysophia379@gmail.com>
Co-authored-by: Molly Sophia <mollysophia379@gmail.com>
# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/vulkan-shaders/wkv6.comp

vulkan: bugfixes for small subgroup size systems + llvmpipe test (ggml-org#10809)

* ensure mul mat shaders work on systems with subgroup size less than 32

more fixes

add test

* only s_warptile_mmq needs to be run with 32 threads or more
# Conflicts:
#	.github/workflows/build.yml

vulkan : fix soft_max.comp division by zero (whisper/2633)

This change prevents a division by zero error when p.KY is 0.

vulkan: optimize coopmat2 dequant functions (ggml-org#10855)

Change the code to do 16b loads when possible and extract the appropriate
component late, so the code is effectively decoding a pair of elements and
then selecting one. This can allow more commoning to happen in the compiler
when neighboring elements are loaded.

vulkan: build fixes for 32b (ggml-org#10927)

* vulkan: build fixes for 32b

Should fix ggml-org#10923

* vulkan: initialize some buffer/offset variables

examples, ggml : fix GCC compiler warnings (ggml-org#10983)

Warning types fixed (observed under MSYS2 GCC 14.2.0):
* format '%ld' expects argument of type 'long int', but argument has type 'size_t'
* llama.cpp/ggml/src/ggml-vulkan/vulkan-shaders/vulkan-shaders-gen.cpp:81:46: warning: missing initializer for member '_STARTUPINFOA::lpDesktop' [-Wmissing-field-initializers]  (emitted for all struct field except first)
# Conflicts:
#	examples/export-lora/export-lora.cpp

vulkan: multi-row k quants (ggml-org#10846)

* multi row k quant shaders!

* better row selection

* more row choices

* readjust row selection

* rm_kq=2 by default

vulkan: Use push constant offset to handle misaligned descriptors (ggml-org#10987)

vulkan: im2col and matmul optimizations for stable diffusion (ggml-org#10942)

* tests: Add im2col perf tests

* vulkan: optimize im2col, more elements per thread

* vulkan: increase small tile size for NV_coopmat2

* vulkan: change im2col to 512 elements per workgroup

vulkan: optimize mul_mat for small values of N (ggml-org#10991)

Make the mul_mat_vec shaders support N>1 (as a spec constant, NUM_COLS) where
the batch_strides are overloaded to hold the row strides. Put the loads from the
B matrix in the innermost loop because it should cache better.

Share some code for reducing the result values to memory in mul_mat_vec_base.
# Conflicts:
#	tests/test-backend-ops.cpp

fix: Vulkan shader gen binary path (ggml-org#11037)

Vulkan: Add device-specific blacklist for coopmat for the AMD proprietary driver (ggml-org#11074)

* Vulkan: Add device-specific blacklist for coopmat for the AMD proprietary driver

* Add (TM) to AMD name check

fix lora print

Disable GL_KHR_cooperative_matrix Vulkan extension if not available. (ggml-org#11117)

* Disable GL_KHR_cooperative_matrix Vulkan extension if not available.

* Perform Vulkan extensions checks in a more sensible order

* Remove unnecessary #ifdef directive
# Conflicts:
#	ggml/src/vulkan-shaders/test_coopmat_support.comp

llama: add support for QRWKV6 model architecture (ggml-org#11001)

Vulkan: Fix float16 use on devices without float16 support + fix subgroup_size_control validation error (ggml-org#11161)

* Vulkan: Remove float16 use in shaders

* Fix validation error about subgroup_size_control extension

fix: ggml: fix vulkan-shaders-gen build (ggml-org#10448)

* fix: ggml: fix vulkan-shaders-gen build

The vulkan-shaders-gen target was not being built correctly
in case of cross-compilation.
Other outputs need to be built for the cross compile target,
but vulkan-shaders-gen needs to be built for the host.

* refactor: ggml: Improve vulkan-shaders-gen toolchain setup

- Add GGML_SHADERS_GEN_TOOLCHAIN CMake option.
- Auto-detect host toolchain if not set.

* refactor: ggml: Improve vulkan-shaders-gen toolchain setup

Use configure_file to generate host_toolchain.cmake from template

* fix: ggml: Fix compile error

Fix compile error not finding vulkan-shaders-gen

* fix: vulkan-shaders-gen build and path handling

Fix build issues with vulkan-shaders-gen:
- Add target dependency for correct build order
- Use CMAKE_HOST_SYSTEM_NAME for executable suffix
- Fix MSVC output directory in host toolchain
- Normalize path handling for cross-compilation

* fix: improve host compiler detection in vulkan shader build

Improve host compiler detection for vulkan shader generation:
- Add NO_CMAKE_FIND_ROOT_PATH to all compiler searches
- Consolidate compiler detection logic
- Fix Windows-specific MSVC detection
- Ensure correct compiler search in cross-compilation

* refactor: Simplify CMake function for detecting host compiler

Simplified the CMake function to improve the process of detecting the host compiler.

* fix: Remove unnecessary Vulkan library linkage in CMakeLists.txt

Since `vulkan-shader-gen.cpp` only requires the `glslc` executable
and not the Vulkan headers or libraries, CMakeLists.txt needs to
be corrected.
(See: ecc93d0)

* refactor: Rename host_toolchain.cmake.in

- Rename host_toolchain.cmake.in to cmake/host-toolchain.cmake.in

* refactor: GGML_VULKAN_SHADERS_GEN_TOOLCHAIN

Rename the macro GGML_SHADERS_GEN_TOOLCHAIN to GGML_VULKAN_SHADERS_GEN_TOOLCHAIN
# Conflicts:
#	ggml/src/ggml-vulkan/CMakeLists.txt

vulkan: scale caching for k quants + misc fixes (ggml-org#11081)

* q6_k scale caching

* 16 bit unpack

* q4_k test (slow)

* revert it

* q3_k

* q2_k

* little stuff

* try precalculating products of a and q2_k scales

* Revert "try precalculating products of a and q2_k scales"

This reverts commit 65110b81f23f66331a50c6e889a7c1ab9470a86b.

* unpack should be u16, add vim swap to gitignore (about time)

* better q4_k scales

* q5_k

* better q6_k with separate paths for all threads and partial threads in use, plus some more optimizations

* q2_k better dequant

* q3_k optimizations

* q3_k use hmask simd from cpu avx version

* make the caches happy

* q3_k separate out calculation

* q2_k separate out

* little stuff

* use calc_superblock everywhere

* q2_k optimize scale calculation

* more barriers
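
The caching scheme in the commits above can be sketched as follows. This is an illustrative Python model, not the GLSL (names like `SCALES_PER_SUPERBLOCK` are made up): each thread of a workgroup decodes at most one sub-block scale into shared memory, so the decode happens once per superblock instead of once per use, with a barrier before anyone reads the cache.

```python
# Illustrative model of scale caching (not the actual shader): a workgroup
# cooperatively unpacks the per-sub-block scales of one superblock into
# shared memory, then every thread reuses the cached values.

SCALES_PER_SUPERBLOCK = 16  # q6_K carries 16 int8 sub-block scales

def cache_scales(raw_scales, num_threads):
    shared = [0] * SCALES_PER_SUPERBLOCK   # models workgroup shared memory
    for tid in range(num_threads):         # threads run concurrently on GPU
        if tid < SCALES_PER_SUPERBLOCK:
            shared[tid] = raw_scales[tid]  # one decode per scale, not per use
    # in the shader a barrier() sits here before any thread reads `shared`
    return shared

assert cache_scales(list(range(16)), num_threads=32) == list(range(16))
```

In the real shaders the decode step is format-specific, and per the PR description the q4_K/q5_K packing made this approach slower there at first.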

vulkan: optimize coopmat2 q2_k dequant function (ggml-org#11130)

vulkan: optimize coopmat2 q4_k/q5_k dequant functions. (ggml-org#11206)

Do masking on whole dwords, fetch all scales at once.

vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl (ggml-org#11166)

* vulkan: support copy from f32 to q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl

Shaders are based on cpy.cu.

* vulkan: support copy from q4_0/q4_1/q5_0/q5_1/q8_0/iq4_nl to f32

* ggml: copy q->f32 assumes some contiguity in the destination
# Conflicts:
#	ggml/src/ggml-cpu/ggml-cpu.c
#	ggml/src/vulkan-shaders/copy_from_quant.comp
#	ggml/src/vulkan-shaders/copy_to_quant.comp

vulkan: fix coopmat2 flash attention for non-contiguous inputs (ggml-org#11281)

Add code similar to mul_mm_cm2 to force alignment of strides, to avoid
a performance regression.

Add noncontiguous FA tests in test-backend-ops.

Fixes ggml-org#11268.
# Conflicts:
#	tests/test-backend-ops.cpp

vulkan: fix coopmat2 validation failures (ggml-org#11284)

mul mat and flash attention shaders were loading f32 types directly into
A/B matrices, which happens to work but is technically invalid usage.
For FA, we can load it as an Accumulator matrix and convert and this
is not in the inner loop and is cheap enough. For mul mat, it's more
efficient to do this conversion in a separate pass and have the input(s)
be f16.

coopmat2 requires SPIR-V 1.6 (related to its use of LocalSizeId). LocalSizeId
requires maintenance4 to be enabled, and SPIR-V 1.6 requires Vulkan 1.3.

vulkan: fix diag_mask_inf (ggml-org#11323)

With robustBufferAccess disabled, this shader was showing OOB stores. There
is a bounds check in the code, but the workgroup dimensions were reversed vs
CUDA and it was running the wrong number of threads. So fix the workgroup
dimensions and disable robustness for this pipeline.

vulkan: sort shaders for more deterministic binary (ggml-org#11315)

Fixes ggml-org#11306.

Vulkan-run-test: fix mmq_wg_denoms (ggml-org#11343)

There seems to be a copy-and-paste error here.

*mmq_wg_denoms should be used together with *warptile_mmq, instead of
wg_denoms.

vulkan: compile shaders on-demand (ggml-org#11406)

Reduce first-run startup time and memory consumption.

Should fix ggml-org#11339.

vulkan: Catch pipeline creation failure and print an error message (ggml-org#11436)

* vulkan: Catch pipeline creation failure and print an error message

Also, fix some warnings from my on-demand compile change.

* vulkan: fix pipeline creation logging

vulkan: implement initial support for IQ2 and IQ3 quantizations (ggml-org#11360)

* vulkan: initial support for IQ3_S

* vulkan: initial support for IQ3_XXS

* vulkan: initial support for IQ2_XXS

* vulkan: initial support for IQ2_XS

* vulkan: optimize Q3_K by removing branches

* vulkan: implement dequantize variants for coopmat2

* vulkan: initial support for IQ2_S

* vulkan: vertically realign code

* port failing dequant callbacks from mul_mm

* Fix array length mismatches

* vulkan: avoid using workgroup size before it is referenced

* tests: increase timeout for Vulkan llvmpipe backend

---------

Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
# Conflicts:
#	ggml/src/vulkan-shaders/dequant_iq2_s.comp
#	ggml/src/vulkan-shaders/dequant_iq2_xs.comp
#	ggml/src/vulkan-shaders/dequant_iq2_xxs.comp
#	ggml/src/vulkan-shaders/dequant_iq3_s.comp
#	ggml/src/vulkan-shaders/dequant_iq3_xxs.comp

CUDA: non-contiguous (RMS) norm support (ggml-org#11659)

vulkan: use smaller combined allocations to avoid fragmentation (ggml-org#11551)

# Conflicts:
#	ggml/src/ggml-alloc.c

vulkan: initial support for IQ4_XS quantization (ggml-org#11501)

# Conflicts:
#	ggml/src/vulkan-shaders/dequant_iq4_xs.comp

vulkan: optimize coopmat2 iq2/iq3 callbacks (ggml-org#11521)

* vulkan: optimize coopmat2 iq2/iq3 callbacks

* build: trigger CI on GLSL compute shader changes

vulkan: print shared memory size (ggml-org#11719)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: account for lookup tables when checking shared memory size (ggml-org#11502)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: add environment variable GGML_VK_PREFER_HOST_MEMORY to avoid VRAM allocation (ggml-org#11592)

vulkan: linux builds + small subgroup size fixes (ggml-org#11767)

* mm subgroup size

* upload vulkan x86 builds

vulkan: initial support for IQ1_S and IQ1_M quantizations (ggml-org#11528)

* vulkan: initial support for IQ1_S and IQ1_M quantizations

* vulkan: define MMV kernels for IQ1 quantizations

* devops: increase timeout of Vulkan tests again

* vulkan: simplify ifdef for init_iq_shmem
# Conflicts:
#	ggml/src/vulkan-shaders/dequant_iq1_m.comp
#	ggml/src/vulkan-shaders/dequant_iq1_s.comp
#	ggml/src/vulkan-shaders/mul_mat_vec_iq1_m.comp
#	ggml/src/vulkan-shaders/mul_mat_vec_iq1_s.comp

vulkan: support multi/vision rope, and noncontiguous rope (ggml-org#11902)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/vulkan-shaders/rope_multi.comp
#	ggml/src/vulkan-shaders/rope_vision.comp

vulkan: implement several ops relevant for ggml_opt (ggml-org#11769)

* vulkan: support memset_tensor

* vulkan: support GGML_OP_SUM

* vulkan: implement GGML_OP_ARGMAX

* vulkan: implement GGML_OP_SUB

* vulkan: implement GGML_OP_COUNT_EQUAL

* vulkan: implement GGML_OP_OPT_STEP_ADAMW

* vulkan: fix check_results RWKV_WKV6 crash and memory leaks

* vulkan: implement GGML_OP_REPEAT_BACK

* tests: remove invalid test-backend-ops REPEAT_BACK tests

* vulkan: fix COUNT_EQUAL memset using a fillBuffer command
# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/vulkan-shaders/argmax.comp
#	ggml/src/vulkan-shaders/count_equal.comp
#	ggml/src/vulkan-shaders/opt_step_adamw.comp
#	ggml/src/vulkan-shaders/repeat_back.comp
#	ggml/src/vulkan-shaders/sub.comp
#	tests/test-backend-ops.cpp

vulkan: implement more backpropagation operators (ggml-org#11914)

* vulkan: implement GGML_OP_ROPE_BACK

* vulkan: implement GGML_OP_RMS_NORM_BACK

* vulkan: implement GGML_OP_SILU_BACK

* vulkan: implement GGML_OP_SOFTMAX_BACK
# Conflicts:
#	ggml/src/vulkan-shaders/rms_norm_back.comp
#	ggml/src/vulkan-shaders/silu_back.comp
#	ggml/src/vulkan-shaders/soft_max_back.comp

Add memset tensor in all backend interface

SYCL: implement memset ggml backend buffer interface (ggml-org#12580)

* SYCL: implement memset ggml backend buffer interface

* use GGML_ABORT macro

* Do not wait for all queues to finish for memset operation
# Conflicts:
#	ggml/src/ggml-sycl.cpp

add OP sigmoid (ggml-org#12056)

Co-authored-by: Judd <foldl@boxvest.com>
# Conflicts:
#	ggml/src/vulkan-shaders/sigmoid.comp

vulkan: fix assertion when qy_needs_dequant (ggml-org#12068)

Looks like a copy/paste bug from qx_needs_dequant.

vulkan: improve im2col (ggml-org#11826)

* vulkan: improve im2col performance

vulkan: matmul dequantization improvements (ggml-org#12015)

* faster dequant for old quants

* dont use unpack for iq4_nl

* vec2 unpack for q8

vulkan: add specific MMV kernels for IQ2 and IQ3 quants + optimizations (ggml-org#11595)

* vulkan: implement specialized MMV kernels for IQ2 quantizations

* vulkan: add MMV kernels for IQ3 quants

* vulkan: Increase MMV batch size and unroll IQ LUT setup

* vulkan: fix init_iq_shmem for WG sizes larger than tables

* vulkan: common batch size for all I-quants
# Conflicts:
#	ggml/src/vulkan-shaders/mul_mat_vec_iq2_s.comp
#	ggml/src/vulkan-shaders/mul_mat_vec_iq2_xs.comp
#	ggml/src/vulkan-shaders/mul_mat_vec_iq2_xxs.comp
#	ggml/src/vulkan-shaders/mul_mat_vec_iq3_s.comp
#	ggml/src/vulkan-shaders/mul_mat_vec_iq3_xxs.comp

cuda/vulkan: specify fp32-only support for some operations in supports_op (ggml/1129)

ggml-ci

# Conflicts:
#	ggml/src/ggml-cuda.cu
#	tests/test-backend-ops.cpp

mat vec double buffer (ggml-org#12188)

vulkan: fix bug in coopmat1 mul_mat_id (ggml-org#12316)

* tests: run mul_mat_id with a larger N

* vulkan: fix bug in coopmat1 mul_mat_id

Update build.yml for Windows Vulkan builder to use Vulkan 1.4.304 SDK for VK_NV_cooperative_matrix2 support (ggml-org#12301)

vulkan: Adjust coopmat2 tile sizes and selection heuristic (ggml-org#12258)

vulkan: Pad N dimension of B matrix for coopmat2 perf, to avoid bounds checking (ggml-org#12273)

* vulkan: Pad N dimension of B matrix for coopmat2 perf, to avoid bounds checking

vulkan: use fp32 in coopmat2 q4_k dequant function (ggml-org#12309)

vulkan: subgroup size tuning (ggml-org#12087)

* vulkan: subgroup size test

* Vulkan: Add device architecture enum and logic to recognize AMD generations

* vulkan: use new architecture logic to specify subgroup size

* Initial vulkan subgroup size tuning for RDNA3

* vulkan: commonize RDNA subgroup tuning

* vulkan: override subgroup size if required_subgroup_size = 0

* vulkan: disable warp 32 for RDNA3

* vulkan: fine tuned RDNA1 subgroup sizes

* vulkan: adjusted subgroup size map

* vulkan: fixed RDNA2 subgroup map

---------

Co-authored-by: 0cc4m <picard12@live.de>

vulkan: Add N/2 and N/4 optimized paths in coopmat2 shader (ggml-org#12312)

ggml-vulkan: remove unused find_program(glslc) (ggml-org#12416)

It's already found by FindVulkan.cmake in the parent CMakeLists

Vulkan: Default to 1GB allocations instead of 4GB to avoid fragmentation and driver issues (ggml-org#12434)

vulkan: Submit once enough matmul work has been recorded (ggml-org#12406)

I've been seeing significantly worse performance for tg with flash attention
enabled vs disabled, and it seems to be related to the submit heuristic.
Change the heuristic to check how many bytes worth of weight matrix are
used and flush every 100MB, and ramp up after the first few submits.
This seems to resolve the issue, and also increases perf for non-FA a bit.
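
A hypothetical model of that heuristic in Python: the ~100MB figure is from the commit message, but the ramp shape and the `should_submit` name here are invented for illustration.

```python
# Hypothetical sketch: flush once ~100MB of weight-matrix bytes have been
# recorded, with a smaller threshold for the first few submits so the GPU
# starts working sooner. The ramp shape is illustrative, not the real code.

def should_submit(bytes_recorded, submit_count,
                  base=100 * 1024 * 1024, ramp_submits=3):
    threshold = base >> max(0, ramp_submits - submit_count)
    return bytes_recorded >= threshold

assert should_submit(13 * 1024 * 1024, submit_count=0)       # early: ~12.5MB
assert not should_submit(50 * 1024 * 1024, submit_count=5)   # later: full 100MB
assert should_submit(100 * 1024 * 1024, submit_count=5)
```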

vulkan: optimize iq1 coopmat2 dequant functions (ggml-org#12427)

vulkan: workaround for AMD Windows driver 16 bit unpack8 bug (ggml-org#12472)

Vulkan: RTE rounding for cpy to quant (ggml-org#12480)

* Vulkan: RTE rounding for cpy to quant

Co-Authored-By: Jeff Bolz <jbolz@nvidia.com>

* remove trailing whitespace

* avoid duplicating pipeline_cpy_f32_quant

* fix copypasting issue

* remove duplicated code

---------

Co-authored-by: Jeff Bolz <jbolz@nvidia.com>

vulkan: Optimize mul_mat_vec p021 and nc shaders (ggml-org#12505)

* tests: add mul_mat perf/functional tests for p021/nc vulkan shaders

* vulkan: Optimize mul_mat_vec p021 and nc shaders.

These shaders are used in attention calculations, and when the KV cache grows
large they start to dominate the run time. For the nc shader (which is called
with large 'k' dimension), use unrolling and vector loads. For the p021 shader
(which is called with large 'm' and small 'k' dimensions), take advantage of
grouped query attention to reuse loads from the A matrix for the whole group,
and reduce the number of workgroups (too much overhead from tiny dispatches).

Using subgroupAdd in the p021 shader also helps, use that conditionally.
# Conflicts:
#	tests/test-backend-ops.cpp

vulkan: fix mul_mat_vec failure in backend tests (ggml-org#12529)

The OOB calculation could be wrong if the last iteration was during one of
the unrolled loops. Adjust the unrolling counts to avoid this. Add a couple
new backend tests that hit this failure on NVIDIA GPUs.
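
The failure mode is the classic unrolled-loop tail problem. A minimal Python sketch of the safe structure (illustrative, not the shader): only full chunks take the unrolled path, and a scalar tail handles the remainder, so no bounds calculation has to reason about a partial chunk inside the unrolled body.

```python
# Sketch: process full unrolled chunks first, then a scalar tail. The bug
# fixed above came from a bounds calculation that assumed the last
# iteration never landed inside an unrolled chunk.

def unrolled_sum(xs, unroll=4):
    total, i = 0, 0
    while i + unroll <= len(xs):     # only complete chunks take this path
        for j in range(unroll):
            total += xs[i + j]
        i += unroll
    while i < len(xs):               # scalar tail handles the remainder
        total += xs[i]
        i += 1
    return total

assert unrolled_sum(list(range(10))) == 45   # 8 unrolled + 2 tail elements
```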

vulkan: fix coopmat shader generation when cross-compiling (ggml-org#12272)

* vulkan: fix coopmat shader generation when cross-compiling

Previously the status of coopmat{,2} support wasn't passed to the
vulkan-shaders-gen project built on the host, which led to a build
failure because the cross-compiling code expected coopmat{,2}
shaders that never got generated.

Fix this by passing the coopmat{,2} support status to vulkan-shaders
subproject.

Signed-off-by: Icenowy Zheng <uwu@icenowy.me>

* Only call coop-mat shaders once

* Fix whitespace

---------

Signed-off-by: Icenowy Zheng <uwu@icenowy.me>
Co-authored-by: bandoti <141645996+bandoti@users.noreply.github.com>

cmake: improve Vulkan cooperative matrix support checks (whisper/2966)

Co-authored-by: Sandro Hanea <me@sandro.rocks>

cmake : fix whitespace (#0)

Vulkan: Add DP4A MMQ and Q8_1 quantization shader (ggml-org#12135)

* Vulkan: Add DP4A MMQ and Q8_1 quantization shader

* Add q4_0 x q8_1 matrix matrix multiplication support

* Vulkan: Add int8 coopmat MMQ support

* Vulkan: Add q4_1, q5_0 and q5_1 quants, improve integer dot code

* Add GL_EXT_integer_dot_product check

* Remove ggml changes, fix mmq pipeline picker

* Remove ggml changes, restore Intel coopmat behaviour

* Fix glsl compile attempt when integer vec dot is not supported

* Remove redundant code, use non-saturating integer dot, enable all matmul sizes for mmq

* Remove redundant comment

* Fix integer dot check

* Fix compile issue with unsupported int dot glslc

* Update Windows build Vulkan SDK version
# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/vulkan-shaders/mul_mmq.comp
#	ggml/src/vulkan-shaders/mul_mmq_funcs.comp
#	ggml/src/vulkan-shaders/quantize_q8_1.comp
#	ggml/src/vulkan-shaders/test_integer_dot_support.comp

vulkan: fix build when glslc doesn't support coopmat (ggml-org#12683)

Vulkan: Fix mmq int dot float cache size (ggml-org#12722)

vulkan: Implement grouped query attention in the coopmat2 FA shader (ggml-org#12559)

When adjacent batches of Q share the same batches of K/V, batch them into
the same workgroup. For example, when:

dst(128,32,1,1) = FA(q(128,1,32,1), k(128,16640,8,1), v(128,16640,8,1))

previously we would run 32 workgroups computing 1 result each, now we will
run 8 workgroups computing 4 results each.

This doesn't directly translate to better performance (at least when you have
>=32 SMs), but in a subsequent change I'll enable split_k which will scale much
better with 4x fewer workgroups.
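
The workgroup-count arithmetic from the example can be written down directly (a sketch; `fa_workgroups` is a made-up name):

```python
# Sketch of the batching arithmetic: one workgroup per KV head, each
# computing gqa_ratio results, instead of one workgroup per Q head.

def fa_workgroups(n_q_heads, n_kv_heads):
    gqa_ratio = n_q_heads // n_kv_heads
    return n_kv_heads, gqa_ratio     # (workgroups, results per workgroup)

# the example from the commit message: 32 Q heads sharing 8 KV heads
assert fa_workgroups(32, 8) == (8, 4)
```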

cmake: remove caching from vulkan coopmat checks (ggml-org#12719)

vulkan: Implement split_k for coopmat2 flash attention. (ggml-org#12627)

When using group query attention, we have one workgroup per KV batch and this
can be very few workgroups (e.g. just 8 in some models). Enable split_k to
spread the work across SMs. This helps a lot when the KV cache is large.
# Conflicts:
#	ggml/src/vulkan-shaders/flash_attn_split_k_reduce.comp

vulkan: Fix missing cmake logic for dot product extension (ggml-org#12721)

vulkan: set cmake minimum and project name in vulkan-shaders (ggml-org#12744)

vulkan: Hybrid waitForFences/getFenceStatus to reduce fence latency (ggml-org#12630)

There seems to be a bubble waking up from waitForFences, which costs a few
percent performance and also increased variance in performance. This change
inserts an "almost_ready" fence when the graph is about 80% complete and we
waitForFences for the almost_ready fence and then spin (with _mm_pauses) waiting
for the final fence to be signaled.
# Conflicts:
#	ggml/src/ggml-vulkan.cpp
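
The wait strategy can be modeled with two events standing in for the two fences. This is a loose Python analogy only; the real code uses vkWaitForFences, fence status polling, and _mm_pause.

```python
# Loose model of the hybrid wait: block on an "almost ready" signal
# (cheap, but waking up has latency), then spin-poll the final signal
# (burns a little CPU, but reacts with minimal latency).
import threading
import time

def hybrid_wait(almost_ready: threading.Event, done: threading.Event):
    almost_ready.wait()          # ~ vkWaitForFences on the ~80%-complete fence
    while not done.is_set():     # ~ polling the final fence's status
        time.sleep(0)            # yield; the real code uses _mm_pause

a, d = threading.Event(), threading.Event()
a.set()
d.set()
hybrid_wait(a, d)                # both already signaled: returns immediately
```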

cmake: fix ggml-shaders-gen compiler paths containing spaces (ggml-org#12747)

fixes error for compiler paths with spaces

Vulkan: Tune Vulkan mmq int dot shader for performance (ggml-org#12767)

vulkan: Use unclamped loads for flash attention mask (ggml-org#12720)

nem1 must be a multiple of GGML_KQ_MASK_PAD, and GGML_KQ_MASK_PAD is a multiple
of the number of rows in the matrix. The KV dim is a multiple of the number of
columns for the aligned shader.

vulkan: fix NaN issue in flash attention shader (ggml-org#12776)

Use -FLT_MAX/2 rather than -inf as the initial value for computing the maximum.
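
Why -inf as the initial maximum produces NaN: for a fully masked row every value is -inf, and exp(v - max) then evaluates exp(-inf - (-inf)) = exp(nan). A small Python demonstration (the helper name is illustrative):

```python
# With an all-masked attention row, a running maximum seeded with -inf
# stays -inf, and exp(v - max) hits the -inf - (-inf) = nan case.
import math

def row_max(values, init):
    m = init
    for v in values:
        m = max(m, v)
    return m

masked_row = [float("-inf")] * 4                 # fully masked row

bad_max = row_max(masked_row, float("-inf"))     # stays -inf
good_max = row_max(masked_row, -3.4e38 / 2)      # ~ -FLT_MAX/2, stays finite

assert math.isnan(math.exp(masked_row[0] - bad_max))   # -inf - -inf = nan
assert math.exp(masked_row[0] - good_max) == 0.0       # exp(-inf) = 0.0
```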

vulkan: Use fp16 for the flash attention P*V multiplication (ggml-org#12783)

This is consistent with the ggml-cuda behavior and the mul_mat fallback.

vulkan: In coopmat2 mmq, load q4_k/q5_k scales through shared memory (ggml-org#12833)

q4_k and q5_k had a lot of redundant global loads where the same 16B of
scale information is repeatedly loaded and decoded during each loop iteration.
This change restructures the loops to more explicitly iterate over whole
blocks in the outer loop (with unrolled inner loop) and to copy/decode the
scale data into shared memory once at the start of each outer loop. The copy
is pipelined so the scale load from global memory is relatively cheap.

This improves q4_k/q5_k model prompt processing performance by around 5-7%.
I briefly tried applying this to q6_k and q4_0, and it didn't help for q6_k
and hurt for q4_0.

The big "else" path in mul_mm_cm2.comp that had all the clamped/unclamped
variants isn't used as often as it originally was (e.g. due to the padded_N
change), so I trimmed it down to offset some of the new complexity of the
semi-manual loop unrolling.

vulkan: use aligned loads for flash attention mask (ggml-org#12853)

Rewrite the stride logic for the mask tensor in the FA shader to force the
stride to be aligned, to allow using more efficient loads.

vulkan: enable coopmat2 FA gqa and split_k optimizations more often (ggml-org#12931)

The grouped query attention optimization doesn't require a power of two ratio;
the only thing relying on it was the modulo operation written as a bitwise &.

split_k need not depend on gqa_ratio - enable it any time there's only one
workgroup in the X dimension. The shader gets the split index from the x coord,
and multiple workgroups in the X dimension (pre-split) indicates a larger
FA operation that wouldn't need splitting.
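
The identity being relied on is `i % r == i & (r - 1)`, which only holds when r is a power of two. A quick Python check:

```python
# i % r == i & (r - 1) holds only when r is a power of two, which is why
# the bitwise form silently required a power-of-two gqa ratio.

def mod_and(i, r):
    return i & (r - 1)

assert mod_and(10, 4) == 10 % 4      # power of two: identical
assert mod_and(10, 6) != 10 % 6      # r = 6: the shortcut gives 0, not 4
```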

vulkan: support noncontiguous rms_norm (ggml-org#13031)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: matmul gcn tuning (ggml-org#13016)

* tune matmul for gcn

* this one is more power efficient

* Update ggml/src/ggml-vulkan/ggml-vulkan.cpp

Co-authored-by: 0cc4m <picard12@live.de>

* disable this tune for the proprietary driver

---------

Co-authored-by: 0cc4m <picard12@live.de>

vulkan: use uint array index to avoid glslang bug (ggml-org#13193)

vulkan: Handle src1 batch dimension in non-contiguous mat-vec-mul shader (ggml-org#13191)

* vulkan: Handle src1 batch dimension in non-contiguous mat-vec-mul shader

vulkan: Add bfloat16 support (ggml-org#12554)

* vulkan: Add bfloat16 support

This adds bfloat16 matrix multiply support based on VK_KHR_shader_bfloat16.
The extension is required for coopmat multiply support, but matrix-vector
multiply trivially promotes bf16 to fp32 and doesn't require the extension.
The copy/get_rows shaders also don't require the extension.

It's probably possible to fall back to non-coopmat and promote to fp32 when
the extension isn't supported, but this change doesn't do that.

The coopmat support also requires a glslc that supports the extension, which
currently requires a custom build.

* vulkan: Support bf16 tensors without the bf16 extension or coopmat support

Compile a variant of the scalar mul_mm shader that will promote the bf16
values to float, and use that when either the bf16 extension or the coopmat
extensions aren't available.

* vulkan: bfloat16 fixes (really works without bfloat16 support now)

* vulkan: fix spirv-val failure and reenable -O
# Conflicts:
#	ggml/src/vulkan-shaders/test_bfloat16_support.comp

vulkan: Additional type support for unary, binary, and copy (ggml-org#13266)

Support f16->f32 copy.
Support f16->f16 and f32->f32 unary ops.
Support all combinations of f16/f32 for src0/src1/dst for add/sub/mul/div.
# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: Allow up to 4096 elements for mul_mat_id row_ids (ggml-org#13326)

This assert fired running Qwen_Qwen3-30B-A3B-Q2_K.gguf:

GGML_ASSERT(nei0 * nei1 <= 3072);

The tensor is 8 x 512. Increase this array size to accommodate.

vulkan: scalar flash attention implementation (ggml-org#13324)

* vulkan: scalar flash attention implementation

* vulkan: always use fp32 for scalar flash attention

* vulkan: use vector loads in scalar flash attention shader

* vulkan: remove PV matrix, helps with register usage

* vulkan: reduce register usage in scalar FA, but perf may be slightly worse

* vulkan: load each Q value once. optimize O reduction. more tuning

* vulkan: support q4_0/q8_0 KV in scalar FA

* CI: increase timeout to accommodate newly-supported tests

* vulkan: for scalar FA, select between 1 and 8 rows

* vulkan: avoid using Float16 capability in scalar FA
# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/vulkan-shaders/flash_attn.comp

vulkan: workaround FA compile failures on macos (ggml-org#13517)

vulkan: KHR_coopmat flash attention (ggml-org#13506)

This shader uses coopmat1 to do the Q*K^T multiply. The P*V multiply is more
difficult for various reasons so I haven't done it. Performance for this
shader is around 2.5x better than for the scalar shader when doing prompt
processing. Some of the benefit may be from other optimizations like staging
through shared memory, or splitting by rows.
# Conflicts:
#	ggml/src/vulkan-shaders/flash_attn_cm1.comp

cmake: simplify vulkan shader test logic (ggml-org#13263)

vulkan: use scalar FA rather than coopmat2 when N==1 (ggml-org#13554)

Add pipeline_acc_f32

vulkan: move common FA code to flash_attn_base.comp (ggml-org#13556)

* vulkan: move common FA code to flash_attn_base.comp

* vulkan: move common FA index/stride setup code to flash_attn_base.comp

* build fix
# Conflicts:
#	ggml/src/vulkan-shaders/flash_attn_base.comp

cmake: use the current build config for vulkan-shaders-gen (ggml-org#13595)

* fix: use the current build config for `vulkan-shaders-gen`

* fix: only pass a valid build type to `--config`

Vulkan: Add f32 accumulator support to quantized mul mat to fix GLM4 32B incoherence (ggml-org#13607)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: fix warnings (ggml-org#13626)

* small fixes

* remove ifdef

use LOG_WARN to replace `std::cerr` (ggml-org#13657)

vulkan: Disable coopmat/coopmat2/bfloat extensions if glslc doesn't support it (ggml-org#13696)

vulkan: support CPY from any type to itself (ggml-org#13695)

Reuse the f16/f32 copy shaders, and just scale the number of elements
according to the type size.
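
The trick is that a same-type copy is just a byte copy, so the existing f16/f32 shaders can be reused by rescaling the element count. A Python sketch of the size arithmetic (the 144-byte q4_K superblock is the real block size; the function name is illustrative):

```python
# Sketch: a same-type copy needs no dequantization, so treat the tensor
# as raw bytes and scale the element count by bytes per block.

def same_type_copy_bytes(n_elems, block_elems, block_bytes):
    assert n_elems % block_elems == 0
    return (n_elems // block_elems) * block_bytes

assert same_type_copy_bytes(256, 1, 4) == 1024      # f32: 4 bytes/element
assert same_type_copy_bytes(512, 256, 144) == 288   # q4_K: 144-byte superblocks
```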

add GGML_LOG_WARN

vulkan: mark IM2COL as supporting non-contig (ggml-org#13783)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: use timestamp queries for GGML_VULKAN_PERF (ggml-org#13817)

Also change it to be controlled by an env var rather than cmake flag

vulkan : Remove unexpected ; (ggml/1253)

vulkan: fix warnings in perf logger querypool code (ggml-org#13937)

ggml-vulkan: adds support for op CONV_TRANSPOSE_1D (ggml-org#13813)

* ggml-vulkan: adds op CONV_TRANSPOSE_1D

* test-backend-ops: adds more sophisticated tests for CONV_TRANSPOSE_1D

* Missing barrier added to shader.
Number of additional tests reduced to 108.

* Fixes typo in variable name.

* Removes extra whitespaces.

* Adds int64->int32 casts to prevent possible warnings.

* Problem size reduced in tests to pass tests with llvmpipe.

* supports_op condition moved from unintended position
# Conflicts:
#	ggml/src/ggml-vulkan.cpp
#	ggml/src/vulkan-shaders/conv_transpose_1d.comp

vulkan: Enable VK_KHR_cooperative_matrix extension for Intel Xe2 GPUs (ggml-org#14001)

* allowing B580 and U9-288V

* experimenting code to detect Xe2

* allowing coopmat only for Xe2 GPUs

* fixed comment wording

* fixed comment wording

* removed unnecessary driver check

Vulkan: Don't default to CPU device (like llvmpipe), even if no other device is available, to allow fallback to CPU backend (ggml-org#14099)

# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: force device 0 in CI (ggml-org#14106)

Add GGML_LOG_INFO

vulkan: Track descriptor pools/sets per-context (ggml-org#14109)

Use the same descriptor set layout for all pipelines (MAX_PARAMETER_COUNT == 8)
and move it to the vk_device. Move all the descriptor pool and set tracking to
the context - none of it is specific to pipelines anymore. It has a single vector
of pools and vector of sets, and a single counter to track requests and a single
counter to track use.

vulkan: Better thread-safety for command pools/buffers (ggml-org#14116)

This change moves the command pool/buffer tracking into a vk_command_pool
structure. There are two instances per context (for compute+transfer) and
two instances per device for operations that don't go through a context.
This should prevent separate contexts from stomping on each other.
# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: mutex around vkQueueSubmit (ggml-org#14127)

This fixes the remaining crash in test-thread-safety on my system.

cmake: clean up external project logic for vulkan-shaders-gen (ggml-org#14179)

* Remove install step for vulkan-shaders-gen

* Add install step to normalize msvc with make

* Regenerate modified shaders at build-time
# Conflicts:
#	.github/workflows/build.yml

cmake: remove shader-gen step-targets from ggml-vulkan (ggml-org#14226)

* Remove step-targets from vulkan-shaders-gen

* Unset DESTDIR when building vulkan-shaders-gen

Vulkan: Set device max size for host memory to avoid OOM warning and fallback to CPU buffer (ggml-org#14249)

Add support for VK_EXT_debug_utils to add labels to Vulkan objects. (ggml-org#13792)

* Add support for VK_EXT_debug_utils to add labels to Vulkan objects. As a first step, compute pipelines are labeled.

* remove #ifdef for debug utils and add queue marker.
# Conflicts:
#	ggml/src/ggml-vulkan.cpp

vulkan: update windows SDK in CI (ggml-org#14334)

vulkan: update windows SDK in release.yml (ggml-org#14344)

# Conflicts:
#	.github/workflows/release.yml

cmake: regen vulkan shaders when shaders-gen sources change (ggml-org#14398)

* Add shaders-gen sources as target deps

vulkan: Fix GGML_VULKAN_SHADER_DEBUG_INFO (ggml-org#14427)

This setting needs to be passed through to vulkan-shaders-gen

vulkan: lock accesses of pinned_memory vector (ggml-org#14333)

vulkan: handle noncontig in the final case of ggml_vk_get_cpy_pipeline (ggml-org#14378)

Fix cuda build error

test

* remove  new cpu backend and yml files

* remove new op and GGML_ROPE_TYPE_NEOX

* fix build error

* change cmake file to add matrix operation

* remove coopmat2 check in flash attention

* print gpu info for vulkan

* disable fuse to recover vulkan performance

---------

Co-authored-by: 0cc4m <picard12@live.de>
Co-authored-by: firecoperana <firecoperana>