
ggml: aarch64: Implement SVE in Gemm q4_k 8x8 q8_k Kernel #19132

Merged
taronaeo merged 7 commits into ggml-org:master from MonakaResearch:gemm_q4_K_8x8_q8_K_Kernel_SVE_Porting
Feb 16, 2026

Conversation

@abhijain1204fujitsu
Contributor

This PR introduces SVE (Scalable Vector Extension) kernels for the q4_K_q8_K GEMM using i8mm and SVE vector instructions. ARM NEON support for this kernel was added in PR #16739.

Verifying Feature
----------------------------------------------------------------------------
This PR contains the SVE implementation of the GEMM kernel used for Q4_K quantization.

Kernel: ggml_gemm_q4_K_8x8_q8_K()

I checked the generation output by running a Q4_K_M-quantized model of Llama-3.1-8B.
I also verified that the perplexity matches between the NEON and SVE implementations.

NEON (Original) SVE (This PR)
13.9017 +/- 1.44495 13.8577 +/- 1.44081

This change does not appear to have any impact on accuracy.

The command used to measure perplexity:

./llama-perplexity -m model.gguf -f wikitext-2-raw/wiki.test.raw --chunks 4

Performance Check
----------------------------------------------------------------------------

This PR improves the prompt eval time (TTFT) of LLM inference by 17-20% compared to NEON (PR #16739).

The performance was measured on a 64-core Graviton3E.
Performance improves as follows; values are tokens per second.

Threads NEON (Original) SVE (This PR) Speedup
4 24.67 29.77 1.20
8 49.05 59.35 1.21
16 97.33 117.62 1.20
32 186.03 221.68 1.19
64 324.55 381.08 1.17

The command used to measure performance:

llama-bench  --model ${PATH_TO_MODEL} -n 128 -p 128 -t 4,8,16,32,64

This work is a contribution of @Vithulep and @abhijain1204fujitsu

@github-actions github-actions bot added the ggml changes relating to the ggml tensor library for machine learning label Jan 27, 2026
@ggerganov
Member

cc @Alcpz

@pvname
Contributor

pvname commented Jan 28, 2026

Regarding CI Failure

When I ran the same command on my system, it built correctly with no issues. Could you check or rerun the CI pipeline?

We have not made any changes to CMake or the x86 code.

I am attaching the logs.

cmake -B build -DLLAMA_BUILD_BORINGSSL=ON -DGGML_SCHED_NO_REALLOC=ON
  cmake --build build --config RelWithDebInfo -j ${env:NUMBER_OF_PROCESSORS} --target llama-server 
-- The C compiler identification is GNU 13.1.0
-- The CXX compiler identification is GNU 13.1.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMAKE_BUILD_TYPE=Release
-- Found Git: /usr/bin/git (found version "2.34.1") 
-- The ASM compiler identification is GNU
-- Found assembler: /usr/bin/cc
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE  
-- ccache found, compilation results will be cached. Disable with GGML_CCACHE=OFF.
-- CMAKE_SYSTEM_PROCESSOR: aarch64
-- GGML_SYSTEM_ARCH: ARM
-- Including CPU backend
-- Found OpenMP_C: -fopenmp (found version "4.5") 
-- Found OpenMP_CXX: -fopenmp (found version "4.5") 
-- Found OpenMP: TRUE (found version "4.5")  
-- ARM detected
-- Performing Test GGML_COMPILER_SUPPORTS_FP16_FORMAT_I3E
-- Performing Test GGML_COMPILER_SUPPORTS_FP16_FORMAT_I3E - Failed
-- ARM detected flags: -mcpu=zeus+crc+aes+sha3+sm4
-- Performing Test GGML_MACHINE_SUPPORTS_dotprod
-- Performing Test GGML_MACHINE_SUPPORTS_dotprod - Success
-- Performing Test GGML_MACHINE_SUPPORTS_i8mm
-- Performing Test GGML_MACHINE_SUPPORTS_i8mm - Success
-- Performing Test GGML_MACHINE_SUPPORTS_sve
-- Performing Test GGML_MACHINE_SUPPORTS_sve - Success
-- Performing Test GGML_MACHINE_SUPPORTS_sme
-- Performing Test GGML_MACHINE_SUPPORTS_sme - Failed
-- Performing Test GGML_MACHINE_SUPPORTS_nosme
-- Performing Test GGML_MACHINE_SUPPORTS_nosme - Failed
-- Checking for ARM features using flags:
--   -mcpu=zeus+crc+aes+sha3+sm4+dotprod+i8mm+sve
-- Performing Test HAVE_DOTPROD
-- Performing Test HAVE_DOTPROD - Success
-- Performing Test HAVE_SVE
-- Performing Test HAVE_SVE - Success
-- Performing Test HAVE_MATMUL_INT8
-- Performing Test HAVE_MATMUL_INT8 - Success
-- Performing Test HAVE_FMA
-- Performing Test HAVE_FMA - Success
-- Performing Test HAVE_FP16_VECTOR_ARITHMETIC
-- Performing Test HAVE_FP16_VECTOR_ARITHMETIC - Success
-- Performing Test HAVE_SME
-- Performing Test HAVE_SME - Failed
-- Adding CPU backend variant ggml-cpu: -mcpu=zeus+crc+aes+sha3+sm4+dotprod+i8mm+sve 
-- ggml version: 0.9.5
-- ggml commit:  c3d8907de
-- Fetching BoringSSL version 0.20251002.0
-- Generating embedded license file for target: common
-- Configuring done (26.7s)
-- Generating done (0.4s)
-- Build files have been written to: /home/prashantv/fj-prop-test/llama.cpp/build


@Alcpz Alcpz left a comment


Overall I don't see any issues with the existing implementation, so all good from my perspective. Please also run clang-format on your changes; there are some inconsistencies in the style.

constexpr int q8_k_blocklen = 4;
const uint8x16_t m4b = vdupq_n_u8(0x0f);
#if defined(__aarch64__) && defined(__ARM_FEATURE_SVE) && defined(__ARM_FEATURE_MATMUL_INT8)
if (svcntb()*8 == 256) {

Format

}

// q8_ptr[b].qs has interleaved Q8 rows (01, 23)
// const int8_t * q8_base = q8_ptr[b].qs + sb * 256;

There is redundant commented code. Some comments could be improved a bit as well.


for (int y = 0; y < nr / q8_k_blocklen; y++) {
const block_q8_Kx4 * GGML_RESTRICT q8_ptr = (const block_q8_Kx4 *) vy + (y * nb);
const block_q8_Kx4 * GGML_RESTRICT q8_ptr_1 = (const block_q8_Kx4 *) vy + (y * nb);

I don't understand the need for the same variable twice; I don't see it being used in a way that makes this necessary. Either clarify or clean up.

acc_f32_67 = svdup_n_f32(0);

for (int b = 0; b < nb; b++) {
// bsums pairs belongs to the same q8_k subblock // 64 elemnts loaded and made sum of 0-7 and 8-15 sum || 16-23 and 24 - 31 sum

Suggested change
// bsums pairs belongs to the same q8_k subblock // 64 elemnts loaded and made sum of 0-7 and 8-15 sum || 16-23 and 24 - 31 sum
// bsums pairs belongs to the same q8_k subblock
// 64 elements loaded and made sum of 0-7 and 8-15 sum || 16-23 and 24 - 31 sum

@Alcpz
Collaborator

Alcpz commented Jan 28, 2026

Regarding CI Failure

When I ran the same command on my system, it built correctly with no issues. Could you check or rerun the CI pipeline?

We have not made any changes to CMake or the x86 code.

The server failures are due to changes in the CI. If you rebase on top of master you should get rid of those. I also saw the x86 high-performance job failing on other pipelines, but as you say, that is not caused here.

@abhijain1204fujitsu abhijain1204fujitsu force-pushed the gemm_q4_K_8x8_q8_K_Kernel_SVE_Porting branch from c75f491 to 1d4d342 Compare January 29, 2026 07:46
@abhijain1204fujitsu
Contributor Author

@Alcpz the rebase and format-related changes are pushed.
Kindly review the PR further.

Thank you !

@pvname
Contributor

pvname commented Feb 3, 2026

Hi @ggerganov and @Alcpz, please review the code. I don't know why @loci-dev is not showing the performance gain; my guess is that it might not be utilizing the SVE code.

I am glad to help if you need anything else.

Thank you.

@Alcpz
Collaborator

Alcpz commented Feb 3, 2026

Hi @ggerganov and @Alcpz, please review the code. I don't know why @loci-dev is not showing the performance gain; my guess is that it might not be utilizing the SVE code.

I am glad to help if you need anything else.

Thank you.

I've already given my review. Unfortunately, I don't have access to an SVE system, so I can't really dig further into it. I trust everything was tested accordingly; since it's GEMM, perplexity should be enough to detect failures.

Sorry I can't be of more help.

@taronaeo

This comment was marked as outdated.

@abhijain1204fujitsu
Contributor Author

Hi @taronaeo, thanks for running the benchmark on c8gn.2xlarge. This is a Graviton 4 machine.

Graviton 4 has an SVE vector length of 128 bits, while the current code is written for a 256-bit SVE vector length.

So when running with this PR, neither the NEON nor the SVE code is used; it falls back to ggml_gemm_q4_K_8x8_q8_K_generic, hence the large performance gap you saw.

I have now modified the code so that:
if SVE is available and the vector length == 256, use SVE;
else if NEON is available, use NEON;
else use the generic kernel.

So with these changes, you will see similar performance between NEON and this PR.

I am attaching my runtime results for c8gn.48xlarge (Graviton 4, 128-bit vector length).

For NEON

model size params backend threads test t/s
llama 8B Q4_K - Medium 4.58 GiB 8.03 B CPU 8 pp128 33.61 ± 0.51
llama 8B Q4_K - Medium 4.58 GiB 8.03 B CPU 8 tg128 15.79 ± 0.30

----------------------------------------------------------------
This PR

model size params backend threads test t/s
llama 8B Q4_K - Medium 4.58 GiB 8.03 B CPU 8 pp128 33.61 ± 0.51
llama 8B Q4_K - Medium 4.58 GiB 8.03 B CPU 8 tg128 15.79 ± 0.30

I hope this clears things up.

Thank you.

@taronaeo
Contributor

taronaeo commented Feb 9, 2026

Graviton 4 has SVE vector length of 128-bits

Great catch, I forgot about that. I can retest it on Graviton 3 where 256-bit SVE is available and update the benchmarks again :)

@taronaeo
Contributor

I've tested this using an AWS hpc7g.16xlarge instance and managed to reproduce your performance improvement. Great job!

model size params backend threads test t/s MASTER t/s PR speedup
llama 8B Q4_K - Medium 4.58 GiB 8.03 B CPU 4 pp512 27.46 ± 0.00 31.87 ± 0.00 1.16
llama 8B Q4_K - Medium 4.58 GiB 8.03 B CPU 4 tg128 9.18 ± 0.00 9.38 ± 0.00 1.02
llama 8B Q4_K - Medium 4.58 GiB 8.03 B CPU 8 pp512 54.63 ± 0.00 63.32 ± 0.01 1.16
llama 8B Q4_K - Medium 4.58 GiB 8.03 B CPU 8 tg128 17.00 ± 0.00 17.45 ± 0.01 1.03
llama 8B Q4_K - Medium 4.58 GiB 8.03 B CPU 16 pp512 109.16 ± 0.01 125.77 ± 0.03 1.15
llama 8B Q4_K - Medium 4.58 GiB 8.03 B CPU 16 tg128 29.29 ± 0.01 30.06 ± 0.02 1.03
llama 8B Q4_K - Medium 4.58 GiB 8.03 B CPU 32 pp512 214.96 ± 0.04 247.34 ± 0.05 1.15
llama 8B Q4_K - Medium 4.58 GiB 8.03 B CPU 32 tg128 36.75 ± 0.02 36.89 ± 0.02 1.00
llama 8B Q4_K - Medium 4.58 GiB 8.03 B CPU 64 pp512 406.50 ± 2.41 463.26 ± 4.64 1.14
llama 8B Q4_K - Medium 4.58 GiB 8.03 B CPU 64 tg128 39.08 ± 0.69 39.22 ± 0.56 1.00


@taronaeo taronaeo left a comment


Minor code cleanup

@taronaeo
Contributor

Merge on green.

@taronaeo
Contributor

taronaeo commented Feb 12, 2026

Can you rebase this PR on upstream/master? These CI jobs are failing and have been fixed upstream.

Server / server (ADDRESS, RelWithDebInfo) (pull_request)
Server / server (UNDEFINED, RelWithDebInfo) (pull_request)

@abhijain1204fujitsu abhijain1204fujitsu force-pushed the gemm_q4_K_8x8_q8_K_Kernel_SVE_Porting branch from 64b91e6 to 15ddb81 Compare February 13, 2026 09:08
@pvname
Contributor

pvname commented Feb 16, 2026

Hi @taronaeo, @ggerganov,

Could you please assist with the CI failure? The failure appears to be related to riscv64, and I am unable to reproduce the issue locally.

@taronaeo
Contributor

The CI failure is consistent across other PRs as well and unrelated to this PR AFAICT. Will merge now.

@taronaeo taronaeo merged commit 267ba5a into ggml-org:master Feb 16, 2026
77 of 78 checks passed
michaelneale added a commit to michaelneale/llama.cpp that referenced this pull request Feb 17, 2026
* upstream/master: (88 commits)
  ci : bump komac version (ggml-org#19682)
  build : link ws2_32 as PUBLIC on Windows (ggml-org#19666)
  build : cleanup library linking logic (ggml-org#19665)
  convert : add JoyAI-LLM-Flash (ggml-org#19651)
  perplexity: add proper batching (ggml-org#19661)
  common : inline functions (ggml-org#18639)
  ggml : make `ggml_is_view` as API (ggml-org#19539)
  model: Add support for Tiny Aya Models (ggml-org#19611)
  build : rework llama_option_depr to handle LLAMA_CURL (ggml-org#19658)
  Adjust workaround for ROCWMMA_FATTN/GFX9 to only newer ROCm veresions (ggml-org#19591)
  models : deduplicate delta-net graphs for Qwen family (ggml-org#19597)
  graph : fix KQ mask, lora, cvec reuse checks (ggml-org#19644)
  ggml: aarch64: Implement SVE in Gemm q4_k 8x8 q8_k Kernel  (ggml-org#19132)
  sync : ggml
  ggml : bump version to 0.9.7 (ggml/1425)
  ggml : bump version to 0.9.6 (ggml/1423)
  cuda: optimize iq2xxs/iq2xs/iq3xxs dequantization (ggml-org#19624)
  docs: update s390x build docs (ggml-org#19643)
  build : remove LLAMA_HTTPLIB option (ggml-org#19623)
  cmake : check if KleidiAI API has been fetched (ggml-org#19640)
  ...
liparetejas pushed a commit to liparetejas/llama.cpp that referenced this pull request Feb 23, 2026
…9132)

* Updated repack.cpp

* Updated repack.cpp

* Updated repack.cpp

* Added if condition to support only vector length 256.

* Changed the format removed comments and duplicate variable

* If SVE 256 not present then was using generic function to compute, hence slowing the performance. 

So added code if SVE 256 is not present then use NEON code.

* Code format change suggestion

---------

Co-authored-by: Vithule, Prashant <Prashant.Vithule@fujitsu.com>
bartowski1182 pushed a commit to bartowski1182/llama.cpp that referenced this pull request Mar 2, 2026
ArberSephirotheca pushed a commit to ArberSephirotheca/llama.cpp that referenced this pull request Mar 3, 2026