graph : add optional scale parameter to build_lora_mm#20427

Merged
CISC merged 1 commit into ggml-org:master from richarddd:chore/build-lora-mm-scale
Mar 11, 2026
Conversation

@richarddd (Contributor)

@ggerganov As discussed in #19769. Adds an optional `w_s` parameter to `build_lora_mm()` for applying a multiplicative scale after the matmul. This cleans up the pattern used by bitnet and NVFP4 models, where a per-tensor scale is applied after each weight multiplication.

@richarddd richarddd requested a review from CISC as a code owner March 11, 2026 21:24
@github-actions github-actions bot added the model Model specific label Mar 11, 2026
Comment on lines 768 to +771:

    ggml_tensor * build_lora_mm(
              ggml_tensor * w,
              ggml_tensor * cur,
              ggml_tensor * w_s = nullptr) const;
Member
For a follow-up PR, we should move the cur at the front, to increase consistency of the interfaces:

    ggml_tensor * build_lora_mm(
              ggml_tensor * cur,
              ggml_tensor * w,
              ggml_tensor * w_s = nullptr) const;

It will touch a lot of lines though.

@CISC (Member) left a comment:
Concur with @ggerganov on ordering.

@CISC CISC merged commit 1eea6a2 into ggml-org:master Mar 11, 2026
7 of 76 checks passed
ProgenyAlpha pushed a commit to ProgenyAlpha/llama.cpp that referenced this pull request Mar 12, 2026
@richarddd richarddd deleted the chore/build-lora-mm-scale branch March 12, 2026 05:25
tekintian added a commit to tekintian/llama.cpp that referenced this pull request Mar 12, 2026
* 'master' of github.com:ggml-org/llama.cpp: (33 commits)
  convert : better mtp check and fix return [no ci] (ggml-org#20419)
  vulkan: fix SSM_CONV PP scaling with large ubatch sizes (ggml-org#20379)
  New conversations now auto-select the first loaded model (ggml-org#20403)
  ggml-virtgpu: Fix some build commands (ggml-org#20341)
  metal : avoid divisions in bin kernel (ggml-org#20426)
  ci: Setup self-hosted CI for Intel Linux Vulkan backend (ggml-org#20154)
  vulkan: fix l2_norm epsilon handling (ggml-org#20350)
  vulkan: fix OOB check in flash_attn_mask_opt (ggml-org#20296)
  vulkan: Fix ErrorOutOfHostMemory on Intel GPU when loading large models with --no-mmap (ggml-org#20059)
  opencl: use larger workgroup size for get_rows (ggml-org#20316)
  opencl: add cumsum op (ggml-org#18981)
  hip: compile debug builds with -O2 on hip to avoid a compiler bug (ggml-org#20392)
  common/parser: add GigaChatV3/3.1 models support (ggml-org#19931)
  model : add support for Phi4ForCausalLMV (ggml-org#20168)
  graph : add optional scale parameter to build_lora_mm [no ci] (ggml-org#20427)
  common : fix --n-cpu-moe, --cpu-moe for models with fused gate + up (ggml-org#20416)
  ggml-webgpu: Add supports for `GGML_OP_REPEAT` (ggml-org#20230)
  llama : enable chunked fused GDN path (ggml-org#20340)
  llama : whitespace cleanup (ggml-org#20422)
  ggml : add NVFP4 quantization type support (ggml-org#19769)
  ...