
[ggml-virtgpu] Fix some build commands of development.md #20341

Merged

taronaeo merged 1 commit into ggml-org:master from yomaytk:fix-command-virtgpu
Mar 12, 2026

Conversation

yomaytk (Contributor) commented Mar 10, 2026

Hi, this PR fixes some bugs in the build commands in development.md:

  • Fix the incorrect cmake option for building the host ggml-virtgpu backend (`GGML_REMOTINGBACKEND=ON` → `GGML_VIRTGPU=ON` + `GGML_VIRTGPU_BACKEND=ON`); see the sketch after this list
  • Add the missing `cd virglrenderer` step in building virglrenderer
  • Fix the double-slash typo in `$PWD//build-virtgpu`
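
For reference, here is a minimal sketch of what the corrected commands could look like. The clone URL, the meson/ninja invocation, and any flags other than the two cmake options named above are assumptions for illustration, not quotes from development.md:

```sh
# Host ggml-virtgpu build with the corrected cmake options from this PR
# (other options that development.md may set are omitted here):
cmake -B build-virtgpu -DGGML_VIRTGPU=ON -DGGML_VIRTGPU_BACKEND=ON
cmake --build build-virtgpu

# virglrenderer build, including the previously missing `cd` step
# (clone URL and meson/ninja steps are assumed, not taken from the doc):
git clone https://gitlab.freedesktop.org/virgl/virglrenderer.git
cd virglrenderer
meson setup build && ninja -C build

# The path typo fix: a single slash instead of $PWD//build-virtgpu
ls "$PWD/build-virtgpu"
```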

github-actions bot added the documentation label (Improvements or additions to documentation) on Mar 10, 2026
taronaeo (Contributor) commented:

cc: @kpouget

kpouget (Contributor) commented Mar 12, 2026

good catches, thanks @yomaytk!

LGTM from my side

@taronaeo taronaeo merged commit 0503996 into ggml-org:master Mar 12, 2026
2 checks passed
@yomaytk yomaytk deleted the fix-command-virtgpu branch March 12, 2026 08:09
tekintian added a commit to tekintian/llama.cpp that referenced this pull request Mar 12, 2026
* 'master' of github.com:ggml-org/llama.cpp: (33 commits)
  convert : better mtp check and fix return [no ci] (ggml-org#20419)
  vulkan: fix SSM_CONV PP scaling with large ubatch sizes (ggml-org#20379)
  New conversations now auto-select the first loaded model (ggml-org#20403)
  ggml-virtgpu: Fix some build commands (ggml-org#20341)
  metal : avoid divisions in bin kernel (ggml-org#20426)
  ci: Setup self-hosted CI for Intel Linux Vulkan backend (ggml-org#20154)
  vulkan: fix l2_norm epsilon handling (ggml-org#20350)
  vulkan: fix OOB check in flash_attn_mask_opt (ggml-org#20296)
  vulkan: Fix ErrorOutOfHostMemory on Intel GPU when loading large models with --no-mmap (ggml-org#20059)
  opencl: use larger workgroup size for get_rows (ggml-org#20316)
  opencl: add cumsum op (ggml-org#18981)
  hip: compile debug builds with -O2 on hip to avoid a compiler bug (ggml-org#20392)
  common/parser: add GigaChatV3/3.1 models support (ggml-org#19931)
  model : add support for Phi4ForCausalLMV (ggml-org#20168)
  graph : add optional scale parameter to build_lora_mm [no ci] (ggml-org#20427)
  common : fix --n-cpu-moe, --cpu-moe for models with fused gate + up (ggml-org#20416)
  ggml-webgpu: Add supports for `GGML_OP_REPEAT` (ggml-org#20230)
  llama : enable chunked fused GDN path (ggml-org#20340)
  llama : whitespace cleanup (ggml-org#20422)
  ggml : add NVFP4 quantization type support (ggml-org#19769)
  ...
am17an pushed a commit to am17an/llama.cpp that referenced this pull request Mar 12, 2026
