Misc. bug: builds >8149 for win-hip-radeon-x64 are unable to discover rocm devices as older versions did #21106
Name and Version
Last known working version:
version: 8149 (a96a1120b)
built with Clang 19.1.5 for Windows x86_64
First known broken version:
version: 8152 (d7d826b3c)
built with Clang 19.1.5 for Windows x86_64
Operating systems
Windows
Which llama.cpp modules do you know to be affected?
llama-cli
Command line
Problem description & steps to reproduce
Newer builds of llama.cpp win-hip-radeon-x64 are unable to discover ROCm devices as older versions did:
PS C:\...\llama-b8198-bin-win-hip-radeon-x64> .\llama-cli --list-devices
load_backend: loaded RPC backend from C:\...\llama-b8198-bin-win-hip-radeon-x64\ggml-rpc.dll
load_backend: loaded CPU backend from C:\...\llama-b8198-bin-win-hip-radeon-x64\ggml-cpu-alderlake.dll
Available devices: <empty result>

Problem persists in the latest version:
PS C:\...\llama-b8563-bin-win-hip-radeon-x64> .\llama-cli --list-devices
load_backend: loaded RPC backend from C:\...\llama-b8563-bin-win-hip-radeon-x64\ggml-rpc.dll
load_backend: loaded CPU backend from C:\...\llama-b8563-bin-win-hip-radeon-x64\ggml-cpu-alderlake.dll
Available devices: <empty result>

Older builds behave as expected and are able to discover the device:
PS C:\...\llama-b7806-bin-win-hip-radeon-x64> .\llama-cli --list-devices
ggml_cuda_init: found 1 ROCm devices:
Device 0: AMD Radeon RX 6600, gfx1032 (0x1032), VMM: no, Wave Size: 32
load_backend: loaded ROCm backend from C:\...\llama-b7806-bin-win-hip-radeon-x64\ggml-hip.dll
load_backend: loaded RPC backend from C:\...\llama-b7806-bin-win-hip-radeon-x64\ggml-rpc.dll
load_backend: loaded CPU backend from C:\...\llama-b7806-bin-win-hip-radeon-x64\ggml-cpu-alderlake.dll
Available devices:
ROCm0: AMD Radeon RX 6600 (8176 MiB, 8034 MiB free)

First Bad Commit
No response
Relevant log output
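As a quick sanity check (a hypothetical helper script, not part of llama.cpp), one can list which ggml backend libraries actually ship in the build folder. If ggml-hip.dll is present on disk but load_backend never reports it, the failure is more likely a dependency that fails to resolve at load time (e.g. the HIP runtime) than a packaging omission:

```python
import glob
import os

def list_backend_libs(build_dir):
    """Return the ggml backend DLLs present in a llama.cpp build folder."""
    pattern = os.path.join(build_dir, "ggml-*.dll")
    return sorted(os.path.basename(p) for p in glob.glob(pattern))

# Example (path is illustrative):
# for name in list_backend_libs(r"C:\llama-b8563-bin-win-hip-radeon-x64"):
#     print(name)
```

If ggml-hip.dll turns out to be present, inspecting its dependent DLLs (for example with `dumpbin /dependents ggml-hip.dll` from a Visual Studio developer prompt) can reveal which runtime library fails to load.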