Bug type
Behavior bug (incorrect output/state without crash)
Summary
Updating the model baseUrl in openclaw.json does not take effect because the agent continues to use a cached configuration from .openclaw/agent/main/agent/models.json.
When a user corrects a mistakenly entered baseUrl in the main openclaw.json file and restarts the gateway, the changes are ignored. The agent still attempts to send requests to the old URL. Users are forced to manually dig into the internal .openclaw/ directory to modify models.json for the changes to apply, which is unintuitive and creates configuration inconsistencies.
Steps to reproduce
- Set up OpenClaw and select vLLM as the model (running via a llama.cpp backend).
- During the initial configuration, intentionally set an incorrect baseUrl (e.g., 127.0.0.1/completion instead of the correct 127.0.0.1/v1).
- Open openclaw tui and send a chat message.
- The request fails with a "file not found" error. Checking the llama.cpp logs confirms the request was incorrectly sent to 127.0.0.1/completion/chat/completions.
- Open the main openclaw.json config file and fix the baseUrl to the correct path (127.0.0.1/v1).
- Restart the gateway.
- Open openclaw tui again and send another message.
- Check the llama.cpp logs again. The request is still being sent to the old, incorrect endpoint (127.0.0.1/completion/chat/completions).
- Navigate to .openclaw/agent/main/agent/models.json and observe that the old baseUrl is still stored there.
P.S. Replace 127.0.0.1 with your target server's IP as appropriate.
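The divergence in the last step can be confirmed quickly by listing every baseUrl each file contains. This is a minimal diagnostic sketch, not part of OpenClaw; the inline dicts stand in for the two JSON files (in practice you would json.load() openclaw.json and .openclaw/agent/main/agent/models.json), and the recursive scan avoids assuming the exact layout of models.json:

```python
import json

def collect_base_urls(node):
    """Recursively collect every value stored under a 'baseUrl' key."""
    urls = []
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "baseUrl":
                urls.append(value)
            else:
                urls.extend(collect_base_urls(value))
    elif isinstance(node, list):
        for item in node:
            urls.extend(collect_base_urls(item))
    return urls

# Stand-ins for the two files after the fix described above.
global_cfg = {"models": {"providers": {"vllm": {"baseUrl": "http://127.0.0.1/v1"}}}}
cached_cfg = {"providers": {"vllm": {"baseUrl": "http://127.0.0.1/completion"}}}

print(collect_base_urls(global_cfg))  # endpoints the user configured
print(collect_base_urls(cached_cfg))  # endpoints the agent actually uses
```

If the two lists differ after a gateway restart, the cached file was not synced.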
Expected behavior
The openclaw.json file should be the single source of truth for global configuration. Modifying the baseUrl in openclaw.json and restarting the gateway should automatically update or override the internal agent configurations. The user should not have to manually edit files inside the hidden .openclaw/agent/main/agent/ directory.
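One possible shape for the expected behavior is a merge step at gateway startup that propagates provider settings from the global config into the cached one. This is a hypothetical sketch, not OpenClaw's actual code; the function name and the assumption that models.json nests providers the same way openclaw.json does are both mine:

```python
def sync_providers(global_cfg: dict, cached_cfg: dict) -> dict:
    """Overwrite cached provider settings with values from the global
    config, so openclaw.json stays the single source of truth."""
    providers = global_cfg.get("models", {}).get("providers", {})
    for name, settings in providers.items():
        cached = cached_cfg.setdefault("providers", {}).setdefault(name, {})
        # Global values win over whatever was cached earlier.
        cached.update(settings)
    return cached_cfg

cached = {"providers": {"vllm": {"baseUrl": "http://127.0.0.1/completion"}}}
fixed = sync_providers(
    {"models": {"providers": {"vllm": {"baseUrl": "http://127.0.0.1/v1"}}}},
    cached,
)
print(fixed["providers"]["vllm"]["baseUrl"])  # → http://127.0.0.1/v1
```

This would also match the intent of the "mode": "merge" setting already present in the config above.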
Actual behavior
The configuration in openclaw.json does not sync with the agent's internal state. The agent continues to use the outdated baseUrl stored in .openclaw/agent/main/agent/models.json. The only way to fix the issue and successfully route requests to v1/chat/completions is to manually find and edit this hidden models.json file.
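Until this is fixed, the manual workaround can at least be scripted instead of hand-edited. A sketch under the same assumption as above, that models.json keeps a providers map with a baseUrl per provider; inspect your own file and adjust the key path before running:

```python
import json
from pathlib import Path

def patch_base_url(path: Path, provider: str, new_url: str) -> None:
    """Rewrite the cached baseUrl for one provider in place."""
    cfg = json.loads(path.read_text())
    # Key path is an assumption; check your models.json structure first.
    cfg["providers"][provider]["baseUrl"] = new_url
    path.write_text(json.dumps(cfg, indent=2))

# Example usage (path relative to your OpenClaw workspace):
# patch_base_url(Path(".openclaw/agent/main/agent/models.json"),
#                "vllm", "http://127.0.0.1/v1")
```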
OpenClaw version
2026.3.2
Operating system
Ubuntu 24.04
Install method
curl -fsSL https://openclaw.ai/install.sh | bash
(Package is managed by nvm)
Logs, screenshots, and evidence
Impact and severity
No response
Additional information
vLLM server: llama.cpp
vLLM model: unsloth/gpt-oss-20b-GGUF:Q4_K_M
vLLM config:
"models": {
"mode": "merge",
"providers": {
"vllm": {
"baseUrl": "http://my_vllm_server_ip:8000/v1",
"apiKey": "my_vllm_api_key",
"api": "openai-completions",
"models": [
{
"id": "unsloth/gpt-oss-20b-GGUF:Q4_K_M",
"name": "unsloth/gpt-oss-20b-GGUF:Q4_K_M",
"reasoning": false,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 128000,
"maxTokens": 8192
}
]
}
}
},