Bug Report: Model Override Ignored in Isolated Sessions and sessions_spawn
Summary
Model overrides are silently ignored across all isolated session creation paths. Both `sessions_spawn` and isolated cron jobs accept the `model` parameter (returning `modelApplied: true`) but always spawn sessions with the configured default primary model instead of the requested model.
This affects local Ollama models and likely all non-default model overrides, resulting in unexpected API costs when users expect to use free local models for background tasks.
Impact: Users expecting $0 cost for local model background tasks are instead charged at default model API rates (~$75-150/month for hourly checks).
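The monthly figure above can be sanity-checked with rough arithmetic; the per-check costs below are assumptions for illustration, not measured values:

```typescript
// Back-of-envelope check for the ~$75-150/month figure.
// Per-check costs (in cents) are assumed average Sonnet spend per background check.
const checksPerMonth = 24 * 30; // hourly cron over a 30-day month → 720 checks
const lowPerCheckCents = 10;    // assumed light usage per check
const highPerCheckCents = 21;   // assumed heavier usage per check
const lowMonthlyUSD = (checksPerMonth * lowPerCheckCents) / 100;
const highMonthlyUSD = (checksPerMonth * highPerCheckCents) / 100;
console.log(lowMonthlyUSD, highMonthlyUSD); // → 72 151.2
```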
Environment
- OpenClaw: 2026.2.9 (commit 65dae9a)
- OS: macOS Darwin 25.2.0 (arm64)
- Node: v25.5.0
- Ollama: v0.15.4 (running at localhost:11434, confirmed working)
- Shell: zsh
Configuration
`openclaw.json`:

```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "anthropic/claude-sonnet-4-5"
      },
      "subagents": {
        "model": "ollama/llama3.2:3b"
      }
    }
  },
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://127.0.0.1:11434/v1",
        "api": "openai-completions",
        "models": [
          { "id": "llama3.2:3b", "contextWindow": 8192 },
          { "id": "llama3.2-16k", "contextWindow": 16384 }
        ]
      }
    }
  }
}
```
Reproduction
Test 1: sessions_spawn (explicit model parameter)
```js
sessions_spawn({
  task: "Check session_status and report your model and context window.",
  model: "ollama/llama3.2:3b",
  label: "Ollama Override Test",
  cleanup: "delete"
})
```
Expected:

```json
{
  "modelApplied": true,
  "session": {
    "model": "ollama/llama3.2:3b",
    "contextWindow": 8192
  }
}
```

Actual:

```jsonc
{
  "status": "accepted",
  "modelApplied": true,  // ❌ FALSE POSITIVE
  "childSessionKey": "agent:main:subagent:b981a7ca-...",
  "session": {
    "model": "anthropic/claude-sonnet-4-5",  // ❌ Wrong model
    "contextWindow": 200000                  // ❌ Wrong context (should be 8192)
  }
}
```
Test 2: Isolated cron job with model override
```json
{
  "name": "Ollama Health Check",
  "schedule": { "kind": "every", "everyMs": 3600000 },
  "payload": {
    "kind": "agentTurn",
    "message": "Check session_status and report your model.",
    "model": "ollama/llama3.2-16k"
  },
  "sessionTarget": "isolated",
  "enabled": true
}
```

Result: Session spawns with `anthropic/claude-sonnet-4-5` and `contextWindow: 1000000` instead of the requested `ollama/llama3.2-16k` with `contextWindow: 16384`.
Test 3: Negative control (validation works)
```js
sessions_spawn({
  task: "Test",
  model: "openai/gpt-4o-mini" // Not in catalog/allowlist
})
```

Result:

```json
{
  "modelApplied": false,
  "warning": "model not allowed: openai/gpt-4o-mini"
}
```
✅ This proves the validation layer works correctly and rejects disallowed models.
Key Findings
- `modelApplied: true` is a false positive
  - Validation layer accepts the `model` parameter
  - Routing/spawning layer silently ignores it
  - Session is created with the default model instead
- Validation works, routing doesn't
  - Disallowed models are correctly rejected (`modelApplied: false`)
  - Allowed models are accepted but not used (`modelApplied: true` but wrong model spawned)
- All override methods fail
  - Explicit `model` parameter in `sessions_spawn`
  - `payload.model` in cron jobs (tried catalog ID, full provider/model, alias)
  - `agents.defaults.subagents.model` config setting
  - Tested across 2026.2.6-3 and 2026.2.9
- Context window proves the override never reached session construction
  - Requested: `contextWindow: 16384` (Ollama model)
  - Actual: `contextWindow: 200000` or `1000000` (Sonnet)
  - This confirms the model config was never applied to the spawned session
Suspected Root Cause
The model override successfully passes validation but is not carried through to session spawning. Likely candidates:
- Session factory re-reads model from agent defaults instead of using the validated override value
- Provider routing layer lacks proper path for non-default models in isolated session context, silently falls back to primary model
- Resolved model is stored but never consumed - spawn function creates session config from a different source than where the override was written
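The first candidate can be sketched as a minimal handoff bug. All names here (`validateOverride`, `spawnSessionBuggy`, `spawnSessionFixed`, the catalog shape) are hypothetical illustrations, not OpenClaw's actual internals:

```typescript
// Hypothetical sketch of the suspected bug: validation resolves the override,
// but the spawner re-reads agent defaults and never consumes the resolved value.
type SessionConfig = { model: string; contextWindow: number };

const catalog: Record<string, SessionConfig> = {
  "anthropic/claude-sonnet-4-5": { model: "anthropic/claude-sonnet-4-5", contextWindow: 200000 },
  "ollama/llama3.2:3b": { model: "ollama/llama3.2:3b", contextWindow: 8192 },
};
const defaults = { primary: "anthropic/claude-sonnet-4-5" };

// Validation layer: accepts the override and reports modelApplied: true ...
function validateOverride(model?: string): { modelApplied: boolean; resolved?: string } {
  if (model && catalog[model]) return { modelApplied: true, resolved: model };
  return { modelApplied: false };
}

// ... but the spawner ignores its argument and re-reads the default.
function spawnSessionBuggy(_resolved?: string): SessionConfig {
  return catalog[defaults.primary]; // bug: validated override dropped here
}

// Fix: thread the validated override through to session construction.
function spawnSessionFixed(resolved?: string): SessionConfig {
  return catalog[resolved ?? defaults.primary];
}

const check = validateOverride("ollama/llama3.2:3b");
console.log(check.modelApplied, spawnSessionBuggy(check.resolved).model,
  spawnSessionFixed(check.resolved).model);
// → true anthropic/claude-sonnet-4-5 ollama/llama3.2:3b
```

This reproduces the observed symptom exactly: `modelApplied: true` alongside a session built from the default model and its context window.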
Related PRs (Didn't Resolve This)
- `run.ts` - correctly resolves model but value not consumed downstream
Suggested Diagnostic
Add logging at the validation→spawner boundary:
```js
// After validation (where modelApplied: true is set)
logger.debug(`[MODEL-ROUTING] Validated model: ${resolvedProvider}/${resolvedModel}`);

// Inside session factory/spawn function
logger.debug(`[MODEL-ROUTING] Session created with model: ${session.model}, context: ${session.contextWindow}`);
```
If the first log shows `ollama/llama3.2:3b` and the second shows `claude-sonnet-4-5`, the bug is in the handoff between validation and spawning.
Expected Behavior
When a user specifies a model override via:
- `sessions_spawn({ model: "ollama/llama3.2:3b", ... })`
- Cron `payload.model: "ollama/llama3.2-16k"`
- Config `agents.defaults.subagents.model`

The spawned isolated session should use that model, not fall back to the primary default model.
Workaround
Currently no workaround exists within OpenClaw. Users must bypass OpenClaw's session system entirely and call local models directly via system cron + API calls.
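The workaround can be sketched as a standalone script invoked from system cron that posts to Ollama's OpenAI-compatible endpoint (the same `baseUrl` as in the config above). The prompt, file path, and cron line are illustrative assumptions:

```typescript
// Standalone Ollama health check, bypassing OpenClaw entirely.
// Targets the OpenAI-compatible /v1/chat/completions endpoint Ollama exposes.
type ChatRequest = {
  model: string;
  messages: { role: "user" | "system"; content: string }[];
};

// Pure helper: builds the request payload for a given model.
function buildHealthCheckRequest(model: string): ChatRequest {
  return {
    model,
    messages: [{ role: "user", content: "Report your model name." }],
  };
}

// Posts the request to the local Ollama server and logs the reply.
async function runCheck(): Promise<void> {
  const res = await fetch("http://127.0.0.1:11434/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildHealthCheckRequest("llama3.2:3b")),
  });
  const data = await res.json();
  console.log(data.choices?.[0]?.message?.content);
}

// Call runCheck() from the script's entry point when run under cron.
```

Scheduled hourly with a crontab entry such as `0 * * * * /usr/bin/env node /path/to/health-check.js` (path hypothetical), this keeps the background task at $0 API cost until the override bug is fixed.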
Additional Context
- Ollama is running and accessible (confirmed via `ollama run llama3.2:3b` and the API at `localhost:11434`)
- Main session model override works correctly (user can chat with different models)
- Only isolated session creation paths are affected
- Bug persists across OpenClaw updates (2026.2.6-3 → 2026.2.9)
This bug prevents users from using free local models for background tasks, resulting in unexpected API costs.