feat: per-model thinkingDefault config and /think default directive #20458

kmixter wants to merge 1 commit into openclaw:main
Conversation
```ts
// Re-resolve thinking default if the model changed (per-model thinkingDefault support).
const newProvider = sessionEntry.providerOverride ?? provider;
const newModel = sessionEntry.modelOverride ?? model;
if (newProvider !== provider || newModel !== model) {
  const catalog = await loadModelCatalog({ config: cfg });
  resolvedDefaultThinkLevel =
    (sessionEntry?.thinkingLevel as ThinkLevel | undefined) ??
    resolveThinkingDefault({ cfg, provider: newProvider, model: newModel, catalog });
}
```
**per-model thinking default only re-resolved in directive-only path, not when directives are combined with message content**

The re-resolution logic here only applies when the user sends a directive-only message (e.g., just `/model openai/gpt-4`). When directives are combined with content (e.g., `/model openai/gpt-4 what is 2+2?`), the code takes the `hasAnyDirective` path (line 217), which uses `applyInlineDirectivesFastLane`. That path calls `resolveCurrentDirectiveLevels`, which invokes `modelState.resolveDefaultThinkingLevel()`, but this closure was created with the *original* provider/model values and won't reflect the per-model default for the newly switched model.

This means users who switch models inline with their message won't get the correct per-model thinking default until their next message.

Consider moving the re-resolution logic to a shared location that both paths use, or updating the `resolveDefaultThinkingLevel` closure to check for provider/model overrides in the session entry.
Already handled correctly. The `resolveDefaultThinkingLevel` closure in the fast-lane path (line 266) captures `sessionEntry` by reference and reads `sessionEntry.providerOverride`/`modelOverride` at call time. However, in this path the resolved thinking level is used as the "before" value passed into `handleDirectiveOnly` — it represents the current level before directives are applied, which is the correct input.

The actual thinking level for the LLM inference is resolved downstream after `handleDirectiveOnly` has mutated `sessionEntry` with the new model. The fast-lane path correctly propagates the updated provider/model values (lines 273-275), so the downstream thinking resolution uses the new model context.

The re-resolution on line 190 in the directive-only path exists specifically because that path builds a status reply (lines 198-217) showing the new thinking level — the inline-with-content path does not build a status reply, so it does not need that early re-resolution.
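The capture-by-reference behavior described above can be illustrated with a minimal sketch. All names here (`makeResolver`, `perModelDefaults`, the `SessionEntry` shape) are hypothetical stand-ins for the PR's real types; the point is only that a closure reading an object's fields at call time observes mutations made after the closure was created.

```typescript
type ThinkLevel = "off" | "low" | "high";

interface SessionEntry {
  providerOverride?: string;
  modelOverride?: string;
}

// Hypothetical per-model defaults table standing in for the real catalog lookup.
const perModelDefaults: Record<string, ThinkLevel> = {
  "openai/gpt-4": "high",
};

// The returned closure captures `sessionEntry` by reference: the override
// fields are read on each invocation, not snapshotted at creation time.
function makeResolver(sessionEntry: SessionEntry, provider: string, model: string) {
  return (): ThinkLevel => {
    const p = sessionEntry.providerOverride ?? provider;
    const m = sessionEntry.modelOverride ?? model;
    return perModelDefaults[`${p}/${m}`] ?? "low";
  };
}

const entry: SessionEntry = {};
const resolve = makeResolver(entry, "anthropic", "claude-sonnet");
console.log(resolve()); // "low": no overrides yet

// Simulate a directive handler mutating the session entry mid-turn.
entry.providerOverride = "openai";
entry.modelOverride = "gpt-4";
console.log(resolve()); // "high": the same closure now sees the new model
```

This is why the fast-lane path does not need an explicit re-resolution step: the mutation performed by the directive handler is visible to the already-created closure.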
The concern raised by the Greptile bot (per-model thinking default only re-resolved in the directive-only path) has been addressed in the current revision.
Summary
Add per-model `thinkingDefault` to resolve the thinking level based on which model is active, so users running different models (e.g. gemini-3-pro for chat, gemini-3-flash for heartbeats) get appropriate thinking levels automatically without manually toggling `/think` per session.

Resolution priority: per-model → global → catalog auto-detect (existing fallback behavior unchanged). Config keys are normalized via `parseModelRef` so aliases like `anthropic/opus-4.6` resolve correctly.
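The priority chain above can be sketched as follows. The type shapes and the `catalogDetect` callback are hypothetical illustrations; only the per-model → global → catalog ordering is taken from this PR's description.

```typescript
type ThinkLevel = "off" | "low" | "medium" | "high";

// Hypothetical config shapes; the real ones live in the PR's config module.
interface ModelConfig { thinkingDefault?: ThinkLevel }
interface DefaultsConfig {
  thinkingDefault?: ThinkLevel;          // global default
  models?: Record<string, ModelConfig>;  // keyed by normalized "provider/model"
}

// Priority: per-model -> global -> catalog auto-detect. May yield undefined,
// in which case the existing fallback behavior applies downstream.
function resolveThinkingDefault(
  cfg: DefaultsConfig,
  provider: string,
  model: string,
  catalogDetect: (ref: string) => ThinkLevel | undefined,
): ThinkLevel | undefined {
  const ref = `${provider}/${model}`; // the real code normalizes via parseModelRef
  return (
    cfg.models?.[ref]?.thinkingDefault ??
    cfg.thinkingDefault ??
    catalogDetect(ref)
  );
}

// Usage with values from the config example below:
const cfg: DefaultsConfig = {
  thinkingDefault: "low",
  models: { "google/gemini-3-flash": { thinkingDefault: "high" } },
};
const noDetect = () => undefined;
console.log(resolveThinkingDefault(cfg, "google", "gemini-3-flash", noDetect)); // "high" (per-model)
console.log(resolveThinkingDefault(cfg, "google", "gemini-2.5-pro", noDetect)); // "low" (global)
```

Because each stage uses nullish coalescing, an empty per-model entry (like `"google/gemini-2.5-pro": {}` in the example config) transparently falls through to the global default.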
Also adds:

- `/think default` directive that cascades back to the per-model default
- model-switch re-resolution in both the directive-only and inline-with-content paths
Discussion: #20612
Related: #18152 (built independently, shares the per-model config concept
but adds /think default and model-switch re-resolution)
Config example

```json
{
  "agents": {
    "defaults": {
      "thinkingDefault": "low",
      "models": {
        "google/gemini-2.5-pro": {},
        "google/gemini-3-flash": { "thinkingDefault": "high" },
        "google/gemini-flash-lite-latest": { "thinkingDefault": "off" }
      }
    }
  }
}
```

Test plan

- `pnpm build` passes
- `pnpm check` passes (clean on changed files)
- `pnpm test` passes (456 tests, 47 files)
- `/think default` resets the level, and per-model config resolves correctly via `openclaw doctor`

AI-assisted (Claude Code), fully tested.