feat: add reasoning_effort parameter support for gpt-oss models #731
- add reasoning effort UI support across all components
Walkthrough

Adds reasoningEffort and verbosity settings across UI and config. Updates GPT-OSS model defaults/provider settings (temperatures, token/context limits, reasoning flags). Wires reasoningEffort into the Ollama chat payload as reasoning_effort. Extends ChatConfig, NewThread, TitleView, and ModelConfigDialog to surface and persist these options.
Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant User
  participant ChatConfig
  participant TitleView
  participant Store as Chat Store
  participant Presenter as Config Presenter
  participant Provider as OllamaProvider
  participant API as Ollama API
  User->>ChatConfig: change reasoningEffort / verbosity
  ChatConfig-->>TitleView: emit update events
  TitleView->>Store: persist chatConfig (includes new fields)
  TitleView->>Presenter: getModelDefaultConfig(modelId, providerId)
  Presenter-->>TitleView: return defaults (reasoningEffort, verbosity)
  User->>TitleView: create thread / send message
  TitleView->>Provider: handleChatCompletion(config incl. reasoningEffort)
  Provider->>API: POST chat { options.reasoning_effort, num_predict, ... }
  API-->>Provider: response
  Provider-->>TitleView: deliver message/result
```
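The Provider→API step in the diagram can be sketched in isolation. This is an illustrative helper only — the `buildOllamaOptions` name and the `ChatOptions` shape are assumptions, not code from the PR: the `reasoning_effort` key is spread into the options object only when a value is set, so models without reasoning support never receive the key.

```typescript
// Allowed reasoning effort levels, mirroring the union used throughout this PR.
type ReasoningEffort = 'minimal' | 'low' | 'medium' | 'high'

// Assumed option shape; the official ollama Options type may differ.
interface ChatOptions {
  num_predict?: number
  temperature?: number
  reasoning_effort?: ReasoningEffort
}

const buildOllamaOptions = (
  maxTokens: number,
  temperature: number,
  reasoningEffort?: ReasoningEffort
): ChatOptions => ({
  num_predict: maxTokens,
  temperature,
  // Spread the field only when present so the key is absent for unsupported models.
  ...(reasoningEffort !== undefined ? { reasoning_effort: reasoningEffort } : {})
})
```

With this pattern, serializing the options for a model without a configured effort level produces a payload that simply omits `reasoning_effort`.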
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Actionable comments posted: 4
🧹 Nitpick comments (6)
src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts (1)
495-497: Conditional wiring of reasoning_effort looks good; verify Ollama type support

Passing reasoning_effort through options only when present is correct. However, the official Ollama Options type may not declare reasoning_effort. If strict TS fails here, augment the Ollama types or narrow-cast options to avoid type errors.

If you hit TS errors, prefer declaration merging over broad any-casts. Type augmentation (new file suggestion, e.g., src/types/ollama.d.ts):

```ts
declare module 'ollama' {
  interface Options {
    reasoning_effort?: 'minimal' | 'low' | 'medium' | 'high'
  }
}
```

src/renderer/src/components/ChatConfig.vue (1)
92-96: Gating UI by presence of reasoningEffort is pragmatic; consider capability-based gating later

Using presence of the prop to gate visibility works. For future robustness, consider gating off a capability flag (e.g., supportsReasoningEffort) from model config rather than the current value, if you ever allow “unset” states.
src/renderer/src/components/TitleView.vue (2)
285-297: Race-safety: guard async loadModelConfig against rapid model/provider flips

The watcher awaits loadModelConfig after detecting model/provider changes. If the user switches models quickly, late responses can overwrite newer state. Consider a simple request token to ignore stale responses.

Example approach (outside this range):

```ts
let lastLoadToken = 0
const loadModelConfig = async () => {
  const token = ++lastLoadToken
  // ... fetch
  if (token !== lastLoadToken) return // ignore stale
  // ... commit state
}
```
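The token guard can be exercised with a small self-contained sketch. `fetchConfig`, `currentModelConfig`, and the delays below are stand-ins for the real presenter call and component state, not code from TitleView.vue:

```typescript
let lastLoadToken = 0
let currentModelConfig: string | null = null

// Simulated async config fetch with a configurable delay (placeholder for the presenter call).
const fetchConfig = (modelId: string, delayMs: number): Promise<string> =>
  new Promise((resolve) => setTimeout(() => resolve(`config:${modelId}`), delayMs))

const loadModelConfig = async (modelId: string, delayMs: number): Promise<void> => {
  const token = ++lastLoadToken
  const config = await fetchConfig(modelId, delayMs)
  // A newer load started while we were awaiting; drop this stale result.
  if (token !== lastLoadToken) return
  currentModelConfig = config
}
```

Even though a slow request started first resolves last, its result is discarded because a newer token exists, so the state always reflects the most recent selection.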
366-369: Replace setTimeout with nextTick to avoid magic delays

A fixed 100ms delay is brittle. Use nextTick to run after initial render, or call loadModelConfig immediately since you already watch model/provider changes.

Apply:

```diff
- setTimeout(async () => {
-   await loadModelConfig()
- }, 100)
+ await nextTick()
+ await loadModelConfig()
```

Also add nextTick to imports:

```diff
-import { onMounted, onUnmounted, ref, watch, computed } from 'vue'
+import { onMounted, onUnmounted, ref, watch, computed, nextTick } from 'vue'
```

src/renderer/src/components/settings/ModelConfigDialog.vue (2)
136-166: Use English for comments in Vue templates

Template comments should follow the guideline to use English.

Apply:

```diff
- <!-- 推理努力程度 -->
+ <!-- Reasoning effort -->
```
497-500: Feature detection via 'in' may cause UI jank; tighten the gate

Because this component initializes config with a default that always includes reasoningEffort, supportsReasoningEffort will be true briefly for unsupported models until loadConfig completes. Prefer a more robust gate (e.g., metadata/provider capability or hasOwnProperty on the loaded config).

Option A (quick improvement):

```diff
-const supportsReasoningEffort = computed(() => {
-  return 'reasoningEffort' in config.value
-})
+const supportsReasoningEffort = computed(() => {
+  return Object.prototype.hasOwnProperty.call(config.value, 'reasoningEffort')
+})
```

Option B (recommended if available): query capability from settingsStore/configPresenter using modelId/providerId and derive supportsReasoningEffort from model metadata rather than the current config shape.
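For context, a tiny standalone demonstration of why the two checks differ: the `in` operator also matches keys inherited from the prototype chain, while `Object.prototype.hasOwnProperty.call` matches only keys set on the object itself. (The objects below are illustrative; note that an own-property check alone does not fix the default-initialization issue described above — Option B addresses that.)

```typescript
// A defaults object placed on the prototype chain of a loaded config.
const defaults = { reasoningEffort: 'medium' }
const loadedConfig: Record<string, unknown> = Object.create(defaults)

// 'in' sees the inherited key; hasOwnProperty does not.
const viaIn = 'reasoningEffort' in loadedConfig
const viaOwn = Object.prototype.hasOwnProperty.call(loadedConfig, 'reasoningEffort')

// Once the key is set directly on the object, both checks agree.
loadedConfig.reasoningEffort = 'high'
const viaOwnAfterSet = Object.prototype.hasOwnProperty.call(loadedConfig, 'reasoningEffort')
```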
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- src/main/presenter/configPresenter/modelDefaultSettings.ts (1 hunks)
- src/main/presenter/configPresenter/providerModelSettings.ts (1 hunks)
- src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts (1 hunks)
- src/renderer/src/components/ChatConfig.vue (5 hunks)
- src/renderer/src/components/NewThread.vue (4 hunks)
- src/renderer/src/components/TitleView.vue (4 hunks)
- src/renderer/src/components/settings/ModelConfigDialog.vue (2 hunks)
🧰 Additional context used
📓 Path-based instructions (17)
**/*.{ts,tsx,js,jsx,vue}
📄 CodeRabbit Inference Engine (CLAUDE.md)
Use English for logs and comments
Files:
- src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
- src/main/presenter/configPresenter/modelDefaultSettings.ts
- src/renderer/src/components/NewThread.vue
- src/renderer/src/components/settings/ModelConfigDialog.vue
- src/renderer/src/components/TitleView.vue
- src/main/presenter/configPresenter/providerModelSettings.ts
- src/renderer/src/components/ChatConfig.vue
**/*.{ts,tsx}
📄 CodeRabbit Inference Engine (CLAUDE.md)
Strict type checking enabled for TypeScript
**/*.{ts,tsx}: Always use try-catch to handle possible errors
Provide meaningful error messages
Log detailed error information
Degrade gracefully
Logs should include a timestamp, log level, error code, error description, stack trace (if applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement an error retry mechanism
Avoid logging sensitive information
Use structured logging
Set appropriate log levels
Files:
- src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
- src/main/presenter/configPresenter/modelDefaultSettings.ts
- src/main/presenter/configPresenter/providerModelSettings.ts
src/main/**/*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
Main to Renderer: Use EventBus to broadcast events via mainWindow.webContents.send()
Use Electron's built-in APIs for file system and native dialogs
Files:
src/main/presenter/llmProviderPresenter/providers/ollamaProvider.tssrc/main/presenter/configPresenter/modelDefaultSettings.tssrc/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/**/*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
One presenter per functional domain
Files:
src/main/presenter/llmProviderPresenter/providers/ollamaProvider.tssrc/main/presenter/configPresenter/modelDefaultSettings.tssrc/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/llmProviderPresenter/providers/*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
src/main/presenter/llmProviderPresenter/providers/*.ts: Create provider file in src/main/presenter/llmProviderPresenter/providers/ when adding a new LLM provider
Implement coreStream method following standardized event interface in LLM provider files
src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (`tool_call_star...
Files:
src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
**/*.{js,jsx,ts,tsx}
📄 CodeRabbit Inference Engine (.cursor/rules/development-setup.mdc)
**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English
Files:
- src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
- src/main/presenter/configPresenter/modelDefaultSettings.ts
- src/main/presenter/configPresenter/providerModelSettings.ts
src/{main,renderer}/**/*.ts
📄 CodeRabbit Inference Engine (.cursor/rules/electron-best-practices.mdc)
src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging
Files:
- src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
- src/main/presenter/configPresenter/modelDefaultSettings.ts
- src/main/presenter/configPresenter/providerModelSettings.ts
src/main/**/*.{ts,js,tsx,jsx}
📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)
Main process code lives in src/main
Files:
- src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
- src/main/presenter/configPresenter/modelDefaultSettings.ts
- src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/configPresenter/**/*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
Centralize configuration in configPresenter/
Files:
- src/main/presenter/configPresenter/modelDefaultSettings.ts
- src/main/presenter/configPresenter/providerModelSettings.ts
src/renderer/src/**/*.vue
📄 CodeRabbit Inference Engine (CLAUDE.md)
src/renderer/src/**/*.vue: Use Composition API for all Vue 3 components
Use Tailwind CSS with scoped styles for styling
Organize components by feature in src/renderer/src/
Follow existing component patterns in src/renderer/src/ when creating new UI components
Use Composition API with proper TypeScript typing for new UI components
Implement responsive design with Tailwind CSS for new UI components
Add proper error handling and loading states for new UI components
Use scoped styles to prevent CSS conflicts between components
Files:
- src/renderer/src/components/NewThread.vue
- src/renderer/src/components/settings/ModelConfigDialog.vue
- src/renderer/src/components/TitleView.vue
- src/renderer/src/components/ChatConfig.vue
src/renderer/src/**/*.{ts,tsx,vue}
📄 CodeRabbit Inference Engine (CLAUDE.md)
src/renderer/src/**/*.{ts,tsx,vue}: Use Pinia for frontend state management
Renderer to Main: Use usePresenter.ts composable for direct presenter method calls
Files:
- src/renderer/src/components/NewThread.vue
- src/renderer/src/components/settings/ModelConfigDialog.vue
- src/renderer/src/components/TitleView.vue
- src/renderer/src/components/ChatConfig.vue
src/renderer/src/**/*
📄 CodeRabbit Inference Engine (.cursor/rules/i18n.mdc)
src/renderer/src/**/*: All user-facing strings must use i18n keys (avoid hardcoded user-visible text in code)
Use the 'vue-i18n' framework for all internationalization in the renderer
Ensure all user-visible text in the renderer uses the translation system
Files:
- src/renderer/src/components/NewThread.vue
- src/renderer/src/components/settings/ModelConfigDialog.vue
- src/renderer/src/components/TitleView.vue
- src/renderer/src/components/ChatConfig.vue
src/renderer/**/*.{vue,ts,js,tsx,jsx}
📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)
Renderer process code lives in src/renderer
Files:
- src/renderer/src/components/NewThread.vue
- src/renderer/src/components/settings/ModelConfigDialog.vue
- src/renderer/src/components/TitleView.vue
- src/renderer/src/components/ChatConfig.vue
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}
📄 CodeRabbit Inference Engine (.cursor/rules/vue-best-practices.mdc)
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability
Implement proper state management with Pinia
Utilize Vue Router for navigation and route management
Leverage Vue's built-in reactivity system for efficient data handling
Files:
- src/renderer/src/components/NewThread.vue
- src/renderer/src/components/settings/ModelConfigDialog.vue
- src/renderer/src/components/TitleView.vue
- src/renderer/src/components/ChatConfig.vue
src/renderer/**/*.{ts,tsx,vue}
📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)
src/renderer/**/*.{ts,tsx,vue}: Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
Use TypeScript for all code; prefer types over interfaces.
Avoid enums; use const objects instead.
Use arrow functions for methods and computed properties.
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements.
Files:
- src/renderer/src/components/NewThread.vue
- src/renderer/src/components/settings/ModelConfigDialog.vue
- src/renderer/src/components/TitleView.vue
- src/renderer/src/components/ChatConfig.vue
src/renderer/**/*.{vue,ts}
📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)
Implement lazy loading for routes and components.
Files:
- src/renderer/src/components/NewThread.vue
- src/renderer/src/components/settings/ModelConfigDialog.vue
- src/renderer/src/components/TitleView.vue
- src/renderer/src/components/ChatConfig.vue
src/renderer/**/*.{ts,vue}
📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)
src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching.
Implement SEO best practices using Nuxt's useHead and useSeoMeta.
Files:
- src/renderer/src/components/NewThread.vue
- src/renderer/src/components/settings/ModelConfigDialog.vue
- src/renderer/src/components/TitleView.vue
- src/renderer/src/components/ChatConfig.vue
🧠 Learnings (1)
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield reasoning events in the standardized format when applicable.
Applied to files:
src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build-check (x64)
🔇 Additional comments (15)
src/main/presenter/configPresenter/modelDefaultSettings.ts (2)
674-682: Add reasoningEffort default for gpt-oss-120b: aligned with PR objective
- temperature/context/maxTokens updates are consistent
- reasoning enabled with a sane default reasoningEffort: 'medium'
No further issues.
686-694: Add reasoningEffort default for gpt-oss-20b: aligned and consistent with 120B

Good consistency across both GPT-OSS variants. This enables the UI to surface reasoning effort without extra gating logic.
src/main/presenter/configPresenter/providerModelSettings.ts (2)
380-388: Provider override for gpt-oss:20b includes reasoningEffort and aligns with defaults
- temperature/context/maxTokens match the new defaults
- reasoning: true and reasoningEffort: 'medium' included
This ensures getProviderSpecificModelConfig returns reasoningEffort to providers.
392-400: Provider override for gpt-oss:120b mirrors the 20B setup correctly

Consistent with model defaults and guarantees the reasoningEffort flag reaches the provider layer.
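As an aside, the defaults-plus-override relationship described in these comments can be sketched as a shallow merge. The shapes, values, and `resolveConfig` helper below are illustrative assumptions; the real resolution logic lives in configPresenter and may differ:

```typescript
type ModelConfig = {
  temperature: number
  contextLength: number
  maxTokens: number
  reasoning: boolean
  reasoningEffort?: 'minimal' | 'low' | 'medium' | 'high'
}

// Hypothetical model-level defaults (values are placeholders, not the PR's numbers).
const modelDefaults: ModelConfig = {
  temperature: 1.0,
  contextLength: 8192,
  maxTokens: 4096,
  reasoning: true,
  reasoningEffort: 'medium'
}

// A provider override only needs to state the fields it changes.
const providerOverride: Partial<ModelConfig> = { temperature: 0.8 }

// Later keys win, so overrides replace defaults while unset fields pass through.
const resolveConfig = (
  defaults: ModelConfig,
  override: Partial<ModelConfig>
): ModelConfig => ({ ...defaults, ...override })
```

This shows why a provider entry that mirrors the defaults and adds reasoningEffort guarantees the flag survives resolution regardless of which layer supplies it.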
src/renderer/src/components/ChatConfig.vue (5)
12-18: Select UI imports added correctly

Imports match the usage in the template; no issues.

31-33: New props: reasoningEffort and verbosity added with precise typing

Props are optional and use tight unions, which is ideal for UI controls.

42-44: Emits for reasoningEffort and verbosity correctly typed

Event names align with v-model usage in parents (Vue templates will kebab-case normalize).
295-341: Reasoning Effort UI: solid implementation with i18n and typed value passthrough
- Uses i18n keys
- Emits correctly cast unions
- Keeps UI minimal and consistent
Looks good.
342-382: Verbosity UI (GPT-5 only): correct conditional and event wiring

The gating with isGPT5Model and the presence of the verbosity prop prevents accidental exposure to non-GPT-5 models. No issues.
src/renderer/src/components/NewThread.vue (3)
93-95: v-model bindings for reasoningEffort/verbosity are correct

Bindings match child emits (Vue normalizes camelCase emits to kebab-case listeners in DOM templates).

162-164: Local state types for reasoningEffort/verbosity are precise

Union types reflect available options and allow undefined initial state. Good.

182-184: Initialize from model defaults: good propagation path

Pulling config.reasoningEffort/config.verbosity from configPresenter ensures ChatConfig renders options appropriately.
src/renderer/src/components/TitleView.vue (3)
68-69: Emits handlers added appropriately

Handlers for update:reasoning-effort and update:verbosity are wired and scoped to local setters. Good.

216-223: Setters for reasoningEffort/verbosity are fine

Typed union signatures are clear and keep assignments constrained. No issues.

59-60: Props & Emits Alignment Verified

ChatConfig.vue correctly defines the following, so the kebab-case bindings in TitleView.vue will map as intended:

- Props:
  - reasoningEffort?: 'minimal' | 'low' | 'medium' | 'high'
  - verbosity?: 'low' | 'medium' | 'high'
- Emits:
  - update:reasoningEffort → [value: 'minimal' | 'low' | 'medium' | 'high']
  - update:verbosity → [value: 'low' | 'medium' | 'high']

No further changes required.
```diff
  reasoningEffort: reasoningEffort.value,
  verbosity: verbosity.value,
  enabledMcpTools: chatStore.chatConfig.enabledMcpTools
- })
+ } as any)
  console.log('threadId', threadId, activeModel.value)
```
🛠️ Refactor suggestion
Avoid any-casting the createThread payload; extend thread settings instead
Casting to any masks type drift and risks lost persistence/validation downstream. Prefer updating the shared Thread settings/type to include reasoningEffort and verbosity so they’re first-class citizens across the app.
Example follow-ups:
- Extend the thread creation DTO and settings model in @shared to include:
- reasoningEffort?: 'minimal' | 'low' | 'medium' | 'high'
- verbosity?: 'low' | 'medium' | 'high'
- Plumb these through chatStore persistence and presentation.
Short-term (if types aren’t ready), constrain the cast to a narrower subset to avoid polluting unrelated properties:
```diff
- } as any)
+ } as unknown as {
+   providerId: string
+   modelId: string
+   systemPrompt: string
+   temperature: number
+   contextLength: number
+   maxTokens: number
+   artifacts: 0 | 1
+   enabledMcpTools: string[]
+   reasoningEffort?: 'minimal' | 'low' | 'medium' | 'high'
+   verbosity?: 'low' | 'medium' | 'high'
+ })
```
```ts
const reasoningEffort = ref((chatStore.chatConfig as any).reasoningEffort)
const verbosity = ref((chatStore.chatConfig as any).verbosity)
```
🛠️ Refactor suggestion
Avoid any; add proper types for reasoningEffort and verbosity
Using any on chatStore.chatConfig weakens type-safety and can hide bugs. Type these refs explicitly and read from a typed Partial instead.
Apply:
```diff
-const reasoningEffort = ref((chatStore.chatConfig as any).reasoningEffort)
-const verbosity = ref((chatStore.chatConfig as any).verbosity)
+type ReasoningEffort = 'minimal' | 'low' | 'medium' | 'high'
+type Verbosity = 'low' | 'medium' | 'high'
+const chatCfg = chatStore.chatConfig as Partial<{
+  reasoningEffort: ReasoningEffort
+  verbosity: Verbosity
+}>
+const reasoningEffort = ref<ReasoningEffort | undefined>(chatCfg.reasoningEffort)
+const verbosity = ref<Verbosity | undefined>(chatCfg.verbosity)
```
```ts
// 获取模型配置来初始化默认值
const loadModelConfig = async () => {
  const modelId = chatStore.chatConfig.modelId
  const providerId = chatStore.chatConfig.providerId
  if (modelId && providerId) {
    try {
      const config = await configPresenter.getModelDefaultConfig(modelId, providerId)
      if (config.reasoningEffort !== undefined) {
        reasoningEffort.value = config.reasoningEffort
      } else {
        reasoningEffort.value = undefined
      }
      if (config.verbosity !== undefined) {
        verbosity.value = config.verbosity
      } else {
        verbosity.value = undefined
      }
    } catch (error) {
      console.error('Failed to load model config:', error)
    }
  }
}
```
💡 Verification agent
🧩 Analysis chain
Use English for comments; also ensure getModelDefaultConfig signature is used consistently
- Comment is currently in Chinese; guidelines require English for logs/comments.
- You now call getModelDefaultConfig(modelId, providerId) here. Elsewhere in this file (Line 359) it’s still called with a single argument, which can break if the signature changed.
Update the comment and verify all call sites:
```diff
-// 获取模型配置来初始化默认值
+// Load model config to initialize default values
```

Check all usages:
🏁 Script executed:
```bash
#!/bin/bash
# Find all usages of getModelDefaultConfig and show surrounding lines
rg -n -A 3 -B 1 'getModelDefaultConfig\s*\('
```

Length of output: 2556
🏁 Script executed:
```bash
#!/bin/bash
# Inspect getModelConfig implementation to see how it handles missing providerId
rg -n -A5 -B2 'getModelConfig' src/main/presenter/configPresenter/index.ts
```

Length of output: 1039
Translate comment to English and retain existing calls
- In src/renderer/src/components/TitleView.vue (at the top of loadModelConfig), replace
  // 获取模型配置来初始化默认值
  with
  // Load model config to initialize default values
- No edits are required for other getModelDefaultConfig calls (e.g. line 359) since its signature still accepts an optional providerId.
🤖 Prompt for AI Agents
In src/renderer/src/components/TitleView.vue around lines 110 to 132, replace
the Chinese comment "// 获取模型配置来初始化默认值" at the top of loadModelConfig with the
English comment "// Load model config to initialize default values" and make no
other code changes; leave all getModelDefaultConfig calls (including other
occurrences) as-is since their signatures remain unchanged.
| [ | ||
| temperature, | ||
| contextLength, | ||
| maxTokens, | ||
| systemPrompt, | ||
| artifacts, | ||
| thinkingBudget, | ||
| reasoningEffort, | ||
| verbosity | ||
| ], | ||
| ([ | ||
| newTemp, | ||
| newContext, | ||
| newMaxTokens, | ||
| newSystemPrompt, | ||
| newArtifacts, | ||
| newThinkingBudget, | ||
| newReasoningEffort, | ||
| newVerbosity | ||
| ]) => { | ||
| if ( | ||
| newTemp !== chatStore.chatConfig.temperature || | ||
| newContext !== chatStore.chatConfig.contextLength || | ||
| newMaxTokens !== chatStore.chatConfig.maxTokens || | ||
| newSystemPrompt !== chatStore.chatConfig.systemPrompt || | ||
| newArtifacts !== chatStore.chatConfig.artifacts || | ||
| newThinkingBudget !== chatStore.chatConfig.thinkingBudget | ||
| newThinkingBudget !== chatStore.chatConfig.thinkingBudget || | ||
| newReasoningEffort !== (chatStore.chatConfig as any).reasoningEffort || | ||
| newVerbosity !== (chatStore.chatConfig as any).verbosity | ||
| ) { | ||
| chatStore.updateChatConfig({ | ||
| temperature: newTemp, | ||
| contextLength: newContext, | ||
| maxTokens: newMaxTokens, | ||
| systemPrompt: newSystemPrompt, | ||
| artifacts: newArtifacts, | ||
| thinkingBudget: newThinkingBudget | ||
| }) | ||
| thinkingBudget: newThinkingBudget, | ||
| reasoningEffort: newReasoningEffort, | ||
| verbosity: newVerbosity | ||
| } as any) | ||
| } | ||
| } | ||
| ) |
🛠️ Refactor suggestion
Don’t push undefined fields into chatConfig; build a partial update object
Updating the store with undefined for optional fields can introduce keys with undefined values and break “feature detection” (presence-based gating). Build a patch object that only includes defined optional fields.
Apply:
```diff
 ]) => {
   if (
     newTemp !== chatStore.chatConfig.temperature ||
     newContext !== chatStore.chatConfig.contextLength ||
     newMaxTokens !== chatStore.chatConfig.maxTokens ||
     newSystemPrompt !== chatStore.chatConfig.systemPrompt ||
     newArtifacts !== chatStore.chatConfig.artifacts ||
     newThinkingBudget !== chatStore.chatConfig.thinkingBudget ||
     newReasoningEffort !== (chatStore.chatConfig as any).reasoningEffort ||
     newVerbosity !== (chatStore.chatConfig as any).verbosity
   ) {
-    chatStore.updateChatConfig({
-      temperature: newTemp,
-      contextLength: newContext,
-      maxTokens: newMaxTokens,
-      systemPrompt: newSystemPrompt,
-      artifacts: newArtifacts,
-      thinkingBudget: newThinkingBudget,
-      reasoningEffort: newReasoningEffort,
-      verbosity: newVerbosity
-    } as any)
+    const patch: Record<string, unknown> = {
+      temperature: newTemp,
+      contextLength: newContext,
+      maxTokens: newMaxTokens,
+      systemPrompt: newSystemPrompt,
+      artifacts: newArtifacts
+    }
+    if (newThinkingBudget !== undefined) patch.thinkingBudget = newThinkingBudget
+    if (newReasoningEffort !== undefined) patch.reasoningEffort = newReasoningEffort
+    if (newVerbosity !== undefined) patch.verbosity = newVerbosity
+    chatStore.updateChatConfig(patch as any)
   }
 }
)
```
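The patch-building idea in the diff can be exercised in isolation with a small helper. Names and field types here are illustrative stand-ins rather than the store's real types:

```typescript
type ChatConfigPatch = Record<string, unknown>

// Copy required fields as-is, but include optional fields only when they are
// actually defined, so presence-based feature detection downstream keeps working.
const buildConfigPatch = (
  required: ChatConfigPatch,
  optional: ChatConfigPatch
): ChatConfigPatch => {
  const patch: ChatConfigPatch = { ...required }
  for (const [key, value] of Object.entries(optional)) {
    if (value !== undefined) patch[key] = value
  }
  return patch
}
```

An optional field left undefined never appears as a key in the resulting patch, which is exactly what presence-based gating relies on.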
🤖 Prompt for AI Agents
In src/renderer/src/components/TitleView.vue around lines 230 to 272, the
watcher currently calls chatStore.updateChatConfig with all fields even when
some are undefined; instead construct a partial patch object that always
includes required fields (temperature, contextLength, maxTokens, systemPrompt)
and only adds optional fields (artifacts, thinkingBudget, reasoningEffort,
verbosity) when their new values are !== undefined, then pass that patch to
updateChatConfig so undefined keys are never written into chatConfig.
* fix: add AlertDialogDescription to resolve accessibility warning (#706)
* fix: resolve focus flicker when creating new windows with Ctrl+Shift+N (#707)
* feat: enhance window management by implementing main window ID handling (#709)
* docs: update zhipu developer doc website link (#715)
* refactor: better translate (#716)
* chore: en-us i18n
* chore(i18n): polish ja-JP translations across UI; keep chat.input.placeholder unchanged
* chore(i18n): polish fr-FR translations; keep chat.input.placeholder unchanged
* chore(i18n): refine fr-FR MCP & Settings copy; idiomatic, concise, brand-consistent
* chore(i18n): polish ru-RU translations across UI; keep chat.input.placeholder unchanged
* chore(i18n): polish fa-IR translations across UI; keep chat.input.placeholder unchanged
* chore: fix format
* chore: fix i18n
* chore: lock rolldown-vite version
* feat: add GPT-5 series model support (#717)
* ci(vite): bundle the main file into a single file to speed up loading (#718)
* fix(math): parser by upgrade vue-renderer-markdown (#722)
* chore: bump deps (#721)
* fix: rolldown-vite 7.1.0 and duckdb bundle issue
* chore: back to vite
* chore: update electron
* chore: update versions
* fix: add scrollable support to PopoverContent to prevent overflow (#720)
* feat: implement floating chat window system with performance optimization (#724)
* feat: add mcp sync and modelscope provider #615 (#723)
* wip: add modelscope provider
* feat: add mcp sync to modelscope
* chore: i18n and format
* feat: better style
* fix: mcp tool display
* fix: move_files newPath parse issue (#725)
* fix: move_files newPath parameter calculation rule
* fix: move_files must check whether dest is a directory or a file before moving
* feat: add Claude Opus 4.1 to anthropic default model list (#726)
* feat: add mcprouter's MCP marketplace api support (#727)
* wip: add mcp market
* feat: mcp market install
* wip: mcp install status sync
* feat: mcp server config mask
* chore: remove working doc
* chore: add translate
* feat: add ESC key to close floating chat window (#728)
* feat: add floating button position persistence with boundary validation (#729)
* feat: refactor floating button to use electron-window-state
* chore: bump to 0.3.0
* feat: add reasoning_effort parameter support for gpt-oss models (#731)
* fix: preserve user reasoning effort settings and improve display logic
* fix: artifacts code not streaming (#732)
* fix: artifact react load failed
* chore: remove log
* fix: format
* feat: disable automatic model enabling for better UX (#734)
* feat: sync provider sorting from settings to model selection (#736)
* feat: refactor ModelSelect to use computed providers for better reactivity

Co-authored-by: yyhhyyyyyy <yyhhyyyyyy8@gmail.com>
Co-authored-by: hllshiro <40970081+hllshiro@users.noreply.github.com>
Co-authored-by: tomsun28 <tomsun28@outlook.com>
Co-authored-by: gongchao <chao.gong@aminer.cn>
Co-authored-by: Simon He <57086651+Simon-He95@users.noreply.github.com>
Co-authored-by: wanna <wanna.w@binarywalk.com>
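Per this PR's title, the chat-level `reasoningEffort` setting is forwarded to Ollama's chat endpoint as `options.reasoning_effort` for gpt-oss models. A sketch of the resulting request body (the model name and `num_predict` value are placeholders, not values from the PR):

```typescript
// Illustrative Ollama /api/chat request body: the UI-level `reasoningEffort`
// setting maps onto `options.reasoning_effort` in the payload.
const reasoningEffort = 'high' // 'low' | 'medium' | 'high'

const body = {
  model: 'gpt-oss:20b', // placeholder model tag
  messages: [{ role: 'user', content: 'Explain quicksort briefly.' }],
  options: {
    reasoning_effort: reasoningEffort,
    num_predict: 4096 // placeholder token limit
  }
}
```

Because `options` is a plain key-value bag in the Ollama chat payload, no provider-SDK changes are needed beyond adding the key when a value is set.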
Summary by CodeRabbit

New Features
- Reasoning effort and verbosity settings are surfaced in the chat configuration UI and persisted with the chat config.

Chores
- Updated GPT-OSS model defaults and provider settings (temperatures, token/context limits, reasoning flags).