feat: add GPT-5 series model support #717
Conversation
Walkthrough

The changes introduce new model configuration parameters—

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant UI as UI (ChatConfig/ModelConfigDialog)
    participant ModelConfigHelper
    participant Provider
    participant API as OpenAI API
    User->>UI: Selects GPT-5 model and configures options
    UI->>ModelConfigHelper: Requests model config
    ModelConfigHelper->>Provider: Supplies config with reasoningEffort, verbosity, maxCompletionTokens (no temperature)
    Provider->>API: Sends request with max_completion_tokens, reasoning, and text parameters (no temperature)
    API-->>Provider: Returns response
    Provider-->>UI: Delivers response
    UI-->>User: Displays result
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Assessment against linked issues

Out-of-scope changes: No out-of-scope changes detected.

Possibly related PRs
Actionable comments posted: 16
🔭 Outside diff range comments (5)
src/renderer/src/i18n/ja-JP/settings.json (1)
130-133: Add missing maxCompletionTokens i18n entries in all settings.json

Our grep across src/renderer/src/i18n/*/settings.json confirms that maxCompletionTokens is not defined in any locale (only reasoningEffort and verbosity exist). To avoid fallback text when exposing the GPT-5 "Max completion tokens" control, please add these keys under modelConfig in every settings.json.

• Location:
  – src/renderer/src/i18n/*/settings.json
  – Inside the "modelConfig" object, immediately after the existing "maxTokens" block

• Example diff (ja-JP):

```diff
 "modelConfig": {
   ...
   "maxTokens": {
     "description": "モデルの単一出力に最大数のトークンを設定します",
     "label": "最大出力長"
   },
+  "maxCompletionTokens": {
+    "description": "モデルが単一の完了で生成できる最大トークン数(GPT-5 用)を設定します",
+    "label": "最大生成トークン数"
+  },
   ...
   "validation": {
     "maxTokensMax": "最大出力長は1000000を超えることはできません",
     "maxTokensMin": "最大出力長は0より大きくなければなりません",
     "maxTokensRequired": "最大出力長を空にすることはできません",
+    "maxCompletionTokensMax": "最大生成トークン数は1000000を超えることはできません",
+    "maxCompletionTokensMin": "最大生成トークン数は0より大きくなければなりません",
+    "maxCompletionTokensRequired": "最大生成トークン数を空にすることはできません",
     …
   }
 }
```

• After adding, re-run:

```bash
fd settings.json src/renderer/src/i18n | xargs -I{} rg -n "maxCompletionTokens" {}
```

to ensure every locale includes the new entries.
src/renderer/src/i18n/fr-FR/settings.json (1)
130-133: Add FR translations for the new GPT-5 "maxCompletionTokens" parameter

The UI now exposes a separate maxCompletionTokens setting (GPT-5). src/renderer/src/i18n/fr-FR/settings.json is missing its label/description and validation messages. Please:

• In fr-FR/settings.json, locate the "maxTokens" block (around line 130) and insert:

```diff
 "maxTokens": {
   "description": "Définissez le nombre maximum de jetons pour une seule sortie du modèle",
   "label": "Longueur de sortie maximale"
 },
+"maxCompletionTokens": {
+  "description": "Définissez le nombre maximal de jetons générés pour une seule complétion (GPT-5)",
+  "label": "Nombre maximal de jetons de complétion"
+},
```

• In the same file's "validation" section, add:

```diff
 "maxTokensRequired": "La longueur de sortie maximale ne peut pas être vide",
+"maxCompletionTokensMax": "Le nombre maximal de jetons de complétion ne peut pas dépasser 1000000",
+"maxCompletionTokensMin": "Le nombre maximal de jetons de complétion doit être supérieur à 0",
+"maxCompletionTokensRequired": "Le nombre maximal de jetons de complétion ne peut pas être vide",
```

• To keep all locales in sync, apply equivalent changes in each */settings.json.

src/renderer/src/i18n/zh-CN/settings.json (1)
122-125: Enhance i18n: add "maxCompletionTokens" entries across all language bundles

The new max_completion_tokens setting isn't present in any of the existing settings.json files—only maxTokens exists. To fully support the PR's "max_tokens → max_completion_tokens" change, you need to:

• In each locale under src/renderer/src/i18n/*/settings.json (en-US, fa-IR, fr-FR, ja-JP, ko-KR, ru-RU, zh-CN, zh-HK, zh-TW):
  - After the "maxTokens" block, insert a "maxCompletionTokens" section with label and description.
  - In the "validation" object, add the three new keys: maxCompletionTokensRequired, maxCompletionTokensMin, maxCompletionTokensMax

• For zh-CN (example):

```diff
 "maxTokens": {
   "label": "最大输出长度",
   "description": "设置模型单次输出的最大Token数量"
 },
+"maxCompletionTokens": {
+  "label": "最大补全长度",
+  "description": "设置模型单次补全生成的最大 Token 数量(GPT-5)"
+},
 …
 "validation": {
   "maxTokensRequired": "最大输出长度不能为空",
   "maxTokensMin": "最大输出长度必须大于0",
   "maxTokensMax": "最大输出长度不能超过1000000",
+  "maxCompletionTokensRequired": "最大补全长度不能为空",
+  "maxCompletionTokensMin": "最大补全长度必须大于0",
+  "maxCompletionTokensMax": "最大补全长度不能超过1000000",
   "contextLengthRequired": "上下文长度不能为空",
   …
 }
```

Please replicate the above structure (with appropriate translations) in all other locale files.
src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts (2)
223-244: max_completion_tokens not forwarded – GPT-5 requests will silently ignore token limit

requestParams only sets max_output_tokens. For GPT-5 you added reasoning/text but did not translate the new maxCompletionTokens field introduced in the typings. Result: the token limit is missing and the model falls back to its default, violating issue #713.

```diff
-  max_output_tokens: maxTokens,
+  ...(modelId.startsWith('gpt-5')
+    ? { max_completion_tokens: modelConfig.maxCompletionTokens ?? maxTokens }
+    : { max_output_tokens: maxTokens }),
```

Also, delete requestParams.temperature should be followed by delete requestParams.max_output_tokens when switching to max_completion_tokens to avoid sending both.
563-586: Same omission in streaming path

The streaming branch duplicates the earlier logic but again forgets max_completion_tokens. Please apply the same fix (and factor this GPT-5-specific patch into a helper to avoid drift).
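A minimal sketch of such a helper, assuming the field and parameter names shown in the diffs above (the helper name and placement are illustrative):

```ts
// Hypothetical helper shared by the OpenAI providers: one place decides
// which token-limit parameter a model family accepts.
interface TokenLimitParams {
  max_completion_tokens?: number
  max_output_tokens?: number
}

export const buildTokenLimitParams = (
  modelId: string,
  maxTokens: number,
  maxCompletionTokens?: number
): TokenLimitParams =>
  modelId.toLowerCase().startsWith('gpt-5')
    ? { max_completion_tokens: maxCompletionTokens ?? maxTokens }
    : { max_output_tokens: maxTokens }
```

Both the streaming and non-streaming request builders could then spread ...buildTokenLimitParams(modelId, maxTokens, modelConfig.maxCompletionTokens) instead of setting either field inline.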
♻️ Duplicate comments (1)
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (1)
538-544: Keep token-param logic consistent

Same branching duplicated here—once the helper above is introduced, replace this occurrence as well to guarantee both streaming and non-streaming paths stay aligned.
🧹 Nitpick comments (17)
src/renderer/src/components/ChatConfig.vue (1)
76-80: Consider centralising GPT-5 detection logic

isGPT5Model works, but locally duplicating the string check risks future drift (e.g., Azure "gpt-5-32k" vs. vendor-prefixed IDs). Move this into a small shared util such as isGPT5(id: string) so every layer (UI, presenter, request builder) reuses one canonical test.
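One possible shape for that util; the vendor-prefix handling is an assumption drawn from the Azure example above, not code in this PR:

```ts
// Hypothetical shared util (placement illustrative, e.g., somewhere under
// src/shared/): one canonical GPT-5 test that tolerates vendor-prefixed IDs.
export const isGPT5 = (id: string): boolean => {
  const normalized = id.toLowerCase()
  // Strip an optional "vendor/" prefix such as "azure/gpt-5-32k".
  const bare = normalized.slice(normalized.lastIndexOf('/') + 1)
  return bare.startsWith('gpt-5')
}
```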
src/main/presenter/configPresenter/modelConfig.ts (1)

166-170: Small nit: omit explicit undefined literals

Setting new fields to undefined is redundant—omitting the key produces the same result and keeps the object smaller.

```diff
-  reasoningEffort: undefined,
-  verbosity: undefined,
-  maxCompletionTokens: undefined
```

src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (1)
44-49: Duplicate & unsorted entries in reasoning-model list

The array now contains both o1-pro/o1-mini and bare o1, as well as four GPT-5 variants. Having duplicates is harmless at runtime but bloats memory scans and makes manual maintenance harder.

```diff
-  'o1-pro',
-  ...
-  'o1',
+  // keep one canonical entry per family
+  'o1', // covers o1-*, o1-preview, etc.
```

Consider using a Set or a RegExp instead of a growing literal list.
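For example, a family-prefix test along these lines (a sketch; the family list is inferred from the entries this comment mentions):

```ts
// Hypothetical replacement for the growing literal list: match model
// families by prefix instead of enumerating every dated variant.
const REASONING_MODEL_FAMILIES = ['o1', 'gpt-5'] // extend as new families ship

export const isReasoningModel = (modelId: string): boolean => {
  const id = modelId.trim().toLowerCase()
  return REASONING_MODEL_FAMILIES.some(
    (family) => id === family || id.startsWith(`${family}-`)
  )
}
```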
src/renderer/src/i18n/ja-JP/settings.json (2)

170-180: Localize option labels; avoid mixed English + Japanese in UI options

The structure is good. To improve UX consistency, translate option captions fully instead of mixing English level words.

```diff
 "options": {
-  "minimal": "Minimal - 最速応答",
-  "low": "Low - 低努力",
-  "medium": "Medium - 中程度の努力",
-  "high": "High - 高努力"
+  "minimal": "最小 - 最速の応答",
+  "low": "低 - 低い努力",
+  "medium": "中 - 中程度の努力",
+  "high": "高 - 高い努力"
 }
```
181-190: Localize verbosity option labels for consistency

Avoid English "Low/Medium/High" in end-user UI. Suggest fully localized options.

```diff
 "options": {
-  "low": "Low - 簡潔な回答",
-  "medium": "Medium - バランスの取れた詳細",
-  "high": "High - 詳細な回答"
+  "low": "低 - 簡潔な回答",
+  "medium": "中 - バランスの取れた詳細",
+  "high": "高 - 詳細な回答"
 }
```

src/renderer/src/i18n/fr-FR/settings.json (2)
170-180: Frenchify option labels; avoid mixed English in end-user UI

Keep the structure; translate the level words to French for better UX.

```diff
 "options": {
-  "minimal": "Minimal - Réponse la plus rapide",
-  "low": "Low - Faible effort",
-  "medium": "Medium - Effort modéré",
-  "high": "High - Effort élevé"
+  "minimal": "Minimal - Réponse la plus rapide",
+  "low": "Faible - Faible effort",
+  "medium": "Moyen - Effort modéré",
+  "high": "Élevé - Effort élevé"
 }
```
181-190: Localize verbosity option labels (FR)

Avoid English "Low/Medium/High".

```diff
 "options": {
-  "low": "Low - Réponses concises",
-  "medium": "Medium - Détail équilibré",
-  "high": "High - Réponses détaillées"
+  "low": "Faible - Réponses concises",
+  "medium": "Moyen - Détails équilibrés",
+  "high": "Élevé - Réponses détaillées"
 }
```

src/renderer/src/i18n/zh-CN/settings.json (2)
193-203: Localize option labels; avoid mixing Chinese and English

The structure is correct. For a consistent experience, use Chinese for all level names and make the wording more natural.

```diff
 "options": {
-  "minimal": "Minimal - 最快响应",
-  "low": "Low - 低努力",
-  "medium": "Medium - 中等努力",
-  "high": "High - 高努力"
+  "minimal": "最小 - 响应最快",
+  "low": "低 - 努力较低",
+  "medium": "中 - 努力适中",
+  "high": "高 - 努力较高"
 }
```
204-213: Localize verbosity option labels

Changing "Low/Medium/High" to Chinese reads more naturally for users.

```diff
 "options": {
-  "low": "Low - 简洁回答",
-  "medium": "Medium - 平衡详细",
-  "high": "High - 详细回答"
+  "low": "低 - 简洁回答",
+  "medium": "中 - 适度详细",
+  "high": "高 - 详细回答"
 }
```

src/renderer/src/i18n/zh-TW/settings.json (1)
181-190: Wording nit: make "Medium" option more natural

"Medium - 平衡詳細" is understandable but slightly awkward. Consider:

```diff
-  "medium": "Medium - 平衡詳細",
+  "medium": "Medium - 詳略均衡",
```

src/renderer/src/i18n/fa-IR/settings.json (1)
181-190: Polish: prefer neutral term over "پرگویی"

"پرگویی" can sound negative. "سطح جزئیات" is clearer for users. Also tweak the placeholder for consistency.

```diff
 "verbosity": {
-  "label": "پرگویی",
+  "label": "سطح جزئیات",
   "description": "سطح جزئیات و طول پاسخ‌های مدل را کنترل می‌کند",
-  "placeholder": "انتخاب سطح پرگویی",
+  "placeholder": "انتخاب سطح جزئیات",
   "options": {
     "low": "Low - پاسخ‌های مختصر",
     "medium": "Medium - جزئیات متعادل",
     "high": "High - پاسخ‌های تفصیلی"
   }
 },
```

src/renderer/src/i18n/zh-HK/settings.json (1)
181-190: Wording nit: "Medium - 平衡詳細"

Minor phrasing improvement for readability.

```diff
-  "medium": "Medium - 平衡詳細",
+  "medium": "Medium - 詳略均衡",
```

src/renderer/src/components/settings/ModelConfigDialog.vue (3)
55-56: Translate comments to English per renderer guidelines

Replace the Chinese comment with English to comply with "Use English for logs and comments". Apply:

```diff
- <!-- 温度 (GPT-5 系列模型不显示) -->
+ <!-- Temperature (hidden for GPT-5 series) -->
```

136-165: Translate comment to English

```diff
- <!-- GPT-5 系列模型的推理努力程度 -->
+ <!-- Reasoning effort (GPT-5 series only) -->
```

167-190: Translate comment to English

```diff
- <!-- GPT-5 系列模型的详细程度 -->
+ <!-- Verbosity (GPT-5 series only) -->
```

src/shared/presenter.d.ts (1)
1276-1292: Duplicate GPT-5 fields but no enforcement

The new GPT-5 properties were copied into DefaultModelSetting, yet code never references them. Consider extracting a shared Gpt5Specific interface and re-using it in both ModelConfig and DefaultModelSetting to avoid divergence.
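A sketch of that extraction (the Gpt5Specific name comes from this comment; the surrounding fields are abridged and partly assumed):

```ts
// Hypothetical shared interface for src/shared/presenter.d.ts: both config
// shapes extend it, so the GPT-5 fields live in exactly one place.
interface Gpt5Specific {
  reasoningEffort?: 'minimal' | 'low' | 'medium' | 'high'
  verbosity?: 'low' | 'medium' | 'high'
  maxCompletionTokens?: number // GPT-5 uses this instead of maxTokens
}

export interface ModelConfig extends Gpt5Specific {
  maxTokens?: number
  contextLength: number
  temperature?: number
  // ...remaining fields unchanged
}

export interface DefaultModelSetting extends Gpt5Specific {
  id: string
  name: string
  maxTokens?: number
  contextLength: number
  // ...remaining fields unchanged
}
```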
src/main/presenter/configPresenter/providerModelSettings.ts (1)

16-19: Comments must be in English per repo guidelines.

Replace the Chinese comment with an English one to pass lint/guidelines.

```diff
- // GPT-5 系列新参数
+ // GPT-5 series parameters (replaces max_tokens with max_completion_tokens; temperature unsupported)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (19)
src/main/presenter/configPresenter/modelConfig.ts (2 hunks)
src/main/presenter/configPresenter/modelDefaultSettings.ts (1 hunks)
src/main/presenter/configPresenter/providerModelSettings.ts (2 hunks)
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (3 hunks)
src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts (3 hunks)
src/main/presenter/threadPresenter/index.ts (1 hunks)
src/renderer/src/components/ChatConfig.vue (2 hunks)
src/renderer/src/components/NewThread.vue (2 hunks)
src/renderer/src/components/settings/ModelConfigDialog.vue (6 hunks)
src/renderer/src/i18n/en-US/settings.json (1 hunks)
src/renderer/src/i18n/fa-IR/settings.json (1 hunks)
src/renderer/src/i18n/fr-FR/settings.json (1 hunks)
src/renderer/src/i18n/ja-JP/settings.json (1 hunks)
src/renderer/src/i18n/ko-KR/settings.json (1 hunks)
src/renderer/src/i18n/ru-RU/settings.json (1 hunks)
src/renderer/src/i18n/zh-CN/settings.json (1 hunks)
src/renderer/src/i18n/zh-HK/settings.json (1 hunks)
src/renderer/src/i18n/zh-TW/settings.json (1 hunks)
src/shared/presenter.d.ts (3 hunks)
🧰 Additional context used
📓 Path-based instructions (20)
**/*.{ts,tsx,js,jsx,vue}
📄 CodeRabbit Inference Engine (CLAUDE.md)
Use English for logs and comments
Files:
src/renderer/src/components/NewThread.vue, src/renderer/src/components/ChatConfig.vue, src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts, src/main/presenter/threadPresenter/index.ts, src/shared/presenter.d.ts, src/renderer/src/components/settings/ModelConfigDialog.vue, src/main/presenter/configPresenter/modelDefaultSettings.ts, src/main/presenter/configPresenter/providerModelSettings.ts
src/renderer/src/**/*.vue
📄 CodeRabbit Inference Engine (CLAUDE.md)
src/renderer/src/**/*.vue: Use Composition API for all Vue 3 components
Use Tailwind CSS with scoped styles for styling
Organize components by feature in src/renderer/src/
Follow existing component patterns in src/renderer/src/ when creating new UI components
Use Composition API with proper TypeScript typing for new UI components
Implement responsive design with Tailwind CSS for new UI components
Add proper error handling and loading states for new UI components
Use scoped styles to prevent CSS conflicts between components
Files:
src/renderer/src/components/NewThread.vue, src/renderer/src/components/ChatConfig.vue, src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/src/**/*.{ts,tsx,vue}
📄 CodeRabbit Inference Engine (CLAUDE.md)
src/renderer/src/**/*.{ts,tsx,vue}: Use Pinia for frontend state management
Renderer to Main: Use usePresenter.ts composable for direct presenter method calls
Files:
src/renderer/src/components/NewThread.vue, src/renderer/src/components/ChatConfig.vue, src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/src/**/*
📄 CodeRabbit Inference Engine (.cursor/rules/i18n.mdc)
src/renderer/src/**/*: All user-facing strings must use i18n keys (avoid hardcoded user-visible text in code)
Use the 'vue-i18n' framework for all internationalization in the renderer
Ensure all user-visible text in the renderer uses the translation system
Files:
src/renderer/src/components/NewThread.vue, src/renderer/src/components/ChatConfig.vue, src/renderer/src/i18n/ko-KR/settings.json, src/renderer/src/i18n/en-US/settings.json, src/renderer/src/i18n/fa-IR/settings.json, src/renderer/src/i18n/zh-TW/settings.json, src/renderer/src/i18n/ru-RU/settings.json, src/renderer/src/i18n/ja-JP/settings.json, src/renderer/src/i18n/zh-HK/settings.json, src/renderer/src/i18n/zh-CN/settings.json, src/renderer/src/i18n/fr-FR/settings.json, src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.{vue,ts,js,tsx,jsx}
📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)
Renderer process code goes in src/renderer
Files:
src/renderer/src/components/NewThread.vue, src/renderer/src/components/ChatConfig.vue, src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}
📄 CodeRabbit Inference Engine (.cursor/rules/vue-best-practices.mdc)
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability
Implement proper state management with Pinia
Utilize Vue Router for navigation and route management
Leverage Vue's built-in reactivity system for efficient data handling
Files:
src/renderer/src/components/NewThread.vue, src/renderer/src/components/ChatConfig.vue, src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.{ts,tsx,vue}
📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)
src/renderer/**/*.{ts,tsx,vue}: Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
Use TypeScript for all code; prefer types over interfaces.
Avoid enums; use const objects instead.
Use arrow functions for methods and computed properties.
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements.
Files:
src/renderer/src/components/NewThread.vue, src/renderer/src/components/ChatConfig.vue, src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.{vue,ts}
📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)
Implement lazy loading for routes and components.
Files:
src/renderer/src/components/NewThread.vue, src/renderer/src/components/ChatConfig.vue, src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.{ts,vue}
📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)
src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching.
Implement SEO best practices using Nuxt's useHead and useSeoMeta.
Files:
src/renderer/src/components/NewThread.vue, src/renderer/src/components/ChatConfig.vue, src/renderer/src/components/settings/ModelConfigDialog.vue
**/*.{ts,tsx}
📄 CodeRabbit Inference Engine (CLAUDE.md)
Strict type checking enabled for TypeScript
**/*.{ts,tsx}: Always use try-catch to handle possible errors
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (where applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement error retry mechanisms
Avoid logging sensitive information
Use structured logging
Set appropriate log levels
Files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts, src/main/presenter/threadPresenter/index.ts, src/shared/presenter.d.ts, src/main/presenter/configPresenter/modelDefaultSettings.ts, src/main/presenter/configPresenter/providerModelSettings.ts
src/main/**/*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
Main to Renderer: Use EventBus to broadcast events via mainWindow.webContents.send()
Use Electron's built-in APIs for file system and native dialogs
Files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts, src/main/presenter/threadPresenter/index.ts, src/main/presenter/configPresenter/modelDefaultSettings.ts, src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/**/*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
One presenter per functional domain
Files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts, src/main/presenter/threadPresenter/index.ts, src/main/presenter/configPresenter/modelDefaultSettings.ts, src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/llmProviderPresenter/providers/*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
src/main/presenter/llmProviderPresenter/providers/*.ts: Create provider file in src/main/presenter/llmProviderPresenter/providers/ when adding a new LLM provider
Implement coreStream method following standardized event interface in LLM provider files
src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (`tool_call_star...
Files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
**/*.{js,jsx,ts,tsx}
📄 CodeRabbit Inference Engine (.cursor/rules/development-setup.mdc)
**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English
Files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts, src/main/presenter/threadPresenter/index.ts, src/shared/presenter.d.ts, src/main/presenter/configPresenter/modelDefaultSettings.ts, src/main/presenter/configPresenter/providerModelSettings.ts
src/{main,renderer}/**/*.ts
📄 CodeRabbit Inference Engine (.cursor/rules/electron-best-practices.mdc)
src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging
Files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts, src/main/presenter/threadPresenter/index.ts, src/main/presenter/configPresenter/modelDefaultSettings.ts, src/main/presenter/configPresenter/providerModelSettings.ts
src/main/**/*.{ts,js,tsx,jsx}
📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)
Main process code goes in src/main
Files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts, src/main/presenter/threadPresenter/index.ts, src/main/presenter/configPresenter/modelDefaultSettings.ts, src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/configPresenter/**/*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
Centralize configuration in configPresenter/
Files:
src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/configPresenter/modelDefaultSettings.ts, src/main/presenter/configPresenter/providerModelSettings.ts
src/shared/**/*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
Shared types in src/shared/
Files:
src/shared/presenter.d.ts
src/shared/*.d.ts
📄 CodeRabbit Inference Engine (.cursor/rules/electron-best-practices.mdc)
The shared/*.d.ts files are used to define the types of objects exposed by the main process to the renderer process
Files:
src/shared/presenter.d.ts
src/shared/**/*.{ts,tsx,d.ts}
📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)
Shared type definitions go in the shared directory
Files:
src/shared/presenter.d.ts
🧠 Learnings (35)
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Use provide/inject for dependency injection when appropriate.
Applied to files:
src/renderer/src/components/NewThread.vue
📚 Learning: 2025-07-21T01:47:28.817Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-best-practices.mdc:0-0
Timestamp: 2025-07-21T01:47:28.817Z
Learning: Applies to src/renderer/src/**/*.{vue,ts,tsx,js,jsx} : Leverage Vue's built-in reactivity system for efficient data handling
Applied to files:
src/renderer/src/components/NewThread.vue, src/renderer/src/components/ChatConfig.vue
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Leverage ref, reactive, and computed for reactive state management.
Applied to files:
src/renderer/src/components/NewThread.vue, src/renderer/src/components/ChatConfig.vue, src/renderer/src/components/settings/ModelConfigDialog.vue
📚 Learning: 2025-06-21T15:49:17.044Z
Learnt from: neoragex2002
PR: ThinkInAIXYZ/deepchat#550
File: src/renderer/src/stores/chat.ts:1011-1035
Timestamp: 2025-06-21T15:49:17.044Z
Learning: In src/renderer/src/stores/chat.ts, the user prefers to keep both `text` and `content` properties in the `handleMeetingInstruction` function's `sendMessage` call, even though they are redundant, rather than removing the `content` property.
Applied to files:
src/renderer/src/components/NewThread.vue, src/main/presenter/threadPresenter/index.ts
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Use <script setup> syntax for concise component definitions.
Applied to files:
src/renderer/src/components/NewThread.vue, src/renderer/src/components/ChatConfig.vue
📚 Learning: 2025-07-21T01:45:33.790Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/renderer/src/**/*.vue : Use Composition API with proper TypeScript typing for new UI components
Applied to files:
src/renderer/src/components/NewThread.vue, src/renderer/src/components/ChatConfig.vue, src/renderer/src/components/settings/ModelConfigDialog.vue
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Use Vue 3 with TypeScript, leveraging defineComponent and PropType.
Applied to files:
src/renderer/src/components/NewThread.vue, src/renderer/src/components/ChatConfig.vue
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Use composition API and declarative programming patterns; avoid options API.
Applied to files:
src/renderer/src/components/NewThread.vue, src/renderer/src/components/ChatConfig.vue, src/renderer/src/components/settings/ModelConfigDialog.vue
📚 Learning: 2025-07-21T01:47:03.479Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/pinia-best-practices.mdc:0-0
Timestamp: 2025-07-21T01:47:03.479Z
Learning: Applies to src/renderer/src/stores/**/*.{vue,ts,tsx,js,jsx} : Use getters for computed state properties
Applied to files:
src/renderer/src/components/ChatConfig.vue
📚 Learning: 2025-07-21T01:47:28.817Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-best-practices.mdc:0-0
Timestamp: 2025-07-21T01:47:28.817Z
Learning: Applies to src/renderer/src/**/*.{vue,ts,tsx,js,jsx} : Use the Composition API for better code organization and reusability
Applied to files:
src/renderer/src/components/ChatConfig.vue
📚 Learning: 2025-07-21T01:45:33.790Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/renderer/src/**/*.vue : Follow existing component patterns in src/renderer/src/ when creating new UI components
Applied to files:
src/renderer/src/components/ChatConfig.vue
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{ts,tsx,vue} : Use arrow functions for methods and computed properties.
Applied to files:
src/renderer/src/components/ChatConfig.vue
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
Applied to files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts, src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield reasoning events in the standardized format when applicable.
Applied to files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts, src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using `convertToProviderTools`) and included in the API request.
Applied to files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts, src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Applied to files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts, src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: 2025-07-21T01:45:33.790Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add provider configuration in configPresenter/providers.ts when adding a new LLM provider
Applied to files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts, src/main/presenter/configPresenter/modelDefaultSettings.ts, src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should implement helper methods such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed for provider-specific logic.
Applied to files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts, src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield tool call events (`tool_call_start`, `tool_call_chunk`, `tool_call_end`) in the standardized format.
Applied to files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts, src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield text events in the standardized format.
Applied to files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts, src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield stop events with appropriate `stop_reason` in the standardized format.
Applied to files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts, src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: 2025-07-21T01:45:33.790Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Implement coreStream method following standardized event interface in LLM provider files
Applied to files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:30.354Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-07-21T01:46:30.354Z
Learning: Applies to src/renderer/src/i18n/*.json : Maintain consistent structure across all translation files
Applied to files:
src/renderer/src/i18n/ko-KR/settings.json, src/renderer/src/i18n/en-US/settings.json, src/renderer/src/i18n/fa-IR/settings.json, src/renderer/src/i18n/zh-TW/settings.json, src/renderer/src/i18n/ru-RU/settings.json, src/renderer/src/i18n/ja-JP/settings.json, src/renderer/src/i18n/zh-HK/settings.json, src/renderer/src/i18n/zh-CN/settings.json, src/renderer/src/i18n/fr-FR/settings.json
📚 Learning: 2025-07-21T01:46:30.354Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-07-21T01:46:30.354Z
Learning: Applies to src/renderer/src/i18n/common.json : Shared translation keys must be placed in 'src/renderer/src/i18n/common.json'
Applied to files:
src/renderer/src/i18n/ko-KR/settings.json, src/renderer/src/i18n/en-US/settings.json, src/renderer/src/i18n/fa-IR/settings.json, src/renderer/src/i18n/zh-TW/settings.json, src/renderer/src/i18n/ru-RU/settings.json, src/renderer/src/i18n/ja-JP/settings.json, src/renderer/src/i18n/zh-HK/settings.json, src/renderer/src/i18n/zh-CN/settings.json, src/renderer/src/i18n/fr-FR/settings.json
📚 Learning: 2025-07-21T01:46:30.354Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-07-21T01:46:30.354Z
Learning: Applies to src/renderer/src/i18n/*.json : When adding new translations, add shared keys to 'common.json' and language-specific keys to the respective language file; keep all language files' keys consistent
Applied to files:
src/renderer/src/i18n/ko-KR/settings.json, src/renderer/src/i18n/en-US/settings.json, src/renderer/src/i18n/fa-IR/settings.json, src/renderer/src/i18n/zh-TW/settings.json, src/renderer/src/i18n/ru-RU/settings.json, src/renderer/src/i18n/ja-JP/settings.json, src/renderer/src/i18n/zh-HK/settings.json, src/renderer/src/i18n/zh-CN/settings.json, src/renderer/src/i18n/fr-FR/settings.json
📚 Learning: 2025-07-21T01:46:30.354Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-07-21T01:46:30.354Z
Learning: Applies to src/renderer/src/i18n/*.json : Regularly check for unused translation keys in i18n files
Applied to files:
src/renderer/src/i18n/ko-KR/settings.json, src/renderer/src/i18n/en-US/settings.json, src/renderer/src/i18n/fa-IR/settings.json, src/renderer/src/i18n/zh-TW/settings.json, src/renderer/src/i18n/ru-RU/settings.json, src/renderer/src/i18n/ja-JP/settings.json, src/renderer/src/i18n/zh-HK/settings.json, src/renderer/src/i18n/zh-CN/settings.json, src/renderer/src/i18n/fr-FR/settings.json
📚 Learning: 2025-07-21T01:46:30.354Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-07-21T01:46:30.354Z
Learning: Applies to src/renderer/src/i18n/*.json : Each language must have a separate JSON file in 'src/renderer/src/i18n/'
Applied to files:
src/renderer/src/i18n/ko-KR/settings.json, src/renderer/src/i18n/fa-IR/settings.json, src/renderer/src/i18n/zh-TW/settings.json, src/renderer/src/i18n/ru-RU/settings.json, src/renderer/src/i18n/ja-JP/settings.json, src/renderer/src/i18n/zh-HK/settings.json, src/renderer/src/i18n/zh-CN/settings.json, src/renderer/src/i18n/fr-FR/settings.json
📚 Learning: 2025-07-21T01:46:30.354Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-07-21T01:46:30.354Z
Learning: Applies to src/renderer/src/i18n/*.json : Translation key naming must use dot-separated hierarchy, lowercase letters, and meaningful descriptive names (e.g., 'common.button.submit')
Applied to files:
src/renderer/src/i18n/ko-KR/settings.json, src/renderer/src/i18n/en-US/settings.json, src/renderer/src/i18n/fa-IR/settings.json, src/renderer/src/i18n/ru-RU/settings.json, src/renderer/src/i18n/ja-JP/settings.json, src/renderer/src/i18n/zh-CN/settings.json, src/renderer/src/i18n/fr-FR/settings.json
📚 Learning: 2025-07-21T01:46:30.354Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-07-21T01:46:30.354Z
Learning: Applies to src/renderer/src/i18n/*.json : Language files must be located in 'src/renderer/src/i18n/' directory
Applied to files:
src/renderer/src/i18n/ko-KR/settings.json, src/renderer/src/i18n/ru-RU/settings.json, src/renderer/src/i18n/fr-FR/settings.json
📚 Learning: 2025-07-21T01:46:30.354Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-07-21T01:46:30.354Z
Learning: Applies to src/renderer/src/**/* : All user-facing strings must use i18n keys (avoid hardcoded user-visible text in code)
Applied to files:
src/renderer/src/i18n/ko-KR/settings.json, src/renderer/src/i18n/ru-RU/settings.json
📚 Learning: 2025-07-21T01:45:33.790Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/configPresenter/**/*.ts : Centralize configuration in configPresenter/
Applied to files:
src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/threadPresenter/index.ts, src/shared/presenter.d.ts, src/main/presenter/configPresenter/modelDefaultSettings.ts, src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.
Applied to files:
src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts, src/main/presenter/threadPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop in `llmProviderPresenter/index.ts` should handle multi-round LLM calls and tool usage, maintaining conversation state and controlling the loop with `needContinueConversation` and `toolCallCount`.
Applied to files:
src/main/presenter/threadPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : `src/main/presenter/llmProviderPresenter/index.ts` should manage the overall Agent loop, conversation history, tool execution via `McpPresenter`, and frontend communication via `eventBus`.
Applied to files:
src/main/presenter/threadPresenter/index.ts
📚 Learning: 2025-06-21T15:48:29.950Z
Learnt from: neoragex2002
PR: ThinkInAIXYZ/deepchat#550
File: src/main/presenter/mcpPresenter/inMemoryServers/meetingServer.ts:250-252
Timestamp: 2025-06-21T15:48:29.950Z
Learning: In the meeting server implementation (src/main/presenter/mcpPresenter/inMemoryServers/meetingServer.ts), when multiple tabs have the same title, the user prefers to let the code silently select the first match without adding warnings or additional ambiguity handling.
Applied to files:
src/main/presenter/threadPresenter/index.ts
🧬 Code Graph Analysis (1)
src/main/presenter/threadPresenter/index.ts (1)
src/main/presenter/configPresenter/modelDefaultSettings.ts (1)
defaultModelsSettings (3-1853)
🔇 Additional comments (11)
src/renderer/src/components/NewThread.vue (1)
398-400: Follow-up: rename maxTokens prop chain for GPT-5

The UI still binds maxTokens; the backend expects maxCompletionTokens for GPT-5. After adding the conditional payload (see above), audit the presenter layers so that the new field is accepted and validated.

src/main/presenter/threadPresenter/index.ts (1)
746-751: Do not default temperature; honor GPT‑5 compatibility and use maxCompletionTokens

Defaulting to 0.7 re-enables temperature for models that intentionally omit it (GPT‑5), contradicting issue #713 and the PR's intent. Also, prefer maxCompletionTokens over maxTokens when the model exposes it.

Apply this diff to gate temperature and switch tokens based on the model config:

```diff
 if (defaultModelsSettings) {
-  mergedSettings.maxTokens = defaultModelsSettings.maxTokens
-  mergedSettings.contextLength = defaultModelsSettings.contextLength
-  mergedSettings.temperature = defaultModelsSettings.temperature ?? 0.7
-  // 重置 thinkingBudget 为模型默认配置,如果模型配置中没有则设为 undefined
-  mergedSettings.thinkingBudget = defaultModelsSettings.thinkingBudget
+  mergedSettings.contextLength = defaultModelsSettings.contextLength
+  // GPT‑5 compatibility: prefer maxCompletionTokens and drop temperature entirely
+  const isGpt5Style =
+    (defaultModelsSettings as any).maxCompletionTokens !== undefined
+
+  if (isGpt5Style) {
+    delete (mergedSettings as any).maxTokens
+    ;(mergedSettings as any).maxCompletionTokens = (defaultModelsSettings as any).maxCompletionTokens
+    delete (mergedSettings as any).temperature
+  } else {
+    mergedSettings.maxTokens = defaultModelsSettings.maxTokens
+    if (
+      defaultModelsSettings.temperature !== undefined &&
+      defaultModelsSettings.temperature !== null
+    ) {
+      mergedSettings.temperature = defaultModelsSettings.temperature
+    } else {
+      delete mergedSettings.temperature
+    }
+  }
+  // Reset thinkingBudget to model default (or undefined if absent)
+  mergedSettings.thinkingBudget = defaultModelsSettings.thinkingBudget
 }
```

Additional follow-up outside this hunk (recommended):

- Guard application of user-provided settings and don't drop valid 0 values; ignore temperature for GPT‑5 models; accept maxCompletionTokens if provided.

```ts
// After computing mergedSettings and defaultModelsSettings:
const isGpt5Style = (defaultModelsSettings as any)?.maxCompletionTokens !== undefined
if (settings.artifacts !== undefined) {
  mergedSettings.artifacts = settings.artifacts
}
if (isGpt5Style) {
  if ((settings as any).maxCompletionTokens !== undefined) {
    (mergedSettings as any).maxCompletionTokens = (settings as any).maxCompletionTokens
  }
  // GPT‑5: temperature not supported
  delete (mergedSettings as any).temperature
  delete (mergedSettings as any).maxTokens
} else {
  if (settings.maxTokens !== undefined) {
    mergedSettings.maxTokens = settings.maxTokens
  }
  if (settings.temperature !== undefined) {
    mergedSettings.temperature = settings.temperature
  }
}
if (settings.contextLength !== undefined) {
  mergedSettings.contextLength = settings.contextLength
}
if (settings.systemPrompt !== undefined) {
  mergedSettings.systemPrompt = settings.systemPrompt
}
```

Verification (please run from repo root):

```bash
#!/bin/bash
set -euo pipefail
echo "Check that no default temperature is applied for GPT-5 models in presenter:"
rg -n "temperature.*\?\? 0\.7|temperature.*= .*0\.7" src/main/presenter -S || true
echo
echo "Locate usages of temperature/maxTokens/maxCompletionTokens to ensure GPT-5 paths ignore temperature and use completion tokens:"
rg -n "maxCompletionTokens|max_completion_tokens|maxTokens|temperature" -S -A 2 -B 2
echo
echo "Find stream/completion call-sites to confirm parameters passed:"
rg -n "startStreamCompletion\\(|generateCompletion\\(" -S -A 4 -B 2 src/main
echo
echo "Spot any hard-coded temperatures in flows (e.g., translate/askAI) that should be avoided for GPT-5:"
rg -n "startStreamCompletion\\(.*,[^,]+,[^,]+,[^,]+, *[0-9]*\\.?[0-9]+" -S -n src/main -A 0 -B 2 || true
```

Architecture note:

- Consider centralizing model-parameter compatibility (temperature/maxTokens vs. maxCompletionTokens) in configPresenter to avoid duplicating business rules in ThreadPresenter. This aligns with team learning to centralize configuration in configPresenter.
Likely an incorrect or invalid review comment.
src/main/presenter/configPresenter/modelConfig.ts (1)
145-149: Verify downstream handling of new GPT-5 fields

reasoningEffort, verbosity, and maxCompletionTokens are now injected into the default config, but most callers (UI, token counters, providers) still read maxTokens. Double-check that every component that relies on length limits or displays response-length settings has been adapted to prefer maxCompletionTokens for GPT-5 models; otherwise those paths will silently receive undefined.

If gaps exist, consider adding a helper such as:

```ts
export const getEffectiveMaxTokens = (cfg: ModelConfig) =>
  cfg.maxCompletionTokens ?? cfg.maxTokens;
```

src/renderer/src/i18n/en-US/settings.json (1)
170-190: Remember to sync new i18n keys across all languages

modelConfig.reasoningEffort.* and modelConfig.verbosity.* must be added to every locale file to avoid fallback-to-key issues at runtime. Run the usual "missing-keys" check script after merging.
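If such a script does not exist yet, a throwaway checker along these lines can diff each locale against en-US (the directory layout, one settings.json per locale under src/renderer/src/i18n/, is assumed from this repo):

```ts
// check-i18n-keys.ts: hypothetical helper that reports keys present in
// en-US/settings.json but missing from the other locales.
import { readdirSync, readFileSync } from 'node:fs'
import { join } from 'node:path'

const root = 'src/renderer/src/i18n'

// Flatten {"a":{"b":"x"}} into ["a.b"] so nested keys compare easily.
const flatten = (obj: Record<string, unknown>, prefix = ''): string[] =>
  Object.entries(obj).flatMap(([key, value]) =>
    value !== null && typeof value === 'object'
      ? flatten(value as Record<string, unknown>, `${prefix}${key}.`)
      : [`${prefix}${key}`]
  )

const loadKeys = (locale: string): Set<string> =>
  new Set(flatten(JSON.parse(readFileSync(join(root, locale, 'settings.json'), 'utf8'))))

const reference = loadKeys('en-US')
for (const locale of readdirSync(root)) {
  if (locale === 'en-US') continue
  const keys = loadKeys(locale)
  const missing = [...reference].filter((key) => !keys.has(key))
  if (missing.length > 0) console.log(`${locale}: missing ${missing.join(', ')}`)
}
```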
src/renderer/src/i18n/ko-KR/settings.json (2)

181-190: Verbosity section: LGTM

Keys and structure are consistent. Confirm presence across all locales alongside reasoningEffort.

170-180: All locales contain reasoningEffort keys

The new reasoningEffort keys have been verified in every settings.json under src/renderer/src/i18n. No missing keys detected—no further action needed.

src/renderer/src/i18n/ru-RU/settings.json (2)
181-190: Verbosity section: OK

Structure and keys match. Verify that the same keys exist in all locales.

170-180: Verified: reasoningEffort keys present in all locales

All src/renderer/src/i18n/**/settings.json files contain the full set of sub-keys (label, description, placeholder, options.minimal|low|medium|high). No further changes required.

src/renderer/src/components/settings/ModelConfigDialog.vue (3)
167-190: GPT-5 verbosity UI: looks correct

Conditional rendering and i18n usage align with the new model parameters.

492-496: GPT-5 detection: LGTM

Case-insensitive prefix check covers gpt-5, gpt-5o, gpt-5.1 variants. No issues.

136-165: GPT-5 reasoningEffort UI: translations verified in all locales

All settings.model.modelConfig.reasoningEffort.* keys exist in every */settings.json. The component's i18n bindings are correct—no missing translations. Ready to merge.
```ts
{
  id: 'gpt-5-chat',
  name: 'GPT-5 Chat',
  maxTokens: 16384,
  contextLength: 272000,
  match: ['gpt-5-chat', 'gpt-5-chat-latest'],
  vision: true,
  functionCall: false,
  reasoning: true,
  reasoningEffort: 'medium',
  verbosity: 'medium',
  maxCompletionTokens: 16384
},
{
  id: 'gpt-5-mini',
  name: 'GPT-5 Mini',
  maxTokens: 128000,
  contextLength: 272000,
  match: ['gpt-5-mini', 'gpt-5-mini-2025-08-07'],
  vision: true,
  functionCall: true,
  reasoning: true,
  reasoningEffort: 'medium',
  verbosity: 'medium',
  maxCompletionTokens: 128000
},
{
  id: 'gpt-5-nano',
  name: 'GPT-5 Nano',
  maxTokens: 128000,
  contextLength: 272000,
  match: ['gpt-5-nano', 'gpt-5-nano-2025-08-07'],
  vision: true,
  functionCall: true,
  reasoning: true,
  reasoningEffort: 'medium',
  verbosity: 'medium',
  maxCompletionTokens: 128000
},
{
  id: 'gpt-5',
  name: 'GPT-5',
  maxTokens: 128000,
  contextLength: 272000,
  match: ['gpt-5', 'gpt-5-2025-08-07'],
  vision: true,
  functionCall: true,
  reasoning: true,
  reasoningEffort: 'medium',
  verbosity: 'medium',
  maxCompletionTokens: 128000
},
```
💡 Verification agent
🧩 Analysis chain
Remove maxTokens for GPT‑5 defaults; keep only maxCompletionTokens.
To fully honor “replace max_tokens with max_completion_tokens” for GPT‑5, don’t set maxTokens here. This avoids UI/provider ambiguity. (Make maxTokens optional in DefaultModelSetting accordingly.)
```diff
{
id: 'gpt-5-chat',
name: 'GPT-5 Chat',
- maxTokens: 16384,
contextLength: 272000,
match: ['gpt-5-chat', 'gpt-5-chat-latest'],
vision: true,
functionCall: false,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 16384
},
{
id: 'gpt-5-mini',
name: 'GPT-5 Mini',
- maxTokens: 128000,
contextLength: 272000,
match: ['gpt-5-mini', 'gpt-5-mini-2025-08-07'],
vision: true,
functionCall: true,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 128000
},
{
id: 'gpt-5-nano',
name: 'GPT-5 Nano',
- maxTokens: 128000,
contextLength: 272000,
match: ['gpt-5-nano', 'gpt-5-nano-2025-08-07'],
vision: true,
functionCall: true,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 128000
},
{
id: 'gpt-5',
name: 'GPT-5',
- maxTokens: 128000,
contextLength: 272000,
match: ['gpt-5', 'gpt-5-2025-08-07'],
vision: true,
functionCall: true,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 128000
},
```

Please also verify that the shared types support maxCompletionTokens and do not require maxTokens for GPT‑5:
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo "Check shared types for maxCompletionTokens/maxTokens:"
rg -n --pretty "interface .*Model.*|type .*Model.*|maxCompletionTokens|maxTokens|temperature" src/shared
echo
echo "Ensure renderer uses maxCompletionTokens for GPT‑5 and hides temperature:"
rg -n -A3 -B3 --pretty "gpt-5|maxCompletionTokens|max_completion_tokens|temperature" src/renderer
```

Length of output: 19610
Enforce GPT-5 defaults: remove maxTokens, use only maxCompletionTokens
The current changes only adjust defaults in modelDefaultSettings.ts, but the shared types and renderer haven’t been updated. Please address the following:
• src/main/presenter/configPresenter/modelDefaultSettings.ts
Remove maxTokens entries for all GPT-5 models, e.g.:
```diff
{
id: 'gpt-5-chat',
name: 'GPT-5 Chat',
- maxTokens: 16384,
contextLength: 272000,
…
maxCompletionTokens: 16384
},
```

• src/shared/presenter.d.ts
Make maxTokens optional in both ModelConfig and DefaultModelSetting so GPT-5 can omit it:
```diff
export interface ModelConfig {
- maxTokens: number
+ maxTokens?: number
temperature?: number
maxCompletionTokens?: number // GPT-5 uses this instead of maxTokens
}
export interface DefaultModelSetting {
temperature?: number
- maxTokens: number
+ maxTokens?: number
maxCompletionTokens?: number // GPT-5 uses this instead of maxTokens
…
}
```

• Renderer updates (add maxCompletionTokens, hide temperature/maxTokens for GPT-5):
– stores/chat.ts: extend CONVERSATION_SETTINGS to include maxCompletionTokens, and in your initialization logic, pull config.maxCompletionTokens instead of config.maxTokens for GPT-5.
– components/ChatConfig.vue, NewThread.vue, TitleView.vue:
- Wrap the temperature and max-tokens controls in v-if="!isGPT5Model"
- Add a slider/input bound to maxCompletionTokens shown only when isGPT5Model
– ModelConfigDialog.vue already hides temperature for GPT-5; mirror similar logic for max-tokens vs. max-completion-tokens (see the sketch below).
After these changes, GPT-5 models will have a single, unambiguous completion-budget parameter across the app.
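A minimal sketch of that gating in a Composition API component, assuming a modelId prop, a local config ref, and an @shared path alias (all illustrative, not code from this PR):

```ts
// Hypothetical <script setup> excerpt for ChatConfig.vue / NewThread.vue.
import { computed, ref } from 'vue'
import type { ModelConfig } from '@shared/presenter' // alias assumed

const props = defineProps<{ modelId: string }>()
const config = ref<ModelConfig>({} as ModelConfig)

// One canonical GPT-5 test (ideally imported from the shared util).
const isGPT5Model = computed(() => props.modelId.toLowerCase().startsWith('gpt-5'))

// Back a single "completion budget" input with the right field:
// maxCompletionTokens for GPT-5, maxTokens otherwise.
const completionBudget = computed({
  get: () => (isGPT5Model.value ? config.value.maxCompletionTokens : config.value.maxTokens),
  set: (value) => {
    if (isGPT5Model.value) config.value.maxCompletionTokens = value
    else config.value.maxTokens = value
  }
})
```

The temperature slider then carries v-if="!isGPT5Model", and completionBudget drives one input for both model families.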
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In src/main/presenter/configPresenter/modelDefaultSettings.ts lines 868-919,
remove all maxTokens properties from GPT-5 model entries to enforce using only
maxCompletionTokens. Then, in src/shared/presenter.d.ts, update the ModelConfig
and DefaultModelSetting types to make maxTokens optional so GPT-5 models can
omit it. Next, in stores/chat.ts, extend CONVERSATION_SETTINGS to include
maxCompletionTokens and update initialization logic to use
config.maxCompletionTokens instead of config.maxTokens for GPT-5 models.
Finally, in components/ChatConfig.vue, NewThread.vue, and TitleView.vue,
conditionally hide temperature and maxTokens controls for GPT-5 models and add a
slider/input bound to maxCompletionTokens visible only for GPT-5, ensuring
ModelConfigDialog.vue mirrors this logic for consistency.
```ts
// GPT-5 系列新参数
reasoningEffort?: 'minimal' | 'low' | 'medium' | 'high'
verbosity?: 'low' | 'medium' | 'high'
maxCompletionTokens?: number // GPT-5 系列使用此参数替代 maxTokens
```
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Make maxTokens optional in schema to align with GPT‑5 (no max_tokens).
Keeping maxTokens required forces GPT‑5 entries to set it, increasing the chance of accidentally sending max_tokens. Make it optional so GPT‑5 can omit it, and let providers prefer maxCompletionTokens.
Proposed interface tweak (outside the selected lines) to unblock GPT‑5 objects omitting maxTokens:
export interface ProviderModelSetting {
id: string
name: string
match: string[]
maxTokens?: number // now optional; GPT‑5 uses maxCompletionTokens
contextLength: number
temperature?: number
vision?: boolean
functionCall?: boolean
reasoning?: boolean
type?: ModelType
// GPT‑5 series parameters
reasoningEffort?: 'minimal' | 'low' | 'medium' | 'high'
verbosity?: 'low' | 'medium' | 'high'
maxCompletionTokens?: number
}

Also ensure DefaultModelSetting/ModelConfig mirror this optionality.
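As a quick compile-time sanity check, a GPT-5 entry should now type-check without maxTokens. A sketch against the interface proposed above; the field values are illustrative:

// With maxTokens optional, this GPT-5 entry compiles without it.
const gpt5Entry: ProviderModelSetting = {
  id: 'gpt-5',
  name: 'GPT-5',
  match: ['gpt-5', 'gpt-5-2025-08-07'],
  contextLength: 272000,
  reasoning: true,
  reasoningEffort: 'medium',
  verbosity: 'medium',
  maxCompletionTokens: 128000
  // no maxTokens here; previously this omission was a type error
}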
🤖 Prompt for AI Agents
In src/main/presenter/configPresenter/providerModelSettings.ts around lines 16
to 19, the maxTokens property in the ProviderModelSetting interface should be
made optional to align with GPT-5 usage where max_tokens is not required. Update
the interface definition to mark maxTokens as optional by adding a question
mark, and ensure that related types like DefaultModelSetting and ModelConfig
also reflect this optionality to allow GPT-5 models to omit maxTokens and use
maxCompletionTokens instead.
models: [
  {
    id: 'gpt-5-chat',
    name: 'GPT-5 Chat',
    maxTokens: 16384,
    contextLength: 272000,
    match: ['gpt-5-chat', 'gpt-5-chat-latest'],
    vision: true,
    functionCall: false,
    reasoning: true,
    reasoningEffort: 'medium',
    verbosity: 'medium',
    maxCompletionTokens: 16384
  },
  {
    id: 'gpt-5-mini',
    name: 'GPT-5 Mini',
    maxTokens: 128000,
    contextLength: 272000,
    match: ['gpt-5-mini', 'gpt-5-mini-2025-08-07'],
    vision: true,
    functionCall: true,
    reasoning: true,
    reasoningEffort: 'medium',
    verbosity: 'medium',
    maxCompletionTokens: 128000
  },
  {
    id: 'gpt-5-nano',
    name: 'GPT-5 Nano',
    maxTokens: 128000,
    contextLength: 272000,
    match: ['gpt-5-nano', 'gpt-5-nano-2025-08-07'],
    vision: true,
    functionCall: true,
    reasoning: true,
    reasoningEffort: 'medium',
    verbosity: 'medium',
    maxCompletionTokens: 128000
  },
  {
    id: 'gpt-5',
    name: 'GPT-5',
    maxTokens: 128000,
    contextLength: 272000,
    match: ['gpt-5', 'gpt-5-2025-08-07'],
    vision: true,
    functionCall: true,
    reasoning: true,
    reasoningEffort: 'medium',
    verbosity: 'medium',
    maxCompletionTokens: 128000
  }
]
🛠️ Refactor suggestion
Avoid setting maxTokens for GPT‑5 models; rely on maxCompletionTokens.
To prevent accidental use of max_tokens, drop maxTokens in GPT‑5 entries (once the type is optional). Temperature is already omitted — good.
{
id: 'gpt-5-chat',
name: 'GPT-5 Chat',
- maxTokens: 16384,
contextLength: 272000,
match: ['gpt-5-chat', 'gpt-5-chat-latest'],
vision: true,
functionCall: false,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 16384
},
{
id: 'gpt-5-mini',
name: 'GPT-5 Mini',
- maxTokens: 128000,
contextLength: 272000,
match: ['gpt-5-mini', 'gpt-5-mini-2025-08-07'],
vision: true,
functionCall: true,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 128000
},
{
id: 'gpt-5-nano',
name: 'GPT-5 Nano',
- maxTokens: 128000,
contextLength: 272000,
match: ['gpt-5-nano', 'gpt-5-nano-2025-08-07'],
vision: true,
functionCall: true,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 128000
},
{
id: 'gpt-5',
name: 'GPT-5',
- maxTokens: 128000,
contextLength: 272000,
match: ['gpt-5', 'gpt-5-2025-08-07'],
vision: true,
functionCall: true,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 128000
}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
models: [
  {
    id: 'gpt-5-chat',
    name: 'GPT-5 Chat',
    contextLength: 272000,
    match: ['gpt-5-chat', 'gpt-5-chat-latest'],
    vision: true,
    functionCall: false,
    reasoning: true,
    reasoningEffort: 'medium',
    verbosity: 'medium',
    maxCompletionTokens: 16384
  },
  {
    id: 'gpt-5-mini',
    name: 'GPT-5 Mini',
    contextLength: 272000,
    match: ['gpt-5-mini', 'gpt-5-mini-2025-08-07'],
    vision: true,
    functionCall: true,
    reasoning: true,
    reasoningEffort: 'medium',
    verbosity: 'medium',
    maxCompletionTokens: 128000
  },
  {
    id: 'gpt-5-nano',
    name: 'GPT-5 Nano',
    contextLength: 272000,
    match: ['gpt-5-nano', 'gpt-5-nano-2025-08-07'],
    vision: true,
    functionCall: true,
    reasoning: true,
    reasoningEffort: 'medium',
    verbosity: 'medium',
    maxCompletionTokens: 128000
  },
  {
    id: 'gpt-5',
    name: 'GPT-5',
    contextLength: 272000,
    match: ['gpt-5', 'gpt-5-2025-08-07'],
    vision: true,
    functionCall: true,
    reasoning: true,
    reasoningEffort: 'medium',
    verbosity: 'medium',
    maxCompletionTokens: 128000
  }
]
🤖 Prompt for AI Agents
In src/main/presenter/configPresenter/providerModelSettings.ts between lines 26
and 79, the GPT-5 model entries incorrectly include the maxTokens property.
Remove the maxTokens field from all GPT-5 model objects to rely solely on
maxCompletionTokens as intended, ensuring the type allows maxTokens to be
optional.
  temperature: config.temperature, // keep optional; some models don't support it
  vision: config.vision || false,
  functionCall: config.functionCall || false,
  reasoning: config.reasoning || false,
- type: config.type || ModelType.Chat
+ type: config.type || ModelType.Chat,
+ reasoningEffort: config.reasoningEffort,
+ verbosity: config.verbosity,
+ maxCompletionTokens: config.maxCompletionTokens
  }
💡 Verification agent
🧩 Analysis chain
Guard against sending temperature for GPT‑5 and prefer maxCompletionTokens.
Even if upstream accidentally provides a temperature, GPT‑5 must not receive it. Also, ensure downstream logic prioritizes maxCompletionTokens for GPT‑5.
- temperature: config.temperature, // keep optional; some models don't support it
+ // GPT‑5 does not support temperature; drop it for GPT‑5 models
+ temperature: lowerModelId.includes('gpt-5') ? undefined : config.temperature,
vision: config.vision || false,
functionCall: config.functionCall || false,
reasoning: config.reasoning || false,
type: config.type || ModelType.Chat,
reasoningEffort: config.reasoningEffort,
verbosity: config.verbosity,
maxCompletionTokens: config.maxCompletionTokens

Run the script below to confirm no code path still defaults/forces temperature for GPT‑5 and that providers send max_completion_tokens:
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Search places that default temperature:"
rg -n --pretty "(default.*temperature|temperature\s*=\s*0\.7|temperature:\s*0\.7)"
echo
echo "Search GPT-5 handling in providers (ensure no temperature in payload):"
rg -n --pretty "gpt-5|gpt5|gpt_5" src | sed -n '1,200p'
echo
echo "Search payload keys used for token limits:"
rg -n --pretty "max_completion_tokens|maxCompletionTokens|max_tokens|maxTokens"
echo
echo "OpenAI provider request construction around token params:"
rg -n -A5 -B5 --pretty "max_completion_tokens|max_tokens" src/main/presenter/llmProviderPresenter

Length of output: 54394
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Extract OPENAI_REASONING_MODELS constant"
rg -n --pretty "OPENAI_REASONING_MODELS" -A5 src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.tsLength of output: 892
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Snippet around line 1580 in threadPresenter:"
sed -n '1550,1620p' src/main/presenter/threadPresenter/index.ts
echo
echo "Snippet around line 1640 in threadPresenter:"
sed -n '1620,1700p' src/main/presenter/threadPresenter/index.ts

Length of output: 5336
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "===== Top of providerModelSettings.ts ====="
sed -n '1,200p' src/main/presenter/configPresenter/providerModelSettings.ts
echo
echo "===== Mapping function around lines 2300–2500 ====="
sed -n '2300,2500p' src/main/presenter/configPresenter/providerModelSettings.ts

Length of output: 10143
Drop temperature for GPT-5 in providerModelSettings
The helper in src/main/presenter/configPresenter/providerModelSettings.ts still blindly returns config.temperature, which—even if undefined—could end up in payloads. Let’s explicitly omit it for any model ID containing “gpt-5” and rely on maxCompletionTokens instead.
Locations to update:
• getProviderSpecificModelConfig in src/main/presenter/configPresenter/providerModelSettings.ts (around the final return).
Suggested diff:
--- a/src/main/presenter/configPresenter/providerModelSettings.ts
+++ b/src/main/presenter/configPresenter/providerModelSettings.ts
@@ export function getProviderSpecificModelConfig(
- temperature: config.temperature, // keep optional; some models don't support it
+ // GPT-5 does not accept a temperature parameter—always drop it
+ temperature: lowerModelId.includes('gpt-5') ? undefined : config.temperature,
vision: config.vision || false,
functionCall: config.functionCall || false,
reasoning: config.reasoning || false,
type: config.type || ModelType.Chat,
reasoningEffort: config.reasoningEffort,
verbosity: config.verbosity,
maxCompletionTokens: config.maxCompletionTokens

This change ensures GPT-5 models never carry a temperature key, and downstream code will automatically use max_completion_tokens.
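A small regression test can lock this behavior in. This is only a sketch: it assumes a Vitest setup and that the helper is exported with roughly this call shape, which should be adjusted to the actual module:

// Sketch: GPT-5 configs must never carry temperature (assumed signature).
import { describe, expect, it } from 'vitest'
import { getProviderSpecificModelConfig } from './providerModelSettings'

describe('GPT-5 model config', () => {
  it('drops temperature and keeps maxCompletionTokens', () => {
    const config = getProviderSpecificModelConfig('openai', 'gpt-5-mini')
    expect(config.temperature).toBeUndefined()
    expect(config.maxCompletionTokens).toBeGreaterThan(0)
  })
})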
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// GPT-5 does not accept a temperature parameter—always drop it
temperature: lowerModelId.includes('gpt-5') ? undefined : config.temperature,
vision: config.vision || false,
functionCall: config.functionCall || false,
reasoning: config.reasoning || false,
type: config.type || ModelType.Chat,
reasoningEffort: config.reasoningEffort,
verbosity: config.verbosity,
maxCompletionTokens: config.maxCompletionTokens
}
🤖 Prompt for AI Agents
In src/main/presenter/configPresenter/providerModelSettings.ts around lines 2450
to 2458, update the getProviderSpecificModelConfig function to conditionally
exclude the temperature property when the model ID contains "gpt-5". Instead of
always including temperature from config, add a check to omit temperature for
GPT-5 models so it does not appear in the payload. Keep maxCompletionTokens as
the controlling parameter for GPT-5. This prevents sending an unsupported
temperature field for GPT-5 models.
...(modelId.startsWith('o1') ||
  modelId.startsWith('o3') ||
  modelId.startsWith('o4') ||
  modelId.startsWith('gpt-5')
    ? { max_completion_tokens: maxTokens }
    : { max_tokens: maxTokens })
}
🛠️ Refactor suggestion
DRY up token-limit parameter branching
The startsWith(...) ? { max_completion_tokens } : { max_tokens } logic now appears twice and will likely be copied again for future families.
Extracting into a helper prevents drift:
function getTokenParam(modelId: string, tokens: number) {
return modelId.startsWith('o1') ||
modelId.startsWith('o3') ||
modelId.startsWith('o4') ||
modelId.startsWith('gpt-5')
? { max_completion_tokens: tokens }
: { max_tokens: tokens };
}

Then:
- ...,
- ...(modelId.startsWith('o1') || ... ? { ... } : { ... })
+ ...,
+ ...getTokenParam(modelId, maxTokens)🤖 Prompt for AI Agents
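If more model families join later, a table-driven variant keeps the helper flat. A sketch of the same idea, not code from the PR:

// Table-driven variant: one prefix list instead of chained startsWith calls.
const COMPLETION_TOKEN_PREFIXES = ['o1', 'o3', 'o4', 'gpt-5']

function getTokenParam(modelId: string, tokens: number): Record<string, number> {
  const useCompletionParam = COMPLETION_TOKEN_PREFIXES.some((prefix) =>
    modelId.startsWith(prefix)
  )
  return useCompletionParam ? { max_completion_tokens: tokens } : { max_tokens: tokens }
}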
In src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
around lines 217 to 223, the logic for choosing between max_completion_tokens
and max_tokens based on modelId prefixes is duplicated. Refactor by extracting
this conditional into a helper function that takes modelId and token count as
parameters and returns the appropriate object. Replace the duplicated inline
logic with calls to this helper to improve maintainability and prevent code
duplication.
// Validate temperature (non-GPT-5 models only)
if (!isGPT5Model.value && config.value.temperature !== undefined) {
  if (config.value.temperature < 0) {
    errors.value.temperature = t('settings.model.modelConfig.validation.temperatureMin')
  } else if (config.value.temperature > 2) {
    errors.value.temperature = t('settings.model.modelConfig.validation.temperatureMax')
  }
}
🛠️ Refactor suggestion
Validation: add branch for GPT-5 maxCompletionTokens
Temperature gating is correct. Also validate maxCompletionTokens for GPT-5 (and skip maxTokens); otherwise that field is never validated.
Example adjustment to validateForm (outside this hunk):
// Validate max output length / completion tokens
if (isGPT5Model.value) {
const v = (config.value as any).maxCompletionTokens
if (!v || v <= 0) {
(errors.value as any).maxCompletionTokens = t('settings.model.modelConfig.validation.maxTokensMin')
} else if (v > 1_000_000) {
(errors.value as any).maxCompletionTokens = t('settings.model.modelConfig.validation.maxTokensMax')
}
} else {
if (!config.value.maxTokens || config.value.maxTokens <= 0) {
errors.value.maxTokens = t('settings.model.modelConfig.validation.maxTokensMin')
} else if (config.value.maxTokens > 1_000_000) {
errors.value.maxTokens = t('settings.model.modelConfig.validation.maxTokensMax')
}
}

Tip: When saving GPT-5, omit maxTokens from the payload to avoid sending unsupported params.
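A sketch of that save-time filtering; buildSavePayload is a hypothetical helper, not something the dialog defines today:

// Strip fields GPT-5 rejects before persisting or sending the config.
function buildSavePayload(
  isGPT5: boolean,
  config: { maxTokens?: number; temperature?: number; [key: string]: unknown }
): Record<string, unknown> {
  if (!isGPT5) return config
  // Destructure out the unsupported fields and keep the rest.
  const { maxTokens: _maxTokens, temperature: _temperature, ...rest } = config
  return rest
}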
🤖 Prompt for AI Agents
In src/renderer/src/components/settings/ModelConfigDialog.vue around lines 403
to 410, the validation logic currently only checks temperature for non-GPT-5
models and does not validate maxCompletionTokens for GPT-5 models. To fix this,
add a conditional branch in the validateForm method that validates
maxCompletionTokens when isGPT5Model.value is true, ensuring it is greater than
0 and less than or equal to 1,000,000, and sets the appropriate error messages.
For non-GPT-5 models, continue validating maxTokens as before. Also, ensure that
when saving GPT-5 configurations, maxTokens is omitted from the payload to avoid
sending unsupported parameters.
| "reasoningEffort": { | ||
| "label": "تلاش استدلال", | ||
| "description": "عمق استدلال مدل را کنترل میکند؛ تلاش بیشتر نتایج بهتری تولید میکند اما پاسخهای کندتری دارد", | ||
| "placeholder": "انتخاب تلاش استدلال", | ||
| "options": { | ||
| "minimal": "Minimal - سریعترین پاسخ", | ||
| "low": "Low - تلاش کم", | ||
| "medium": "Medium - تلاش متوسط", | ||
| "high": "High - تلاش زیاد" | ||
| } | ||
| }, |
💡 Verification agent
🧩 Analysis chain
Add i18n for maxCompletionTokens (GPT-5) and verify enum values
The GPT-5 parameter max_completion_tokens needs Persian labels/validation. Also, please confirm "minimal|low|medium|high" matches provider expectations.
Add under model.modelConfig and validation:
"maxTokens": {
"description": "حداکثر تعداد نشانه ها را برای یک خروجی واحد از مدل تنظیم کنید",
"label": "حداکثر طول خروجی"
},
+ "maxCompletionTokens": {
+ "description": "حداکثر تعداد توکنهای تکمیل در هر پاسخ مدل را تنظیم کنید (ویژه سری GPT‑5)",
+ "label": "حداکثر طول تکمیل"
+ },
"reasoning": {
"description": "آیا مدل از توانایی استدلال پشتیبانی می کند؟",
"label": "توانایی استدلال"
}, "validation": {
"contextLengthMax": "طول زمینه نمی تواند از 10000000 فراتر رود",
"contextLengthMin": "طول زمینه باید بیشتر از 0 باشد",
"contextLengthRequired": "طول زمینه نمی تواند خالی باشد",
"maxTokensMax": "حداکثر طول خروجی نمی تواند از 1000000 فراتر رود",
"maxTokensMin": "حداکثر طول خروجی باید بیشتر از 0 باشد",
"maxTokensRequired": "حداکثر طول خروجی نمی تواند خالی باشد",
+ "maxCompletionTokensMax": "حداکثر طول تکمیل نمیتواند از 1000000 فراتر رود",
+ "maxCompletionTokensMin": "حداکثر طول تکمیل باید بیشتر از 0 باشد",
+ "maxCompletionTokensRequired": "حداکثر طول تکمیل نمیتواند خالی باشد",
"temperatureMax": "دما باید کمتر از یا برابر با 2 باشد",
"temperatureMin": "دما باید بیشتر یا برابر با 0 باشد",
"temperatureRequired": "درجه حرارت نمی تواند خالی باشد"
},

🏁 Script executed:
#!/bin/bash
set -eux
# Ensure the Persian settings file exists
test -f src/renderer/src/i18n/fa-IR/settings.json
# Find where modelConfig starts in the Persian file
rg -n '"modelConfig"' src/renderer/src/i18n/fa-IR/settings.json
# Show context around the existing maxTokens entry
rg -n -C3 '"maxTokens"' src/renderer/src/i18n/fa-IR/settings.json
# Locate the validation block
rg -n '"validation"' src/renderer/src/i18n/fa-IR/settings.json
# Show context around validation messages for maxTokens
rg -n -C3 '"maxTokens"' src/renderer/src/i18n/fa-IR/settings.json
# Check if maxCompletionTokens is already present in any translation
rg -l '"maxCompletionTokens"' -n src/renderer/src/i18n
# For comparison, inspect the reasoningEffort enum in English settings
rg -n -C3 '"reasoningEffort"' src/renderer/src/i18n/en-US/settings.jsonLength of output: 1227
🏁 Script executed:
#!/bin/bash
set -eux
# Check for maxCompletionTokens in English settings
rg -n -C3 '"maxCompletionTokens"' src/renderer/src/i18n/en-US/settings.json || echo "maxCompletionTokens not found in en-US"
# Show reasoningEffort enum and its options in English settings for comparison
rg -n -C10 '"reasoningEffort"' src/renderer/src/i18n/en-US/settings.json

Length of output: 1258
Add Persian i18n entries for maxCompletionTokens in settings.json
Please update src/renderer/src/i18n/fa-IR/settings.json as follows:
• Under the modelConfig block (around line 130), insert:
"maxCompletionTokens": {
"description": "حداکثر تعداد توکنهای تکمیل در هر پاسخ مدل را تنظیم کنید (ویژه سری GPT-5)",
"label": "حداکثر طول تکمیل"
},

• Under the validation block (around line 210), insert:
"maxCompletionTokensMax": "حداکثر طول تکمیل نمیتواند از 1000000 فراتر رود",
"maxCompletionTokensMin": "حداکثر طول تکمیل باید بیشتر از 0 باشد",
"maxCompletionTokensRequired": "حداکثر طول تکمیل نمیتواند خالی باشد",The existing reasoningEffort options (minimal|low|medium|high) already match the English defaults in en-US/settings.json.
🤖 Prompt for AI Agents
In src/renderer/src/i18n/fa-IR/settings.json around line 130, add the
"maxCompletionTokens" entry with its Persian label and description inside the
"modelConfig" block. Then, around line 210, add the three validation messages
"maxCompletionTokensMax", "maxCompletionTokensMin", and
"maxCompletionTokensRequired" with their respective Persian translations inside
the "validation" block. This will properly localize the maxCompletionTokens
setting and its validation messages in Persian.
| "reasoningEffort": { | ||
| "label": "推理努力程度", | ||
| "description": "控制模型的推理深度,更高的努力程度會產生更好的結果但響應更慢", | ||
| "placeholder": "選擇推理努力程度", | ||
| "options": { | ||
| "minimal": "Minimal - 最快響應", | ||
| "low": "Low - 低努力", | ||
| "medium": "Medium - 中等努力", | ||
| "high": "High - 高努力" | ||
| } | ||
| }, |
💡 Verification agent
🧩 Analysis chain
Add i18n for maxCompletionTokens (GPT-5) and verify enum values
As with the other locales, zh-HK lacks maxCompletionTokens labels/validation. Please add them, and confirm that "minimal" is a reasoningEffort value the provider accepts.
"maxTokens": {
"description": "設置模型單次輸出的最大Token數量",
"label": "最大輸出長度"
},
+ "maxCompletionTokens": {
+ "description": "設置模型單次完成的最大 Token 數量(僅 GPT‑5 系列)",
+ "label": "最大完成長度"
+ },
"reasoning": {
"description": "模型是否支持推理能力",
"label": "推理能力"
}, "validation": {
"contextLengthMax": "上下文長度不能超過10000000",
"contextLengthMin": "上下文長度必須大於0",
"contextLengthRequired": "上下文長度不能為空",
"maxTokensMax": "最大輸出長度不能超過1000000",
"maxTokensMin": "最大輸出長度必須大於0",
"maxTokensRequired": "最大輸出長度不能為空",
+ "maxCompletionTokensMax": "最大完成長度不能超過1000000",
+ "maxCompletionTokensMin": "最大完成長度必須大於0",
+ "maxCompletionTokensRequired": "最大完成長度不能為空",
"temperatureMax": "溫度必須小於等於2",
"temperatureMin": "溫度必須大於等於0",
"temperatureRequired": "溫度不能為空"
},

🏁 Script executed:
#!/bin/bash
# 1. Check which locales already define maxCompletionTokens
rg -n '"maxCompletionTokens"' src/renderer/src/i18n
# 2. Inspect en-US file for the existing maxCompletionTokens and reasoningEffort blocks
sed -n '150,210p' src/renderer/src/i18n/en-US/settings.json
# 3. Locate where reasoningEffort enum is defined in the codebase
rg -n "reasoningEffort" -n src/renderer/srcLength of output: 5098
Add missing i18n entries for maxCompletionTokens in zh-HK/settings.json
Zh-HK (and in fact none of the locales) currently define the new maxCompletionTokens keys required for GPT-5 support. Please update src/renderer/src/i18n/zh-HK/settings.json as follows:
• After the existing “maxTokens” block:
"maxTokens": {
"description": "設置模型單次輸出的最大Token數量",
"label": "最大輸出長度"
},
+ "maxCompletionTokens": {
+ "description": "設置模型單次完成的最大 Token 數量(僅 GPT-5 系列)",
+ "label": "最大完成長度"
+ },
"reasoning": {• Inside the "validation" block:
"maxTokensRequired": "最大輸出長度不能為空",
+ "maxCompletionTokensMax": "最大完成長度不能超過1000000",
+ "maxCompletionTokensMin": "最大完成長度必須大於0",
+ "maxCompletionTokensRequired": "最大完成長度不能為空",
"temperatureMax": "溫度必須小於等於2",Finally, verify that the provider accepts "minimal" as a valid reasoningEffort enum (the UI lists minimal, low, medium, high). If "minimal" isn’t supported, please update the enum mapping across all locales accordingly.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In src/renderer/src/i18n/zh-HK/settings.json around lines 170 to 180, add the
missing i18n entries for maxCompletionTokens after the existing maxTokens block
and inside the validation block as required for GPT-5 support. Also, verify if
the provider supports "minimal" as a valid reasoningEffort enum value; if not,
update the enum mapping in all locale files to exclude or replace "minimal"
accordingly.
| "reasoningEffort": { | ||
| "label": "推理努力程度", | ||
| "description": "控制模型的推理深度,更高的努力程度會產生更好的結果但回應更慢", | ||
| "placeholder": "選擇推理努力程度", | ||
| "options": { | ||
| "minimal": "Minimal - 最快回應", | ||
| "low": "Low - 低努力", | ||
| "medium": "Medium - 中等努力", | ||
| "high": "High - 高努力" | ||
| } | ||
| }, |
💡 Verification agent
🧩 Analysis chain
Add i18n for maxCompletionTokens (GPT-5) and verify enum values
GPT-5 replaces max_tokens with max_completion_tokens. These zh-TW settings don’t define labels/validation for maxCompletionTokens, which may cause fallback or missing UI text. Also, please confirm the provider accepts the four reasoningEffort values including "minimal".
Add these entries under model.modelConfig and validation:
"maxTokens": {
"description": "設置模型單次輸出的最大Token數量",
"label": "最大輸出長度"
},
+ "maxCompletionTokens": {
+ "description": "設置模型單次完成的最大 Token 數量(僅 GPT‑5 系列)",
+ "label": "最大完成長度"
+ },
"reasoning": {
"description": "模型是否支持推理能力",
"label": "推理能力"
}, "validation": {
"contextLengthMax": "上下文長度不能超過10000000",
"contextLengthMin": "上下文長度必須大於0",
"contextLengthRequired": "上下文長度不能為空",
"maxTokensMax": "最大輸出長度不能超過1000000",
"maxTokensMin": "最大輸出長度必須大於0",
"maxTokensRequired": "最大輸出長度不能為空",
+ "maxCompletionTokensMax": "最大完成長度不能超過1000000",
+ "maxCompletionTokensMin": "最大完成長度必須大於0",
+ "maxCompletionTokensRequired": "最大完成長度不能為空",
"temperatureMax": "溫度必須小於等於2",
"temperatureMin": "溫度必須大於等於0",
"temperatureRequired": "溫度不能為空"
},

Run to verify usage and missing translations:
🏁 Script executed:
#!/bin/bash
# Find UI usage and ensure i18n exists
rg -n "maxCompletionTokens" src | sed -n '1,200p'
# Check translations across languages
rg -n '"maxCompletionTokens"' src/renderer/src/i18n | sed -n '1,200p'
# Check allowed values for reasoningEffort/verbosity
rg -n "reasoningEffort|verbosity" src/main src/shared src/renderer | sed -n '1,200p'Length of output: 9254
Add missing i18n entries for maxCompletionTokens in zh-TW settings
The renderer is already using maxCompletionTokens (GPT-5) but there are no translations defined in any locale. Please add these keys to src/renderer/src/i18n/zh-TW/settings.json (and mirror in other language files) under the existing settings.model.modelConfig and settings.validation sections:
• In settings.model.modelConfig:
"maxTokens": {
"description": "設置模型單次輸出的最大Token數量",
"label": "最大輸出長度"
},
+ "maxCompletionTokens": {
+ "description": "設置模型單次完成的最大 Token 數量(僅 GPT-5 系列)",
+ "label": "最大完成長度"
+ },
"reasoning": { … }• In settings.validation:
"maxTokensRequired": "最大輸出長度不能為空",
+ "maxCompletionTokensMax": "最大完成長度不能超過1000000",
+ "maxCompletionTokensMin": "最大完成長度必須大於0",
+ "maxCompletionTokensRequired": "最大完成長度不能為空",
"temperatureMax": "溫度必須小於等於2",• The four reasoningEffort values ("minimal" | "low" | "medium" | "high") are already supported by the provider and correctly translated in your snippet—no change needed there.
• Don’t forget to apply the same additions to the other locale files under src/renderer/src/i18n.
🤖 Prompt for AI Agents
In src/renderer/src/i18n/zh-TW/settings.json around lines 170 to 180, add
missing i18n entries for maxCompletionTokens under the
settings.model.modelConfig and settings.validation sections. Specifically,
define the label and description for maxCompletionTokens in
settings.model.modelConfig, and add appropriate validation messages in
settings.validation. After updating zh-TW, replicate these additions in all
other locale files under src/renderer/src/i18n to maintain consistency across
languages.
export interface ModelConfig {
  maxTokens: number
  contextLength: number
- temperature: number
+ temperature?: number
  vision: boolean
  functionCall: boolean
  reasoning: boolean
  type: ModelType
  // Whether this config is user-defined (true) or default config (false)
  isUserDefined?: boolean
  thinkingBudget?: number
  // New GPT-5 series parameters
  reasoningEffort?: 'minimal' | 'low' | 'medium' | 'high'
  verbosity?: 'low' | 'medium' | 'high'
  maxCompletionTokens?: number // GPT-5 series uses this instead of maxTokens
}
maxTokens is still mandatory – GPT-5 configs will have no valid value
maxTokens remains a required field even though GPT-5 models no longer accept a max_tokens style parameter.
Downstream code will be forced to populate an irrelevant value, increasing the chance the wrong field is sent to the provider.
- maxTokens: number
+ /** For GPT-3/4 style models. Optional for GPT-5. */
+ maxTokens?: number
+ /** For GPT-5 style models (replaces maxTokens). */
+ maxCompletionTokens?: number

The same relaxation is needed in DefaultModelSetting.
Failing to make the property optional breaks the PR objective “replace max_tokens with max_completion_tokens”.
Please update the typings and cascade the change through callers.
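On the caller side, the cascade mostly means tolerating the absent field. A minimal sketch of the read path, assuming the relaxed ModelConfig above; the fallback constant is illustrative:

// Prefer the GPT-5 budget, then the legacy field, then a fallback.
const DEFAULT_BUDGET = 4096 // illustrative default, not from the PR

function effectiveBudget(config: ModelConfig): number {
  return config.maxCompletionTokens ?? config.maxTokens ?? DEFAULT_BUDGET
}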
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In src/shared/presenter.d.ts around lines 120 to 135, the maxTokens property in
the ModelConfig interface is currently mandatory but should be optional to align
with GPT-5 models that use maxCompletionTokens instead. Update the maxTokens
property to be optional by adding a question mark, and similarly update the
DefaultModelSetting type to make maxTokens optional. Then, review and update all
callers and usages of ModelConfig and DefaultModelSetting to handle maxTokens as
optional, ensuring no irrelevant or incorrect maxTokens values are sent to the
provider.
* fix: add AlertDialogDescription to resolve accessibility warning (#706)
* fix: resolve focus flicker when creating new windows with Ctrl+Shift+N (#707)
* feat: enhance window management by implementing main window ID handling (#709)
* docs: update zhipu developer doc website link (#715)
* refactor: better translate (#716)
* chore: en-us i18n
* chore(i18n): polish ja-JP translations across UI; keep chat.input.placeholder unchanged
* chore(i18n): polish fr-FR translations; keep chat.input.placeholder unchanged
* chore(i18n): refine fr-FR MCP & Settings copy; idiomatic, concise, brand-consistent
* chore(i18n): polish ru-RU translations across UI; keep chat.input.placeholder unchanged
* chore(i18n): polish fa-IR translations across UI; keep chat.input.placeholder unchanged
* chore: fix format
* chore: fix i18n
* chore: lock rolldown-vite version
* feat: add GPT-5 series model support (#717)
* ci(vite): bundle the main file into a single file to speed up loading (#718)
* fix(math): parser by upgrade vue-renderer-markdown (#722)
* chore: bump deps (#721)
* fix: rolldown-vite 7.1.0 and duckdb bundle issue
* chore: back to vite
* chore: update electron
* chore: update versions
* fix: add scrollable support to PopoverContent to prevent overflow (#720)
* feat: implement floating chat window system with performance optimization (#724)
* feat: add mcp sync and modelscope provider #615 (#723)
* wip: add modelscope provider
* feat: add mcp sync to modelscope
* chore: i18n and format
* feat: better style
* fix: mcp tool display
* fix: move_files newPath parse issue (#725)
* fix: correct the move_files newPath computation rule
* fix: check whether dest is a directory or a file before moving in move_files
* feat: add Claude Opus 4.1 to anthropic default model list (#726)
* feat: add mcprouter's MCP marketplace api support (#727)
* wip: add mcp market
* feat: mcp market install
* wip: mcp install status sync
* feat: mcp server config mask
* chore: remove working doc
* chore: add translate
* feat: add ESC key to close floating chat window (#728)
* feat: add floating button position persistence with boundary validation (#729)
* feat: refactor floating button to use electron-window-state
* chore: bump to 0.3.0
* feat: add reasoning_effort parameter support for gpt-oss models (#731); add reasoning effort UI support across all components
* fix: preserve user reasoning effort settings and improve display logic
* fix: artifacts code not streaming (#732)
* fix: artifact react load failed
* chore: remove log
* fix: format
* feat: disable automatic model enabling for better UX (#734)
* feat: sync provider sorting from settings to model selection (#736)
* feat: refactor ModelSelect to use computed providers for better reactivity

Co-authored-by: yyhhyyyyyy <yyhhyyyyyy8@gmail.com>
Co-authored-by: hllshiro <40970081+hllshiro@users.noreply.github.com>
Co-authored-by: tomsun28 <tomsun28@outlook.com>
Co-authored-by: gongchao <chao.gong@aminer.cn>
Co-authored-by: Simon He <57086651+Simon-He95@users.noreply.github.com>
Co-authored-by: wanna <wanna.w@binarywalk.com>
close #713
add GPT-5 series model support
Summary by CodeRabbit
New Features
Localization
Bug Fixes
Improvements