feat: add support for new models #700
Conversation
Walkthrough
A large number of new AI model configurations were added to preset and provider settings for the Ollama provider, including OpenAI GPT-OSS, DeepSeek, Gemma, Qwen, Llama, Mistral, and many others. These changes expand the lists of available models in static configuration arrays, without modifying logic or control flow.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant SettingsUI
    participant ModelConfig
    User->>SettingsUI: Open Ollama provider settings
    SettingsUI->>ModelConfig: Load presetModels array
    ModelConfig-->>SettingsUI: Return expanded model list
    SettingsUI-->>User: Display new model options
```
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~9 minutes
Actionable comments posted: 1
🔭 Outside diff range comments (2)
src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue (2)
284-704: Non-English inline comments violate repo guidelines
All `.vue`/`.ts` files must use English for logs and comments, yet the block introducing `presetModels` contains dozens of Chinese comments (e.g. “// OpenAI开源模型” (“OpenAI open-source models”), “// Gemma3系列” (“Gemma3 series”), …).
Please translate or remove these comments to stay compliant with the coding-guideline section `**/*.{ts,tsx,js,jsx,vue}`.
282-704: Inline mega-array hurts maintainability & type-safety
`presetModels` now contains >150 hard-coded entries directly in the component. This:
- Bloats the SFC, making reviews & diff history noisy.
- Couples UI concerns with static data that belongs in a provider/config layer.
- Lacks an explicit type, so an accidental `{ foo: 'bar' }` entry will compile.

Recommend extracting to a dedicated, typed module, e.g.:

```diff
-const presetModels = [
-  { name: 'gpt-oss:20b' },
-  …
-]
+import { OLLAMA_PRESET_MODELS } from '@/constants/ollamaPresetModels'
+const presetModels = OLLAMA_PRESET_MODELS
```

and in `@/constants/ollamaPresetModels.ts`:

```ts
export type OllamaPresetModel = { readonly name: string }

export const OLLAMA_PRESET_MODELS: readonly OllamaPresetModel[] = [
  { name: 'gpt-oss:20b' }
  // …
] as const
```

Benefits: single source of truth, reusability by backend presenter code, and stronger typing via `as const`.
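To illustrate the payoff, here is a minimal sketch; the module path and type shape are taken from the suggestion above, so treat them as assumptions rather than existing code:

```ts
import { OLLAMA_PRESET_MODELS, type OllamaPresetModel } from '@/constants/ollamaPresetModels'

// Every entry is guaranteed to carry a `name`, so downstream code can rely on it.
const names: readonly string[] = OLLAMA_PRESET_MODELS.map((m) => m.name)
console.log(names)

// @ts-expect-error: 'foo' is not a valid property and 'name' is missing;
// the untyped inline array would have accepted this entry silently.
const bad: OllamaPresetModel = { foo: 'bar' }
console.log(bad)
```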
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- `src/main/presenter/configPresenter/modelDefaultSettings.ts` (1 hunk)
- `src/main/presenter/configPresenter/providerModelSettings.ts` (2 hunks)
- `src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue` (3 hunks)
🧰 Additional context used
📓 Path-based instructions (16)
**/*.{ts,tsx,js,jsx,vue}
📄 CodeRabbit Inference Engine (CLAUDE.md)
Use English for logs and comments
Files:
src/main/presenter/configPresenter/modelDefaultSettings.ts
src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue
src/main/presenter/configPresenter/providerModelSettings.ts
**/*.{ts,tsx}
📄 CodeRabbit Inference Engine (CLAUDE.md)
Strict type checking enabled for TypeScript
**/*.{ts,tsx}: Always use try-catch to handle potential errors
Provide meaningful error messages
Log detailed error information
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (if applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement an error retry mechanism
Avoid logging sensitive information
Use structured logging
Set appropriate log levels
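As a minimal illustration of these guidelines, here is a generic TypeScript sketch (not project code; the log-entry shape and error code are assumptions):

```ts
import { readFile } from 'node:fs/promises'

type LogEntry = {
  timestamp: string
  level: 'ERROR' | 'WARN' | 'INFO' | 'DEBUG'
  code: string
  message: string
  stack?: string
  context?: Record<string, unknown>
}

// Structured log: one JSON object per line, easy to filter by level or code.
const logStructured = (entry: LogEntry): void => console.error(JSON.stringify(entry))

const loadConfig = async (path: string): Promise<string> => {
  try {
    return await readFile(path, 'utf-8')
  } catch (err) {
    // Do not swallow the error: record a structured entry, then degrade gracefully.
    logStructured({
      timestamp: new Date().toISOString(),
      level: 'ERROR',
      code: 'CONFIG_READ_FAILED',
      message: `Failed to read config at ${path}`,
      stack: err instanceof Error ? err.stack : undefined,
      context: { path }
    })
    // Graceful degradation: the caller receives a usable default instead of a crash.
    return '{}'
  }
}
```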
Files:
src/main/presenter/configPresenter/modelDefaultSettings.ts
src/main/presenter/configPresenter/providerModelSettings.ts
src/main/**/*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
Main to Renderer: Use EventBus to broadcast events via mainWindow.webContents.send()
Use Electron's built-in APIs for file system and native dialogs
Files:
src/main/presenter/configPresenter/modelDefaultSettings.ts
src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/**/*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
One presenter per functional domain
Files:
src/main/presenter/configPresenter/modelDefaultSettings.ts
src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/configPresenter/**/*.ts
📄 CodeRabbit Inference Engine (CLAUDE.md)
Centralize configuration in configPresenter/
Files:
src/main/presenter/configPresenter/modelDefaultSettings.ts
src/main/presenter/configPresenter/providerModelSettings.ts
**/*.{js,jsx,ts,tsx}
📄 CodeRabbit Inference Engine (.cursor/rules/development-setup.mdc)
**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English
Files:
src/main/presenter/configPresenter/modelDefaultSettings.ts
src/main/presenter/configPresenter/providerModelSettings.ts
src/{main,renderer}/**/*.ts
📄 CodeRabbit Inference Engine (.cursor/rules/electron-best-practices.mdc)
src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging
Files:
src/main/presenter/configPresenter/modelDefaultSettings.ts
src/main/presenter/configPresenter/providerModelSettings.ts
src/main/**/*.{ts,js,tsx,jsx}
📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)
Main-process code goes in `src/main`
Files:
src/main/presenter/configPresenter/modelDefaultSettings.ts
src/main/presenter/configPresenter/providerModelSettings.ts
src/renderer/src/**/*.vue
📄 CodeRabbit Inference Engine (CLAUDE.md)
src/renderer/src/**/*.vue: Use Composition API for all Vue 3 components
Use Tailwind CSS with scoped styles for styling
Organize components by feature in src/renderer/src/
Follow existing component patterns in src/renderer/src/ when creating new UI components
Use Composition API with proper TypeScript typing for new UI components
Implement responsive design with Tailwind CSS for new UI components
Add proper error handling and loading states for new UI components
Use scoped styles to prevent CSS conflicts between components
Files:
src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue
src/renderer/src/**/*.{ts,tsx,vue}
📄 CodeRabbit Inference Engine (CLAUDE.md)
src/renderer/src/**/*.{ts,tsx,vue}: Use Pinia for frontend state management
Renderer to Main: Use usePresenter.ts composable for direct presenter method calls
Files:
src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue
src/renderer/src/**/*
📄 CodeRabbit Inference Engine (.cursor/rules/i18n.mdc)
src/renderer/src/**/*: All user-facing strings must use i18n keys (avoid hardcoded user-visible text in code)
Use the 'vue-i18n' framework for all internationalization in the renderer
Ensure all user-visible text in the renderer uses the translation system
Files:
src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue
src/renderer/**/*.{vue,ts,js,tsx,jsx}
📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)
Renderer-process code goes in `src/renderer`
Files:
src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}
📄 CodeRabbit Inference Engine (.cursor/rules/vue-best-practices.mdc)
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability
Implement proper state management with Pinia
Utilize Vue Router for navigation and route management
Leverage Vue's built-in reactivity system for efficient data handling
Files:
src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue
src/renderer/**/*.{ts,tsx,vue}
📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)
src/renderer/**/*.{ts,tsx,vue}: Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
Use TypeScript for all code; prefer types over interfaces.
Avoid enums; use const objects instead.
Use arrow functions for methods and computed properties.
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements.
Files:
src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue
src/renderer/**/*.{vue,ts}
📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)
Implement lazy loading for routes and components.
Files:
src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue
src/renderer/**/*.{ts,vue}
📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)
src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching.
Implement SEO best practices using Nuxt's useHead and useSeoMeta.
Files:
src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue
🧠 Learnings (11)
📓 Common learnings
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add provider configuration in configPresenter/providers.ts when adding a new LLM provider
📚 Learning: applies to src/main/presenter/configpresenter/**/*.ts : centralize configuration in configpresenter/...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/configPresenter/**/*.ts : Centralize configuration in configPresenter/
Applied to files:
src/main/presenter/configPresenter/modelDefaultSettings.ts
src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: applies to src/main/presenter/configpresenter/providers.ts : add provider configuration in configpre...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add provider configuration in configPresenter/providers.ts when adding a new LLM provider
Applied to files:
src/main/presenter/configPresenter/modelDefaultSettings.ts
src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue
src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: applies to src/main/presenter/llmproviderpresenter/providers/*.ts : create provider file in src/main...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Create provider file in src/main/presenter/llmProviderPresenter/providers/ when adding a new LLM provider
Applied to files:
src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: applies to src/main/presenter/llmproviderpresenter/providers/*.ts : provider files should implement ...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should implement helper methods such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed for provider-specific logic.
Applied to files:
src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: applies to src/main/presenter/llmproviderpresenter/providers/*.ts : each file in `src/main/presenter...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Applied to files:
src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: applies to src/main/presenter/llmproviderpresenter/providers/*.ts : all provider implementations mus...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
Applied to files:
src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: applies to src/main/presenter/llmproviderpresenter/providers/*.ts : when a provider supports native ...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using `convertToProviderTools`) and included in the API request.
Applied to files:
src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: applies to src/main/presenter/llmproviderpresenter/providers/*.ts : provider implementations should ...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield image data events in the standardized format when applicable.
Applied to files:
src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: applies to src/main/presenter/llmproviderpresenter/providers/*.ts : provider implementations should ...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield tool call events (`tool_call_start`, `tool_call_chunk`, `tool_call_end`) in the standardized format.
Applied to files:
src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: applies to src/main/presenter/llmproviderpresenter/providers/*.ts : provider implementations should ...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield reasoning events in the standardized format when applicable.
Applied to files:
src/main/presenter/configPresenter/providerModelSettings.ts
🔇 Additional comments (3)
src/main/presenter/configPresenter/modelDefaultSettings.ts (1)
660-681: LGTM! New GPT-OSS model configurations added correctly.
The new GPT-OSS model configurations follow the established pattern and have reasonable parameter values. The context lengths and token limits appear appropriate for their respective model sizes.
src/main/presenter/configPresenter/providerModelSettings.ts (1)
308-1559: Extensive model configuration expansion looks comprehensive and well-structured.
The addition of numerous model configurations across multiple AI model families significantly expands the supported models for the Ollama provider. The configurations follow consistent patterns and appear to have reasonable parameter values for their respective model sizes.
Key observations:
- Proper categorization by model families (DeepSeek, Gemma, Qwen, Llama, etc.)
- Consistent configuration structure across all models
- Appropriate temperature, token limits, and context lengths for different model sizes
- Proper capability flags (vision, functionCall, reasoning) based on model characteristics
src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue (1)
282-704: No duplicate model identifiers detected
I ran a Python-based scan of the entire `presetModels` array in `src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue` and confirmed there are no repeated `name` entries; your Set-based filters remain valid.
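For reference, a minimal TypeScript sketch of the kind of Set-based name filter this refers to (illustrative only; the component's actual filter may differ):

```ts
type PresetModel = { name: string }

// Keep the first occurrence of each model name and drop later duplicates.
const dedupeByName = (models: PresetModel[]): PresetModel[] => {
  const seen = new Set<string>()
  return models.filter((m) => {
    if (seen.has(m.name)) return false
    seen.add(m.name)
    return true
  })
}

// Example: the second 'qwen3:32b' entry is filtered out.
console.log(dedupeByName([{ name: 'qwen3:32b' }, { name: 'qwen3:32b' }, { name: 'qwen3:30b' }]))
```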
```diff
   {
     id: 'qwen3:30b',
     name: 'Qwen3 30B',
     temperature: 0.6,
     maxTokens: 16384,
     contextLength: 40960,
     match: ['qwen3:30b'],
     vision: false,
     functionCall: true,
     reasoning: true
   },
   {
     id: 'qwen3:32b',
     name: 'Qwen3 32B',
     temperature: 0.6,
-    maxTokens: 8192,
+    maxTokens: 16384,
     contextLength: 40960,
     match: ['qwen3:32b'],
     vision: false,
     functionCall: true,
     reasoning: true
   },
   {
-    id: 'qwen3:30b-a3b',
-    name: 'Qwen3 30B A3B',
+    id: 'qwen3:235b',
+    name: 'Qwen3 235B',
     temperature: 0.6,
-    maxTokens: 8192,
+    maxTokens: 32768,
     contextLength: 40960,
-    match: ['qwen3:30b-a3b'],
+    match: ['qwen3:235b'],
     vision: false,
     functionCall: true,
     reasoning: true
   },
```
💡 Verification agent
🧩 Analysis chain
Verify the Qwen3 model configuration updates.
Several Qwen3 models have been updated with increased maxTokens values and model ID/name changes. Please ensure these changes align with the actual capabilities of these models and that any dependent code or documentation is updated accordingly.
Specific changes:
- `qwen3:30b` and `qwen3:32b` maxTokens increased from 8192 to 16384
- `qwen3:235b` replaces the previous `qwen3:30b-a3b` configuration with updated parameters
🏁 Script executed:

```shell
#!/bin/bash
# Description: Check for any references to the old Qwen3 model IDs that might need updating

# Search for references to the old model configurations
rg -A 3 -B 3 "qwen3.*30b-a3b|qwen3.*8192" --type ts --type js --type vue
```

Length of output: 102
🏁 Script executed:

```shell
#!/usr/bin/env bash
# Search for references to old Qwen3 model IDs and legacy maxTokens values across TS, JS, and Vue files.
rg --type-add 'vue:*.vue' --type vue --type ts --type js -A3 -B3 "qwen3:30b-a3b|qwen3:32b-a3b|8192"
```

Length of output: 269695
Ensure Qwen3 maxTokens are updated in all default-setting files
We’ve bumped the Qwen3 providerModelSettings to 16384/32768, but the fallback/default configs and tests still assume 8192. Please update these to stay in sync:
• src/main/presenter/configPresenter/modelDefaultSettings.ts
– Qwen3 30B A3B & Qwen3 32B → set maxTokens: 16384
– Qwen3 235B A22B → set maxTokens: 32768
• src/main/presenter/configPresenter/modelConfig.ts & test/main/presenter/modelConfig.test.ts
– Add/adjust assertions for the new Qwen3 defaults if you rely on hard-coded fallbacks
• src/renderer/src/components/ChatConfig.vue
– Verify the slider’s :max binding picks up the new limits (it currently falls back to 8192); a sketch of a config-driven binding follows below
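A minimal sketch of what that binding could look like, assuming the component has access to the active model's config (the `modelConfig` ref, its shape, and the plain range input are illustrative, not the actual ChatConfig.vue code):

```vue
<script setup lang="ts">
import { computed, ref } from 'vue'

// Hypothetical shape; in the real component this would come from the settings presenter.
const modelConfig = ref<{ maxTokens?: number } | null>(null)

// Fall back to 8192 only when the active model declares no explicit limit,
// so configs like qwen3:235b (32768) surface their full range.
const maxTokensLimit = computed(() => modelConfig.value?.maxTokens ?? 8192)
</script>

<template>
  <input type="range" :min="1" :max="maxTokensLimit" />
</template>
```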
🤖 Prompt for AI Agents
In src/main/presenter/configPresenter/providerModelSettings.ts lines 590 to 622,
the maxTokens values for Qwen3 models have been updated to 16384 and 32768. To
keep consistency, update the maxTokens for Qwen3 30B and 32B to 16384 and for
Qwen3 235B to 32768 in
src/main/presenter/configPresenter/modelDefaultSettings.ts. Also, update or add
assertions in src/main/presenter/configPresenter/modelConfig.ts and
test/main/presenter/modelConfig.test.ts to reflect these new maxTokens values.
Finally, in src/renderer/src/components/ChatConfig.vue, ensure the slider's :max
binding uses the updated maxTokens limits instead of the old fallback of 8192.
* fix: prevent tooltip from overlapping HTML preview in ArtifactDialog (#683)
* fix: unwatch is not a function error in knowledge settings components (#684)
* fix: resolve message editing and markdown rendering issues (#685)
* docs: update Electron-Vite link format to full URL (#687)
* fix: built-in knowledge settings not loaded when first entering (#688)
* fix: enhance lifecycle management by adding unmounted hook and MCP state watcher
* fix: improve knowledge destruction confirmation and enhance unwatch handling in settings
* fix: resolve database lock issue in data reset functionality (#689)
* feat: implement provider-based request rate limit (#686)
  * feat: implement provider-based request rate limit
  * feat: add input validation with confirmation dialog
  * refactor: merge RateLimitPresenter into LLMProviderPresenter
* Optimize/builtin knowledge interaction (#690)
  * feat: improve builtInKnowledge file icon style
  * feat: add builtInKnowledge resume icon tooltip
  * feat: format code
  * Update src/renderer/src/i18n/fa-IR/settings.json
  * Update src/renderer/src/i18n/ja-JP/settings.json
  * Update src/renderer/src/i18n/fr-FR/settings.json
  * Update settings.json
  * Update src/renderer/src/i18n/fr-FR/settings.json
  * fix: Fix French and Korean translations for pause and delete file confirmation messages
  * Co-authored-by: sqsyli <sqsyli@qq.com> and coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* fix: fixed dirty data problems (#691)
  * fixed the old session tool error caused by
  * fixed the problem that the tool list was not reset when creating a new session
* feat: add mentions support in deeplink (#692)
  * feat: add mentions support in chat input and store
  * style: format imports and clean up code for better readability
  * fix: improve mentions handling in handleStart method
* fix: prompt attach files (#695)
  * fix: update file icon handling and improve file upload process
  * fix: remove redundant FileItem interface definition in PromptSetting.vue
  * fix: implement confirmation dialog for prompt deletion in PromptSetting.vue
  * fix: streamline file upload process by removing redundant file content reading logic
* feat: add npm registry caching and optimization system (#697)
* fix: resolve input focus loss issue after sending messages (#694)
* fix: resolve deeplink message not displaying when triggered from settings page (#699)
* fix: update current tools assignment to include tool names when MCP tools are not enabled (#698)
* feat: add support for new models (#700)
  * chore: update ollama model settings
  * chore: add support for gpt-oss
  * chore: bump to 0.2.9
* fix: refine #698 (#701)
  * fix: update current tools assignment to include tool names when MCP tools are not enabled
  * fix: update enabled MCP tools assignment to include tool names on session switch
* feat: add Claude Opus 4.1 support and fix anthropic model list dialog (#702)

Co-authored-by: yyhhyyyyyy <yyhhyyyyyy8@gmail.com>
Co-authored-by: cycleccc <2991205548@qq.com>
Co-authored-by: hllshiro <40970081+hllshiro@users.noreply.github.com>
Co-authored-by: sqsyli <sqsyli@qq.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>