
Conversation

@zerob13 (Collaborator) commented Aug 6, 2025

  • add gpt-oss
  • add new Ollama models

Summary by CodeRabbit

  • New Features
    • Added a comprehensive set of new AI model options across multiple series (OpenAI, DeepSeek, Gemma, Qwen, Llama, Mistral, Phi, CodeLlama, and more) for the Ollama provider and preset model selections.
    • Expanded model configurations now include support for vision, function calls, reasoning, and embedding models.
    • Enhanced selection and customization options for users choosing models in settings.

coderabbitai bot (Contributor) commented Aug 6, 2025

Walkthrough

A large number of new AI model configurations were added to preset and provider settings for the Ollama provider, including OpenAI GPT-OSS, DeepSeek, Gemma, Qwen, Llama, Mistral, and many others. These changes expand the lists of available models in static configuration arrays, without modifying logic or control flow.

Changes

| Cohort / File(s) | Change Summary |
| --- | --- |
| OpenAI GPT-OSS Model Additions<br>`src/main/presenter/configPresenter/modelDefaultSettings.ts` | Added two new OpenAI GPT-OSS model configurations (`gpt-oss-120b`, `gpt-oss-20b`) to the `defaultModelsSettings` array, specifying properties such as temperature, max tokens, context length, and capabilities. |
| Ollama Provider Model Expansion<br>`src/main/presenter/configPresenter/providerModelSettings.ts` | Added extensive new model configurations to the `ollama` provider's models array, spanning OpenAI open-source, DeepSeek, Gemma, Qwen, QwQ, Llama, LLaVA, Mistral, Phi, CodeLlama, MiniCPM, TinyLlama, SmolLM2, Tulu3, OLMo2, Solar Pro, Dolphin, Command R, Magicoder, Mathstral, Falcon2, StableLM, Granite Guardian, ShieldGemma, and embedding models, with detailed properties for each. Some Qwen3 models were updated for naming and token limits. |
| Ollama Preset Model List Update<br>`src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue` | Greatly expanded the `presetModels` array with many new models and categories, reflecting the additions in provider settings. No changes to logic or exported entities; only the model list was updated and reorganized. |
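For orientation, each entry in these arrays is a small typed object. A minimal sketch of the shape, inferred from the fields visible in the diff excerpts later in this review; the numeric values below are illustrative placeholders, not the values committed in this PR:

```ts
// Sketch of the model-entry shape used by defaultModelsSettings / providerModelSettings.
// Field names follow the diff excerpts below; the values are placeholders.
interface ModelSetting {
  id: string             // provider-side model tag, e.g. 'gpt-oss:20b'
  name: string           // display name shown in the settings UI
  temperature: number
  maxTokens: number      // generation limit
  contextLength: number  // total context window
  match: string[]        // tags this configuration applies to
  vision: boolean        // capability flags
  functionCall: boolean
  reasoning: boolean
}

const gptOss20b: ModelSetting = {
  id: 'gpt-oss:20b',
  name: 'GPT-OSS 20B',
  temperature: 0.7,      // placeholder
  maxTokens: 8192,       // placeholder
  contextLength: 32768,  // placeholder
  match: ['gpt-oss:20b'],
  vision: false,
  functionCall: true,
  reasoning: true
}
```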

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant SettingsUI
    participant ModelConfig

    User->>SettingsUI: Open Ollama provider settings
    SettingsUI->>ModelConfig: Load presetModels array
    ModelConfig-->>SettingsUI: Return expanded model list
    SettingsUI-->>User: Display new model options
```

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~9 minutes

Possibly related PRs

Poem

In the warren of models, so vast and bright,
New names hop in, both day and night.
From Qwen to Gemma, Llama too,
The Ollama meadow welcomes the new.
Rabbits rejoice, more choices abound—
With so many models, delight is found!
🐇✨


coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

🔭 Outside diff range comments (2)
src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue (2)

284-704: Non-English inline comments violate repo guidelines
All .vue/.ts files must use English for logs and comments, yet the block introducing presetModels contains dozens of Chinese comments (“// OpenAI开源模型” (“OpenAI open-source models”), “// Gemma3系列” (“Gemma3 series”), …).
Please translate or remove these comments to comply with the **/*.{ts,tsx,js,jsx,vue} coding guideline.


282-704: Inline mega-array hurts maintainability & type-safety
presetModels now contains >150 hard-coded entries directly in the component. This:

  1. Bloats the SFC, making reviews & diff history noisy.
  2. Couples UI concerns with static data that belongs in a provider/config layer.
  3. Lacks an explicit type, so an accidental { foo: 'bar' } entry will still compile.

Recommend extracting to a dedicated, typed module, e.g.:

```diff
-const presetModels = [
-  { name: 'gpt-oss:20b' },
-
-]
+import { OLLAMA_PRESET_MODELS } from '@/constants/ollamaPresetModels'
+const presetModels = OLLAMA_PRESET_MODELS
```

and in @/constants/ollamaPresetModels.ts:

```ts
export type OllamaPresetModel = { readonly name: string }

export const OLLAMA_PRESET_MODELS: readonly OllamaPresetModel[] = [
  { name: 'gpt-oss:20b' },
  // …
] as const
```

Benefits: single source-of-truth, reusability by backend presenter code, and stronger typing via as const.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cc88dad and 258b2ed.

📒 Files selected for processing (3)
  • src/main/presenter/configPresenter/modelDefaultSettings.ts (1 hunks)
  • src/main/presenter/configPresenter/providerModelSettings.ts (2 hunks)
  • src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue (3 hunks)
🧰 Additional context used
📓 Path-based instructions (16)
**/*.{ts,tsx,js,jsx,vue}

📄 CodeRabbit Inference Engine (CLAUDE.md)

Use English for logs and comments

Files:

  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue
  • src/main/presenter/configPresenter/providerModelSettings.ts
**/*.{ts,tsx}

📄 CodeRabbit Inference Engine (CLAUDE.md)

Strict type checking enabled for TypeScript

**/*.{ts,tsx}: Always use try-catch to handle potential errors
Provide meaningful error messages
Log detailed error information
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (where applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement error retry mechanisms
Avoid logging sensitive information
Use structured logging
Set appropriate log levels

Files:

  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
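A minimal sketch of the error-handling and structured-logging pattern these guidelines describe (the logger helper and config-loading function here are illustrative assumptions, not actual project APIs):

```ts
// Illustrative only: structured logging with timestamp, level, error code,
// description, stack trace, and context, plus graceful degradation.
type LogLevel = 'ERROR' | 'WARN' | 'INFO' | 'DEBUG'

function log(
  level: LogLevel,
  code: string,
  message: string,
  context?: Record<string, unknown>,
  err?: Error
): void {
  console.log(
    JSON.stringify({
      ts: new Date().toISOString(),
      level,
      code,
      message,
      stack: err?.stack,
      context
    })
  )
}

async function loadModelSettings(path: string): Promise<Record<string, unknown>> {
  try {
    const { readFile } = await import('node:fs/promises')
    return JSON.parse(await readFile(path, 'utf8'))
  } catch (err) {
    // Do not swallow the error: log it with context, then degrade gracefully.
    log('ERROR', 'CONFIG_LOAD_FAILED', 'Failed to load model settings', { path }, err as Error)
    return {}
  }
}
```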
src/main/**/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

Main to Renderer: Use EventBus to broadcast events via mainWindow.webContents.send()

Use Electron's built-in APIs for file system and native dialogs

Files:

  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/**/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

One presenter per functional domain

Files:

  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/configPresenter/**/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

Centralize configuration in configPresenter/

Files:

  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit Inference Engine (.cursor/rules/development-setup.mdc)

**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English

Files:

  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/{main,renderer}/**/*.ts

📄 CodeRabbit Inference Engine (.cursor/rules/electron-best-practices.mdc)

src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging

Files:

  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/main/**/*.{ts,js,tsx,jsx}

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

Keep main-process code in src/main

Files:

  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/renderer/src/**/*.vue

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/renderer/src/**/*.vue: Use Composition API for all Vue 3 components
Use Tailwind CSS with scoped styles for styling
Organize components by feature in src/renderer/src/
Follow existing component patterns in src/renderer/src/ when creating new UI components
Use Composition API with proper TypeScript typing for new UI components
Implement responsive design with Tailwind CSS for new UI components
Add proper error handling and loading states for new UI components

Use scoped styles to prevent CSS conflicts between components

Files:

  • src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue
src/renderer/src/**/*.{ts,tsx,vue}

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/renderer/src/**/*.{ts,tsx,vue}: Use Pinia for frontend state management
Renderer to Main: Use usePresenter.ts composable for direct presenter method calls

Files:

  • src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue
src/renderer/src/**/*

📄 CodeRabbit Inference Engine (.cursor/rules/i18n.mdc)

src/renderer/src/**/*: All user-facing strings must use i18n keys (avoid hardcoded user-visible text in code)
Use the 'vue-i18n' framework for all internationalization in the renderer
Ensure all user-visible text in the renderer uses the translation system

Files:

  • src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue
src/renderer/**/*.{vue,ts,js,tsx,jsx}

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

Keep renderer-process code in src/renderer

Files:

  • src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-best-practices.mdc)

src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability
Implement proper state management with Pinia
Utilize Vue Router for navigation and route management
Leverage Vue's built-in reactivity system for efficient data handling

Files:

  • src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue
src/renderer/**/*.{ts,tsx,vue}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,tsx,vue}: Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
Use TypeScript for all code; prefer types over interfaces.
Avoid enums; use const objects instead.
Use arrow functions for methods and computed properties.
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements.

Files:

  • src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue
src/renderer/**/*.{vue,ts}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)

Implement lazy loading for routes and components.

Files:

  • src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue
src/renderer/**/*.{ts,vue}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching.
Implement SEO best practices using Nuxt's useHead and useSeoMeta.

Files:

  • src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue
🧠 Learnings (11)
📓 Common learnings
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add provider configuration in configPresenter/providers.ts when adding a new LLM provider
📚 Learning: applies to src/main/presenter/configpresenter/**/*.ts : centralize configuration in configpresenter/...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/configPresenter/**/*.ts : Centralize configuration in configPresenter/

Applied to files:

  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: applies to src/main/presenter/configpresenter/providers.ts : add provider configuration in configpre...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add provider configuration in configPresenter/providers.ts when adding a new LLM provider

Applied to files:

  • src/main/presenter/configPresenter/modelDefaultSettings.ts
  • src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue
  • src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: applies to src/main/presenter/llmproviderpresenter/providers/*.ts : create provider file in src/main...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Create provider file in src/main/presenter/llmProviderPresenter/providers/ when adding a new LLM provider

Applied to files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: applies to src/main/presenter/llmproviderpresenter/providers/*.ts : provider files should implement ...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should implement helper methods such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed for provider-specific logic.

Applied to files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: applies to src/main/presenter/llmproviderpresenter/providers/*.ts : each file in `src/main/presenter...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.

Applied to files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: applies to src/main/presenter/llmproviderpresenter/providers/*.ts : all provider implementations mus...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.

Applied to files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: applies to src/main/presenter/llmproviderpresenter/providers/*.ts : when a provider supports native ...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using `convertToProviderTools`) and included in the API request.

Applied to files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: applies to src/main/presenter/llmproviderpresenter/providers/*.ts : provider implementations should ...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield image data events in the standardized format when applicable.

Applied to files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: applies to src/main/presenter/llmproviderpresenter/providers/*.ts : provider implementations should ...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield tool call events (`tool_call_start`, `tool_call_chunk`, `tool_call_end`) in the standardized format.

Applied to files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
📚 Learning: applies to src/main/presenter/llmproviderpresenter/providers/*.ts : provider implementations should ...
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield reasoning events in the standardized format when applicable.

Applied to files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
🔇 Additional comments (3)
src/main/presenter/configPresenter/modelDefaultSettings.ts (1)

660-681: LGTM! New GPT-OSS model configurations added correctly.

The new GPT-OSS model configurations follow the established pattern and have reasonable parameter values. The context lengths and token limits appear appropriate for their respective model sizes.

src/main/presenter/configPresenter/providerModelSettings.ts (1)

308-1559: Extensive model configuration expansion looks comprehensive and well-structured.

The addition of numerous model configurations across multiple AI model families significantly expands the supported models for the Ollama provider. The configurations follow consistent patterns and appear to have reasonable parameter values for their respective model sizes.

Key observations:

  • Proper categorization by model families (DeepSeek, Gemma, Qwen, Llama, etc.)
  • Consistent configuration structure across all models
  • Appropriate temperature, token limits, and context lengths for different model sizes
  • Proper capability flags (vision, functionCall, reasoning) based on model characteristics
src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue (1)

282-704: No duplicate model identifiers detected
I ran a Python-based scan of the entire presetModels array in src/renderer/src/components/settings/OllamaProviderSettingsDetail.vue and confirmed there are no repeated name entries—your Set-based filters remain valid.
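An equivalent check, sketched in TypeScript (assuming entries of the { name: string } shape suggested in the extraction comment above):

```ts
// Sketch: report any repeated `name` values in a preset model list.
function findDuplicateNames(models: ReadonlyArray<{ name: string }>): string[] {
  const seen = new Set<string>()
  const dupes = new Set<string>()
  for (const { name } of models) {
    if (seen.has(name)) dupes.add(name)
    else seen.add(name)
  }
  return [...dupes]
}

// findDuplicateNames([{ name: 'gpt-oss:20b' }, { name: 'gpt-oss:20b' }]) -> ['gpt-oss:20b']
```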

Comment on lines +590 to 622

```diff
   {
     id: 'qwen3:30b',
     name: 'Qwen3 30B',
     temperature: 0.6,
     maxTokens: 16384,
     contextLength: 40960,
     match: ['qwen3:30b'],
     vision: false,
     functionCall: true,
     reasoning: true
   },
   {
     id: 'qwen3:32b',
     name: 'Qwen3 32B',
     temperature: 0.6,
-    maxTokens: 8192,
+    maxTokens: 16384,
     contextLength: 40960,
     match: ['qwen3:32b'],
     vision: false,
     functionCall: true,
     reasoning: true
   },
   {
-    id: 'qwen3:30b-a3b',
-    name: 'Qwen3 30B A3B',
+    id: 'qwen3:235b',
+    name: 'Qwen3 235B',
     temperature: 0.6,
-    maxTokens: 8192,
+    maxTokens: 32768,
     contextLength: 40960,
-    match: ['qwen3:30b-a3b'],
+    match: ['qwen3:235b'],
     vision: false,
     functionCall: true,
     reasoning: true
   },
```

💡 Verification agent

🧩 Analysis chain

Verify the Qwen3 model configuration updates.

Several Qwen3 models have been updated with increased maxTokens values and model ID/name changes. Please ensure these changes align with the actual capabilities of these models and that any dependent code or documentation is updated accordingly.

Specific changes:

  • qwen3:30b and qwen3:32b maxTokens increased from 8192 to 16384
  • qwen3:235b replaces previous configuration with updated parameters

🏁 Script executed:

```bash
#!/bin/bash
# Description: Check for any references to the old Qwen3 model IDs that might need updating

# Search for references to the old model configurations
rg -A 3 -B 3 "qwen3.*30b-a3b|qwen3.*8192" --type ts --type js --type vue
```

Length of output: 102


🏁 Script executed:

```bash
#!/usr/bin/env bash
# Search for references to old Qwen3 model IDs and legacy maxTokens values across TS, JS, and Vue files.
# The first run's --type vue was not recognized here (hence the 102-byte output above),
# so register the type explicitly with --type-add.
rg --type-add 'vue:*.vue' --type vue --type ts --type js -A3 -B3 "qwen3:30b-a3b|qwen3:32b-a3b|8192"
```

Length of output: 269695


Ensure Qwen3 maxTokens are updated in all default-setting files

We've bumped the Qwen3 providerModelSettings entries to 16384/32768, but the fallback/default configs and tests still assume 8192. Please update these to stay in sync:

• src/main/presenter/configPresenter/modelDefaultSettings.ts
– Qwen3 30B A3B & Qwen3 32B → set maxTokens: 16384
– Qwen3 235B A22B → set maxTokens: 32768

• src/main/presenter/configPresenter/modelConfig.ts & test/main/presenter/modelConfig.test.ts
– Add/adjust assertions for the new Qwen3 defaults if you rely on hard-coded fallbacks

• src/renderer/src/components/ChatConfig.vue
– Verify the slider's :max binding picks up the new limits (it currently falls back to 8192); a sketch follows below
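A sketch of one way to wire that binding (the component shape and prop names are assumed for illustration, not the actual ChatConfig.vue code):

```vue
<script setup lang="ts">
import { computed } from 'vue'

// Hypothetical prop: the resolved config of the currently selected model
// (same shape as the providerModelSettings entries).
const props = defineProps<{ modelConfig?: { maxTokens?: number } }>()

// Prefer the per-model limit; only fall back to 8192 when no config exists,
// so the slider tracks the new 16384/32768 Qwen3 limits automatically.
const maxTokensLimit = computed(() => props.modelConfig?.maxTokens ?? 8192)
</script>

<template>
  <!-- Slider upper bound follows the selected model's maxTokens -->
  <input type="range" min="1" :max="maxTokensLimit" />
</template>
```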

🤖 Prompt for AI Agents
In src/main/presenter/configPresenter/providerModelSettings.ts lines 590 to 622,
the maxTokens values for Qwen3 models have been updated to 16384 and 32768. To
keep consistency, update the maxTokens for Qwen3 30B and 32B to 16384 and for
Qwen3 235B to 32768 in
src/main/presenter/configPresenter/modelDefaultSettings.ts. Also, update or add
assertions in src/main/presenter/configPresenter/modelConfig.ts and
test/main/presenter/modelConfig.test.ts to reflect these new maxTokens values.
Finally, in src/renderer/src/components/ChatConfig.vue, ensure the slider's :max
binding uses the updated maxTokens limits instead of the old fallback of 8192.

@zerob13 zerob13 merged commit 1de5759 into dev Aug 6, 2025
2 checks passed
zerob13 added a commit that referenced this pull request Aug 6, 2025
* fix: prevent tooltip from overlapping HTML preview in ArtifactDialog (#683)

* fix: unwatch is not a function error in knowledge settings components (#684)

* fix: resolve message editing and markdown rendering issues (#685)

* docs: update Electron-Vite link format to full URL (#687)

* fix: built-in knowledge settings not loaded when first entering (#688)

* fix: enhance lifecycle management by adding unmounted hook and MCP state watcher

* fix: improve knowledge destruction confirmation and enhance unwatch handling in settings

* fix: resolve database lock issue in data reset functionality (#689)

* feat: implement provider-based request rate limit (#686)

* feat: implement provider-based request rate limit

* feat: add input validation with confirmation dialog

* refactor: merge RateLimitPresenter into LLMProviderPresenter

* Optimize/builtin knowledge interaction (#690)

* feat: improve builtInKnowledge file icon style

* feat: add builtInKnowledge resume icon tooltip

* feat: format code

* Update src/renderer/src/i18n/fa-IR/settings.json

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update src/renderer/src/i18n/ja-JP/settings.json

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update src/renderer/src/i18n/fr-FR/settings.json

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* Update settings.json

* Update src/renderer/src/i18n/fr-FR/settings.json

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* fix: Fix French and Korean translations for pause and delete file confirmation messages

---------

Co-authored-by: sqsyli <sqsyli@qq.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* fix: fixed dirty data problems (#691)

fixed the old session tool error caused by
fixed the problem that the tool list was not reset when creating a new session

* feat: add mentions support in deeplink (#692)

* feat: add mentions support in chat input and store

* style: format imports and clean up code for better readability

* fix: improve mentions handling in handleStart method

* fix: prompt attach files (#695)

* fix: update file icon handling and improve file upload process

* fix: remove redundant FileItem interface definition in PromptSetting.vue

* fix: implement confirmation dialog for prompt deletion in PromptSetting.vue

* fix: streamline file upload process by removing redundant file content reading logic

* feat: add npm registry caching and optimization system (#697)

* fix: resolve input focus loss issue after sending messages (#694)

* fix: resolve deeplink message not displaying when triggered from settings page (#699)

* fix: update current tools assignment to include tool names when MCP tools are not enabled (#698)

* feat: add support for new models  (#700)

* chore: update ollama model settings

* chore: add support for gpt-oss

* chore: bump to 0.2.9

* fix: refine #698 (#701)

* fix: update current tools assignment to include tool names when MCP tools are not enabled

* fix: update enabled MCP tools assignment to include tool names on session switch

* feat: add Claude Opus 4.1 support and fix anthropic model list dialog (#702)

---------

Co-authored-by: yyhhyyyyyy <yyhhyyyyyy8@gmail.com>
Co-authored-by: cycleccc <2991205548@qq.com>
Co-authored-by: hllshiro <40970081+hllshiro@users.noreply.github.com>
Co-authored-by: sqsyli <sqsyli@qq.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
@zerob13 zerob13 deleted the feature/ollama-add-gpt-oss branch September 21, 2025 15:15