Conversation

Collaborator

@yyhhyyyyyy yyhhyyyyyy commented Sep 10, 2025

close #880
Align DashScope thinking model support between the frontend and backend.

Summary by CodeRabbit

  • New Features
    • Expanded DashScope search support to more models, including qwen3-max-preview.
    • Enabled thinking budget controls for a broader set of DashScope Qwen models (qwen3 variants, qwen-plus, qwen-turbo, qwen-flash) with sensible defaults.
    • Chat configuration now detects model reasoning capability and shows thinking budget only when applicable.
  • Refactor
    • Unified logic for detecting supported DashScope models for search and thinking budget to improve consistency and reduce hard-coded checks.

Contributor

coderabbitai bot commented Sep 10, 2025

Walkthrough

Reorders a Dashscope search-model constant. Updates ChatConfig.vue to read model reasoning from settings, generalize Qwen thinking-budget visibility, and include qwen3-max-preview in Dashscope search models. Extends ModelConfigDialog.vue to support thinking budgets for more Qwen models, refactors visibility checks, and centralizes Dashscope search-model allowlist.

Changes

Cohort / File(s) Summary
Dashscope provider constants
src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
Reordered ENABLE_SEARCH_MODELS to place qwen3-max-preview first; resulting set unchanged. No logic or API changes.
Chat config: reasoning & thinking-budget gating
src/renderer/src/components/ChatConfig.vue
Reads model reasoning from settings store; normalizes modelId; introduces generalized Qwen thinking-model detection; updates thinking-budget visibility to depend on provider=dashscope, supported Qwen thinking model, and reasoning enabled; adds qwen3-max-preview to Dashscope search list.
Model config dialog: thinking budget and search allowlists
src/renderer/src/components/settings/ModelConfigDialog.vue
Initializes Dashscope thinking budget regardless of qwen3 substring; adds default thinking-budget config for qwen-plus/turbo/flash; broadens supported-model checks via lower-cased list; refactors Dashscope search allowlist into array including qwen3-max-preview; visibility requires provider=dashscope, reasoning enabled, supported model, and valid config.
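The gating described in the summaries above can be sketched as a small predicate. This is a sketch only: the model ids mirror the allowlist quoted later in this review, while the function name `isThinkingBudgetVisible` is hypothetical and not taken from the PR.

```typescript
// Sketch of the thinking-budget visibility gating described above.
// Model ids mirror the allowlist in the PR diff; the function name is hypothetical.
const SUPPORTED_QWEN_THINKING_MODELS = [
  'qwen3-235b-a22b', 'qwen3-32b', 'qwen3-30b-a3b', 'qwen3-14b',
  'qwen3-8b', 'qwen3-4b', 'qwen3-1.7b', 'qwen3-0.6b',
  'qwen-plus', 'qwen-flash', 'qwen-turbo'
]

function isThinkingBudgetVisible(
  providerId: string,
  rawModelId: string | undefined,
  reasoningEnabled: boolean
): boolean {
  // Normalize the model id so mixed-case ids still match the allowlist
  const modelId = rawModelId?.toLowerCase() ?? ''
  const isDashscope = providerId === 'dashscope'
  const isQwenThinking = SUPPORTED_QWEN_THINKING_MODELS.some((m) => modelId.includes(m))
  return isDashscope && isQwenThinking && reasoningEnabled
}
```

Visibility thus requires all three conditions at once: the dashscope provider, a supported Qwen thinking model, and reasoning enabled in the model config.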

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant ChatConfig as ChatConfig.vue
  participant Settings as SettingsStore
  participant ModelDialog as ModelConfigDialog.vue

  User->>ChatConfig: Select provider/model
  ChatConfig->>Settings: getModelConfig(modelId, providerId)
  Settings-->>ChatConfig: { reasoning: boolean, ... }
  ChatConfig->>ChatConfig: Compute isQwenThinking(modelId)
  alt Dashscope & Qwen-thinking & reasoning=true
    ChatConfig->>User: Show thinking-budget controls
  else Gemini thinking
    ChatConfig->>User: Show thinking-budget controls
  else
    ChatConfig->>User: Hide thinking-budget controls
  end

  Note over ModelDialog,Settings: On open/init
  ModelDialog->>Settings: getModelConfig(modelId, providerId)
  ModelDialog->>ModelDialog: Lookup thinkingBudget config for model
  alt Dashscope & model in supported list & reasoning=true
    ModelDialog->>User: Enable thinking-budget fields (with defaults)
  else
    ModelDialog->>User: Thinking-budget fields hidden
  end

  Note over ChatConfig: Dashscope search toggle uses allowlist incl. qwen3-max-preview

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • zerob13

Pre-merge checks (2 passed, 1 warning)

❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Description Check ⚠️ Warning The current description only includes an issue reference and a brief summary, but it does not follow the required repository template and lacks sections detailing the problem context, the proposed solution, UI/UX changes, compatibility considerations, or additional background. Please update the pull request description to use the provided template by describing the problem you’re solving, detailing the solution, noting any UI/UX or platform compatibility changes, and adding any relevant background context.
✅ Passed checks (2 passed)
Check name Status Explanation
Title Check ✅ Passed The title clearly and concisely summarizes the primary change, namely aligning DashScope thinking model support between the frontend and backend, and it directly reflects the core purpose of the pull request without extraneous detail.
Docstring Coverage ✅ Passed No functions found in the changes. Docstring coverage check skipped.

Poem

In burrows of code I twitch my nose,
Qwen thoughts blossom where logic grows.
Dashscope’s list now hops in line,
Budgets of thinking set to shine.
Preview stars peek, search in view—
A carrot of config, crisp and new. 🥕✨


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts (2)

36-49: Add qwen3-max to ENABLE_SEARCH_MODELS to fully align FE/BE.

Frontend lists include qwen3-max (ChatConfig.vue, ModelConfigDialog.vue), but the provider omits it, so enable_search won't be sent for qwen3-max even if the UI allows it.

Apply this diff:

   private static readonly ENABLE_SEARCH_MODELS: string[] = [
+    'qwen3-max',
     'qwen3-max-preview',
     'qwen-max',
     'qwen-plus',
     'qwen-plus-latest',
     'qwen-plus-2025-07-14',
     'qwen-flash',
     'qwen-flash-2025-07-28',
     'qwen-turbo',
     'qwen-turbo-latest',
     'qwen-turbo-2025-07-15',
     'qwq-plus'
   ]

36-49: De-duplicate model allowlists across layers.

These allowlists are duplicated in multiple renderer files; drift already occurred. Extract a shared source of truth (e.g., @shared/modelCaps/dashscope.ts) and import in both provider and renderer.
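A shared module of the kind suggested here might look like the sketch below. The file path follows the example in the comment; the export names are assumptions, not part of the PR.

```typescript
// Sketch of a single source of truth for the DashScope search allowlist,
// importable from both the main-process provider and the renderer.
// Export names are assumptions; the path follows the comment above.
export const DASHSCOPE_SEARCH_MODELS = [
  'qwen3-max',
  'qwen3-max-preview',
  'qwen-max',
  'qwen-plus',
  'qwen-flash',
  'qwen-turbo',
  'qwq-plus'
] as const

// Case-insensitive substring match, consistent with the renderer checks
export const supportsDashscopeSearch = (modelId: string): boolean =>
  DASHSCOPE_SEARCH_MODELS.some((m) => modelId.toLowerCase().includes(m))
```

With one list imported in both layers, the FE/BE drift flagged in this review could not recur.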

src/renderer/src/components/ChatConfig.vue (1)

160-174: Add ‘qwen3-max’ to DashscopeProvider.ENABLE_SEARCH_MODELS
Include 'qwen3-max' in the ENABLE_SEARCH_MODELS array in src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts to match the UI allowlist.

src/renderer/src/components/settings/ModelConfigDialog.vue (1)

745-772: Case-sensitivity bug: getThinkingBudgetConfig checks are case-sensitive.

Here you lowercase modelId for gating, but getThinkingBudgetConfig() uses the raw props.modelId. Mixed casing may cause hasValidConfig=false while isSupported=true.

Apply this diff to normalize inside getThinkingBudgetConfig:

-const getThinkingBudgetConfig = (modelId: string) => {
+const getThinkingBudgetConfig = (modelId: string) => {
+  const id = modelId.toLowerCase()
  // Gemini series model configuration
-  if (modelId.includes('gemini-2.5-pro')) {
+  if (id.includes('gemini-2.5-pro')) {
     return {
       min: 128,
       max: 32768,
       defaultValue: -1,
       canDisable: false
     }
   }
-  if (modelId.includes('gemini-2.5-flash-lite')) {
+  if (id.includes('gemini-2.5-flash-lite')) {
     return {
       min: 0,
       max: 24576,
       defaultValue: 0,
       canDisable: true
     }
   }
-  if (modelId.includes('gemini-2.5-flash')) {
+  if (id.includes('gemini-2.5-flash')) {
     return {
       min: 0,
       max: 24576,
       defaultValue: -1,
       canDisable: true
     }
   }
  // Qwen3 series model configuration
-  if (modelId.includes('qwen3-235b-a22b')) {
+  if (id.includes('qwen3-235b-a22b')) {
     return { min: 0, max: 81920, defaultValue: 81920, canDisable: true }
   }
-  if (modelId.includes('qwen3-30b-a3b')) {
+  if (id.includes('qwen3-30b-a3b')) {
     return { min: 0, max: 81920, defaultValue: 81920, canDisable: true }
   }
-  if (modelId.includes('qwen3-32b') || modelId.includes('qwen3-14b') || modelId.includes('qwen3-8b') || modelId.includes('qwen3-4b')) {
+  if (id.includes('qwen3-32b') || id.includes('qwen3-14b') || id.includes('qwen3-8b') || id.includes('qwen3-4b')) {
     return { min: 0, max: 38912, defaultValue: 38912, canDisable: true }
   }
-  if (modelId.includes('qwen3-1.7b') || modelId.includes('qwen3-0.6b')) {
+  if (id.includes('qwen3-1.7b') || id.includes('qwen3-0.6b')) {
     return { min: 0, max: 20000, defaultValue: 20000, canDisable: true }
   }
-  if (modelId.includes('qwen-plus') || modelId.includes('qwen-turbo') || modelId.includes('qwen-flash')) {
+  if (id.includes('qwen-plus') || id.includes('qwen-turbo') || id.includes('qwen-flash')) {
     return { min: 0, max: 500000, defaultValue: 500000, canDisable: true }
   }
   return null
 }
🧹 Nitpick comments (1)
src/renderer/src/components/ChatConfig.vue (1)

73-91: Reasoning state wiring LGTM; minor race-guard optional.

Async watch can be out-of-order on rapid model switches. If flicker appears, guard with a requestId to ensure last-write-wins.
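The requestId guard mentioned here could take the following shape. This is a sketch under stated assumptions: the helper name `loadModelReasoning` and its parameters are hypothetical, standing in for the component's async watch callback.

```typescript
// Last-write-wins guard for async config fetches: only the most recent
// request may apply its result. Names are hypothetical.
let latestRequestId = 0

async function loadModelReasoning(
  fetchReasoning: () => Promise<boolean>,
  apply: (reasoning: boolean) => void
): Promise<void> {
  const requestId = ++latestRequestId
  const reasoning = await fetchReasoning()
  // A newer model switch has started; drop this stale result
  if (requestId !== latestRequestId) return
  apply(reasoning)
}
```

Vue's `watch` also passes an `onCleanup` callback that can serve the same purpose without a module-level counter.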

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7f980d2 and 9161d71.

📒 Files selected for processing (3)
  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts (2 hunks)
  • src/renderer/src/components/ChatConfig.vue (4 hunks)
  • src/renderer/src/components/settings/ModelConfigDialog.vue (3 hunks)
🧰 Additional context used
📓 Path-based instructions (17)
src/renderer/src/**/*

📄 CodeRabbit inference engine (.cursor/rules/i18n.mdc)

src/renderer/src/**/*: All user-facing strings must use i18n keys (avoid hardcoded user-visible text in code)
Use the 'vue-i18n' framework for all internationalization in the renderer
Ensure all user-visible text in the renderer uses the translation system

Files:

  • src/renderer/src/components/ChatConfig.vue
  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.{vue,ts,js,tsx,jsx}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Renderer process code goes in src/renderer

Files:

  • src/renderer/src/components/ChatConfig.vue
  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}

📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)

src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability
Implement proper state management with Pinia
Utilize Vue Router for navigation and route management
Leverage Vue's built-in reactivity system for efficient data handling

Files:

  • src/renderer/src/components/ChatConfig.vue
  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/src/**/*.vue

📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)

Use scoped styles to prevent CSS conflicts between components

Files:

  • src/renderer/src/components/ChatConfig.vue
  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,tsx,vue}: Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
Use TypeScript for all code; prefer types over interfaces.
Avoid enums; use const objects instead.
Use arrow functions for methods and computed properties.
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements.

Files:

  • src/renderer/src/components/ChatConfig.vue
  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.{vue,ts}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

Implement lazy loading for routes and components.

Files:

  • src/renderer/src/components/ChatConfig.vue
  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.{ts,vue}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching.
Implement SEO best practices using Nuxt's useHead and useSeoMeta.

Use Pinia for frontend state management (do not introduce alternative state libraries)

Files:

  • src/renderer/src/components/ChatConfig.vue
  • src/renderer/src/components/settings/ModelConfigDialog.vue
**/*.{ts,tsx,js,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Use English for all logs and comments

Files:

  • src/renderer/src/components/ChatConfig.vue
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Enable and adhere to strict TypeScript typing (avoid implicit any, prefer precise types)

Files:

  • src/renderer/src/components/ChatConfig.vue
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
src/renderer/{src,shell,floating}/**/*.vue

📄 CodeRabbit inference engine (CLAUDE.md)

src/renderer/{src,shell,floating}/**/*.vue: Use Vue 3 Composition API for all components
All user-facing strings must use i18n keys via vue-i18n (no hard-coded UI strings)
Use Tailwind CSS utilities and ensure styles are scoped in Vue components

Files:

  • src/renderer/src/components/ChatConfig.vue
  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/src/components/**/*

📄 CodeRabbit inference engine (CLAUDE.md)

Organize UI components by feature within src/renderer/src/

Files:

  • src/renderer/src/components/ChatConfig.vue
  • src/renderer/src/components/settings/ModelConfigDialog.vue
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English

Files:

  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
src/{main,renderer}/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging

Files:

  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
src/main/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Use Electron's built-in APIs for file system and native dialogs

Files:

  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/error-logging.mdc)

**/*.{ts,tsx}: Always use try-catch to handle potential errors
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include a timestamp, log level, error code, error description, stack trace (where applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Don't swallow errors
Provide user-friendly error messages
Implement an error retry mechanism
Avoid logging sensitive information
Use structured logging
Set appropriate log levels

Files:

  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
src/main/presenter/llmProviderPresenter/providers/*.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (tool_call_start, tool_call_chunk, tool_call_end) in the standardized format.
Provider implementations should yield stop events with appropriate stop_reason in the standardized format.
Provider implementations should yield error events in the standardized format...

Files:

  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
src/main/**/*.{ts,js,tsx,jsx}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Main process code goes in src/main

Files:

  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
🧠 Learnings (7)
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Use <script setup> syntax for concise component definitions.

Applied to files:

  • src/renderer/src/components/ChatConfig.vue
📚 Learning: 2025-07-21T01:46:30.354Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-07-21T01:46:30.354Z
Learning: Applies to src/renderer/src/**/* : Use the 'vue-i18n' framework for all internationalization in the renderer

Applied to files:

  • src/renderer/src/components/ChatConfig.vue
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Leverage ref, reactive, and computed for reactive state management.

Applied to files:

  • src/renderer/src/components/ChatConfig.vue
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Import Icon component from Iconify/Vue.

Applied to files:

  • src/renderer/src/components/ChatConfig.vue
📚 Learning: 2025-07-21T01:47:28.817Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-best-practices.mdc:0-0
Timestamp: 2025-07-21T01:47:28.817Z
Learning: Applies to src/renderer/src/**/*.{vue,ts,tsx,js,jsx} : Leverage Vue's built-in reactivity system for efficient data handling

Applied to files:

  • src/renderer/src/components/ChatConfig.vue
📚 Learning: 2025-09-06T03:07:23.794Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-06T03:07:23.794Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add provider configuration entries in src/main/presenter/configPresenter/providers.ts

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
📚 Learning: 2025-09-06T03:07:23.794Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-06T03:07:23.794Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : New LLM providers must be added under src/main/presenter/llmProviderPresenter/providers/ as separate files

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-check (x64)
🔇 Additional comments (3)
src/renderer/src/components/ChatConfig.vue (1)

3-4: Imports LGTM.

src/renderer/src/components/settings/ModelConfigDialog.vue (2)

547-553: Dashscope default thinkingBudget init LGTM.

Graceful default initialization when unset.


706-717: Add thinking caps for qwen-plus/turbo/flash: good alignment.

Comment on lines +129 to +151

   // DashScope
   const isDashscope = props.providerId === 'dashscope'
-  const isQwen3 = props.modelId?.includes('qwen3')
+  const modelId = props.modelId?.toLowerCase() || ''
+  const supportedQwenThinkingModels = [
+    // Open source versions
+    'qwen3-235b-a22b',
+    'qwen3-32b',
+    'qwen3-30b-a3b',
+    'qwen3-14b',
+    'qwen3-8b',
+    'qwen3-4b',
+    'qwen3-1.7b',
+    'qwen3-0.6b',
+    // Commercial versions
+    'qwen-plus',
+    'qwen-flash',
+    'qwen-turbo'
+  ]
+  const isQwenThinking = supportedQwenThinkingModels.some((supportedModel) =>
+    modelId.includes(supportedModel)
+  )
-  return (isGemini && isGemini25) || (isDashscope && isQwen3)
+  return (isGemini && isGemini25) || (isDashscope && isQwenThinking && modelReasoning.value)
🛠️ Refactor suggestion

Qwen thinking-budget range is inconsistent with ModelConfigDialog (caps too low for qwen-plus/turbo/flash).

Here you gate Qwen commercial models, but getQwen3MaxBudget() below only returns OS caps; UI cap becomes 81,920 vs 500,000 in ModelConfigDialog. Unify ranges.

Apply this diff to extend the cap used by the chat config:

 const getQwen3MaxBudget = (): number => {
   const modelId = props.modelId?.toLowerCase() || ''

  // Return different max values for different Qwen3 models
   if (modelId.includes('qwen3-235b-a22b') || modelId.includes('qwen3-30b-a3b')) {
     return 81920
   } else if (
     modelId.includes('qwen3-32b') ||
     modelId.includes('qwen3-14b') ||
     modelId.includes('qwen3-8b') ||
     modelId.includes('qwen3-4b')
   ) {
     return 38912
   } else if (modelId.includes('qwen3-1.7b') || modelId.includes('qwen3-0.6b')) {
     return 20000
+  } else if (
+    modelId.includes('qwen-plus') ||
+    modelId.includes('qwen-turbo') ||
+    modelId.includes('qwen-flash')
+  ) {
+    // Align with ModelConfigDialog thinking budget for commercial Qwen models
+    return 500000
   }

  // Default value
   return 81920
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

  // DashScope
  const isDashscope = props.providerId === 'dashscope'
  const modelId = props.modelId?.toLowerCase() || ''
  const supportedQwenThinkingModels = [
    // Open source versions
    'qwen3-235b-a22b',
    'qwen3-32b',
    'qwen3-30b-a3b',
    'qwen3-14b',
    'qwen3-8b',
    'qwen3-4b',
    'qwen3-1.7b',
    'qwen3-0.6b',
    // Commercial versions
    'qwen-plus',
    'qwen-flash',
    'qwen-turbo'
  ]
  const isQwenThinking = supportedQwenThinkingModels.some((supportedModel) =>
    modelId.includes(supportedModel)
  )
  return (isGemini && isGemini25) || (isDashscope && isQwenThinking && modelReasoning.value)

const getQwen3MaxBudget = (): number => {
  const modelId = props.modelId?.toLowerCase() || ''

  // Return different max values for different Qwen3 models
  if (modelId.includes('qwen3-235b-a22b') || modelId.includes('qwen3-30b-a3b')) {
    return 81920
  } else if (
    modelId.includes('qwen3-32b') ||
    modelId.includes('qwen3-14b') ||
    modelId.includes('qwen3-8b') ||
    modelId.includes('qwen3-4b')
  ) {
    return 38912
  } else if (modelId.includes('qwen3-1.7b') || modelId.includes('qwen3-0.6b')) {
    return 20000
  } else if (
    modelId.includes('qwen-plus') ||
    modelId.includes('qwen-turbo') ||
    modelId.includes('qwen-flash')
  ) {
    // Align with ModelConfigDialog thinking budget for commercial Qwen models
    return 500000
  }

  // Default value
  return 81920
}

Comment on lines +779 to 792

+  const supportedSearchModels = [
+    'qwen3-max-preview',
+    'qwen3-max',
+    'qwen-max',
+    'qwen-plus',
+    'qwen-flash',
+    'qwen-turbo',
+    'qwq-plus'
+  ]
-  const isSupported = supportedModels.some((supportedModel) => modelId.includes(supportedModel))
+  const isSupported = supportedSearchModels.some((supportedModel) =>
+    modelId.includes(supportedModel)
+  )
   return isDashscope && isSupported
 })
💡 Verification agent

🧩 Analysis chain

FE/BE mismatch: qwen3-max listed here but not in provider.

Same alignment issue as ChatConfig.vue; prefer adding qwen3-max to the provider allowlist.

Run to confirm:


🏁 Script executed:

#!/bin/bash
rg -n "qwen3-max" src/main src/renderer | sed 's/^/>> /'

Length of output: 624


Add ‘qwen3-max’ to backend provider allowlists
Add the pure 'qwen3-max' entry alongside 'qwen3-max-preview' in both
src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts

🤖 Prompt for AI Agents
src/renderer/src/components/settings/ModelConfigDialog.vue lines 779-792: the
renderer already checks for 'qwen3-max' but the backend allowlists are missing
the pure 'qwen3-max'; open
src/main/presenter/configPresenter/providerModelSettings.ts and
src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts and add
'qwen3-max' (exact string) alongside 'qwen3-max-preview' in the provider/model
allowlist arrays, ensure formatting matches surrounding entries and update any
related type unions or checks if present.

@zerob13 zerob13 merged commit 28b1b43 into dev Sep 10, 2025
2 checks passed
@zerob13 zerob13 deleted the fix/dashscope-thinking-models branch January 6, 2026 12:17


Development

Successfully merging this pull request may close these issues.

[BUG] Some Qwen model configurations for Alibaba Cloud Bailian (DashScope) are incorrect

3 participants