Conversation

yyhhyyyyyy (Collaborator) commented Sep 3, 2025

implement dedicated DashScope provider with enable_thinking support
Implement the enable_thinking portion of #836.

[Screenshot: Electron app, 2025-09-03 21:43:53]

Summary by CodeRabbit

  • New Features
    • Added DashScope as a selectable AI provider, supporting chat, text generation, and summaries.
    • Automatically enables enhanced “thinking/reasoning” mode on supported DashScope models to improve response quality.
    • Fetches and displays available DashScope models with key details (e.g., names, context limits, token caps).
  • Improvements
    • More accurate handling of provider types to ensure the correct capabilities are applied for each provider.
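
For context, enable_thinking travels as an extra request-body flag on DashScope's OpenAI-compatible endpoint. A minimal raw-request sketch, with an assumed model name (DashScope generally requires streaming when thinking mode is enabled); this is not the provider's actual code:

// Hedged sketch of the wire-level flag only.
const response = await fetch(
  'https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions',
  {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.DASHSCOPE_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'qwen3-32b', // assumed dual-mode model
      messages: [{ role: 'user', content: 'Why is the sky blue?' }],
      stream: true, // DashScope expects streaming when thinking is enabled
      enable_thinking: true // DashScope extension: emit reasoning tokens
    })
  }
)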

coderabbitai bot (Contributor) commented Sep 3, 2025

Walkthrough

Adds DashScope as a new LLM provider. Updates DEFAULT_PROVIDERS to use apiType "dashscope" for DashScope, extends the provider factory to instantiate DashscopeProvider, and introduces DashscopeProvider with model-metadata fetching and reasoning-aware streaming via enable_thinking for supported models.

Changes

Cohort / File(s) Summary
Provider config update
src/main/presenter/configPresenter/providers.ts
Changes DashScope entry apiType from "openai" to "dashscope"; other fields unchanged.
Factory wiring
src/main/presenter/llmProviderPresenter/index.ts
Imports and registers DashscopeProvider in factory when provider.apiType === "dashscope".
New Dashscope provider
src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
Adds DashscopeProvider extending OpenAICompatibleProvider; enables enable_thinking for select models during streaming; fetches model metadata; implements completions, summaries, generateText via OpenAI-compatible paths.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor UI as UI
  participant LPP as LLMProviderPresenter
  participant DSP as DashscopeProvider
  participant OAI as OpenAICompatibleProvider (base)
  participant API as DashScope API

  UI->>LPP: request(modelId, messages/prompt, opts)
  LPP->>LPP: select provider by apiType
  alt apiType == "dashscope"
    LPP->>DSP: instantiate & delegate
    alt reasoning requested AND model supports enable_thinking
      note over DSP: Wrap chat.completions.create to inject enable_thinking
      DSP->>OAI: super.coreStream(reasoning=false)
      OAI->>API: chat.completions.create(..., enable_thinking=true)
      API-->>OAI: stream tokens
      OAI-->>DSP: streamed response
      DSP->>DSP: restore original create
      DSP-->>LPP: LLMResponse
    else no reasoning or unsupported model
      DSP->>OAI: base coreStream / openAICompletion
      OAI->>API: chat.completions.create(...)
      API-->>OAI: response/stream
      OAI-->>DSP: LLMResponse
      DSP-->>LPP: LLMResponse
    end
  else other apiType
    LPP->>Other: delegate per existing logic
  end
  LPP-->>UI: result
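A minimal TypeScript sketch of the wrap/inject/restore step shown in the diagram, assuming the openai SDK client used by the base class; apart from the enable_thinking flag and the wrap/restore idea, names here are illustrative guesses at the real dashscopeProvider.ts:

import OpenAI from 'openai'

// Temporarily shadow chat.completions.create so the base class's unchanged
// streaming path sends enable_thinking, then restore the original method.
async function* streamWithThinking(
  client: OpenAI,
  params: OpenAI.Chat.ChatCompletionCreateParamsStreaming
) {
  const original = client.chat.completions.create.bind(client.chat.completions)
  client.chat.completions.create = ((p: object, opts?: object) =>
    original({ ...p, enable_thinking: true } as never, opts as never)
  ) as typeof client.chat.completions.create
  try {
    const stream = await client.chat.completions.create(params)
    for await (const chunk of stream) {
      yield chunk.choices[0]?.delta?.content ?? ''
    }
  } finally {
    client.chat.completions.create = original // restore even if the stream throws
  }
}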

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • zerob13

Poem

I hop through code, a nibbling sprite, 🐇
New paths for chats now spring to light.
Dashscope dawns—streams softly think,
With toggled minds that nod and wink.
I thump approval, merge delight—
Carrots compiled, the build’s just right!



coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

🧹 Nitpick comments (2)
src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts (2)

15-24: Use English for comments per repo guidelines

Translate comments to English to match “Write logs and comments in English”.

-  // 支持 enable_thinking 参数的模型列表(双模式模型)
+  // Models that support DashScope's enable_thinking (dual-mode models)
@@
-  /**
-   * 检查模型是否支持 enable_thinking 参数
-   * @param modelId 模型ID
-   * @returns boolean 是否支持 enable_thinking
-   */
+  /**
+   * Check whether a model supports the enable_thinking parameter.
+   * @param modelId Model ID
+   * @returns boolean
+   */

Also applies to: 30-40


15-40: Optional: tighten model matching and future‑proofing

Using includes() may yield false positives. Consider prefix/regex match on known families (e.g., /^qwen3(-|$)/ with size variants) and move the allowlist to config for easier updates.
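
A possible shape, keeping the list in the provider for now (family patterns below are illustrative; the PR's actual allowlist lives in dashscopeProvider.ts and may differ):

// Sketch: anchor matching to known model families instead of substring includes().
const ENABLE_THINKING_FAMILIES: RegExp[] = [
  /^qwen3(-|$)/, // qwen3, qwen3-32b, qwen3-235b-a22b, ...
  /^qwen-plus(-|$)/, // assumed commercial dual-mode family
  /^qwen-turbo(-|$)/
]

function supportsEnableThinking(modelId: string): boolean {
  const id = modelId.toLowerCase()
  return ENABLE_THINKING_FAMILIES.some((re) => re.test(id))
}

// supportsEnableThinking('qwen3-32b') === true
// supportsEnableThinking('my-qwen3-proxy') === false, where includes('qwen3') would match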

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 88b2de6 and 7d72ff6.

📒 Files selected for processing (3)
  • src/main/presenter/configPresenter/providers.ts (1 hunks)
  • src/main/presenter/llmProviderPresenter/index.ts (2 hunks)
  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (11)
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English

Files:

  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/configPresenter/providers.ts
src/{main,renderer}/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging

Files:

  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/configPresenter/providers.ts
src/main/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Use Electron's built-in APIs for file system and native dialogs

Files:

  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/configPresenter/providers.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/error-logging.mdc)

**/*.{ts,tsx}: Always handle potential errors with try-catch
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (where applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement error retry mechanisms
Avoid logging sensitive information
Use structured logging
Set appropriate log levels

Enable and adhere to strict TypeScript type checking across the codebase

Files:

  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/configPresenter/providers.ts
src/main/presenter/llmProviderPresenter/providers/*.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (tool_call_start, tool_call_chunk, tool_call_end) in the standardized format.
Provider implementations should yield stop events with appropriate stop_reason in the standardized format.
Provider implementations should yield error events in the standardized format...

Files:

  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
src/main/**/*.{ts,js,tsx,jsx}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Main process code goes in src/main

Files:

  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/configPresenter/providers.ts
**/*.{ts,tsx,js,jsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Write logs and comments in English

Files:

  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/configPresenter/providers.ts
src/main/presenter/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Maintain one Presenter per functional domain under src/main/presenter/

Files:

  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/configPresenter/providers.ts
src/main/presenter/llmProviderPresenter/index.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

src/main/presenter/llmProviderPresenter/index.ts: src/main/presenter/llmProviderPresenter/index.ts should manage the overall Agent loop, conversation history, tool execution via McpPresenter, and frontend communication via eventBus.
The main Agent loop in llmProviderPresenter/index.ts should handle multi-round LLM calls and tool usage, maintaining conversation state and controlling the loop with needContinueConversation and toolCallCount.
The main Agent loop should send standardized STREAM_EVENTS (RESPONSE, END, ERROR) to the frontend via eventBus.
The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.

Implement the Agent Loop layer in llmProviderPresenter/index.ts

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
src/main/presenter/configPresenter/**

📄 CodeRabbit inference engine (CLAUDE.md)

Centralize configuration logic under configPresenter/

Files:

  • src/main/presenter/configPresenter/providers.ts
src/main/presenter/configPresenter/providers.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Add new provider configuration entries in configPresenter/providers.ts

Files:

  • src/main/presenter/configPresenter/providers.ts
🧠 Learnings (11)
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should implement helper methods such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed for provider-specific logic.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
📚 Learning: 2025-09-01T09:16:58.043Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-01T09:16:58.043Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add new provider configuration entries in configPresenter/providers.ts

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/configPresenter/providers.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations must use a `coreStream` method that yields standardized stream events to decouple the main loop from provider-specific details.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-09-01T09:16:58.043Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-01T09:16:58.043Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Place each LLM provider implementation under llmProviderPresenter/providers/*.ts

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-09-01T09:16:58.043Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-01T09:16:58.043Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : Implement the Agent Loop layer in llmProviderPresenter/index.ts

Applied to files:

  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : `src/main/presenter/llmProviderPresenter/index.ts` should manage the overall Agent loop, conversation history, tool execution via `McpPresenter`, and frontend communication via `eventBus`.

Applied to files:

  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield events asynchronously using the async generator pattern.

Applied to files:

  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-09-01T09:16:58.043Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-01T09:16:58.043Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each LLM provider must implement a coreStream method following the standardized event interface

Applied to files:

  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using `convertToProviderTools`) and included in the API request.

Applied to files:

  • src/main/presenter/llmProviderPresenter/index.ts
🧬 Code graph analysis (2)
src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts (1)
src/shared/presenter.d.ts (8)
  • LLM_PROVIDER (525-545)
  • IConfigPresenter (361-496)
  • ChatMessage (1321-1333)
  • ModelConfig (121-136)
  • MCPToolDefinition (1063-1079)
  • LLMCoreStreamEvent (1280-1318)
  • MODEL_META (511-524)
  • LLMResponse (883-900)
src/main/presenter/llmProviderPresenter/index.ts (1)
src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts (1)
  • DashscopeProvider (13-147)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-check (x64)
🔇 Additional comments (3)
src/main/presenter/configPresenter/providers.ts (1)

386-386: DashScope provider type switch looks correct

Changing apiType to "dashscope" aligns with the factory case and enables the dedicated provider.

src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts (1)

101-146: Non‑streaming paths won’t set enable_thinking

completions/summaries/generateText call openAICompletion, which currently won’t add enable_thinking unless the base also invokes the hook in non‑streaming mode. Ensure the base change above covers openAICompletion, or override it here (less ideal).

Would you like me to wire the same hook into openAICompletion as shown in the base diff so non‑stream requests also benefit?
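
One possible shape, assuming a protected request-transform hook on the base class (method and type names below are guesses, not the repo's actual API):

// Sketch: a single hook that both streaming and non-streaming paths call,
// so DashscopeProvider patches one place instead of wrapping create() twice.
type ChatParams = Record<string, unknown>

class OpenAICompatibleProviderSketch {
  protected transformRequest(params: ChatParams): ChatParams {
    return params // base class: pass through unchanged
  }

  protected async openAICompletion(params: ChatParams): Promise<unknown> {
    const finalParams = this.transformRequest(params) // non-streaming path shares the hook
    return this.post('/chat/completions', finalParams)
  }

  private async post(_path: string, body: ChatParams): Promise<unknown> {
    return body // actual HTTP call elided in this sketch
  }
}

class DashscopeProviderSketch extends OpenAICompatibleProviderSketch {
  protected transformRequest(params: ChatParams): ChatParams {
    const model = String(params.model ?? '')
    return /^qwen3(-|$)/.test(model) ? { ...params, enable_thinking: true } : params
  }
}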

src/main/presenter/llmProviderPresenter/index.ts (1)

21-21: Factory wiring for DashScope is correct

Import and switch case for apiType "dashscope" correctly instantiate DashscopeProvider.

Also applies to: 201-203
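
For readers following along, the wiring amounts to roughly this (paraphrased sketch; the actual switch in index.ts lines 201-203 and the constructor arguments may differ, and the import paths assume the repo's @shared alias):

import { DashscopeProvider } from './providers/dashscopeProvider'
import type { LLM_PROVIDER, IConfigPresenter } from '@shared/presenter'

// Sketch of the factory case that routes DashScope to the dedicated provider.
function createProviderInstance(provider: LLM_PROVIDER, configPresenter: IConfigPresenter) {
  switch (provider.apiType) {
    case 'dashscope':
      // providers.ts now sets apiType 'dashscope' for the DashScope entry,
      // so this case is hit instead of the generic OpenAI-compatible one.
      return new DashscopeProvider(provider, configPresenter)
    default:
      throw new Error(`Unhandled apiType: ${provider.apiType}`)
  }
}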

yyhhyyyyyy merged commit 963612f into dev Sep 4, 2025
2 checks passed
zerob13 added a commit that referenced this pull request Sep 9, 2025
* fix: chat config need sync to new value when change model in chat (#823)

* fix: gemini nano banana not read image from chatinput

* fix: remove file-type ,this will mark html as audio (#824)

* fix: Solve the problem of the window becoming larger when dragging floating button under Windows (#826)

* fix: improve OpenAI compatible provider compatibility with third-party services

* chore: update vue-renderer-markdown to v0.0.35 (#827)

* refactor: remove custom-prompts-server and decouple prompts from MCP lifecycle (#829)

- Remove custom-prompts-server service entirely including configuration
- Implement data source merging in MCP store to load prompts from both config and MCP
- Add upgrade migration logic for versions < 0.3.5 to clean up old configurations
- Ensure @ operations work independently of MCP state through config data source
- Update CLAUDE.md with prompt management guidelines

The @ prompt functionality now works completely independently of MCP,
loading custom prompts directly from config storage while maintaining
full compatibility with existing MCP prompt sources.

* chore: add better log for mcp tool name

* feat: ux update (#831)

* feat: ux update

* chore: format

* feat: setting provider ux update (#832)

* feat: add current datetime to system prompt

- Add current date and time information to user's system prompt when not empty
- Include complete datetime with timezone, year, month, day, hour, minute, second
- Apply to both preparePromptContent and buildContinueToolCallContext methods
- Update token calculation to use processed system prompt for accuracy
- Skip datetime addition for image generation models and empty prompts

* refactor: extract system prompt datetime enhancement to common method

- Add enhanceSystemPromptWithDateTime private method to reduce code duplication
- Update both preparePromptContent and buildContinueToolCallContext to use common method
- Improve code maintainability and ensure consistency across all system prompt processing
- Add comprehensive JSDoc documentation for the new method

* fix(markdown): auto-wrap hide scroll (#833)

* feat: add enable_thinking parameter support for siliconcloud (#835)

* chore: bump deps (#834)

* chore: bump up deps

* fix: change win arm to wasm32 sharp

* chore: revert sharp config

* feat: drop windows arm support

* fix(coderabbitai): remove action for windows arm64

* refactor: adjust scroll-to-bottom button glow effect (#837)

* feat: add mutual exclusive confirmation dialogs for DeepSeek-V3.1 (#838)

* feat: add sanitizeText utility for clipboard data handling (#843)

* feat: support canary upgrade (#840)

* feat: support canary upgrade

* feat: add update channel selection for stable/canary versions

- Add update channel configuration to config presenter
- Implement dynamic URL switching for version checks and downloads
- Add UI selector in AboutUsSettings for channel selection
- Support stable and canary update channels with different endpoints
- Add internationalization support for channel selection

* chore: change contributors charts to openomy

* refactor: improve update channel handling and network resilience

- Sanitize update channel input to prevent invalid values
- Add 10-second timeout to version check requests to prevent hanging
- Treat unknown channels as 'upgrade' (stable) for robustness
- Improve error handling for network timeouts and invalid channels

* feat: implement thinking parameter support for doubao models (#842)

* feat: implement dedicated DashScope provider with enable_thinking support (#844)

* feat: implement dedicated DashScope provider with enable_thinking support

* refactor: remove unnecessary API key status check methods from DashscopeProvider

* fix: prioritize provider.id over apiType in createProviderInstance (#846)

* feat: add qwen3 thinking budget support (#848)

* feat: add qwen3 thinking budget support

* fix: add missing gemini.onlySupported key in zh-CN locale

* refactor: merge duplicate silicon case statements in provider creation

* feat: add qwen3 thinking budget support in ChatConfig (#849)

* refactor(types): 🚀split monolithic presenter.d.ts into strict per-domain *.presenter.d.ts + typed core layer  (#847)

* docs: Add strong-typed message architecture and implementation guidelines

- Update message-architecture.md with strong-typed design, remove compatibility compromises
- Add event-to-UI mapping table and rendering checklist for contract compliance
- Create presenter-split-plan.md for type system refactoring
- Add implementation-tasks.md with phased rollout plan
- Create .cursor/rules/provider-guidelines.mdc for provider implementation guidance

This establishes a foundation for strong-typed, maintainable message architecture without legacy compatibility debt.

* types(core): add strong-typed core types and barrel exports\n\n- Add usage.ts (UsageStats, RateLimitInfo)\n- Add llm-events.ts (discriminated union + factories + guards)\n- Add agent-events.ts (LLMAgentEvent*, shared types)\n- Add chat.ts (Message/AssistantMessageBlock/UserMessageContent)\n- Add mcp.ts (MCP content/response/definition)\n- Add types/index.d.ts barrel exports\n\nNo compatibility shims included by design.

* refactor(types): move legacy presenters and add @shared/presenter stub; keep legacy exports in types/index to maintain build\n\n- Move legacy presenters to src/shared/types/presenters/legacy.presenters.d.ts\n- Add src/shared/presenter.d.ts re-export stub\n- Temporarily export only legacy presenters from types/index.d.ts to avoid type mismatches\n- Update implementation-tasks.md (Stage 2 done)\n\nNext: progressive import updates to new cores, then flip barrel to new types and delete legacy.

* refactor(types): alias legacy core message types to strong-typed core (B-plan)\n\n- legacy.presenters.d.ts now re-exports ChatMessage/ChatMessageContent/LLMAgentEvent/LLMAgentEventData/LLMCoreStreamEvent from core\n- Prepares for flipping interfaces without changing import sites

* docs(tasks): mark Phase 3 as completed\n\n- Successfully implemented B-plan approach with type aliasing\n- Unified core message types (ChatMessage, LLMAgentEvent, LLMCoreStreamEvent)\n- Created core model types and maintained build compatibility\n- All type checks passing with no breaking changes

* fix(types): revert to legacy-only exports and fix MESSAGE_ROLE\n\n- Revert types/index.d.ts to only export legacy presenters\n- Remove 'function' from MESSAGE_ROLE to match core definition\n- Maintain build stability while preserving type unification work

* feat(provider): implement factory functions for LLMCoreStreamEvent\n\n- Replace all manual event creation with createStreamEvent factory functions\n- Ensures type safety and consistent event structure\n- Updated OpenAICompatibleProvider with strong-typed events:\n  - text, reasoning, toolCallStart, toolCallChunk, toolCallEnd\n  - error, usage, stop, imageData events\n- All type checks passing\n- Phase 4.1 completed: Provider now outputs LLMCoreStreamEvent with factory construction

* feat(provider): update OllamaProvider with factory functions\n\n- Replace all manual event creation with createStreamEvent factory functions\n- Ensures consistent tool_call_start → tool_call_chunk → tool_call_end sequence\n- Updated all event types: text, reasoning, toolCall*, usage, stop, error\n- Maintains proper tool call ID aggregation and sequencing\n- Phase 4.2 completed: Tool call sequences now strictly follow start/chunk/end pattern

* docs(tasks): update Phase 4 progress\n\n- Completed Provider strong-typed event output with factory functions\n- Completed tool_call_* strict start/chunk/end sequences\n- Completed usage event sent before stop in all providers\n- Updated implementation tasks document with current progress

* feat(phase4): complete Provider strong-typed event integration\n\n- Added factory functions import to AwsBedrockProvider\n- Updated error handling to use createStreamEvent.error() + createStreamEvent.stop('error')\n- Created comprehensive unit tests for LLMCoreStreamEvent factory functions\n- Tests validate: event creation, tool call sequences, error+stop patterns, type guards\n- All 12 core event tests passing ✅\n\n## Phase 4 Completed:\n- ✅ Provider strong-typed event output (factory construction)\n- ✅ tool_call_* strict start/chunk/end sequences with ID aggregation\n- ✅ Usage events sent before stop in all providers\n- ✅ Rate limit events (handled at Presenter layer)\n- ✅ Unified error + stop event patterns\n- ✅ Comprehensive unit tests for sequences/fields/boundaries

* feat(phase5): complete Agent & UI alignment

## Phase 5 Completed: Agent & UI Alignment ✅

### 1. Agent Layer Enhancement
- ✅ Agent consumes only CoreEvent and outputs LLMAgentEvent
- ✅ Strict separation of response/error/end event types
- ✅ Added rate_limit event handling in Agent layer
- ✅ Maintained proper event transformation pipeline

### 2. UI Layer Unification
- ✅ Removed standalone tool_call_permission type
- ✅ Unified to action + action_type pattern
- ✅ Updated all UI components and thread presenter logic
- ✅ Preserved backward compatibility with conditional rendering

### 3. Renderer Mapping & Testing
- ✅ Created comprehensive renderer contract tests (17 tests)
- ✅ Added message block data structure snapshot tests (14 tests)
- ✅ Validated event-to-UI block mapping compliance
- ✅ Ensured consistent status transitions and type safety

### Key Technical Achievements:
- **Type Safety**: All event transformations now type-safe
- **Consistency**: Unified action_type handling across codebase
- **Testing**: 31 new tests covering contracts and snapshots
- **Architecture**: Clean separation between Agent/UI concerns
- **Quality**: All lint, format, and type checks passing ✅

### Files Modified:
- Agent layer: llmProviderPresenter/index.ts
- UI components: MessageItemAssistant.vue
- Thread logic: threadPresenter/index.ts
- Type definitions: shared/chat.d.ts
- Tests: rendererContract.test.ts, messageBlockSnapshot.test.ts

Ready for Phase 6: Quality & Tools 🚀

* feat(phase6): complete Quality & Tools phase

## Phase 6 Completed: Quality & Tools ✅

### 1. Table-Driven Contract Tests
- ✅ Created comprehensive event mapping contract tests (25 test cases)
- ✅ Validated all event types from architecture mapping table
- ✅ Verified status transitions and type constraints
- ✅ Added tool call ID aggregation and lifecycle validation
- ✅ Implemented error recovery pattern testing

### 2. Performance Evaluation & Stress Testing
- ✅ Large text content processing (50KB < 10ms)
- ✅ Large reasoning content handling (100KB < 15ms)
- ✅ Large image data processing (1MB < 20ms)
- ✅ Batch image processing (100 images < 50ms)
- ✅ High-frequency event processing (1000 events < 100ms)
- ✅ Mixed event type handling (500 events < 80ms)
- ✅ Memory leak prevention (5000 events < 500ms)
- ✅ Extreme parameter handling (10KB params < 5ms)
- ✅ Concurrent processing simulation (10 workers < 100ms)

### Key Technical Achievements:
- **Comprehensive Testing**: 67 tests passing across all scenarios
- **Performance Validation**: All benchmarks meet performance targets
- **Type Safety**: Full TypeScript compliance (0 errors)
- **Code Quality**: Lint and format checks passing ✅
- **Architecture Compliance**: All mapping table rules verified
- **Stress Testing**: System handles extreme loads efficiently

### Test Coverage Summary:
- Event mapping contract tests: 25 tests ✅
- Renderer contract tests: 17 tests ✅
- Performance evaluation tests: 9 tests ✅
- Core event factory tests: 12 tests ✅
- Message block snapshot tests: 14 tests ✅
- Shell integration tests: 8 tests ✅

### Files Added:
- test/renderer/message/eventMappingTable.test.ts (comprehensive mapping validation)
- test/renderer/message/performanceEvaluation.test.ts (stress & performance testing)

Ready for production deployment with full quality assurance! 🚀

* fix(providers): complete strong-typed event integration across all providers

* fix(vitest): modify test case

* fix: default settings

* chore: update doc

* fix(ci): remove duplicate check in pr ci

* feat: add pnpm cache for pr check

* fix(ci): pr check with pnpm cache

* fix(ci): change cache key to package.json

* ci: remove pnpm cache

* feat: add glow breathing effect to scroll-to-bottom button (#850)

* feat: add glow breathing effect to scroll-to-bottom button

* fix: ensure exclusive display between MessageList and ArtifactDialog

* fix: refine MessageList–ArtifactDialog interaction logic; correct z-order between dialog and ArtifactDialog

* chore: prettier .vue

* feat: add web search support with configurable options for dashscope (#851)

* feat: add web search support with configurable options for dashscope

* fix: correct qwen model parameters to match official documentation

* feat: add web search support with configurable options for dashscope (#852)

* feat: add web search support with configurable options for dashscope

* fix: correct qwen model parameters to match official documentation

* feat: add search configuration support to ChatConfig components

* fix: fix enableSearch state sync and parameter passing issues

* fix: preserve search settings during data import

* feat: add dashscope commercial models to enable_thinking support (#853)

* feat: add search capability icon for model list (#854)

* feat: add search capability icon for model list

* fix: clear search settings when creating new conversation

* feat(markdown): Thinking panel now supports LaTeX compilation for mathematical formulas & markdown performance optimization (#857)

* feat(markdown): thinking panel supports LaTeX rendering of math formulas & markdown performance optimization
close: #845

* chore: lint

* chore(ai): update claude code rules and agents

* fix(ui): revert Dialog z-index to z-50 to fix dropdown visibility

Reverts DialogContent z-index from z-[100] back to z-50 to resolve issue where Select and EmojiPicker dropdowns were not appearing. This maintains proper layering hierarchy without breaking other UI components.

* feat: upgrade vue-renderer-markdown & vue-use-monaco (#862)

1. ignore math-block warning
2. Compatible with the syntax issues of mermaid produced by AI, greatly reducing the probability of mermaid rendering errors

* feat(dashscope): add qwen3-max-preview model (#865)

* fix: mcp params support more types (#861)

* feat(mcp): enhance tool parameter display with enum type support

- Add enum parameter type detection and enhanced display
- Show enum parameters with distinct blue badge styling (enum(string), array[enum(string)])
- Display allowed values for both direct enum and array item enum parameters
- Add i18n support for "allowedValues" and "arrayItemValues" labels
- Maintain consistent UI design with existing parameter display patterns
- Improve developer experience when debugging MCP tools with constrained parameters

* fix: enum params support

* fix(context-menu): handle local file paths in image save functionality

- Fix URL parsing error when saving images from local file paths
- Add proper handling for http/https URLs, file:// URLs, and direct file paths
- Use fs.promises for reading local files instead of net.fetch for invalid URLs
- Prevent "Failed to parse URL from" error when saving local images

* fix(context-menu): improve URL handling robustness in image save

- Add try-catch around net.fetch to handle invalid URLs gracefully
- Implement fallback methods for file:// URLs and local file paths
- Add debug logging to track source URL values for troubleshooting
- Prevent "Failed to parse URL from" errors with comprehensive URL validation

* fix(context-menu): handle empty srcURL in image save functionality

- Add comprehensive URL detection when srcURL is empty
- Implement fallback URL sources (linkURL, pageURL) for better compatibility
- Add debug logging to track all available context menu parameters
- Prevent "Failed to parse URL from" errors caused by empty URLs
- Provide clear error message when no valid URL can be found

* chore: format code

* fix: ai review

* fix: prevent @ symbol remaining when deleting mentions (#867)

* Merge commit from fork

* feat: implement separated system and custom prompt management (#868)

* feat: implement separated system and custom prompt management

* style: code fmt

* fix: add migration for legacy default_system_prompt to system_prompts

* feat: add Moonshot model configurations (#869)

* refactor: translate all cn comments and log to en (#871)

* refactor: translate all cn comments and log to en

* fix: revert translate in params

* feat: add reasoning support for Grok thinking models (#873)

* feat: add reasoning support for Grok thinking models

* fix: code lint

* fix: escaping character issue

---------

Co-authored-by: zerob13 <zerob13@gmail.com>

---------

Co-authored-by: hllshiro <40970081+hllshiro@users.noreply.github.com>
Co-authored-by: yyhhyyyyyy <yyhhyyyyyy8@gmail.com>
Co-authored-by: xiaomo <wegi866@gmail.com>
Co-authored-by: Simon He <57086651+Simon-He95@users.noreply.github.com>
Co-authored-by: luy <12696648@qq.com>
zerob13 deleted the feat/dashscope-enable-thinking-support branch January 6, 2026 12:17