feat: add reasoning support for Grok thinking models #873
Conversation
Caution: Review failed — the pull request is closed.

Walkthrough

Adds reasoningEffort support for Grok models across configuration, provider logic, and UI. grokProvider now detects reasoning-capable models, conditionally injects reasoning_effort into API calls, and renames tools → mcpTools in signatures. Also includes a small wording tweak in the GitHub Copilot device-flow message.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant UI as UI
  participant Provider as GrokProvider
  participant API as Grok/OpenAI API
  UI->>Provider: completions(messages, modelId, modelConfig, ..., mcpTools)
  alt image-generation model
    Provider->>API: image-generation request
    API-->>Provider: image result
    Provider-->>UI: RESPONSE events
    Note over Provider,UI: small delay then return
  else reasoning-capable model
    Provider->>Provider: isReasoningModel / supportsReasoningEffort?
    alt reasoningEffort configured & supported
      Provider->>API: chat.completions.create(..., reasoning_effort)
    else
      Provider->>API: chat.completions.create(...)
    end
    API-->>Provider: streamed tokens/events
    Provider-->>UI: STREAM events
  else default streaming path
    Provider->>API: coreStream(..., mcpTools)
    API-->>Provider: streamed tokens/events
    Provider-->>UI: STREAM events
  end
```
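The reasoning branch of the diagram can be sketched as a small TypeScript helper. The helper names `isReasoningModel` and `supportsReasoningEffort` come from the walkthrough above; the model-id lists here are illustrative placeholders, not the PR's actual tables — only the shape of the conditional injection matters.

```typescript
// Hypothetical model-id lists for illustration only.
const REASONING_MODELS = ['grok-3-mini', 'grok-4']
const EFFORT_CAPABLE_MODELS = ['grok-3-mini']

const isReasoningModel = (modelId: string): boolean =>
  REASONING_MODELS.some((m) => modelId.toLowerCase().includes(m))

const supportsReasoningEffort = (modelId: string): boolean =>
  EFFORT_CAPABLE_MODELS.some((m) => modelId.toLowerCase().includes(m))

interface CompletionParams {
  model: string
  messages: Array<{ role: string; content: string }>
  reasoning_effort?: 'low' | 'high'
}

function buildParams(
  modelId: string,
  reasoningEffort: 'low' | 'high' | undefined
): CompletionParams {
  const params: CompletionParams = { model: modelId, messages: [] }
  // Attach reasoning_effort only when the model both reasons and accepts it.
  if (reasoningEffort && isReasoningModel(modelId) && supportsReasoningEffort(modelId)) {
    params.reasoning_effort = reasoningEffort
  }
  return params
}
```

Models that reason but do not accept the parameter simply get a plain request, matching the `else` arm of the diagram.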
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Tip

👮 Agentic pre-merge checks are now available in preview! Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs.

Example:

```yaml
reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Flag potential breaking changes that are not documented:
          1. Identify changes to public APIs/exports, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints (including removed/renamed items and changes to types, required params, return values, defaults, or behavior).
          2. Ignore purely internal/private changes (e.g., code not exported from package entry points or marked internal).
          3. Verify documentation exists: a "Breaking Change" section in the PR description and updates to CHANGELOG.md.
```

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (1)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/main/presenter/githubCopilotDeviceFlow.ts (1)

216-233: Fix ReferenceError: GITHUB_DEVICE_URL is undefined in the page context.

Inside the injected HTML, GITHUB_DEVICE_URL isn't defined; this breaks openBrowser(). Interpolate the constant into the string.

Apply this diff:

```diff
- const githubUrl = GITHUB_DEVICE_URL;
+ const githubUrl = '${GITHUB_DEVICE_URL}';
```

src/main/presenter/configPresenter/providerModelSettings.ts (1)

3180-3189: Mark image model type to drive correct UI/behavior.

grok-2-image-1212 should be typed as ImageGeneration so the renderer hides chat-only controls.

Apply this diff:

```diff
  {
    id: 'grok-2-image-1212',
    name: 'Grok 2 Image 1212',
    temperature: 0.7,
    contextLength: 130_000,
    maxTokens: 100_000,
    match: ['grok-2-image', 'grok-2-image-latest', 'grok-2-image-1212'],
    vision: true,
    functionCall: false,
-   reasoning: false
+   reasoning: false,
+   type: ModelType.ImageGeneration
  },
```
🧹 Nitpick comments (7)

src/main/presenter/githubCopilotDeviceFlow.ts (2)

261-297: Use proper IPC via contextBridge instead of console-message shims.

executeJavaScript + console parsing is brittle and violates the "proper IPC patterns" guideline. Expose a preload with contextBridge.exposeInMainWorld and handle IPC on a dedicated channel.

107-107: Non-English comment. Please keep comments in English per repo guidelines.
src/renderer/src/components/settings/ModelConfigDialog.vue (1)

1-428: Non-English comments. Convert Chinese comments to English to comply with the repo's logging/commenting guideline.

src/renderer/src/components/ChatConfig.vue (1)

1-304: Non-English comments. Please switch comments to English per the coding guidelines.

src/main/presenter/llmProviderPresenter/providers/grokProvider.ts (3)

10-15: Avoid hard-coded model lists; derive from config to prevent drift.

Use modelConfig.reasoning / modelConfig.reasoningEffort (already available in coreStream) instead of maintaining string arrays that can go stale.

165-186: LGTM: image path yields image_data and exits early. The brief delay is harmless; consider documenting why 300ms was chosen.

98-151: Prefer resilient message selection; avoid findLast for wider Node/Electron support.

Array.prototype.findLast may not be available depending on Electron's Node runtime. Use a reverse loop fallback.

Apply this diff:

```diff
- const userMessage = messages.findLast((msg) => msg.role === 'user')
+ let userMessage = null as ChatMessage | null
+ for (let i = messages.length - 1; i >= 0; i--) {
+   if (messages[i].role === 'user') { userMessage = messages[i]; break }
+ }
```

Please confirm your Electron/Node target includes findLast; if yes, you can keep it.
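The reverse-loop fallback suggested in the comment above can be checked in isolation. This sketch uses a minimal message shape for illustration; the real ChatMessage type in the PR has more fields.

```typescript
// Minimal illustrative message shape (not the PR's actual ChatMessage type).
interface Msg {
  role: 'system' | 'user' | 'assistant'
  content: string
}

// Equivalent to messages.findLast((m) => m.role === 'user'), but without
// relying on Array.prototype.findLast being available in the runtime.
function lastUserMessage(messages: Msg[]): Msg | null {
  for (let i = messages.length - 1; i >= 0; i--) {
    if (messages[i].role === 'user') return messages[i]
  }
  return null
}
```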
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
src/main/presenter/configPresenter/providerModelSettings.ts (2 hunks)
src/main/presenter/githubCopilotDeviceFlow.ts (1 hunks)
src/main/presenter/llmProviderPresenter/providers/grokProvider.ts (4 hunks)
src/renderer/src/components/ChatConfig.vue (1 hunks)
src/renderer/src/components/settings/ModelConfigDialog.vue (1 hunks)
🧰 Additional context used
📓 Path-based instructions (17)
src/renderer/src/**/*
📄 CodeRabbit inference engine (.cursor/rules/i18n.mdc)
src/renderer/src/**/*: All user-facing strings must use i18n keys (avoid hardcoded user-visible text in code)
Use the 'vue-i18n' framework for all internationalization in the renderer
Ensure all user-visible text in the renderer uses the translation system
Files:
src/renderer/src/components/ChatConfig.vue
src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.{vue,ts,js,tsx,jsx}
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
Renderer process code lives in src/renderer
Files:
src/renderer/src/components/ChatConfig.vue
src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}
📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability
Implement proper state management with Pinia
Utilize Vue Router for navigation and route management
Leverage Vue's built-in reactivity system for efficient data handling
Files:
src/renderer/src/components/ChatConfig.vue
src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/src/**/*.vue
📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)
Use scoped styles to prevent CSS conflicts between components
Files:
src/renderer/src/components/ChatConfig.vue
src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.{ts,tsx,vue}
📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)
src/renderer/**/*.{ts,tsx,vue}: Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
Use TypeScript for all code; prefer types over interfaces.
Avoid enums; use const objects instead.
Use arrow functions for methods and computed properties.
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements.
Files:
src/renderer/src/components/ChatConfig.vue
src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.{vue,ts}
📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)
Implement lazy loading for routes and components.
Files:
src/renderer/src/components/ChatConfig.vue
src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.{ts,vue}
📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)
src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching.
Implement SEO best practices using Nuxt's useHead and useSeoMeta.
Use Pinia for frontend state management (do not introduce alternative state libraries)
Files:
src/renderer/src/components/ChatConfig.vue
src/renderer/src/components/settings/ModelConfigDialog.vue
**/*.{ts,tsx,js,vue}
📄 CodeRabbit inference engine (CLAUDE.md)
Use English for all logs and comments
Files:
src/renderer/src/components/ChatConfig.vue
src/main/presenter/githubCopilotDeviceFlow.ts
src/renderer/src/components/settings/ModelConfigDialog.vue
src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/llmProviderPresenter/providers/grokProvider.ts
**/*.{ts,tsx,vue}
📄 CodeRabbit inference engine (CLAUDE.md)
Enable and adhere to strict TypeScript typing (avoid implicit any, prefer precise types)
Files:
src/renderer/src/components/ChatConfig.vue
src/main/presenter/githubCopilotDeviceFlow.ts
src/renderer/src/components/settings/ModelConfigDialog.vue
src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/llmProviderPresenter/providers/grokProvider.ts
src/renderer/{src,shell,floating}/**/*.vue
📄 CodeRabbit inference engine (CLAUDE.md)
src/renderer/{src,shell,floating}/**/*.vue: Use Vue 3 Composition API for all components
All user-facing strings must use i18n keys via vue-i18n (no hard-coded UI strings)
Use Tailwind CSS utilities and ensure styles are scoped in Vue components
Files:
src/renderer/src/components/ChatConfig.vue
src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/src/components/**/*
📄 CodeRabbit inference engine (CLAUDE.md)
Organize UI components by feature within src/renderer/src/
Files:
src/renderer/src/components/ChatConfig.vue
src/renderer/src/components/settings/ModelConfigDialog.vue
**/*.{js,jsx,ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)
**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English
Files:
src/main/presenter/githubCopilotDeviceFlow.ts
src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/llmProviderPresenter/providers/grokProvider.ts
src/{main,renderer}/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)
src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging
Files:
src/main/presenter/githubCopilotDeviceFlow.ts
src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/llmProviderPresenter/providers/grokProvider.ts
src/main/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)
Use Electron's built-in APIs for file system and native dialogs
Files:
src/main/presenter/githubCopilotDeviceFlow.ts
src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/llmProviderPresenter/providers/grokProvider.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/error-logging.mdc)
**/*.{ts,tsx}: Always use try-catch to handle potential errors
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (if applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement error retry mechanisms
Avoid logging sensitive information
Use structured logging
Set appropriate log levels
Files:
src/main/presenter/githubCopilotDeviceFlow.ts
src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/llmProviderPresenter/providers/grokProvider.ts
src/main/**/*.{ts,js,tsx,jsx}
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
Main process code lives in src/main
Files:
src/main/presenter/githubCopilotDeviceFlow.ts
src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/llmProviderPresenter/providers/grokProvider.ts
src/main/presenter/llmProviderPresenter/providers/*.ts
📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)
src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (tool_call_start, tool_call_chunk, tool_call_end) in the standardized format.
Provider implementations should yield stop events with appropriate stop_reason in the standardized format.
Provider implementations should yield error events in the standardized format...
Files:
src/main/presenter/llmProviderPresenter/providers/grokProvider.ts
🔇 Additional comments (5)

src/main/presenter/githubCopilotDeviceFlow.ts (1)

231-231: LGTM on text tweak. Literal apostrophe reads fine; no functional impact.

src/renderer/src/components/ChatConfig.vue (1)

649-672: Grok-only options look good; consider handling preexisting invalid values.

UI limits to low/high correctly. Ensure upstream config never holds minimal/medium for Grok so the select doesn't render a non-existent choice (see suggested normalization in ModelConfigDialog).

src/main/presenter/configPresenter/providerModelSettings.ts (1)

3142-3156: Good: default Grok reasoningEffort set to low. Matches the UI constraint (low/high). Helps avoid invalid values.

src/main/presenter/llmProviderPresenter/providers/grokProvider.ts (2)

25-37: LGTM: helper matchers are case-insensitive and scoped.

161-166: LGTM: early validation improves error clarity.
```typescript
// Handle reasoning models
if (this.isReasoningModel(modelId) && modelConfig?.reasoningEffort) {
  const originalCreate = this.openai.chat.completions.create.bind(this.openai.chat.completions)
  this.openai.chat.completions.create = ((params: any, options?: any) => {
    const modifiedParams = { ...params }

    if (this.supportsReasoningEffort(modelId)) {
      modifiedParams.reasoning_effort = modelConfig.reasoningEffort
    }

    return originalCreate(modifiedParams, options)
  }) as any

  try {
    const effectiveModelConfig = {
      ...modelConfig,
      reasoningEffort: undefined
    }
    yield* super.coreStream(
      messages,
      modelId,
      effectiveModelConfig,
      temperature,
      maxTokens,
      mcpTools
    )
  } finally {
    this.openai.chat.completions.create = originalCreate
  }
} else {
```
Do not monkey-patch the SDK method; concurrency and cross-request leakage risk.

Overwriting openai.chat.completions.create is not thread-safe. Parallel streams on the same provider instance can receive unintended reasoning_effort.

Suggested direction:

- Add an extension point in OpenAICompatibleProvider (e.g., protected getExtraCompletionParams(...)) and override it here to inject { reasoning_effort }.
- If refactoring the base isn't feasible now, at minimum guard the patch/restore with a per-instance mutex to serialize calls, and document the limitation.

Would you like me to propose a small base-class hook PR to eliminate the patch?
🤖 Prompt for AI Agents
In src/main/presenter/llmProviderPresenter/providers/grokProvider.ts around
lines 188-217 the code monkey-patches openai.chat.completions.create which can
leak reasoning_effort across concurrent requests; replace the patch by
adding/using an extension hook on the base provider (e.g., protected
getExtraCompletionParams(modelId, modelConfig, messages): Record<string, any> |
undefined) in OpenAICompatibleProvider, implement that hook in this grokProvider
to return { reasoning_effort: modelConfig.reasoningEffort } when applicable,
call the original completions.create with merged params (no runtime overwrite),
and remove the patch/restore block; if you cannot refactor the base now, protect
the existing patch with a per-instance mutex to serialize calls and clearly
document the limitation.
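A minimal sketch of the hook the review suggests, showing why it avoids the leakage: params are merged per request, so no shared mutable state exists. Class and method names follow the review's wording; the real base class's signatures may differ.

```typescript
type Params = Record<string, unknown>

// Simplified stand-in for the real OpenAICompatibleProvider.
class OpenAICompatibleProviderSketch {
  // Extension point: subclasses contribute provider-specific params.
  protected getExtraCompletionParams(
    modelId: string,
    modelConfig: { reasoningEffort?: string }
  ): Params {
    return {}
  }

  // The base builds each request by merging at call time; nothing is
  // overwritten on the SDK object, so concurrent streams cannot interfere.
  buildRequest(
    modelId: string,
    modelConfig: { reasoningEffort?: string },
    base: Params
  ): Params {
    return { ...base, ...this.getExtraCompletionParams(modelId, modelConfig) }
  }
}

class GrokProviderSketch extends OpenAICompatibleProviderSketch {
  protected override getExtraCompletionParams(
    _modelId: string,
    modelConfig: { reasoningEffort?: string }
  ): Params {
    return modelConfig.reasoningEffort
      ? { reasoning_effort: modelConfig.reasoningEffort }
      : {}
  }
}
```

With this shape, the grok subclass never touches `chat.completions.create` and the try/finally restore block disappears entirely.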
```vue
<!-- Grok models only support low and high -->
<template v-if="props.providerId === 'grok'">
  <SelectItem value="low">{{
    t('settings.model.modelConfig.reasoningEffort.options.low')
  }}</SelectItem>
  <SelectItem value="high">{{
    t('settings.model.modelConfig.reasoningEffort.options.high')
  }}</SelectItem>
</template>
<!-- Other models support all four options -->
<template v-else>
  <SelectItem value="minimal">{{
    t('settings.model.modelConfig.reasoningEffort.options.minimal')
  }}</SelectItem>
  <SelectItem value="low">{{
    t('settings.model.modelConfig.reasoningEffort.options.low')
  }}</SelectItem>
  <SelectItem value="medium">{{
    t('settings.model.modelConfig.reasoningEffort.options.medium')
  }}</SelectItem>
  <SelectItem value="high">{{
    t('settings.model.modelConfig.reasoningEffort.options.high')
  }}</SelectItem>
</template>
```
🛠️ Refactor suggestion
Constrain value: coerce invalid Grok reasoningEffort to a supported option.
If a stored value is minimal/medium, the select shows no option for Grok. Normalize to low/high on load.
Apply this diff near the end of loadConfig() after assigning config.value:
```diff
  try {
    const modelConfig = await settingsStore.getModelConfig(props.modelId, props.providerId)
    config.value = { ...modelConfig }
  } catch (error) {
    console.error('Failed to load model config:', error)
    …
  }
+
+ // Normalize Grok reasoningEffort to supported values
+ if (props.providerId === 'grok') {
+   const allowed = new Set(['low', 'high'])
+   const cur = config.value.reasoningEffort as string | undefined
+   if (!cur || !allowed.has(cur)) {
+     config.value.reasoningEffort = 'low'
+   }
+ }
```

🤖 Prompt for AI Agents
+ }🤖 Prompt for AI Agents
In src/renderer/src/components/settings/ModelConfigDialog.vue around lines
156-179, the Grok provider's select only supports "low" and "high" but stored
configs may be "minimal" or "medium", leaving the select empty; after assigning
config.value in loadConfig(), detect if props.providerId === 'grok' and if
config.value.reasoningEffort is not "low" or "high" then coerce/normalize it to
a supported option (e.g., set to "low"), assigning the normalized value back to
config.value.reasoningEffort so the select renders a valid choice; keep this
logic minimal and run only once during load.
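The normalization rule can also be expressed as a standalone pure function, which is easier to unit-test than logic embedded in loadConfig(). Provider ids and the fallback table below are assumptions for illustration, not the PR's actual code.

```typescript
type ReasoningEffort = 'minimal' | 'low' | 'medium' | 'high'

// Hypothetical per-provider support table; Grok only accepts low/high per this PR.
const SUPPORTED_EFFORTS: Record<string, ReasoningEffort[]> = {
  grok: ['low', 'high']
}

function normalizeReasoningEffort(
  providerId: string,
  stored: string | undefined
): ReasoningEffort {
  const allowed = SUPPORTED_EFFORTS[providerId] ?? ['minimal', 'low', 'medium', 'high']
  // Fall back to the first allowed value when the stored one is unsupported.
  return allowed.includes(stored as ReasoningEffort)
    ? (stored as ReasoningEffort)
    : allowed[0]
}
```

Called once after config load, this guarantees the select always renders a valid choice regardless of what an older config version stored.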
- fix: chat confg need sync to new value when change model in chat (#823)
- fix: gemini nano banana not read image from chatinput
- fix: remove file-type, this will mark html as audio (#824)
- fix: solve the problem of the window becoming larger when dragging the floating button under Windows (#826)
- fix: improve OpenAI compatible provider compatibility with third-party services
- chore: update vue-renderer-markdown to v0.0.35 (#827)
- refactor: remove custom-prompts-server and decouple prompts from MCP lifecycle (#829)
  - Remove custom-prompts-server service entirely including configuration
  - Implement data source merging in MCP store to load prompts from both config and MCP
  - Add upgrade migration logic for versions < 0.3.5 to clean up old configurations
  - Ensure @ operations work independently of MCP state through config data source
  - Update CLAUDE.md with prompt management guidelines

  The @ prompt functionality now works completely independently of MCP, loading custom prompts directly from config storage while maintaining full compatibility with existing MCP prompt sources.
- chore: add better log for mcp tool name
- feat: ux update (#831)
  - feat: ux update
  - chore: format
- feat: setting provider ux update (#832)
- feat: add current datetime to system prompt
  - Add current date and time information to user's system prompt when not empty
  - Include complete datetime with timezone, year, month, day, hour, minute, second
  - Apply to both preparePromptContent and buildContinueToolCallContext methods
  - Update token calculation to use processed system prompt for accuracy
  - Skip datetime addition for image generation models and empty prompts
- refactor: extract system prompt datetime enhancement to common method
  - Add enhanceSystemPromptWithDateTime private method to reduce code duplication
  - Update both preparePromptContent and buildContinueToolCallContext to use common method
  - Improve code maintainability and ensure consistency across all system prompt processing
  - Add comprehensive JSDoc documentation for the new method
- fix(markdown): auto-wrap hide scroll (#833)
- feat: add enable_thinking parameter support for siliconcloud (#835)
- chore: bump deps (#834)
  - chore: bump up deps
  - fix: change win arm to wasm32 sharp
  - chore: revert sharp config
  - feat: drop windows arm support
  - fix(coderabbitai): remove action for windows arm64
- refactor: adjust scroll-to-bottom button glow effect (#837)
- feat: add mutual exclusive confirmation dialogs for DeepSeek-V3.1 (#838)
- feat: add sanitizeText utility for clipboard data handling (#843)
- feat: support canary upgrade (#840)
  - feat: support canary upgrade
  - feat: add update channel selection for stable/canary versions: add update channel configuration to config presenter; implement dynamic URL switching for version checks and downloads; add UI selector in AboutUsSettings for channel selection; support stable and canary update channels with different endpoints; add internationalization support for channel selection
  - chore: change contributors charts to openomy
  - refactor: improve update channel handling and network resilience: sanitize update channel input to prevent invalid values; add 10-second timeout to version check requests to prevent hanging; treat unknown channels as 'upgrade' (stable) for robustness; improve error handling for network timeouts and invalid channels
- feat: implement thinking parameter support for doubao models (#842)
- feat: implement dedicated DashScope provider with enable_thinking support (#844)
  - feat: implement dedicated DashScope provider with enable_thinking support
  - refactor: remove unnecessary API key status check methods from DashscopeProvider
- fix: prioritize provider.id over apiType in createProviderInstance (#846)
- feat: add qwen3 thinking budget support (#848)
  - feat: add qwen3 thinking budget support
  - fix: add missing gemini.onlySupported key in zh-CN locale
  - refactor: merge duplicate silicon case statements in provider creation
- feat: add qwen3 thinking budget support in ChatConfig (#849)
- refactor(types): 🚀 split monolithic presenter.d.ts into strict per-domain *.presenter.d.ts + typed core layer (#847)
  - docs: add strong-typed message architecture and implementation guidelines: update message-architecture.md with strong-typed design, remove compatibility compromises; add event-to-UI mapping table and rendering checklist for contract compliance; create presenter-split-plan.md for type system refactoring; add implementation-tasks.md with phased rollout plan; create .cursor/rules/provider-guidelines.mdc for provider implementation guidance. This establishes a foundation for strong-typed, maintainable message architecture without legacy compatibility debt.
  - types(core): add strong-typed core types and barrel exports: add usage.ts (UsageStats, RateLimitInfo); add llm-events.ts (discriminated union + factories + guards); add agent-events.ts (LLMAgentEvent*, shared types); add chat.ts (Message/AssistantMessageBlock/UserMessageContent); add mcp.ts (MCP content/response/definition); add types/index.d.ts barrel exports. No compatibility shims included by design.
  - refactor(types): move legacy presenters and add @shared/presenter stub; keep legacy exports in types/index to maintain build: move legacy presenters to src/shared/types/presenters/legacy.presenters.d.ts; add src/shared/presenter.d.ts re-export stub; temporarily export only legacy presenters from types/index.d.ts to avoid type mismatches; update implementation-tasks.md (Stage 2 done). Next: progressive import updates to new cores, then flip barrel to new types and delete legacy.
  - refactor(types): alias legacy core message types to strong-typed core (B-plan): legacy.presenters.d.ts now re-exports ChatMessage/ChatMessageContent/LLMAgentEvent/LLMAgentEventData/LLMCoreStreamEvent from core; prepares for flipping interfaces without changing import sites
  - docs(tasks): mark Phase 3 as completed: successfully implemented B-plan approach with type aliasing; unified core message types (ChatMessage, LLMAgentEvent, LLMCoreStreamEvent); created core model types and maintained build compatibility; all type checks passing with no breaking changes
  - fix(types): revert to legacy-only exports and fix MESSAGE_ROLE: revert types/index.d.ts to only export legacy presenters; remove 'function' from MESSAGE_ROLE to match core definition; maintain build stability while preserving type unification work
  - feat(provider): implement factory functions for LLMCoreStreamEvent: replace all manual event creation with createStreamEvent factory functions to ensure type safety and consistent event structure; updated OpenAICompatibleProvider with strong-typed events (text, reasoning, toolCallStart, toolCallChunk, toolCallEnd, error, usage, stop, imageData); all type checks passing; Phase 4.1 completed: Provider now outputs LLMCoreStreamEvent with factory construction
  - feat(provider): update OllamaProvider with factory functions: replace all manual event creation with createStreamEvent factory functions; ensures consistent tool_call_start → tool_call_chunk → tool_call_end sequence; updated all event types (text, reasoning, toolCall*, usage, stop, error); maintains proper tool call ID aggregation and sequencing; Phase 4.2 completed: tool call sequences now strictly follow start/chunk/end pattern
  - docs(tasks): update Phase 4 progress: completed Provider strong-typed event output with factory functions; completed tool_call_* strict start/chunk/end sequences; completed usage event sent before stop in all providers; updated implementation tasks document with current progress
  - feat(phase4): complete Provider strong-typed event integration: added factory functions import to AwsBedrockProvider; updated error handling to use createStreamEvent.error() + createStreamEvent.stop('error'); created comprehensive unit tests for LLMCoreStreamEvent factory functions validating event creation, tool call sequences, error+stop patterns, and type guards; all 12 core event tests passing. Phase 4 completed: Provider strong-typed event output (factory construction); tool_call_* strict start/chunk/end sequences with ID aggregation; usage events sent before stop in all providers; rate limit events handled at Presenter layer; unified error + stop event patterns; comprehensive unit tests for sequences/fields/boundaries
  - feat(phase5): complete Agent & UI alignment. Agent layer enhancement: Agent consumes only CoreEvent and outputs LLMAgentEvent; strict separation of response/error/end event types; added rate_limit event handling in Agent layer; maintained proper event transformation pipeline. UI layer unification: removed standalone tool_call_permission type; unified to action + action_type pattern; updated all UI components and thread presenter logic; preserved backward compatibility with conditional rendering. Renderer mapping & testing: created comprehensive renderer contract tests (17 tests); added message block data structure snapshot tests (14 tests); validated event-to-UI block mapping compliance; ensured consistent status transitions and type safety. Key technical achievements: all event transformations now type-safe; unified action_type handling across codebase; 31 new tests covering contracts and snapshots; clean separation between Agent/UI concerns; all lint, format, and type checks passing. Files modified: llmProviderPresenter/index.ts (Agent layer), MessageItemAssistant.vue (UI components), threadPresenter/index.ts (thread logic), shared/chat.d.ts (type definitions), rendererContract.test.ts and messageBlockSnapshot.test.ts (tests). Ready for Phase 6: Quality & Tools
  - feat(phase6): complete Quality & Tools phase. Table-driven contract tests: created comprehensive event mapping contract tests (25 test cases); validated all event types from architecture mapping table; verified status transitions and type constraints; added tool call ID aggregation and lifecycle validation; implemented error recovery pattern testing. Performance evaluation & stress testing: large text content processing (50KB < 10ms); large reasoning content handling (100KB < 15ms); large image data processing (1MB < 20ms); batch image processing (100 images < 50ms); high-frequency event processing (1000 events < 100ms); mixed event type handling (500 events < 80ms); memory leak prevention (5000 events < 500ms); extreme parameter handling (10KB params < 5ms); concurrent processing simulation (10 workers < 100ms). Key technical achievements: 67 tests passing across all scenarios; all benchmarks meet performance targets; full TypeScript compliance (0 errors); lint and format checks passing; all mapping table rules verified; system handles extreme loads efficiently. Test coverage summary: event mapping contract tests (25), renderer contract tests (17), performance evaluation tests (9), core event factory tests (12), message block snapshot tests (14), shell integration tests (8). Files added: test/renderer/message/eventMappingTable.test.ts (comprehensive mapping validation), test/renderer/message/performanceEvaluation.test.ts (stress & performance testing). Ready for production deployment with full quality assurance!
  - fix(providers): complete strong-typed event integration across all providers
  - fix(vitest): modify test case
  - fix: default settings
  - chore: update doc
  - fix(ci): remove duplicate check in pr ci
  - feat: add pnpm cache for pr check
  - fix(ci): pr check with pnpm cache
  - fix(ci): change cache key to package.json
  - ci: remove pnpm cache
- feat: add glow breathing effect to scroll-to-bottom button (#850)
  - feat: add glow breathing effect to scroll-to-bottom button
  - fix: ensure exclusive display between MessageList and ArtifactDialog
  - fix: refine MessageList–ArtifactDialog interaction logic; correct z-order between dialog and ArtifactDialog
  - chore: prettier .vue
- feat: add web search support with configurable options for dashscope (#851)
  - feat: add web search support with configurable options for dashscope
  - fix: correct qwen model parameters to match official documentation
- feat: add web search support with configurable options for dashscope (#852)
  - feat: add web search support with configurable options for dashscope
  - fix: correct qwen model parameters to match official documentation
  - feat: add search configuration support to ChatConfig components
  - fix: fix enableSearch state sync and parameter passing issues
  - fix: preserve search settings during data import
- feat: add dashscope commercial models to enable_thinking support (#853)
- feat: add search capability icon for model list (#854)
  - feat: add search capability icon for model list
  - fix: clear search settings when creating new conversation
- feat(markdown): Thinking panel now supports LaTeX compilation for mathematical formulas & markdown performance optimization (#857)
  - feat(markdown): thinking panel supports LaTeX rendering of math formulas & markdown performance optimization (close: #845)
  - chore: lint
- chore(ai): update claude code rules and agents
- fix(ui): revert Dialog z-index to z-50 to fix dropdown visibility. Reverts DialogContent z-index from z-[100] back to z-50 to resolve issue where Select and EmojiPicker dropdowns were not appearing. This maintains proper layering hierarchy without breaking other UI components.
- feat: upgrade vue-renderer-markdown & vue-use-monaco (#862): 1. ignore math-block warning; 2. compatible with the syntax issues of mermaid produced by AI, greatly reducing the probability of mermaid rendering errors
- feat(dashscope): add qwen3-max-preview model (#865)
- fix: mcp params support more types (#861)
  - feat(mcp): enhance tool parameter display with enum type support: add enum parameter type detection and enhanced display; show enum parameters with distinct blue badge styling (enum(string), array[enum(string)]); display allowed values for both direct enum and array item enum parameters; add i18n support for "allowedValues" and "arrayItemValues" labels; maintain consistent UI design with existing parameter display patterns; improve developer experience when debugging MCP tools with constrained parameters
  - fix: enum params support
  - fix(context-menu): handle local file paths in image save functionality: fix URL parsing error when saving images from local file paths; add proper handling for http/https URLs, file:// URLs, and direct file paths; use fs.promises for reading local files instead of net.fetch for invalid URLs; prevent "Failed to parse URL from" error when saving local images
  - fix(context-menu): improve URL handling robustness in image save: add try-catch around net.fetch to handle invalid URLs gracefully; implement fallback methods for file:// URLs and local file paths; add debug logging to track source URL values for troubleshooting; prevent "Failed to parse URL from" errors with comprehensive URL validation
  - fix(context-menu): handle empty srcURL in image save functionality: add comprehensive URL detection when srcURL is empty; implement fallback URL sources (linkURL, pageURL) for better compatibility; add debug logging to track all available context menu parameters; prevent "Failed to parse URL from" errors caused by empty URLs; provide clear
error message when no valid URL can be found * chore: format code * fix: ai review * fix: prevent @ symbol remaining when deleting mentions (#867) * Merge commit from fork * feat: implement separated system and custom prompt management (#868) * feat: implement separated system and custom prompt management * style: code fmt * fix: add migration for legacy default_system_prompt to system_prompts * feat: add Moonshot model configurations (#869) * refactor: translate all cn comments and log to en (#871) * refactor: translate all cn comments and log to en * fix: revert translate in params * feat: add reasoning support for Grok thinking models (#873) * feat: add reasoning support for Grok thinking models * fix: code lint * fix: escaping character issue --------- Co-authored-by: zerob13 <zerob13@gmail.com> --------- Co-authored-by: hllshiro <40970081+hllshiro@users.noreply.github.com> Co-authored-by: yyhhyyyyyy <yyhhyyyyyy8@gmail.com> Co-authored-by: xiaomo <wegi866@gmail.com> Co-authored-by: Simon He <57086651+Simon-He95@users.noreply.github.com> Co-authored-by: luy <12696648@qq.com>
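The context-menu image-save fixes above enumerate three source kinds: http/https URLs, file:// URLs, and direct local file paths, with local files read via fs.promises rather than net.fetch. A minimal sketch of that branching follows; the function names are illustrative (not the repository's actual code), and the global fetch of Node 18+ stands in for Electron's net.fetch:

```typescript
import { promises as fs } from 'fs'
import { fileURLToPath } from 'url'

type ImageSource = 'remote' | 'file-url' | 'local-path'

// Classify the context-menu srcURL value into the three cases the fixes
// enumerate: http/https URLs, file:// URLs, and direct file paths.
function classifySource(srcURL: string): ImageSource {
  if (/^https?:\/\//i.test(srcURL)) return 'remote'
  if (srcURL.startsWith('file://')) return 'file-url'
  return 'local-path'
}

// Read image bytes with the mechanism appropriate to each source kind;
// local files go through fs.promises instead of a URL-based fetch, which
// avoids the "Failed to parse URL from" error for bare paths.
async function readImageData(srcURL: string): Promise<Buffer> {
  switch (classifySource(srcURL)) {
    case 'remote': {
      const res = await fetch(srcURL)
      return Buffer.from(await res.arrayBuffer())
    }
    case 'file-url':
      return fs.readFile(fileURLToPath(srcURL))
    case 'local-path':
      return fs.readFile(srcURL)
  }
}
```

Classifying first keeps the fallback order explicit and makes the "empty srcURL" case easy to guard before any fetch is attempted.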
add reasoning support for Grok thinking models
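The provider conditionally injects a reasoning_effort parameter into chat-completion calls for reasoning-capable Grok models. A minimal sketch of that shape, assuming an OpenAI-compatible request object; all identifiers here (ChatParams, supportsReasoningEffort, the "mini" name check) are illustrative assumptions, not the actual grokProvider code:

```typescript
type ReasoningEffort = 'low' | 'high'

interface ChatMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
}

interface ChatParams {
  model: string
  messages: ChatMessage[]
  stream: boolean
  reasoning_effort?: ReasoningEffort
}

// Assumption: reasoning-effort support is keyed off the model name
// (xAI documents reasoning_effort for the grok-3-mini family).
function supportsReasoningEffort(modelId: string): boolean {
  return modelId.includes('mini')
}

// Build the request params, adding reasoning_effort only when the user
// configured one AND the target model accepts it; other models get a
// plain request so the API does not reject the unknown field.
function buildChatParams(
  modelId: string,
  messages: ChatMessage[],
  reasoningEffort?: ReasoningEffort
): ChatParams {
  const params: ChatParams = { model: modelId, messages, stream: true }
  if (reasoningEffort && supportsReasoningEffort(modelId)) {
    params.reasoning_effort = reasoningEffort
  }
  return params
}
```

Gating the field on both the config value and the capability check means non-reasoning models are unaffected even when a reasoning effort is set globally.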
Summary by CodeRabbit

- **New Features**
  - Reasoning effort configuration for Grok thinking models: reasoning-capable models are detected and a reasoning_effort parameter is passed to the API when supported, wired through configuration, provider logic, and UI.
- **Improvements**
  - Wording tweak in the GitHub Copilot device-flow sign-in message.
- **Bug Fixes**
  - Lint cleanup and a character-escaping fix in the Grok provider changes.
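As a footnote to the table-driven contract tests described in the Phase 6 commit earlier in this thread, the pattern can be sketched framework-free as one assertion per table row; the event and block names below are assumptions, not the repository's actual types:

```typescript
// Hypothetical event-to-UI-block mapping table: each row states which
// renderer block a given provider event type must produce.
type CoreEventType = 'response' | 'reasoning' | 'tool_call' | 'error' | 'end'
type BlockType = 'content' | 'reasoning_content' | 'action' | 'error' | 'none'

const mappingTable: Array<{ event: CoreEventType; block: BlockType }> = [
  { event: 'response', block: 'content' },
  { event: 'reasoning', block: 'reasoning_content' },
  { event: 'tool_call', block: 'action' },
  { event: 'error', block: 'error' },
  { event: 'end', block: 'none' }
]

// Stand-in for the real event-to-block mapper under test.
function mapEventToBlock(event: CoreEventType): BlockType {
  switch (event) {
    case 'response':
      return 'content'
    case 'reasoning':
      return 'reasoning_content'
    case 'tool_call':
      return 'action'
    case 'error':
      return 'error'
    case 'end':
      return 'none'
  }
}

// Drive one check per table row, so adding an event type to the table
// automatically extends the test surface.
for (const row of mappingTable) {
  console.assert(
    mapEventToBlock(row.event) === row.block,
    `mapping failed for ${row.event}`
  )
}
```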