Feature/302 provider model api #582
Conversation
Walkthrough

This update introduces a dynamic model metadata fetching and configuration update mechanism for the 302AI provider, allowing it to retrieve and merge model information from the 302AI API. The remaining changes are limited to formatting, indentation, and minor syntax adjustments across several files, with no functional impact on logic, control flow, or public interfaces.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant _302AIProvider
    participant API as 302AI API
    participant configPresenter
    User->>_302AIProvider: Request available models
    _302AIProvider->>API: Fetch model list
    API-->>_302AIProvider: Return model metadata
    _302AIProvider->>configPresenter: Get existing model config
    _302AIProvider->>configPresenter: Update config if API data differs
    _302AIProvider-->>User: Return processed model metadata
```
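The "update config if API data differs" step in the diagram can be sketched as a small pure merge function. The `ModelMeta` shape and the `mergeModelConfig` name are illustrative assumptions, not the provider's actual types:

```typescript
// Illustrative model-metadata shape; the real provider config differs.
interface ModelMeta {
  id: string
  contextLength?: number
  vision?: boolean
}

// Merge API-reported metadata into the stored config and report whether
// anything changed, so the caller knows to persist an update.
function mergeModelConfig(
  existing: ModelMeta,
  fromApi: Partial<ModelMeta>
): { merged: ModelMeta; changed: boolean } {
  const merged: ModelMeta = { ...existing }
  let changed = false
  if (fromApi.contextLength !== undefined && fromApi.contextLength !== merged.contextLength) {
    merged.contextLength = fromApi.contextLength
    changed = true
  }
  if (fromApi.vision !== undefined && fromApi.vision !== merged.vision) {
    merged.vision = fromApi.vision
    changed = true
  }
  return { merged, changed }
}
```

Returning a `changed` flag lets the provider skip the `configPresenter` write when the API data matches what is already stored.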
Actionable comments posted: 1
🔭 Outside diff range comments (1)
src/renderer/src/components/editor/mention/PromptParamsDialog.vue (1)
5-6: Replace hardcoded Chinese text with i18n keys.

The component violates the coding guidelines by using hardcoded user-facing text. All user-visible strings must use vue-i18n translation keys.

Replace the hardcoded Chinese text with appropriate i18n keys:

```diff
- <DialogTitle>{{ promptName }} 参数设置</DialogTitle>
- <DialogDescription> 请填写以下参数,带 * 的为必填项 </DialogDescription>
+ <DialogTitle>{{ t('prompt.paramsDialog.title', { name: promptName }) }}</DialogTitle>
+ <DialogDescription>{{ t('prompt.paramsDialog.description') }}</DialogDescription>
```

```diff
- <Button variant="outline" @click="$emit('close')"> 取消 </Button>
- <Button :disabled="hasErrors" @click="handleSubmit"> 确认 </Button>
+ <Button variant="outline" @click="$emit('close')">{{ t('common.cancel') }}</Button>
+ <Button :disabled="hasErrors" @click="handleSubmit">{{ t('common.confirm') }}</Button>
```

```diff
- errors.value[param.name] = '此参数为必填项'
+ errors.value[param.name] = t('prompt.paramsDialog.requiredError')
```

Don't forget to import `useI18n` and add the corresponding translation keys to your i18n files.

Also applies to: 33-34, 88-88
🧹 Nitpick comments (3)
src/main/presenter/mcpPresenter/inMemoryServers/filesystem.ts (1)
1228-1230: Remove the unnecessary continue statement.

The `continue` statement is redundant here, as it is the last statement in the catch block.

```diff
 } catch (error) {
   // Skip paths outside allowed directories
   console.error(`[globSearch] Path validation failed for ${result}:`, error)
-  continue
 }
```

src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts (1)
151-162: Use optional chaining for cleaner code.

The vision detection logic is comprehensive, but it can be simplified using optional chaining, as suggested by static analysis.

```diff
 const hasVision =
   modelId.includes('vision') ||
   modelId.includes('gpt-4o') ||
   (_302aiModel.description && _302aiModel.description.includes('vision')) ||
-  (_302aiModel.description_en &&
-    _302aiModel.description_en.toLowerCase().includes('vision')) ||
+  _302aiModel.description_en?.toLowerCase().includes('vision') ||
   modelId.includes('claude') || // Some Claude models support vision
   modelId.includes('gemini') || // Gemini models often support vision
   (modelId.includes('qwen') && modelId.includes('vl')) // Qwen VL models
```

src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (1)
1202-1202: Consider using a `system` role for the prompt wrapper.

Placing the summarisation instructions in a `system` message more closely matches OpenAI's recommended pattern and avoids conflating user content with instructions.

```diff
-const fullMessage: ChatMessage[] = [{ role: 'user', content: summaryText }]
+const fullMessage: ChatMessage[] = [{ role: 'system', content: summaryText }]
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
src/main/presenter/llmProviderPresenter/baseProvider.ts (1 hunks)
src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts (3 hunks)
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (1 hunks)
src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts (1 hunks)
src/main/presenter/mcpPresenter/inMemoryServers/filesystem.ts (10 hunks)
src/renderer/src/components/ChatInput.vue (1 hunks)
src/renderer/src/components/editor/mention/PromptParamsDialog.vue (1 hunks)
src/renderer/src/i18n/ko-KR/settings.json (1 hunks)
test/main/presenter/filesystem.test.ts (11 hunks)
🧰 Additional context used
📓 Path-based instructions (9)
`src/renderer/src/**/*`: All user-facing strings in the renderer must use i18n keys (do not hardcode user-visible text in code; use vue-i18n translation keys instead)
Do not hardcode user-facing text in code; always use the translation system (vue-i18n) for all user-visible strings
📄 Source: CodeRabbit Inference Engine (.cursor/rules/i18n.mdc)
List of files the instruction was applied to:
src/renderer/src/components/ChatInput.vue
src/renderer/src/components/editor/mention/PromptParamsDialog.vue
src/renderer/src/i18n/ko-KR/settings.json
`src/renderer/**`: Renderer process code goes in `src/renderer`
📄 Source: CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)
List of files the instruction was applied to:
src/renderer/src/components/ChatInput.vue
src/renderer/src/components/editor/mention/PromptParamsDialog.vue
src/renderer/src/i18n/ko-KR/settings.json
`**/*.{js,jsx,ts,tsx}`: Use OxLint for code linting
Write logs and comments in English
📄 Source: CodeRabbit Inference Engine (.cursor/rules/development-setup.mdc)
List of files the instruction was applied to:
test/main/presenter/filesystem.test.ts
src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/presenter/llmProviderPresenter/baseProvider.ts
src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts
src/main/presenter/mcpPresenter/inMemoryServers/filesystem.ts
`**/*.{ts,tsx}`: Always use try-catch to handle possible errors
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (where applicable), and relevant context
Use structured logging
Avoid logging sensitive information
Set appropriate log levels
Do not swallow errors
Provide user-friendly error messages
Implement error retry mechanisms
📄 Source: CodeRabbit Inference Engine (.cursor/rules/error-logging.mdc)
List of files the instruction was applied to:
test/main/presenter/filesystem.test.ts
src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/presenter/llmProviderPresenter/baseProvider.ts
src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts
src/main/presenter/mcpPresenter/inMemoryServers/filesystem.ts
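The logging rules above can be illustrated with a minimal structured-log helper; the `LogEntry` field names and the `toLogEntry` signature are illustrative sketches, not the project's actual logger:

```typescript
// Illustrative structured log entry covering the required fields.
interface LogEntry {
  timestamp: string // ISO 8601
  level: 'debug' | 'info' | 'warn' | 'error'
  code: string // application-specific error code
  message: string
  stack?: string
  context?: Record<string, unknown>
}

// Build a structured entry from an unknown error without swallowing it.
function toLogEntry(code: string, err: unknown, context?: Record<string, unknown>): LogEntry {
  return {
    timestamp: new Date().toISOString(),
    level: 'error',
    code,
    message: err instanceof Error ? err.message : String(err),
    stack: err instanceof Error ? err.stack : undefined,
    context
  }
}

try {
  JSON.parse('not json')
} catch (err) {
  // One JSON line per event keeps logs machine-parseable.
  console.error(JSON.stringify(toLogEntry('E_PARSE', err, { source: 'demo' })))
}
```

Emitting one JSON object per line satisfies the structured-logging rule while keeping timestamp, level, code, message, stack, and context in every entry.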
`src/main/presenter/**/*.ts`: Use context isolation for improved security
Use Electron's built-in APIs for file system and native dialogs
Optimize application startup time with lazy loading
📄 Source: CodeRabbit Inference Engine (.cursor/rules/electron-best-practices.mdc)
List of files the instruction was applied to:
src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/presenter/llmProviderPresenter/baseProvider.ts
src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts
src/main/presenter/mcpPresenter/inMemoryServers/filesystem.ts
`{src/main/presenter/**/*.ts,src/renderer/stores/**/*.ts}`: Implement proper inter-process communication (IPC) patterns
Implement proper error handling and logging for debugging
📄 Source: CodeRabbit Inference Engine (.cursor/rules/electron-best-practices.mdc)
List of files the instruction was applied to:
src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/presenter/llmProviderPresenter/baseProvider.ts
src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts
src/main/presenter/mcpPresenter/inMemoryServers/filesystem.ts
`src/main/presenter/llmProviderPresenter/providers/*.ts`: Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, management of native/non-native tool call mechanisms (prompt wrapping), and standardizing output streams to a common event format.
Provider implementations must use a standardized interface in their `coreStream` method to `yield` events, decoupling the main Agent loop from provider-specific details.
The `coreStream` method in each Provider must perform a single-pass streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files must implement the `coreStream(messages, modelId, temperature, maxTokens)` method, which receives formatted messages and generation parameters, handles tool support (native or via prompt wrapping), makes a single streaming API call, parses provider-specific data, and yields standardized stream events.
Provider files should include provider-specific helper functions such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed.
📄 Source: CodeRabbit Inference Engine (.cursor/rules/llm-agent-loop.mdc)
List of files the instruction was applied to:
src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts
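The `coreStream` contract described above can be sketched as follows. The event shape here is illustrative — the real standardized interface is `LLMCoreStreamEvent` in `streamEvents.ts` — and the plain `{ delta, tokens }` chunk format stands in for a provider's actual streaming payloads:

```typescript
// Illustrative stand-in for the standardized stream event type.
type CoreStreamEvent =
  | { type: 'text'; content: string }
  | { type: 'usage'; totalTokens: number }

// Single-pass translation of provider-specific chunks into standardized
// events. No multi-round tool-call looping happens here; that belongs to
// the Agent loop in index.ts.
async function* coreStream(
  chunks: AsyncIterable<{ delta?: string; tokens?: number }>
): AsyncGenerator<CoreStreamEvent> {
  for await (const chunk of chunks) {
    if (chunk.delta) {
      yield { type: 'text', content: chunk.delta }
    }
    if (chunk.tokens !== undefined) {
      yield { type: 'usage', totalTokens: chunk.tokens }
    }
  }
}
```

Because every provider yields the same event union, the Agent loop can consume any provider's stream without knowing its wire format.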
`src/main/**`: Main process code goes in `src/main`
📄 Source: CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)
List of files the instruction was applied to:
src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/presenter/llmProviderPresenter/baseProvider.ts
src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts
src/main/presenter/mcpPresenter/inMemoryServers/filesystem.ts
`src/renderer/src/i18n/**/*.json`: Translation key naming must use dot-separated hierarchy, lowercase letters, and meaningful descriptive names (e.g., 'common.button.submit')
Each language must have a separate JSON file in src/renderer/src/i18n/, and shared translation keys must be placed in common.json
When adding a new translation, add shared translations to common.json and language-specific translations to the respective language file; keep all language files' keys consistent
Keep the structure of translation files consistent across all languages
Regularly check for and remove unused translation keys from translation files
📄 Source: CodeRabbit Inference Engine (.cursor/rules/i18n.mdc)
List of files the instruction was applied to:
src/renderer/src/i18n/ko-KR/settings.json
🧠 Learnings (9)
src/renderer/src/components/ChatInput.vue (3)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-06-30T12:23:45.479Z
Learning: Applies to src/renderer/src/**/* : Do not hardcode user-facing text in code; always use the translation system (vue-i18n) for all user-visible strings
Learnt from: neoragex2002
PR: ThinkInAIXYZ/deepchat#550
File: src/renderer/src/stores/chat.ts:1011-1035
Timestamp: 2025-06-21T15:49:17.044Z
Learning: In src/renderer/src/stores/chat.ts, the user prefers to keep both `text` and `content` properties in the `handleMeetingInstruction` function's `sendMessage` call, even though they are redundant, rather than removing the `content` property.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-06-30T12:23:45.479Z
Learning: Applies to src/renderer/src/**/* : All user-facing strings in the renderer must use i18n keys (do not hardcode user-visible text in code; use vue-i18n translation keys instead)
test/main/presenter/filesystem.test.ts (9)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-06-30T12:23:13.338Z
Learning: Applies to src/main/presenter/**/*.ts : Use Electron's built-in APIs for file system and native dialogs
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-06-30T12:23:13.338Z
Learning: Applies to src/main/presenter/**/*.ts : Use context isolation for improved security
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/error-logging.mdc:0-0
Timestamp: 2025-06-30T12:23:33.814Z
Learning: Applies to **/*.{ts,tsx} : Degrade gracefully
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should include provider-specific helper functions such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-06-30T12:23:13.338Z
Learning: Applies to {src/main/presenter/**/*.ts,src/renderer/stores/**/*.ts} : Implement proper error handling and logging for debugging
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-06-30T12:23:13.338Z
Learning: Applies to {src/main/presenter/**/*.ts,src/renderer/stores/**/*.ts} : Implement proper inter-process communication (IPC) patterns
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/error-logging.mdc:0-0
Timestamp: 2025-06-30T12:23:33.814Z
Learning: Applies to **/*.{ts,tsx} : Avoid logging sensitive information
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/error-logging.mdc:0-0
Timestamp: 2025-06-30T12:23:33.814Z
Learning: Applies to **/*.{ts,tsx} : Provide meaningful error messages
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, management of native/non-native tool call mechanisms (prompt wrapping), and standardizing output streams to a common event format.
src/renderer/src/components/editor/mention/PromptParamsDialog.vue (2)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-06-30T12:23:45.479Z
Learning: Applies to src/renderer/src/**/* : Do not hardcode user-facing text in code; always use the translation system (vue-i18n) for all user-visible strings
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-06-30T12:23:45.479Z
Learning: Applies to src/renderer/src/**/* : All user-facing strings in the renderer must use i18n keys (do not hardcode user-visible text in code; use vue-i18n translation keys instead)
src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts (8)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should include provider-specific helper functions such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, management of native/non-native tool call mechanisms (prompt wrapping), and standardizing output streams to a common event format.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files must implement the `coreStream(messages, modelId, temperature, maxTokens)` method, which receives formatted messages and generation parameters, handles tool support (native or via prompt wrapping), makes a single streaming API call, parses provider-specific data, and yields standardized stream events.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : The `coreStream` method in each Provider must perform a single-pass streaming API request per conversation round and must not contain multi-round tool call loop logic.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop, including the `while` loop for conversation flow, state management, provider interaction, event handling, and frontend communication, must be implemented in `src/main/presenter/llmProviderPresenter/index.ts`.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : `src/main/presenter/llmProviderPresenter/index.ts` should manage the overall Agent loop, conversation history, tool execution via `McpPresenter`, and communication with the frontend via `eventBus`.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations must use a standardized interface in their `coreStream` method to `yield` events, decoupling the main Agent loop from provider-specific details.
Learnt from: neoragex2002
PR: ThinkInAIXYZ/deepchat#550
File: src/renderer/src/stores/chat.ts:1011-1035
Timestamp: 2025-06-21T15:49:17.044Z
Learning: In src/renderer/src/stores/chat.ts, the user prefers to keep both `text` and `content` properties in the `handleMeetingInstruction` function's `sendMessage` call, even though they are redundant, rather than removing the `content` property.
src/renderer/src/i18n/ko-KR/settings.json (7)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-06-30T12:23:45.479Z
Learning: Applies to src/renderer/src/i18n/**/*.json : Keep the structure of translation files consistent across all languages
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-06-30T12:23:45.479Z
Learning: Applies to src/renderer/src/i18n/**/*.json : Each language must have a separate JSON file in src/renderer/src/i18n/, and shared translation keys must be placed in common.json
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-06-30T12:23:45.479Z
Learning: Applies to src/renderer/src/i18n/**/*.json : Regularly check for and remove unused translation keys from translation files
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-06-30T12:23:45.479Z
Learning: Applies to src/renderer/src/i18n/**/*.json : When adding a new translation, add shared translations to common.json and language-specific translations to the respective language file; keep all language files' keys consistent
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-06-30T12:23:45.479Z
Learning: Applies to src/renderer/src/i18n/**/*.json : Translation key naming must use dot-separated hierarchy, lowercase letters, and meaningful descriptive names (e.g., 'common.button.submit')
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-06-30T12:23:45.479Z
Learning: Applies to src/renderer/src/**/* : All user-facing strings in the renderer must use i18n keys (do not hardcode user-visible text in code; use vue-i18n translation keys instead)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-06-30T12:23:45.479Z
Learning: Applies to src/renderer/src/**/* : Do not hardcode user-facing text in code; always use the translation system (vue-i18n) for all user-visible strings
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (8)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should include provider-specific helper functions such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, management of native/non-native tool call mechanisms (prompt wrapping), and standardizing output streams to a common event format.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files must implement the `coreStream(messages, modelId, temperature, maxTokens)` method, which receives formatted messages and generation parameters, handles tool support (native or via prompt wrapping), makes a single streaming API call, parses provider-specific data, and yields standardized stream events.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : The `coreStream` method in each Provider must perform a single-pass streaming API request per conversation round and must not contain multi-round tool call loop logic.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop, including the `while` loop for conversation flow, state management, provider interaction, event handling, and frontend communication, must be implemented in `src/main/presenter/llmProviderPresenter/index.ts`.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : `src/main/presenter/llmProviderPresenter/index.ts` should manage the overall Agent loop, conversation history, tool execution via `McpPresenter`, and communication with the frontend via `eventBus`.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations must use a standardized interface in their `coreStream` method to `yield` events, decoupling the main Agent loop from provider-specific details.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Maintain separation of concerns by centralizing Agent loop logic in `index.ts` and keeping provider files focused on API interaction and event standardization.
src/main/presenter/llmProviderPresenter/baseProvider.ts (10)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should include provider-specific helper functions such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, management of native/non-native tool call mechanisms (prompt wrapping), and standardizing output streams to a common event format.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop, including the `while` loop for conversation flow, state management, provider interaction, event handling, and frontend communication, must be implemented in `src/main/presenter/llmProviderPresenter/index.ts`.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : `src/main/presenter/llmProviderPresenter/index.ts` should manage the overall Agent loop, conversation history, tool execution via `McpPresenter`, and communication with the frontend via `eventBus`.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files must implement the `coreStream(messages, modelId, temperature, maxTokens)` method, which receives formatted messages and generation parameters, handles tool support (native or via prompt wrapping), makes a single streaming API call, parses provider-specific data, and yields standardized stream events.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations must use a standardized interface in their `coreStream` method to `yield` events, decoupling the main Agent loop from provider-specific details.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : The `coreStream` method in each Provider must perform a single-pass streaming API request per conversation round and must not contain multi-round tool call loop logic.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/streamEvents.ts : Define the standardized stream event interface (`LLMCoreStreamEvent`) in a shared location, such as `src/main/presenter/llmProviderPresenter/streamEvents.ts`.
Learnt from: neoragex2002
PR: ThinkInAIXYZ/deepchat#550
File: src/main/presenter/mcpPresenter/inMemoryServers/meetingServer.ts:250-252
Timestamp: 2025-06-21T15:48:29.950Z
Learning: In the meeting server implementation (src/main/presenter/mcpPresenter/inMemoryServers/meetingServer.ts), when multiple tabs have the same title, the user prefers to let the code silently select the first match without adding warnings or additional ambiguity handling.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-06-30T12:23:13.338Z
Learning: Applies to {src/main/presenter/**/*.ts,src/renderer/stores/**/*.ts} : Implement proper error handling and logging for debugging
src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts (11)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, management of native/non-native tool call mechanisms (prompt wrapping), and standardizing output streams to a common event format.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files must implement the `coreStream(messages, modelId, temperature, maxTokens)` method, which receives formatted messages and generation parameters, handles tool support (native or via prompt wrapping), makes a single streaming API call, parses provider-specific data, and yields standardized stream events.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should include provider-specific helper functions such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations must use a standardized interface in their `coreStream` method to `yield` events, decoupling the main Agent loop from provider-specific details.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : `src/main/presenter/llmProviderPresenter/index.ts` should manage the overall Agent loop, conversation history, tool execution via `McpPresenter`, and communication with the frontend via `eventBus`.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop, including the `while` loop for conversation flow, state management, provider interaction, event handling, and frontend communication, must be implemented in `src/main/presenter/llmProviderPresenter/index.ts`.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : The `coreStream` method in each Provider must perform a single-pass streaming API request per conversation round and must not contain multi-round tool call loop logic.
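The single-pass rule above can be sketched as a generator that makes one streaming call, parses provider chunks, and yields standardized events, leaving the multi-round tool loop to `index.ts`. All names below are illustrative stand-ins, not the repository's actual code.

```typescript
// Illustrative coreStream skeleton: one pass over one streaming response,
// yielding standardized events; no multi-round tool-call loop in here.
type StreamEvent = { type: 'text'; content: string } | { type: 'stop' }

async function* coreStream(
  providerChunks: AsyncIterable<{ delta?: string }> // stand-in for the SDK stream
): AsyncGenerator<StreamEvent> {
  for await (const chunk of providerChunks) {
    if (chunk.delta) yield { type: 'text', content: chunk.delta }
  }
  yield { type: 'stop' }
}
```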
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/streamEvents.ts : Define the standardized stream event interface (`LLMCoreStreamEvent`) in a shared location, such as `src/main/presenter/llmProviderPresenter/streamEvents.ts`.
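For orientation, a standardized stream event union of the kind this learning describes might look as follows; the real `LLMCoreStreamEvent` in `streamEvents.ts` may differ in shape and naming.

```typescript
// Hypothetical sketch of a standardized stream event union; the Agent loop
// can branch on the `type` discriminant without any provider knowledge.
type LLMCoreStreamEvent =
  | { type: 'text'; content: string }
  | { type: 'tool_call_start'; id: string; name: string }
  | { type: 'tool_call_chunk'; id: string; argumentChunk: string }
  | { type: 'tool_call_end'; id: string }
  | { type: 'usage'; promptTokens: number; completionTokens: number }

function isToolCallEvent(e: LLMCoreStreamEvent): boolean {
  return e.type.startsWith('tool_call')
}
```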
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Maintain separation of concerns by centralizing Agent loop logic in `index.ts` and keeping provider files focused on API interaction and event standardization.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-06-30T12:23:13.338Z
Learning: Applies to src/renderer/src/composables/usePresenter.ts : The IPC in the renderer process is implemented in usePresenter.ts, allowing direct calls to the presenter-related interfaces exposed by the main process
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-06-30T12:23:13.338Z
Learning: Applies to {src/main/presenter/**/*.ts,src/renderer/stores/**/*.ts} : Implement proper inter-process communication (IPC) patterns
src/main/presenter/mcpPresenter/inMemoryServers/filesystem.ts (10)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, management of native/non-native tool call mechanisms (prompt wrapping), and standardizing output streams to a common event format.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-06-30T12:23:13.338Z
Learning: Applies to src/main/presenter/**/*.ts : Use Electron's built-in APIs for file system and native dialogs
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-06-30T12:23:13.338Z
Learning: Applies to {src/main/presenter/**/*.ts,src/renderer/stores/**/*.ts} : Implement proper error handling and logging for debugging
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should include provider-specific helper functions such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-06-30T12:23:13.338Z
Learning: Applies to {src/main/presenter/**/*.ts,src/renderer/stores/**/*.ts} : Implement proper inter-process communication (IPC) patterns
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/error-logging.mdc:0-0
Timestamp: 2025-06-30T12:23:33.814Z
Learning: Applies to **/*.{ts,tsx} : Handle graceful degradation
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/development-setup.mdc:0-0
Timestamp: 2025-06-30T12:23:01.752Z
Learning: Node.js >= 22
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/error-logging.mdc:0-0
Timestamp: 2025-06-30T12:23:33.814Z
Learning: Applies to **/*.{ts,tsx} : Implement error retry mechanisms
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/error-logging.mdc:0-0
Timestamp: 2025-06-30T12:23:33.814Z
Learning: Applies to **/*.{ts,tsx} : Always use try-catch to handle possible errors
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/error-logging.mdc:0-0
Timestamp: 2025-06-30T12:23:33.814Z
Learning: Applies to **/*.{ts,tsx} : Do not swallow errors
🧬 Code Graph Analysis (3)
src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts (1)
src/shared/presenter.d.ts (1)
ChatMessage(1066-1078)
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (2)
src/main/presenter/llmProviderPresenter/baseProvider.ts (1)
SUMMARY_TITLES_PROMPT (602-604)
src/shared/presenter.d.ts (1)
ChatMessage(1066-1078)
src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts (1)
src/shared/presenter.d.ts (1)
MODEL_META(442-455)
🪛 Biome (1.9.4)
src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts
[error] 156-156: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
[error] 157-158: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
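The `useOptionalChain` finding above refers to the common pattern of chained `&&` guards that optional chaining replaces. A minimal sketch of the before/after shape (types here are illustrative, not the provider's real ones):

```typescript
// Hypothetical envelope type standing in for a nested API response.
interface ModelEnvelope {
  data?: { models?: string[] }
}

// Before: chained && guards — the shape Biome's useOptionalChain rule flags.
function firstModelVerbose(env: ModelEnvelope): string | undefined {
  return env.data && env.data.models ? env.data.models[0] : undefined
}

// After: optional chaining — same behavior, shorter and safer to extend.
function firstModel(env: ModelEnvelope): string | undefined {
  return env.data?.models?.[0]
}
```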
src/main/presenter/mcpPresenter/inMemoryServers/filesystem.ts
[error] 1259-1259: Unnecessary continue statement
Unsafe fix: Delete the unnecessary continue statement
(lint/correctness/noUnnecessaryContinue)
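The `noUnnecessaryContinue` finding flags a `continue` that is already the last statement of a loop iteration, where it has no effect. A small sketch of the distinction:

```typescript
// A `continue` that skips later statements is useful; one at the end of the
// loop body does nothing and is what Biome's noUnnecessaryContinue flags.
function countNonEmpty(lines: string[]): number {
  let count = 0
  for (const line of lines) {
    if (line.trim() === '') continue // fine: skips the increment below
    count++
    // a trailing `continue` here would be redundant and flagged
  }
  return count
}
```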
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build-check (x64)
🔇 Additional comments (12)
src/renderer/src/components/ChatInput.vue (1)
273-273: LGTM! Minor syntax cleanup. The removal of the trailing comma after the configuration object improves code consistency without any functional impact.
src/renderer/src/i18n/ko-KR/settings.json (1)
99-99: LGTM! Formatting consistency improvement. The indentation adjustments align the translation keys properly without changing any translation content.
Also applies to: 103-103
src/main/presenter/llmProviderPresenter/baseProvider.ts (1)
604-604: LGTM! Proper file ending format. Adding a newline at the end of the file follows standard formatting practices and improves compatibility with various tools.
src/renderer/src/components/editor/mention/PromptParamsDialog.vue (1)
10-31: LGTM! Improved component structure. The template restructuring better groups related input elements (label, description, input field, and error message) within each parameter container, improving the visual hierarchy and organization.
src/main/presenter/mcpPresenter/inMemoryServers/filesystem.ts (3)
15-18: LGTM! Improved readability with multi-line formatting. The expansion of Zod schema method chains improves readability without affecting functionality.
906-908: Good formatting improvement. The multi-line error message formatting enhances readability.
949-954: Clean formatting improvements. The multi-line destructuring and map function formatting enhance code clarity and maintainability.
Also applies to: 962-978
test/main/presenter/filesystem.test.ts (1)
25-40: Consistent formatting improvements. The formatting changes align with the style improvements in the implementation file, maintaining consistency across the codebase.
Also applies to: 167-173
src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts (3)
1-1: Well-structured interface definition for 302AI API response. The `_302AIModelResponse` interface properly captures the 302AI-specific model metadata with clear field names and helpful comments explaining the mapping (e.g., `content_length` maps to context length).
Also applies to: 12-33
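A minimal sketch of what such an interface and its mapping could look like — only the `content_length`-to-context-length mapping is taken from the review; every other field and name here is an assumption, not the PR's actual definition:

```typescript
// Hypothetical shape of a 302AI model entry (illustrative, not the PR's code).
interface _302AIModelResponse {
  id: string
  content_length: number // maps to the model's context length
  max_tokens?: number
}

// Convert the API entry into an app-side metadata record (names assumed).
function toModelMeta(m: _302AIModelResponse) {
  return {
    id: m.id,
    contextLength: m.content_length,
    maxTokens: m.max_tokens ?? 4096 // assumed default when the API omits it
  }
}
```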
164-213: Excellent configuration management implementation. The configuration update logic properly:
- Prioritizes API values over existing configuration
- Only updates when actual changes are detected
- Provides comprehensive logging for debugging configuration changes
- Maintains existing settings (reasoning, temperature, type) that aren't provided by the API
This ensures the model configurations stay up-to-date with the 302AI API while preserving user customizations.
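The merge-and-detect strategy those bullets describe might be sketched as follows; all names here are assumptions, not the PR's code:

```typescript
// API-provided fields win; fields the API omits (e.g. temperature) are kept
// from the existing config; `changed` gates whether a write happens at all.
interface ModelConfig {
  contextLength: number
  maxTokens: number
  temperature?: number
}

function mergeModelConfig(
  existing: ModelConfig,
  fromApi: Partial<ModelConfig>
): { merged: ModelConfig; changed: boolean } {
  const merged: ModelConfig = { ...existing, ...fromApi }
  const changed = (Object.keys(fromApi) as (keyof ModelConfig)[]).some(
    (k) => merged[k] !== existing[k]
  )
  return { merged, changed }
}
```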
234-238: Good error handling with graceful fallback. The error handling properly logs issues and falls back to the parent implementation, ensuring the provider continues to function even if the 302AI-specific model fetching fails.
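That fallback pattern can be sketched as follows; the function names are stand-ins, not the provider's real API:

```typescript
// Try the provider-specific fetch first; on failure, log and fall back to
// the base implementation so the provider keeps working.
async function fetchModelsWithFallback(
  fetch302AI: () => Promise<string[]>,
  fetchBase: () => Promise<string[]>
): Promise<string[]> {
  try {
    return await fetch302AI()
  } catch (err) {
    console.error('302AI model fetch failed, falling back:', err)
    return fetchBase()
  }
}
```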
src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts (1)
1056-1056: Clean formatting improvement. The single-line array declaration is more concise for a simple array with one element.
const fullMessage: ChatMessage[] = [
  { role: 'user', content: summaryText }
]
const summaryText = `${SUMMARY_TITLES_PROMPT}\n\n${messages.map((m) => `${m.role}: ${m.content}`).join('\n')}`
Array content is coerced to [object Object], breaking the summary prompt
`m.content` can be `ChatMessageContent[]`; string interpolation blindly coerces it to a string, producing `[object Object]` and polluting the prompt.
Extract the actual text parts instead.
-const summaryText = `${SUMMARY_TITLES_PROMPT}\n\n${messages.map((m) => `${m.role}: ${m.content}`).join('\n')}`
+const stringifyMessage = (m: ChatMessage) => {
+ if (typeof m.content === 'string') return `${m.role}: ${m.content}`
+ if (Array.isArray(m.content)) {
+ const text = m.content
+ .filter((p) => p.type === 'text' && 'text' in p && p.text)
+ .map((p) => (p as any).text)
+ .join(' ')
+ return `${m.role}: ${text}`
+ }
+ return `${m.role}:`
+}
+const summaryText = `${SUMMARY_TITLES_PROMPT}\n\n${messages.map(stringifyMessage).join('\n')}`📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-const summaryText = `${SUMMARY_TITLES_PROMPT}\n\n${messages.map((m) => `${m.role}: ${m.content}`).join('\n')}`
+const stringifyMessage = (m: ChatMessage) => {
+  if (typeof m.content === 'string') {
+    return `${m.role}: ${m.content}`
+  }
+  if (Array.isArray(m.content)) {
+    const text = m.content
+      .filter((p) => p.type === 'text' && 'text' in p && p.text)
+      .map((p) => (p as any).text)
+      .join(' ')
+    return `${m.role}: ${text}`
+  }
+  return `${m.role}:`
+}
+const summaryText = `${SUMMARY_TITLES_PROMPT}\n\n${messages.map(stringifyMessage).join('\n')}`
🤖 Prompt for AI Agents
In src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
at line 1201, the code uses string interpolation on m.content, which can be an
array of ChatMessageContent objects, causing it to convert to "[object Object]"
and corrupt the summary prompt. To fix this, check if m.content is an array and
extract the text parts from each ChatMessageContent object, concatenating them
into a single string before interpolation. This ensures the summary prompt
contains the actual message text instead of object representations.
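A self-contained demonstration of the coercion the review describes, using a simplified message type as a stand-in for `ChatMessage`:

```typescript
// Template-literal interpolation calls toString() on an array of parts,
// yielding "[object Object]"; extracting the text parts avoids that.
type Part = { type: 'text'; text: string } | { type: 'image_url'; url: string }
interface Msg {
  role: string
  content: string | Part[]
}

const naive = (m: Msg) => `${m.role}: ${m.content}`

const safe = (m: Msg) => {
  if (typeof m.content === 'string') return `${m.role}: ${m.content}`
  const text = m.content
    .filter((p): p is Extract<Part, { type: 'text' }> => p.type === 'text')
    .map((p) => p.text)
    .join(' ')
  return `${m.role}: ${text}`
}
```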
Supports the 302 model-listing API, so the officially defined model configurations can be retrieved.
Summary by CodeRabbit
- New Features
- Style
- Bug Fixes
- Tests
Tests