Conversation

@zerob13 zerob13 commented Jul 7, 2025

Supports the 302AI model-parsing API, so the officially defined model configurations can be retrieved.

Summary by CodeRabbit

  • New Features

    • Enhanced support for 302AI models with dynamic model metadata parsing and automatic configuration updates.
  • Style

    • Improved code formatting and readability across several components and test files.
    • Adjusted indentation and grouping in UI components for clearer structure.
    • Updated JSON formatting for consistency in settings.
  • Bug Fixes

    • Minor syntax correction in the chat input editor extension configuration.
  • Tests

    • Reformatted test code for better clarity without changing test coverage or logic.

@coderabbitai
Copy link
Contributor

coderabbitai bot commented Jul 7, 2025

Walkthrough

This update introduces a dynamic model metadata fetching and configuration update mechanism for the 302AI provider, allowing it to retrieve and merge model information from the 302AI API. Other changes are limited to formatting, indentation, and minor syntax adjustments across several files, with no functional impact on logic, control flow, or public interfaces.

Changes

  • src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts — Adds _302AIModelResponse interface and dynamic fetchOpenAIModels method for 302AI; minor whitespace fix in check.
  • src/main/presenter/llmProviderPresenter/baseProvider.ts — Adds newline at end of file for formatting; no logic change.
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts — Refactors summaryTitles method to simplify array construction and formatting; no logic change.
  • src/main/presenter/mcpPresenter/inMemoryServers/filesystem.ts, test/main/presenter/filesystem.test.ts — Reformatting for readability, indentation, and consistency; no logic or test behavior changes.
  • src/renderer/src/components/ChatInput.vue — Removes trailing comma in configuration object; syntax only.
  • src/renderer/src/components/editor/mention/PromptParamsDialog.vue — Adjusts template structure for better grouping of input elements; no script changes.
  • src/renderer/src/i18n/ko-KR/settings.json — Fixes indentation of two keys for consistency; no content changes.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant _302AIProvider
    participant API as 302AI API
    participant configPresenter

    User->>_302AIProvider: Request available models
    _302AIProvider->>API: Fetch model list
    API-->>_302AIProvider: Return model metadata
    _302AIProvider->>configPresenter: Get existing model config
    _302AIProvider->>configPresenter: Update config if API data differs
    _302AIProvider-->>User: Return processed model metadata
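The fetch-and-merge flow above can be sketched roughly as follows. This is a minimal illustration, not the actual implementation: the _302AIModelResponse field names (context_length, description), the ModelConfig shape, and the 4096-token fallback are all assumptions for the sketch.

```typescript
// Hypothetical shape of one entry from the 302AI models endpoint.
// Field names are assumptions for illustration.
interface _302AIModelResponse {
  id: string
  context_length?: number
  description?: string
}

// Simplified stand-in for the config stored by configPresenter.
interface ModelConfig {
  contextLength: number
  description: string
}

// Merge metadata fetched from the API into the existing config map,
// overwriting an entry only when the API data actually differs.
function mergeModelConfigs(
  existing: Map<string, ModelConfig>,
  fetched: _302AIModelResponse[]
): Map<string, ModelConfig> {
  const merged = new Map(existing)
  for (const model of fetched) {
    const incoming: ModelConfig = {
      contextLength: model.context_length ?? 4096, // assumed default
      description: model.description ?? ''
    }
    const current = merged.get(model.id)
    if (
      !current ||
      current.contextLength !== incoming.contextLength ||
      current.description !== incoming.description
    ) {
      merged.set(model.id, incoming)
    }
  }
  return merged
}
```

Keeping the merge as a pure function like this makes the "update config if API data differs" step easy to unit-test independently of the HTTP call.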

Possibly related PRs

  • ThinkInAIXYZ/deepchat#527: Adds dynamic model metadata fetching and configuration update for OpenRouter and PPIO providers, similar to the new 302AI logic in this PR.

Poem

A hop and a skip, new models appear,
302AI now fetches them clear!
Formatting’s tidy, arrays in a line,
Inputs grouped neatly, everything’s fine.
With configs refreshed and code looking bright,
This bunny approves—what a delight!
🐇✨



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🔭 Outside diff range comments (1)
src/renderer/src/components/editor/mention/PromptParamsDialog.vue (1)

5-6: Replace hardcoded Chinese text with i18n keys.

The component violates the coding guidelines by using hardcoded user-facing text. All user-visible strings must use vue-i18n translation keys.

Replace the hardcoded Chinese text with appropriate i18n keys:

-        <DialogTitle>{{ promptName }} 参数设置</DialogTitle>
-        <DialogDescription> 请填写以下参数,带 * 的为必填项 </DialogDescription>
+        <DialogTitle>{{ t('prompt.paramsDialog.title', { name: promptName }) }}</DialogTitle>
+        <DialogDescription>{{ t('prompt.paramsDialog.description') }}</DialogDescription>
-        <Button variant="outline" @click="$emit('close')"> 取消 </Button>
-        <Button :disabled="hasErrors" @click="handleSubmit"> 确认 </Button>
+        <Button variant="outline" @click="$emit('close')">{{ t('common.cancel') }}</Button>
+        <Button :disabled="hasErrors" @click="handleSubmit">{{ t('common.confirm') }}</Button>
-      errors.value[param.name] = '此参数为必填项'
+      errors.value[param.name] = t('prompt.paramsDialog.requiredError')

Don't forget to import useI18n and add the corresponding translation keys to your i18n files.

Also applies to: 33-34, 88-88
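For illustration, the suggested keys might be added to the language files along these lines (the file placement and nesting are assumptions; the project's actual i18n layout may differ):

```json
{
  "prompt": {
    "paramsDialog": {
      "title": "{name} parameter settings",
      "description": "Please fill in the parameters below; fields marked with * are required",
      "requiredError": "This parameter is required"
    }
  }
}
```

Per the project's i18n guidelines, common.cancel and common.confirm would live in common.json so they stay shared across all languages.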

🧹 Nitpick comments (3)
src/main/presenter/mcpPresenter/inMemoryServers/filesystem.ts (1)

1228-1230: Remove unnecessary continue statement.

The continue statement is redundant here as it's the last statement in the catch block.

   } catch (error) {
     // Skip paths outside allowed directories
     console.error(`[globSearch] Path validation failed for ${result}:`, error)
-    continue
   }
src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts (1)

151-162: Use optional chaining for cleaner code.

The vision detection logic is comprehensive, but can be simplified using optional chaining as suggested by static analysis.

 const hasVision =
   modelId.includes('vision') ||
   modelId.includes('gpt-4o') ||
   (_302aiModel.description && _302aiModel.description.includes('vision')) ||
-  (_302aiModel.description_en &&
-    _302aiModel.description_en.toLowerCase().includes('vision')) ||
+  _302aiModel.description_en?.toLowerCase().includes('vision') ||
   modelId.includes('claude') || // Some Claude models support vision
   modelId.includes('gemini') || // Gemini models often support vision
   (modelId.includes('qwen') && modelId.includes('vl')) // Qwen VL models
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (1)

1202-1202: Consider using a system role for the prompt wrapper

Placing the summarisation instructions in a system message more closely matches OpenAI’s recommended pattern and avoids conflating user content with instructions.

-const fullMessage: ChatMessage[] = [{ role: 'user', content: summaryText }]
+const fullMessage: ChatMessage[] = [{ role: 'system', content: summaryText }]
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cb931ca and 8a1a78a.

📒 Files selected for processing (9)
  • src/main/presenter/llmProviderPresenter/baseProvider.ts (1 hunks)
  • src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts (3 hunks)
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (1 hunks)
  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts (1 hunks)
  • src/main/presenter/mcpPresenter/inMemoryServers/filesystem.ts (10 hunks)
  • src/renderer/src/components/ChatInput.vue (1 hunks)
  • src/renderer/src/components/editor/mention/PromptParamsDialog.vue (1 hunks)
  • src/renderer/src/i18n/ko-KR/settings.json (1 hunks)
  • test/main/presenter/filesystem.test.ts (11 hunks)
🧰 Additional context used
📓 Path-based instructions (9)
`src/renderer/src/**/*`: All user-facing strings in the renderer must use i18n k...

src/renderer/src/**/*: All user-facing strings in the renderer must use i18n keys (do not hardcode user-visible text in code; use vue-i18n translation keys instead)
Do not hardcode user-facing text in code; always use the translation system (vue-i18n) for all user-visible strings

📄 Source: CodeRabbit Inference Engine (.cursor/rules/i18n.mdc)

List of files the instruction was applied to:

  • src/renderer/src/components/ChatInput.vue
  • src/renderer/src/components/editor/mention/PromptParamsDialog.vue
  • src/renderer/src/i18n/ko-KR/settings.json
`src/renderer/**`: Renderer-process code lives in `src/renderer`

src/renderer/**: Renderer-process code lives in src/renderer

📄 Source: CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

List of files the instruction was applied to:

  • src/renderer/src/components/ChatInput.vue
  • src/renderer/src/components/editor/mention/PromptParamsDialog.vue
  • src/renderer/src/i18n/ko-KR/settings.json
`**/*.{js,jsx,ts,tsx}`: Use OxLint for code linting; write logs and comments in English

**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English

📄 Source: CodeRabbit Inference Engine (.cursor/rules/development-setup.mdc)

List of files the instruction was applied to:

  • test/main/presenter/filesystem.test.ts
  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/llmProviderPresenter/baseProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts
  • src/main/presenter/mcpPresenter/inMemoryServers/filesystem.ts
`**/*.{ts,tsx}`: Always use try-catch to handle potential errors; provide meaningful error messages; record detailed error logs; degrade gracefully; logs should include timestamp, log level, error code, error description, stack trace (if applicable), and relevant context; use structured logging; avoid logging sensitive information; set appropriate log levels; do not swallow errors; provide user-friendly error messages; implement an error retry mechanism

**/*.{ts,tsx}: Always use try-catch to handle potential errors
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (if applicable), and relevant context
Use structured logging
Avoid logging sensitive information
Set appropriate log levels
Do not swallow errors
Provide user-friendly error messages
Implement an error retry mechanism

📄 Source: CodeRabbit Inference Engine (.cursor/rules/error-logging.mdc)

List of files the instruction was applied to:

  • test/main/presenter/filesystem.test.ts
  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/llmProviderPresenter/baseProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts
  • src/main/presenter/mcpPresenter/inMemoryServers/filesystem.ts
`src/main/presenter/**/*.ts`: Use context isolation for improved security Use El...

src/main/presenter/**/*.ts: Use context isolation for improved security
Use Electron's built-in APIs for file system and native dialogs
Optimize application startup time with lazy loading

📄 Source: CodeRabbit Inference Engine (.cursor/rules/electron-best-practices.mdc)

List of files the instruction was applied to:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/llmProviderPresenter/baseProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts
  • src/main/presenter/mcpPresenter/inMemoryServers/filesystem.ts
`{src/main/presenter/**/*.ts,src/renderer/stores/**/*.ts}`: Implement proper inter-process communication (IPC) patterns Implement proper error handling and logging for debugging

{src/main/presenter/**/*.ts,src/renderer/stores/**/*.ts}: Implement proper inter-process communication (IPC) patterns
Implement proper error handling and logging for debugging

📄 Source: CodeRabbit Inference Engine (.cursor/rules/electron-best-practices.mdc)

List of files the instruction was applied to:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/llmProviderPresenter/baseProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts
  • src/main/presenter/mcpPresenter/inMemoryServers/filesystem.ts
`src/main/presenter/llmProviderPresenter/providers/*.ts`: Each file in `src/main...

src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, management of native/non-native tool call mechanisms (prompt wrapping), and standardizing output streams to a common event format.
Provider implementations must use a standardized interface in their coreStream method to yield events, decoupling the main Agent loop from provider-specific details.
The coreStream method in each Provider must perform a single-pass streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files must implement the coreStream(messages, modelId, temperature, maxTokens) method, which receives formatted messages and generation parameters, handles tool support (native or via prompt wrapping), makes a single streaming API call, parses provider-specific data, and yields standardized stream events.
Provider files should include provider-specific helper functions such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed.

📄 Source: CodeRabbit Inference Engine (.cursor/rules/llm-agent-loop.mdc)

List of files the instruction was applied to:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts
`src/main/**`: Main-process code lives in `src/main`

src/main/**: Main-process code lives in src/main

📄 Source: CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

List of files the instruction was applied to:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/llmProviderPresenter/baseProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts
  • src/main/presenter/mcpPresenter/inMemoryServers/filesystem.ts
`src/renderer/src/i18n/**/*.json`: Translation key naming must use dot-separated...

src/renderer/src/i18n/**/*.json: Translation key naming must use dot-separated hierarchy, lowercase letters, and meaningful descriptive names (e.g., 'common.button.submit')
Each language must have a separate JSON file in src/renderer/src/i18n/, and shared translation keys must be placed in common.json
When adding a new translation, add shared translations to common.json and language-specific translations to the respective language file; keep all language files' keys consistent
Keep the structure of translation files consistent across all languages
Regularly check for and remove unused translation keys from translation files

📄 Source: CodeRabbit Inference Engine (.cursor/rules/i18n.mdc)

List of files the instruction was applied to:

  • src/renderer/src/i18n/ko-KR/settings.json
🧠 Learnings (9)
src/renderer/src/components/ChatInput.vue (3)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-06-30T12:23:45.479Z
Learning: Applies to src/renderer/src/**/* : Do not hardcode user-facing text in code; always use the translation system (vue-i18n) for all user-visible strings
Learnt from: neoragex2002
PR: ThinkInAIXYZ/deepchat#550
File: src/renderer/src/stores/chat.ts:1011-1035
Timestamp: 2025-06-21T15:49:17.044Z
Learning: In src/renderer/src/stores/chat.ts, the user prefers to keep both `text` and `content` properties in the `handleMeetingInstruction` function's `sendMessage` call, even though they are redundant, rather than removing the `content` property.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-06-30T12:23:45.479Z
Learning: Applies to src/renderer/src/**/* : All user-facing strings in the renderer must use i18n keys (do not hardcode user-visible text in code; use vue-i18n translation keys instead)
test/main/presenter/filesystem.test.ts (9)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-06-30T12:23:13.338Z
Learning: Applies to src/main/presenter/**/*.ts : Use Electron's built-in APIs for file system and native dialogs
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-06-30T12:23:13.338Z
Learning: Applies to src/main/presenter/**/*.ts : Use context isolation for improved security
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/error-logging.mdc:0-0
Timestamp: 2025-06-30T12:23:33.814Z
Learning: Applies to **/*.{ts,tsx} : Degrade gracefully
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should include provider-specific helper functions such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-06-30T12:23:13.338Z
Learning: Applies to {src/main/presenter/**/*.ts,src/renderer/stores/**/*.ts} : Implement proper error handling and logging for debugging
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-06-30T12:23:13.338Z
Learning: Applies to {src/main/presenter/**/*.ts,src/renderer/stores/**/*.ts} : Implement proper inter-process communication (IPC) patterns
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/error-logging.mdc:0-0
Timestamp: 2025-06-30T12:23:33.814Z
Learning: Applies to **/*.{ts,tsx} : Avoid logging sensitive information
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/error-logging.mdc:0-0
Timestamp: 2025-06-30T12:23:33.814Z
Learning: Applies to **/*.{ts,tsx} : Provide meaningful error messages
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, management of native/non-native tool call mechanisms (prompt wrapping), and standardizing output streams to a common event format.
src/renderer/src/components/editor/mention/PromptParamsDialog.vue (2)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-06-30T12:23:45.479Z
Learning: Applies to src/renderer/src/**/* : Do not hardcode user-facing text in code; always use the translation system (vue-i18n) for all user-visible strings
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-06-30T12:23:45.479Z
Learning: Applies to src/renderer/src/**/* : All user-facing strings in the renderer must use i18n keys (do not hardcode user-visible text in code; use vue-i18n translation keys instead)
src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts (8)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should include provider-specific helper functions such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, management of native/non-native tool call mechanisms (prompt wrapping), and standardizing output streams to a common event format.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files must implement the `coreStream(messages, modelId, temperature, maxTokens)` method, which receives formatted messages and generation parameters, handles tool support (native or via prompt wrapping), makes a single streaming API call, parses provider-specific data, and yields standardized stream events.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : The `coreStream` method in each Provider must perform a single-pass streaming API request per conversation round and must not contain multi-round tool call loop logic.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop, including the `while` loop for conversation flow, state management, provider interaction, event handling, and frontend communication, must be implemented in `src/main/presenter/llmProviderPresenter/index.ts`.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : `src/main/presenter/llmProviderPresenter/index.ts` should manage the overall Agent loop, conversation history, tool execution via `McpPresenter`, and communication with the frontend via `eventBus`.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations must use a standardized interface in their `coreStream` method to `yield` events, decoupling the main Agent loop from provider-specific details.
Learnt from: neoragex2002
PR: ThinkInAIXYZ/deepchat#550
File: src/renderer/src/stores/chat.ts:1011-1035
Timestamp: 2025-06-21T15:49:17.044Z
Learning: In src/renderer/src/stores/chat.ts, the user prefers to keep both `text` and `content` properties in the `handleMeetingInstruction` function's `sendMessage` call, even though they are redundant, rather than removing the `content` property.
src/renderer/src/i18n/ko-KR/settings.json (7)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-06-30T12:23:45.479Z
Learning: Applies to src/renderer/src/i18n/**/*.json : Keep the structure of translation files consistent across all languages
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-06-30T12:23:45.479Z
Learning: Applies to src/renderer/src/i18n/**/*.json : Each language must have a separate JSON file in src/renderer/src/i18n/, and shared translation keys must be placed in common.json
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-06-30T12:23:45.479Z
Learning: Applies to src/renderer/src/i18n/**/*.json : Regularly check for and remove unused translation keys from translation files
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-06-30T12:23:45.479Z
Learning: Applies to src/renderer/src/i18n/**/*.json : When adding a new translation, add shared translations to common.json and language-specific translations to the respective language file; keep all language files' keys consistent
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-06-30T12:23:45.479Z
Learning: Applies to src/renderer/src/i18n/**/*.json : Translation key naming must use dot-separated hierarchy, lowercase letters, and meaningful descriptive names (e.g., 'common.button.submit')
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-06-30T12:23:45.479Z
Learning: Applies to src/renderer/src/**/* : All user-facing strings in the renderer must use i18n keys (do not hardcode user-visible text in code; use vue-i18n translation keys instead)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-06-30T12:23:45.479Z
Learning: Applies to src/renderer/src/**/* : Do not hardcode user-facing text in code; always use the translation system (vue-i18n) for all user-visible strings
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (8)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should include provider-specific helper functions such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, management of native/non-native tool call mechanisms (prompt wrapping), and standardizing output streams to a common event format.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files must implement the `coreStream(messages, modelId, temperature, maxTokens)` method, which receives formatted messages and generation parameters, handles tool support (native or via prompt wrapping), makes a single streaming API call, parses provider-specific data, and yields standardized stream events.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : The `coreStream` method in each Provider must perform a single-pass streaming API request per conversation round and must not contain multi-round tool call loop logic.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop, including the `while` loop for conversation flow, state management, provider interaction, event handling, and frontend communication, must be implemented in `src/main/presenter/llmProviderPresenter/index.ts`.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : `src/main/presenter/llmProviderPresenter/index.ts` should manage the overall Agent loop, conversation history, tool execution via `McpPresenter`, and communication with the frontend via `eventBus`.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations must use a standardized interface in their `coreStream` method to `yield` events, decoupling the main Agent loop from provider-specific details.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Maintain separation of concerns by centralizing Agent loop logic in `index.ts` and keeping provider files focused on API interaction and event standardization.
src/main/presenter/llmProviderPresenter/baseProvider.ts (10)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should include provider-specific helper functions such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, management of native/non-native tool call mechanisms (prompt wrapping), and standardizing output streams to a common event format.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop, including the `while` loop for conversation flow, state management, provider interaction, event handling, and frontend communication, must be implemented in `src/main/presenter/llmProviderPresenter/index.ts`.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : `src/main/presenter/llmProviderPresenter/index.ts` should manage the overall Agent loop, conversation history, tool execution via `McpPresenter`, and communication with the frontend via `eventBus`.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files must implement the `coreStream(messages, modelId, temperature, maxTokens)` method, which receives formatted messages and generation parameters, handles tool support (native or via prompt wrapping), makes a single streaming API call, parses provider-specific data, and yields standardized stream events.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations must use a standardized interface in their `coreStream` method to `yield` events, decoupling the main Agent loop from provider-specific details.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : The `coreStream` method in each Provider must perform a single-pass streaming API request per conversation round and must not contain multi-round tool call loop logic.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/streamEvents.ts : Define the standardized stream event interface (`LLMCoreStreamEvent`) in a shared location, such as `src/main/presenter/llmProviderPresenter/streamEvents.ts`.
Learnt from: neoragex2002
PR: ThinkInAIXYZ/deepchat#550
File: src/main/presenter/mcpPresenter/inMemoryServers/meetingServer.ts:250-252
Timestamp: 2025-06-21T15:48:29.950Z
Learning: In the meeting server implementation (src/main/presenter/mcpPresenter/inMemoryServers/meetingServer.ts), when multiple tabs have the same title, the user prefers to let the code silently select the first match without adding warnings or additional ambiguity handling.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-06-30T12:23:13.338Z
Learning: Applies to {src/main/presenter/**/*.ts,src/renderer/stores/**/*.ts} : Implement proper error handling and logging for debugging
src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts (11)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, management of native/non-native tool call mechanisms (prompt wrapping), and standardizing output streams to a common event format.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files must implement the `coreStream(messages, modelId, temperature, maxTokens)` method, which receives formatted messages and generation parameters, handles tool support (native or via prompt wrapping), makes a single streaming API call, parses provider-specific data, and yields standardized stream events.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should include provider-specific helper functions such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations must use a standardized interface in their `coreStream` method to `yield` events, decoupling the main Agent loop from provider-specific details.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : `src/main/presenter/llmProviderPresenter/index.ts` should manage the overall Agent loop, conversation history, tool execution via `McpPresenter`, and communication with the frontend via `eventBus`.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop, including the `while` loop for conversation flow, state management, provider interaction, event handling, and frontend communication, must be implemented in `src/main/presenter/llmProviderPresenter/index.ts`.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : The `coreStream` method in each Provider must perform a single-pass streaming API request per conversation round and must not contain multi-round tool call loop logic.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/streamEvents.ts : Define the standardized stream event interface (`LLMCoreStreamEvent`) in a shared location, such as `src/main/presenter/llmProviderPresenter/streamEvents.ts`.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Maintain separation of concerns by centralizing Agent loop logic in `index.ts` and keeping provider files focused on API interaction and event standardization.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-06-30T12:23:13.338Z
Learning: Applies to src/renderer/src/composables/usePresenter.ts : The IPC in the renderer process is implemented in usePresenter.ts, allowing direct calls to the presenter-related interfaces exposed by the main process
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-06-30T12:23:13.338Z
Learning: Applies to {src/main/presenter/**/*.ts,src/renderer/stores/**/*.ts} : Implement proper inter-process communication (IPC) patterns
src/main/presenter/mcpPresenter/inMemoryServers/filesystem.ts (10)
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, management of native/non-native tool call mechanisms (prompt wrapping), and standardizing output streams to a common event format.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-06-30T12:23:13.338Z
Learning: Applies to src/main/presenter/**/*.ts : Use Electron's built-in APIs for file system and native dialogs
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-06-30T12:23:13.338Z
Learning: Applies to {src/main/presenter/**/*.ts,src/renderer/stores/**/*.ts} : Implement proper error handling and logging for debugging
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-06-30T12:24:03.565Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should include provider-specific helper functions such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed.
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-06-30T12:23:13.338Z
Learning: Applies to {src/main/presenter/**/*.ts,src/renderer/stores/**/*.ts} : Implement proper inter-process communication (IPC) patterns
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/error-logging.mdc:0-0
Timestamp: 2025-06-30T12:23:33.814Z
Learning: Applies to **/*.{ts,tsx} : Degrade gracefully on errors
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/development-setup.mdc:0-0
Timestamp: 2025-06-30T12:23:01.752Z
Learning: Node.js >= 22
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/error-logging.mdc:0-0
Timestamp: 2025-06-30T12:23:33.814Z
Learning: Applies to **/*.{ts,tsx} : Implement an error retry mechanism
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/error-logging.mdc:0-0
Timestamp: 2025-06-30T12:23:33.814Z
Learning: Applies to **/*.{ts,tsx} : Always use try-catch to handle errors that may occur
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/error-logging.mdc:0-0
Timestamp: 2025-06-30T12:23:33.814Z
Learning: Applies to **/*.{ts,tsx} : Do not swallow errors
🧬 Code Graph Analysis (3)
src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts (1)
src/shared/presenter.d.ts (1)
  • ChatMessage (1066-1078)
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (2)
src/main/presenter/llmProviderPresenter/baseProvider.ts (1)
  • SUMMARY_TITLES_PROMPT (602-604)
src/shared/presenter.d.ts (1)
  • ChatMessage (1066-1078)
src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts (1)
src/shared/presenter.d.ts (1)
  • MODEL_META (442-455)
🪛 Biome (1.9.4)
src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts

[error] 156-156: Change to an optional chain.

Unsafe fix: Change to an optional chain.

(lint/complexity/useOptionalChain)


[error] 157-158: Change to an optional chain.

Unsafe fix: Change to an optional chain.

(lint/complexity/useOptionalChain)
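For context, Biome's useOptionalChain fix rewrites explicitly guarded property access into optional chaining. A minimal illustration of the before/after shapes (the ModelList type here is hypothetical, not the provider's actual response type):

```typescript
// Illustration only: both forms yield the same value whether intermediate
// properties are present, missing, or the whole object is null.
type ModelList = { data?: { models?: string[] } } | null

const resp: ModelList = { data: { models: ['gpt-4o'] } }

// Before: explicit guards, flagged by lint/complexity/useOptionalChain
const before = resp && resp.data && resp.data.models

// After: optional chaining, which Biome's unsafe fix produces
const after = resp?.data?.models
```

The fix is marked "unsafe" because `a && a.b` and `a?.b` differ for falsy-but-defined values of `a` (e.g. `0` or `''`), which is why Biome asks for confirmation rather than applying it automatically.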

src/main/presenter/mcpPresenter/inMemoryServers/filesystem.ts

[error] 1259-1259: Unnecessary continue statement

Unsafe fix: Delete the unnecessary continue statement

(lint/correctness/noUnnecessaryContinue)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-check (x64)
🔇 Additional comments (12)
src/renderer/src/components/ChatInput.vue (1)

273-273: LGTM! Minor syntax cleanup.

The removal of the trailing comma after the configuration object improves code consistency without any functional impact.

src/renderer/src/i18n/ko-KR/settings.json (1)

99-99: LGTM! Formatting consistency improvement.

The indentation adjustments align the translation keys properly without changing any translation content.

Also applies to: 103-103

src/main/presenter/llmProviderPresenter/baseProvider.ts (1)

604-604: LGTM! Proper file ending format.

Adding a newline at the end of the file follows standard formatting practices and improves compatibility with various tools.

src/renderer/src/components/editor/mention/PromptParamsDialog.vue (1)

10-31: LGTM! Improved component structure.

The template restructuring better groups related input elements (label, description, input field, and error message) within each parameter container, improving the visual hierarchy and organization.

src/main/presenter/mcpPresenter/inMemoryServers/filesystem.ts (3)

15-18: LGTM! Improved readability with multi-line formatting.

The expansion of Zod schema method chains improves readability without affecting functionality.


906-908: Good formatting improvement.

The multi-line error message formatting enhances readability.


949-954: Clean formatting improvements.

The multi-line destructuring and map function formatting enhance code clarity and maintainability.

Also applies to: 962-978

test/main/presenter/filesystem.test.ts (1)

25-40: Consistent formatting improvements.

The formatting changes align with the style improvements in the implementation file, maintaining consistency across the codebase.

Also applies to: 167-173

src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts (3)

1-1: Well-structured interface definition for 302AI API response.

The _302AIModelResponse interface properly captures the 302AI-specific model metadata with clear field names and helpful comments explaining the mapping (e.g., content_length maps to context length).

Also applies to: 12-33


164-213: Excellent configuration management implementation.

The configuration update logic properly:

  • Prioritizes API values over existing configuration
  • Only updates when actual changes are detected
  • Provides comprehensive logging for debugging configuration changes
  • Maintains existing settings (reasoning, temperature, type) that aren't provided by the API

This ensures the model configurations stay up-to-date with the 302AI API while preserving user customizations.
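The merge behaviour described above can be sketched roughly as follows. Field names such as content_length and max_output follow the review's description of the 302AI response; the interfaces and function here are illustrative, not the PR's actual code:

```typescript
// Hedged sketch: merge API-provided model metadata into an existing config,
// preferring API values and reporting whether anything actually changed.
interface ModelConfig {
  contextLength: number
  maxTokens: number
  vision: boolean
  functionCall: boolean
  reasoning: boolean // preserved: not provided by the API
}

interface ApiModelMeta {
  content_length?: number
  max_output?: number
  supports_vision?: boolean
  supports_function_call?: boolean
}

function mergeModelConfig(
  existing: ModelConfig,
  api: ApiModelMeta
): { config: ModelConfig; changed: boolean } {
  const config: ModelConfig = {
    ...existing, // keeps fields the API omits (e.g. reasoning) untouched
    contextLength: api.content_length ?? existing.contextLength,
    maxTokens: api.max_output ?? existing.maxTokens,
    vision: api.supports_vision ?? existing.vision,
    functionCall: api.supports_function_call ?? existing.functionCall
  }
  // Only report a change when a field actually differs, so callers can
  // skip persisting/logging when the API agrees with the stored config.
  const changed = JSON.stringify(config) !== JSON.stringify(existing)
  return { config, changed }
}
```

Using `??` rather than `||` matters here: it lets the API legitimately set a value to `false` or `0` while still falling back to the existing config when a field is absent.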


234-238: Good error handling with graceful fallback.

The error handling properly logs issues and falls back to the parent implementation, ensuring the provider continues to function even if the 302AI-specific model fetching fails.
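The pattern praised here can be sketched generically; the names are illustrative (in the actual provider, the fallback is the inherited OpenAI-compatible model listing):

```typescript
// Hedged sketch of try/catch-with-fallback: attempt the provider-specific
// fetch, log the failure, and fall back to a default source.
function fetchWithFallback<T>(primary: () => T, fallback: () => T, label = 'fetch'): T {
  try {
    return primary()
  } catch (err) {
    // Log, don't swallow: the failure stays visible but non-fatal.
    console.warn(`[${label}] primary source failed, using fallback:`, err)
    return fallback()
  }
}
```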

src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts (1)

1056-1056: Clean formatting improvement.

The single-line array declaration is more concise for a simple array with one element.

const fullMessage: ChatMessage[] = [{ role: 'user', content: summaryText }]
const summaryText = `${SUMMARY_TITLES_PROMPT}\n\n${messages.map((m) => `${m.role}: ${m.content}`).join('\n')}`

⚠️ Potential issue

Array content is coerced to [object Object], breaking the summary prompt

m.content can be ChatMessageContent[]; string interpolation blindly coerces it to a string, producing [object Object] and polluting the prompt.
Extract the actual text parts instead.

-const summaryText = `${SUMMARY_TITLES_PROMPT}\n\n${messages.map((m) => `${m.role}: ${m.content}`).join('\n')}`
+const stringifyMessage = (m: ChatMessage) => {
+  if (typeof m.content === 'string') return `${m.role}: ${m.content}`
+  if (Array.isArray(m.content)) {
+    const text = m.content
+      .filter((p) => p.type === 'text' && 'text' in p && p.text)
+      .map((p) => (p as any).text)
+      .join(' ')
+    return `${m.role}: ${text}`
+  }
+  return `${m.role}:`
+}
+const summaryText = `${SUMMARY_TITLES_PROMPT}\n\n${messages.map(stringifyMessage).join('\n')}`
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
const summaryText = `${SUMMARY_TITLES_PROMPT}\n\n${messages.map((m) => `${m.role}: ${m.content}`).join('\n')}`
const stringifyMessage = (m: ChatMessage) => {
  if (typeof m.content === 'string') {
    return `${m.role}: ${m.content}`
  }
  if (Array.isArray(m.content)) {
    const text = m.content
      .filter((p) => p.type === 'text' && 'text' in p && p.text)
      .map((p) => (p as any).text)
      .join(' ')
    return `${m.role}: ${text}`
  }
  return `${m.role}:`
}
const summaryText = `${SUMMARY_TITLES_PROMPT}\n\n${messages.map(stringifyMessage).join('\n')}`
🤖 Prompt for AI Agents
In src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
at line 1201, the code uses string interpolation on m.content, which can be an
array of ChatMessageContent objects, causing it to convert to "[object Object]"
and corrupt the summary prompt. To fix this, check if m.content is an array and
extract the text parts from each ChatMessageContent object, concatenating them
into a single string before interpolation. This ensures the summary prompt
contains the actual message text instead of object representations.

@zerob13 zerob13 merged commit 1bd37fb into dev Jul 7, 2025
2 checks passed
@coderabbitai coderabbitai bot mentioned this pull request Jul 17, 2025
@zerob13 zerob13 deleted the feature/302-provider-model-api branch September 21, 2025 15:15
@coderabbitai coderabbitai bot mentioned this pull request Jan 8, 2026