feat: add cherryin provider #993
Conversation
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Adds a new CherryIn LLM provider: registers it in default providers, wires CherryInProvider into the provider factory (id and apiType fallback), implements CherryInProvider extending OpenAI-compatible fetching with remapping/fallback, adds CherryIn models to the model DB, and maps a CherryIn icon.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
actor User
participant App
participant ProviderFactory as LLMProviderPresenter
participant CherryIn as CherryInProvider
participant OAICP as OpenAICompatibleProvider
participant CherryAPI as CherryIn API
User->>App: Request models / select CherryIn
App->>ProviderFactory: createProviderInstance(provider)
alt provider.id == "cherryin" or provider.apiType == "cherryin-responses"
ProviderFactory-->>App: CherryInProvider instance
end
App->>CherryIn: fetchOpenAIModels()
CherryIn->>OAICP: super.fetchOpenAIModels()
OAICP->>CherryAPI: GET /models
alt API returns models
CherryAPI-->>OAICP: models[]
OAICP-->>CherryIn: models[]
CherryIn-->>App: models[] remapped (group -> "cherryin", providerId set)
else API fails or empty
Note over CherryIn: warn and apply fallback models
CherryIn-->>App: fallback models[] with providerId
    end
```
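As a minimal TypeScript sketch of the flow above (import paths and the fallback list contents are placeholders; the method shape follows the walkthrough and the review diff later in this thread):

```ts
import { OpenAICompatibleProvider } from './openAICompatibleProvider' // path assumed
import type { MODEL_META } from '@shared/presenter' // path assumed

// Placeholder fallback list; the review below suggests real CherryIn model ids here
const FALLBACK_MODELS: MODEL_META[] = []

export class CherryInProvider extends OpenAICompatibleProvider {
  protected async fetchOpenAIModels(options?: { timeout: number }): Promise<MODEL_META[]> {
    try {
      const models = await super.fetchOpenAIModels(options) // GET /models via the parent class
      if (models.length > 0) {
        return models.map((model) => ({
          ...model,
          group: model.group === 'default' ? 'cherryin' : model.group, // group -> "cherryin"
          providerId: this.provider.id // providerId set
        }))
      }
    } catch (error) {
      console.warn('[CherryInProvider] Model fetch failed, applying fallback', error)
    }
    // API fails or empty: warn and apply fallback models
    return FALLBACK_MODELS.map((model) => ({ ...model, providerId: this.provider.id }))
  }
}
```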
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Poem
Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
💡 Codex Review
Here are some automated review suggestions for this pull request.
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting
Actionable comments posted: 2
🧹 Nitpick comments (1)
src/main/presenter/configPresenter/providers.ts (1)
128-128: Consider using kebab-case for consistency. The provider id uses camelCase (`cherryInProvider`), which is inconsistent with similar providers like `openai-responses`, `vercel-ai-gateway`, and `github-copilot` that use kebab-case. Consider renaming to `cherryin-provider` or `cherryin` for consistency.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`src/renderer/src/assets/llm-icons/cherryin-color.png` is excluded by `!**/*.png`
📒 Files selected for processing (4)
- `src/main/presenter/configPresenter/providers.ts` (1 hunks)
- `src/main/presenter/llmProviderPresenter/index.ts` (3 hunks)
- `src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts` (1 hunks)
- `src/renderer/src/components/icons/ModelIcon.vue` (2 hunks)
🧰 Additional context used
📓 Path-based instructions (25)
**/*.{js,jsx,ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)
**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English
**/*.{js,jsx,ts,tsx}: Use OxLint for JS/TS code; pre-commit hooks run lint-staged and typecheck
Use camelCase for variables and functions
Use PascalCase for types and classes
Use SCREAMING_SNAKE_CASE for constants
Files:
`src/main/presenter/llmProviderPresenter/index.ts`
`src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts`
`src/main/presenter/configPresenter/providers.ts`
src/{main,renderer}/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)
src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging
Files:
`src/main/presenter/llmProviderPresenter/index.ts`
`src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts`
`src/main/presenter/configPresenter/providers.ts`
src/main/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)
Use Electron's built-in APIs for file system and native dialogs
Files:
`src/main/presenter/llmProviderPresenter/index.ts`
`src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts`
`src/main/presenter/configPresenter/providers.ts`
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/error-logging.mdc)
**/*.{ts,tsx}: Always use try-catch to handle possible errors
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (if applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement an error retry mechanism
Avoid logging sensitive information
Use structured logging
Set appropriate log levels
Files:
`src/main/presenter/llmProviderPresenter/index.ts`
`src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts`
`src/main/presenter/configPresenter/providers.ts`
src/main/presenter/llmProviderPresenter/index.ts
📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)
`src/main/presenter/llmProviderPresenter/index.ts`: `src/main/presenter/llmProviderPresenter/index.ts` should manage the overall Agent loop, conversation history, tool execution via `McpPresenter`, and frontend communication via `eventBus`.
The main Agent loop in `llmProviderPresenter/index.ts` should handle multi-round LLM calls and tool usage, maintaining conversation state and controlling the loop with `needContinueConversation` and `toolCallCount`.
The main Agent loop should send standardized `STREAM_EVENTS` (`RESPONSE`, `END`, `ERROR`) to the frontend via `eventBus`.
The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.
Files:
src/main/presenter/llmProviderPresenter/index.ts
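For illustration, a skeleton of the loop shape these guidelines describe (`needContinueConversation`, `toolCallCount`, and `coreStream` are named in the guideline text; the event type, the tool executor, and the cap of 10 rounds are assumptions for this sketch, not the actual implementation):

```ts
// Illustrative skeleton only; not the project's actual agent loop
type StreamEvent =
  | { type: 'text'; content: string }
  | { type: 'tool_call_end'; name: string; args: string }
  | { type: 'stop'; stop_reason: string }

async function agentLoop(
  coreStream: (messages: unknown[]) => AsyncIterable<StreamEvent>,
  executeTool: (name: string, args: string) => Promise<string>,
  messages: unknown[]
): Promise<void> {
  let needContinueConversation = true
  let toolCallCount = 0
  while (needContinueConversation) {
    needContinueConversation = false
    for await (const event of coreStream(messages)) {
      if (event.type === 'tool_call_end' && toolCallCount < 10) {
        toolCallCount += 1
        const result = await executeTool(event.name, event.args) // would go via McpPresenter
        messages.push({ role: 'tool', content: result }) // feed the result into the next round
        needContinueConversation = true // another LLM round is required
      }
      // text / stop events would be forwarded to the frontend as STREAM_EVENTS via eventBus
    }
  }
}
```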
src/main/**/*.{ts,js,tsx,jsx}
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
Main-process code goes in `src/main`
Files:
`src/main/presenter/llmProviderPresenter/index.ts`
`src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts`
`src/main/presenter/configPresenter/providers.ts`
**/*.{ts,tsx,js,vue}
📄 CodeRabbit inference engine (CLAUDE.md)
Use English for all logs and comments
Files:
`src/main/presenter/llmProviderPresenter/index.ts`
`src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts`
`src/main/presenter/configPresenter/providers.ts`
`src/renderer/src/components/icons/ModelIcon.vue`
**/*.{ts,tsx,vue}
📄 CodeRabbit inference engine (CLAUDE.md)
Enable and adhere to strict TypeScript typing (avoid implicit any, prefer precise types)
Files:
`src/main/presenter/llmProviderPresenter/index.ts`
`src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts`
`src/main/presenter/configPresenter/providers.ts`
`src/renderer/src/components/icons/ModelIcon.vue`
src/main/**
📄 CodeRabbit inference engine (AGENTS.md)
Place all Electron main-process code under src/main/
Files:
`src/main/presenter/llmProviderPresenter/index.ts`
`src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts`
`src/main/presenter/configPresenter/providers.ts`
src/main/presenter/**
📄 CodeRabbit inference engine (AGENTS.md)
src/main/presenter/**: Organize main-process presenters under src/main/presenter/ (Window/Tab/Thread/Mcp/Config/LLMProvider)
Follow the Presenter pattern for main-process modules
Files:
`src/main/presenter/llmProviderPresenter/index.ts`
`src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts`
`src/main/presenter/configPresenter/providers.ts`
**/*.{js,jsx,ts,tsx,vue}
📄 CodeRabbit inference engine (AGENTS.md)
Apply Prettier formatting: single quotes, no semicolons, max width 100
Files:
`src/main/presenter/llmProviderPresenter/index.ts`
`src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts`
`src/main/presenter/configPresenter/providers.ts`
`src/renderer/src/components/icons/ModelIcon.vue`
src/main/presenter/llmProviderPresenter/providers/*.ts
📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)
`src/main/presenter/llmProviderPresenter/providers/*.ts`: Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a `coreStream` method that yields standardized stream events to decouple the main loop from provider-specific details.
The `coreStream` method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., `prepareFunctionCallPrompt`) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using `convertToProviderTools`) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (`tool_call_start`, `tool_call_chunk`, `tool_call_end`) in the standardized format.
Provider implementations should yield stop events with appropriate `stop_reason` in the standardized format.
Provider implementations should yield error events in the standardized format...
Files:
src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts
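A sketch under assumptions of what a `coreStream` meeting these rules might look like: it performs exactly one streaming request per round and yields standardized events. The raw chunk shape and the exact event names here are illustrative, not the project's actual definitions:

```ts
// Illustrative chunk shape; real providers each define their own
type ProviderChunk = { delta?: string; toolCallDelta?: string }

async function* coreStream(
  rawStream: AsyncIterable<ProviderChunk>
): AsyncGenerator<{ type: string; content?: string; stop_reason?: string }> {
  for await (const chunk of rawStream) {
    if (chunk.delta) yield { type: 'text', content: chunk.delta }
    if (chunk.toolCallDelta) yield { type: 'tool_call_chunk', content: chunk.toolCallDelta }
  }
  // one request, one round: multi-round tool-loop logic lives in the main Agent loop, not here
  yield { type: 'stop', stop_reason: 'complete' }
}
```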
src/main/presenter/configPresenter/providers.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Add provider configuration entries in src/main/presenter/configPresenter/providers.ts
Files:
src/main/presenter/configPresenter/providers.ts
src/renderer/src/**/*
📄 CodeRabbit inference engine (.cursor/rules/i18n.mdc)
src/renderer/src/**/*: All user-facing strings must use i18n keys (avoid hardcoded user-visible text in code)
Use the 'vue-i18n' framework for all internationalization in the renderer
Ensure all user-visible text in the renderer uses the translation system
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/**/*.{vue,ts,js,tsx,jsx}
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
Renderer-process code goes in `src/renderer`
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}
📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability
Implement proper state management with Pinia
Utilize Vue Router for navigation and route management
Leverage Vue's built-in reactivity system for efficient data handling
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/src/**/*.vue
📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)
Use scoped styles to prevent CSS conflicts between components
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/**/*.{ts,tsx,vue}
📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)
src/renderer/**/*.{ts,tsx,vue}: Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
Use TypeScript for all code; prefer types over interfaces.
Avoid enums; use const objects instead.
Use arrow functions for methods and computed properties.
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements.
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/**/*.{vue,ts}
📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)
Implement lazy loading for routes and components.
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/**/*.{ts,vue}
📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)
src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching.
Implement SEO best practices using Nuxt's useHead and useSeoMeta.
Use Pinia for frontend state management (do not introduce alternative state libraries)
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/{src,shell,floating}/**/*.vue
📄 CodeRabbit inference engine (CLAUDE.md)
src/renderer/{src,shell,floating}/**/*.vue: Use Vue 3 Composition API for all components
All user-facing strings must use i18n keys via vue-i18n (no hard-coded UI strings)
Use Tailwind CSS utilities and ensure styles are scoped in Vue components
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/src/components/**/*
📄 CodeRabbit inference engine (CLAUDE.md)
Organize UI components by feature within src/renderer/src/
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/src/**
📄 CodeRabbit inference engine (AGENTS.md)
Put application code for the Vue app under src/renderer/src (components, stores, views, i18n, lib)
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/src/**/*.{vue,ts}
📄 CodeRabbit inference engine (AGENTS.md)
All user-facing strings in the renderer must use vue-i18n keys defined in src/renderer/src/i18n
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/**/*.vue
📄 CodeRabbit inference engine (AGENTS.md)
Name Vue components in PascalCase (e.g., ChatInput.vue)
Files:
src/renderer/src/components/icons/ModelIcon.vue
🧠 Learnings (6)
📚 Learning: 2025-09-06T03:07:23.817Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-06T03:07:23.817Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : New LLM providers must be added under src/main/presenter/llmProviderPresenter/providers/ as separate files
Applied to files:
`src/main/presenter/llmProviderPresenter/index.ts`
`src/main/presenter/configPresenter/providers.ts`
📚 Learning: 2025-09-06T03:07:23.817Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-06T03:07:23.817Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add provider configuration entries in src/main/presenter/configPresenter/providers.ts
Applied to files:
`src/main/presenter/llmProviderPresenter/index.ts`
`src/main/presenter/configPresenter/providers.ts`
📚 Learning: 2025-09-04T11:03:30.184Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-09-04T11:03:30.184Z
Learning: Integrate via the llmProviderPresenter entry point (src/main/presenter/llmProviderPresenter/index.ts) as the related implementation entry
Applied to files:
src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Import Icon component from Iconify/Vue.
Applied to files:
src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Use Iconify/Vue for icon implementation.
Applied to files:
src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Use the Icon component with lucide icons.
Applied to files:
src/renderer/src/components/icons/ModelIcon.vue
🧬 Code graph analysis (2)
src/main/presenter/llmProviderPresenter/index.ts (1)
src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts (1)
CherryInProvider (62-89)
src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts (2)
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (1)
OpenAICompatibleProvider (69-1499)
src/shared/types/presenters/legacy.presenters.d.ts (1)
IConfigPresenter (378-540)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build-check (x64)
🔇 Additional comments (7)
src/renderer/src/components/icons/ModelIcon.vue (2)
3-3: LGTM! The icon import follows the established pattern for other provider icons in this file.
66-66: LGTM! The icon mapping correctly adds CherryIn icon support following the established pattern.
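For illustration, a hedged sketch of what such an import-plus-mapping entry might look like (only the `cherryin-color.png` asset path comes from this PR; the map name and surrounding structure are assumptions):

```ts
// Hypothetical shape of the icon mapping in ModelIcon.vue; names other than
// the asset path are assumed for illustration
import cherryinIcon from '@/assets/llm-icons/cherryin-color.png'

const providerIconMap: Record<string, string> = {
  // ...existing provider entries...
  cherryin: cherryinIcon
}
```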
src/main/presenter/llmProviderPresenter/index.ts (3)
43-43: LGTM! The import statement correctly adds the CherryInProvider class following the established pattern for provider imports.
205-206: LGTM! The explicit provider-id case correctly instantiates CherryInProvider, following the established factory pattern.
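A minimal sketch of the two factory branches covered by this and the next comment (the surrounding factory structure and other cases are assumptions):

```ts
// Sketch only: the explicit-id case and the apiType fallback described in
// these comments; the surrounding factory structure is assumed
function createProviderInstance(provider: LLM_PROVIDER, configPresenter: IConfigPresenter) {
  switch (provider.id) {
    case 'cherryin':
      return new CherryInProvider(provider, configPresenter)
    // ...other explicit provider ids...
  }
  // apiType fallback when no explicit id matched
  if (provider.apiType === 'cherryin-responses') {
    return new CherryInProvider(provider, configPresenter)
  }
  return undefined // ...remaining fallbacks elided...
}
```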
257-258: LGTM! The apiType fallback case correctly handles the `cherryin-responses` type, ensuring provider instantiation even when the explicit id isn't matched. This aligns with the apiType defined in the provider configuration.

src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts (2)
62-65: LGTM! The constructor correctly delegates to the parent `OpenAICompatibleProvider` class, following the established pattern for provider implementations.
67-88: LGTM with a minor observation. The method correctly:
- Attempts to fetch models from the API via the parent class
- Remaps the `group` field to 'cherryin' for better categorization
- Falls back gracefully to predefined models on failure
- Ensures `providerId` is assigned consistently

The fallback approach ensures the provider remains functional even when the API is unavailable, which is a good resilience pattern.
Actionable comments posted: 2
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
`resources/model-db/providers.json` (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
{build,resources,out,dist}/**
📄 CodeRabbit inference engine (AGENTS.md)
Keep build outputs and assets within build/, resources/, out/, and dist/
Files:
resources/model-db/providers.json
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build-check (x64)
| "limit": { | ||
| "context": 163800 | ||
| }, |
Correct the DeepSeek V3.1 (free) context window
The context window is off by 40 tokens (163800 vs. the expected 163840). Please fix the value so downstream UI limits stay consistent with the provider’s published specs.
- "context": 163800
+ "context": 163840📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "limit": { | |
| "context": 163800 | |
| }, | |
| "limit": { | |
| "context": 163840 | |
| }, |
🤖 Prompt for AI Agents
In resources/model-db/providers.json around lines 38434 to 38436, the DeepSeek
V3.1 (free) provider's "limit.context" is set to 163800 but should be 163840;
update the numeric value to 163840 so the context window matches the published
spec and downstream UI limits remain consistent.
| "context": 1047576, | ||
| "output": 32768 | ||
| }, |
Fix the 1M-token context typos for the GPT‑4.1 family
Each GPT‑4.1 variant lists 1047576 tokens for context; the documented limit is 1,048,576. Please correct all three entries so we don't understate the available window.
- "context": 1047576,
+ "context": 1048576,
@@
- "context": 1047576,
+ "context": 1048576,
@@
- "context": 1047576,
+ "context": 1048576,Also applies to: 38771-38773, 38804-38806
🤖 Prompt for AI Agents
In resources/model-db/providers.json around lines 38738-38740, 38771-38773, and
38804-38806, three GPT‑4.1 context token values are mistyped as 1047576; update
each of those numeric values to the correct documented limit 1048576 so the
context window is accurately represented.
Actionable comments posted: 1
♻️ Duplicate comments (2)
src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts (2)
4-7: Architectural mismatch already flagged. This implementation extends `OpenAICompatibleProvider` (which uses `/chat/completions`), but the provider is configured with `apiType: 'cherryin-responses'`. This mismatch was already identified in previous review comments.
9-27: Critical: Provide fallback models to ensure provider remains functional when API fails.

Returning an empty array when the API call fails or returns no models will render the CherryIn provider completely unusable, as it will have no models available for selection. Previous review comments identified that fallback models should include actual CherryIn model IDs such as `GLM-4.5-Air`, `glm-4v-flash`, `Zhipu_glm`, and `360gpt-pro`, not the placeholder OpenAI model names.

Define a `CHERRYIN_FALLBACK_MODELS` constant with actual CherryIn models and return it when the API call fails:

```diff
+const CHERRYIN_FALLBACK_MODELS: MODEL_META[] = [
+  {
+    id: 'GLM-4.5-Air',
+    name: 'GLM-4.5-Air',
+    group: 'cherryin',
+    providerId: 'cherryin',
+    isCustom: false,
+    contextLength: 128000,
+    maxTokens: 8192,
+    vision: false,
+    functionCall: true,
+    reasoning: false
+  },
+  {
+    id: 'glm-4v-flash',
+    name: 'GLM-4V Flash',
+    group: 'cherryin',
+    providerId: 'cherryin',
+    isCustom: false,
+    contextLength: 2000,
+    maxTokens: 1000,
+    vision: true,
+    functionCall: false,
+    reasoning: false
+  },
+  // Add other CherryIn models as needed
+]
+
 export class CherryInProvider extends OpenAICompatibleProvider {
   constructor(provider: LLM_PROVIDER, configPresenter: IConfigPresenter) {
     super(provider, configPresenter)
   }

   protected async fetchOpenAIModels(options?: { timeout: number }): Promise<MODEL_META[]> {
     try {
       const models = await super.fetchOpenAIModels(options)
       if (models.length > 0) {
         return models.map((model) => ({
           ...model,
           group: model.group === 'default' ? 'cherryin' : model.group,
           providerId: this.provider.id
         }))
       }
     } catch (error) {
       console.warn(
         '[CherryInProvider] Failed to fetch models via API, falling back to defaults',
         error
       )
     }
-    return []
+    return CHERRYIN_FALLBACK_MODELS
   }
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
`src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts (2)
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (1)
OpenAICompatibleProvider (69-1499)
src/shared/types/presenters/legacy.presenters.d.ts (1)
IConfigPresenter (378-540)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build-check (x64)
🔇 Additional comments (1)
src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts (1)
13-17: CherryIn default group remapping

The CherryIn provider remaps models with `group === 'default'` to `'cherryin'`, whereas other providers use static group values. Confirm this matches the intended grouping for CherryIn models.
```ts
} catch (error) {
  console.warn(
    '[CherryInProvider] Failed to fetch models via API, falling back to defaults',
    error
  )
}
```
🛠️ Refactor suggestion | 🟠 Major
Improve error logging to follow structured logging guidelines.
The current error handling uses basic `console.warn`, which does not follow the coding guidelines that require structured logging with timestamps, log levels, error codes, error descriptions, stack traces, and relevant context information.
As per coding guidelines, implement structured error logging:
```diff
 } catch (error) {
-  console.warn(
-    '[CherryInProvider] Failed to fetch models via API, falling back to defaults',
-    error
-  )
+  console.error('[CherryInProvider] Failed to fetch models via API', {
+    timestamp: new Date().toISOString(),
+    level: 'ERROR',
+    errorCode: 'CHERRYIN_FETCH_MODELS_FAILED',
+    message: 'Failed to fetch models from CherryIn API, falling back to defaults',
+    error: error instanceof Error ? error.message : String(error),
+    stack: error instanceof Error ? error.stack : undefined,
+    context: {
+      providerId: this.provider.id,
+      timeout: options?.timeout
+    }
+  })
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
} catch (error) {
  console.error('[CherryInProvider] Failed to fetch models via API', {
    timestamp: new Date().toISOString(),
    level: 'ERROR',
    errorCode: 'CHERRYIN_FETCH_MODELS_FAILED',
    message: 'Failed to fetch models from CherryIn API, falling back to defaults',
    error: error instanceof Error ? error.message : String(error),
    stack: error instanceof Error ? error.stack : undefined,
    context: {
      providerId: this.provider.id,
      timeout: options?.timeout
    }
  })
}
```
Summary by CodeRabbit
New Features
Style