
Conversation

@zerob13
Collaborator

@zerob13 zerob13 commented Oct 13, 2025

Summary by CodeRabbit

  • New Features

    • Added CherryIn as a selectable AI provider with preset configuration and automatic detection by provider selection or API type.
    • Automatic model discovery for CherryIn with graceful fallback when remote listings are unavailable.
    • Expanded model catalog to include CherryIn-specific model entries and metadata.
  • Style

    • Added a CherryIn provider icon for visual consistency in the model/provider selector.

@coderabbitai
Contributor

coderabbitai bot commented Oct 13, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Adds a new CherryIn LLM provider: registers it in the default providers, wires CherryInProvider into the provider factory (matched by provider id, with an apiType fallback), implements CherryInProvider on top of the OpenAI-compatible model fetching with remapping and fallback behavior, adds CherryIn models to the model DB, and maps a CherryIn icon.

Changes

  • Default provider registry (src/main/presenter/configPresenter/providers.ts): Adds CherryIn provider entry (id: "cherryin", apiType: "cherryin", baseUrl: "https://open.cherryin.ai/v1", disabled by default, plus website URLs).
  • Provider factory wiring (src/main/presenter/llmProviderPresenter/index.ts): Imports and registers CherryInProvider; instantiates it when the provider id/api matches (cherryin) and adds a fallback branch for apiType == "cherryin-responses".
  • CherryIn provider implementation (src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts): New CherryInProvider extends OpenAICompatibleProvider; overrides fetchOpenAIModels to call super, remap models (set group to cherryin when default, add providerId), log warnings, and return an empty array on failure.
  • Model DB resource (resources/model-db/providers.json): Adds a comprehensive cherryin provider entry with many model definitions and detailed metadata (modalities, context/output sizes, temps, reasoning/attachment flags, dates, costs).
  • UI icon mapping (src/renderer/src/components/icons/ModelIcon.vue): Imports the new cherryin colored icon asset and adds a cherryin key to the icons map.
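The remapping/fallback override summarized above can be sketched as follows. This is a minimal illustration, not the repo's actual code: MODEL_META and OpenAICompatibleProvider are simplified stand-ins, the base fetch is stubbed rather than calling GET /models, and the method is made public here (the real provider keeps it protected).

```typescript
// Simplified stand-in for the repo's MODEL_META type (assumed shape)
interface MODEL_META {
  id: string
  group: string
  providerId?: string
}

// Stand-in base class; the real OpenAICompatibleProvider performs the
// actual GET {baseUrl}/models request
class OpenAICompatibleProvider {
  async fetchOpenAIModels(_options?: { timeout: number }): Promise<MODEL_META[]> {
    return [{ id: 'some-remote-model', group: 'default' }]
  }
}

class CherryInProvider extends OpenAICompatibleProvider {
  private readonly providerId = 'cherryin'

  async fetchOpenAIModels(options?: { timeout: number }): Promise<MODEL_META[]> {
    try {
      const models = await super.fetchOpenAIModels(options)
      // Remap: entries the API reports under the generic 'default' group
      // are re-tagged as 'cherryin'; providerId is always stamped on
      return models.map((model) => ({
        ...model,
        group: model.group === 'default' ? 'cherryin' : model.group,
        providerId: this.providerId
      }))
    } catch (error) {
      // Graceful degradation: warn and return an empty list so callers
      // can still proceed when the remote listing is unavailable
      console.warn('[CherryInProvider] Failed to fetch models via API', error)
      return []
    }
  }
}
```

Note that the review comments further down flag the empty-array fallback as a weakness, since it leaves the provider with no selectable models when the API is down; a hardened version would return a predefined model list instead.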

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant App
  participant ProviderFactory as LLMProviderPresenter
  participant CherryIn as CherryInProvider
  participant OAICP as OpenAICompatibleProvider
  participant CherryAPI as CherryIn API

  User->>App: Request models / select CherryIn
  App->>ProviderFactory: createProviderInstance(provider)
  alt provider.id == "cherryin" or provider.apiType == "cherryin-responses"
    ProviderFactory-->>App: CherryInProvider instance
  end

  App->>CherryIn: fetchOpenAIModels()
  CherryIn->>OAICP: super.fetchOpenAIModels()
  OAICP->>CherryAPI: GET /models
  alt API returns models
    CherryAPI-->>OAICP: models[]
    OAICP-->>CherryIn: models[]
    CherryIn-->>App: models[] remapped (group -> "cherryin", providerId set)
  else API fails or empty
    Note over CherryIn: warn and apply fallback models
    CherryIn-->>App: fallback models[] with providerId
  end
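The alt branch at the top of the diagram is a small dispatch inside the provider factory. A hedged sketch of that selection logic follows; the function name and string return values are illustrative only, since the real factory in llmProviderPresenter/index.ts instantiates provider classes rather than returning labels.

```typescript
// Illustrative provider descriptor (assumed fields)
interface LLM_PROVIDER {
  id: string
  apiType: string
}

// Match on the explicit provider id first, then fall back on apiType,
// mirroring the two branches described in the walkthrough
function resolveProviderKind(provider: LLM_PROVIDER): string {
  if (provider.id === 'cherryin') return 'CherryInProvider'
  if (provider.apiType === 'cherryin-responses') return 'CherryInProvider'
  return 'OpenAICompatibleProvider'
}
```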

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Poem

I hop to endpoints, cherry-bright and keen,
I fetch the models where their tokens preen.
If endpoints wobble, fallbacks bloom like spring,
I tag each model with a rosy ring.
🍒🐇 Hooray — new provider, soft and keen!

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check (✅ Passed): Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title Check (✅ Passed): The title clearly and concisely summarizes the primary change (the addition of the CherryIn provider) and aligns with the pull request content without unnecessary detail or ambiguity.
  • Docstring Coverage (✅ Passed): No functions found in the changes; docstring coverage check skipped.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
src/main/presenter/configPresenter/providers.ts (1)

128-128: Consider using kebab-case for consistency.

The provider id uses camelCase (cherryInProvider), which is inconsistent with similar providers like openai-responses, vercel-ai-gateway, and github-copilot that use kebab-case. Consider renaming to cherryin-provider or cherryin for consistency.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a51d5eb and ee7761e.

⛔ Files ignored due to path filters (1)
  • src/renderer/src/assets/llm-icons/cherryin-color.png is excluded by !**/*.png
📒 Files selected for processing (4)
  • src/main/presenter/configPresenter/providers.ts (1 hunks)
  • src/main/presenter/llmProviderPresenter/index.ts (3 hunks)
  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts (1 hunks)
  • src/renderer/src/components/icons/ModelIcon.vue (2 hunks)
🧰 Additional context used
📓 Path-based instructions (25)
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English

**/*.{js,jsx,ts,tsx}: Use OxLint for JS/TS code; pre-commit hooks run lint-staged and typecheck
Use camelCase for variables and functions
Use PascalCase for types and classes
Use SCREAMING_SNAKE_CASE for constants

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts
  • src/main/presenter/configPresenter/providers.ts
src/{main,renderer}/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts
  • src/main/presenter/configPresenter/providers.ts
src/main/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Use Electron's built-in APIs for file system and native dialogs

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts
  • src/main/presenter/configPresenter/providers.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/error-logging.mdc)

**/*.{ts,tsx}: Always use try-catch to handle potential errors
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (where applicable), and relevant context
Log levels should include ERROR, WARN, INFO, and DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement error retry mechanisms
Avoid logging sensitive information
Use structured logging
Set appropriate log levels

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts
  • src/main/presenter/configPresenter/providers.ts
src/main/presenter/llmProviderPresenter/index.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

src/main/presenter/llmProviderPresenter/index.ts: src/main/presenter/llmProviderPresenter/index.ts should manage the overall Agent loop, conversation history, tool execution via McpPresenter, and frontend communication via eventBus.
The main Agent loop in llmProviderPresenter/index.ts should handle multi-round LLM calls and tool usage, maintaining conversation state and controlling the loop with needContinueConversation and toolCallCount.
The main Agent loop should send standardized STREAM_EVENTS (RESPONSE, END, ERROR) to the frontend via eventBus.
The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
src/main/**/*.{ts,js,tsx,jsx}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Main-process code goes in src/main

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts
  • src/main/presenter/configPresenter/providers.ts
**/*.{ts,tsx,js,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Use English for all logs and comments

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/renderer/src/components/icons/ModelIcon.vue
**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Enable and adhere to strict TypeScript typing (avoid implicit any, prefer precise types)

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/renderer/src/components/icons/ModelIcon.vue
src/main/**

📄 CodeRabbit inference engine (AGENTS.md)

Place all Electron main-process code under src/main/

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts
  • src/main/presenter/configPresenter/providers.ts
src/main/presenter/**

📄 CodeRabbit inference engine (AGENTS.md)

src/main/presenter/**: Organize main-process presenters under src/main/presenter/ (Window/Tab/Thread/Mcp/Config/LLMProvider)
Follow the Presenter pattern for main-process modules

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts
  • src/main/presenter/configPresenter/providers.ts
**/*.{js,jsx,ts,tsx,vue}

📄 CodeRabbit inference engine (AGENTS.md)

Apply Prettier formatting: single quotes, no semicolons, max width 100

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/renderer/src/components/icons/ModelIcon.vue
src/main/presenter/llmProviderPresenter/providers/*.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (tool_call_start, tool_call_chunk, tool_call_end) in the standardized format.
Provider implementations should yield stop events with appropriate stop_reason in the standardized format.
Provider implementations should yield error events in the standardized format...

Files:

  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts
src/main/presenter/configPresenter/providers.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Add provider configuration entries in src/main/presenter/configPresenter/providers.ts

Files:

  • src/main/presenter/configPresenter/providers.ts
src/renderer/src/**/*

📄 CodeRabbit inference engine (.cursor/rules/i18n.mdc)

src/renderer/src/**/*: All user-facing strings must use i18n keys (avoid hardcoded user-visible text in code)
Use the 'vue-i18n' framework for all internationalization in the renderer
Ensure all user-visible text in the renderer uses the translation system

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
src/renderer/**/*.{vue,ts,js,tsx,jsx}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Renderer-process code goes in src/renderer

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}

📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)

src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability
Implement proper state management with Pinia
Utilize Vue Router for navigation and route management
Leverage Vue's built-in reactivity system for efficient data handling

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
src/renderer/src/**/*.vue

📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)

Use scoped styles to prevent CSS conflicts between components

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
src/renderer/**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,tsx,vue}: Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
Use TypeScript for all code; prefer types over interfaces.
Avoid enums; use const objects instead.
Use arrow functions for methods and computed properties.
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements.

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
src/renderer/**/*.{vue,ts}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

Implement lazy loading for routes and components.

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
src/renderer/**/*.{ts,vue}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching.
Implement SEO best practices using Nuxt's useHead and useSeoMeta.

Use Pinia for frontend state management (do not introduce alternative state libraries)

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
src/renderer/{src,shell,floating}/**/*.vue

📄 CodeRabbit inference engine (CLAUDE.md)

src/renderer/{src,shell,floating}/**/*.vue: Use Vue 3 Composition API for all components
All user-facing strings must use i18n keys via vue-i18n (no hard-coded UI strings)
Use Tailwind CSS utilities and ensure styles are scoped in Vue components

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
src/renderer/src/components/**/*

📄 CodeRabbit inference engine (CLAUDE.md)

Organize UI components by feature within src/renderer/src/

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
src/renderer/src/**

📄 CodeRabbit inference engine (AGENTS.md)

Put application code for the Vue app under src/renderer/src (components, stores, views, i18n, lib)

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
src/renderer/src/**/*.{vue,ts}

📄 CodeRabbit inference engine (AGENTS.md)

All user-facing strings in the renderer must use vue-i18n keys defined in src/renderer/src/i18n

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
src/renderer/**/*.vue

📄 CodeRabbit inference engine (AGENTS.md)

Name Vue components in PascalCase (e.g., ChatInput.vue)

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
🧠 Learnings (6)
📚 Learning: 2025-09-06T03:07:23.817Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-06T03:07:23.817Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : New LLM providers must be added under src/main/presenter/llmProviderPresenter/providers/ as separate files

Applied to files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/configPresenter/providers.ts
📚 Learning: 2025-09-06T03:07:23.817Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-06T03:07:23.817Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add provider configuration entries in src/main/presenter/configPresenter/providers.ts

Applied to files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/configPresenter/providers.ts
📚 Learning: 2025-09-04T11:03:30.184Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-09-04T11:03:30.184Z
Learning: Integrate via the llmProviderPresenter entry point (src/main/presenter/llmProviderPresenter/index.ts) as the related implementation entry

Applied to files:

  • src/main/presenter/llmProviderPresenter/index.ts
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Import Icon component from Iconify/Vue.

Applied to files:

  • src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Use Iconify/Vue for icon implementation.

Applied to files:

  • src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-07-23T00:45:57.322Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-07-23T00:45:57.322Z
Learning: Applies to src/renderer/**/*.{vue} : Use the Icon component with lucide icons.

Applied to files:

  • src/renderer/src/components/icons/ModelIcon.vue
🧬 Code graph analysis (2)
src/main/presenter/llmProviderPresenter/index.ts (1)
src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts (1)
  • CherryInProvider (62-89)
src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts (2)
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (1)
  • OpenAICompatibleProvider (69-1499)
src/shared/types/presenters/legacy.presenters.d.ts (1)
  • IConfigPresenter (378-540)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-check (x64)
🔇 Additional comments (7)
src/renderer/src/components/icons/ModelIcon.vue (2)

3-3: LGTM!

The icon import follows the established pattern for other provider icons in this file.


66-66: LGTM!

The icon mapping correctly adds CherryIn icon support following the established pattern.

src/main/presenter/llmProviderPresenter/index.ts (3)

43-43: LGTM!

The import statement correctly adds the CherryInProvider class following the established pattern for provider imports.


205-206: LGTM!

The explicit provider-id case correctly instantiates CherryInProvider, following the established factory pattern.


257-258: LGTM!

The apiType fallback case correctly handles the cherryin-responses type, ensuring provider instantiation even when the explicit id isn't matched. This aligns with the apiType defined in the provider configuration.

src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts (2)

62-65: LGTM!

The constructor correctly delegates to the parent OpenAICompatibleProvider class, following the established pattern for provider implementations.


67-88: LGTM with a minor observation.

The method correctly:

  • Attempts to fetch models from the API via the parent class
  • Remaps the group field to 'cherryin' for better categorization
  • Falls back gracefully to predefined models on failure
  • Ensures providerId is assigned consistently

The fallback approach ensures the provider remains functional even when the API is unavailable, which is a good resilience pattern.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between aea6c83 and 97688f9.

📒 Files selected for processing (1)
  • resources/model-db/providers.json (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
{build,resources,out,dist}/**

📄 CodeRabbit inference engine (AGENTS.md)

Keep build outputs and assets within build/, resources/, out/, and dist/

Files:

  • resources/model-db/providers.json
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-check (x64)

Comment on lines +38434 to +38436
"limit": {
"context": 163800
},
Contributor


⚠️ Potential issue | 🟡 Minor

Correct the DeepSeek V3.1 (free) context window

The context window is off by 40 tokens (163800 vs. the expected 163840). Please fix the value so downstream UI limits stay consistent with the provider’s published specs.

-          "context": 163800
+          "context": 163840
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"limit": {
"context": 163800
},
"limit": {
"context": 163840
},
🤖 Prompt for AI Agents
In resources/model-db/providers.json around lines 38434 to 38436, the DeepSeek
V3.1 (free) provider's "limit.context" is set to 163800 but should be 163840;
update the numeric value to 163840 so the context window matches the published
spec and downstream UI limits remain consistent.

Comment on lines +38738 to +38740
"context": 1047576,
"output": 32768
},
Contributor


⚠️ Potential issue | 🟡 Minor

Fix the 1M-token context typos for the GPT‑4.1 family

Each GPT‑4.1 variant lists 1047576 tokens for context; the documented limit is 1,048,576. Please correct all three entries so we don't understate the available window.

-          "context": 1047576,
+          "context": 1048576,
@@
-          "context": 1047576,
+          "context": 1048576,
@@
-          "context": 1047576,
+          "context": 1048576,

Also applies to: 38771-38773, 38804-38806

🤖 Prompt for AI Agents
In resources/model-db/providers.json around lines 38738-38740, 38771-38773, and
38804-38806, three GPT‑4.1 context token values are mistyped as 1047576; update
each of those numeric values to the correct documented limit 1048576 so the
context window is accurately represented.
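Both context-window typos above (163800 vs. 163840, and 1047576 vs. 1048576) share a tell: the bad values are not multiples of 1024, while the published limits are. A hedged sketch of a mechanical sanity check over the model DB follows; the "providers"/"models"/"limit.context" nesting is assumed from the snippets quoted in this review, not verified against the actual providers.json schema.

```python
def find_suspect_contexts(db: dict) -> list[tuple[str, int]]:
    """Return (model_id, context) pairs whose context window is not a
    multiple of 1024 -- a strong hint of a transcription typo."""
    suspects = []
    for provider in db.get('providers', []):
        for model in provider.get('models', []):
            context = model.get('limit', {}).get('context')
            if context and context % 1024 != 0:
                suspects.append((model.get('id', '?'), context))
    return suspects

# Both typos flagged in this review would be caught (illustrative data):
db = {
    'providers': [{
        'models': [
            {'id': 'deepseek-v3.1-free', 'limit': {'context': 163800}},
            {'id': 'gpt-4.1', 'limit': {'context': 1047576}},
            {'id': 'ok-model', 'limit': {'context': 163840}},
        ]
    }]
}
print(find_suspect_contexts(db))
# [('deepseek-v3.1-free', 163800), ('gpt-4.1', 1047576)]
```

Some providers do publish non-power-of-two limits (round decimal values such as 200000), so hits from a check like this are prompts for manual review rather than hard failures.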

@zerob13 zerob13 merged commit 2e974c4 into dev Oct 14, 2025
1 of 2 checks passed
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts (2)

4-7: Architectural mismatch already flagged.

This implementation extends OpenAICompatibleProvider (which uses /chat/completions), but the provider is configured with apiType: 'cherryin-responses'. This mismatch was already identified in previous review comments.


9-27: Critical: Provide fallback models to ensure provider remains functional when API fails.

Returning an empty array when the API call fails or returns no models will render the CherryIn provider completely unusable, as it will have no models available for selection. Previous review comments identified that fallback models should include actual CherryIn model IDs such as GLM-4.5-Air, glm-4v-flash, Zhipu_glm, and 360gpt-pro, not the placeholder OpenAI model names.

Define a CHERRYIN_FALLBACK_MODELS constant with actual CherryIn models and return it when the API call fails:

+const CHERRYIN_FALLBACK_MODELS: MODEL_META[] = [
+  {
+    id: 'GLM-4.5-Air',
+    name: 'GLM-4.5-Air',
+    group: 'cherryin',
+    providerId: 'cherryin',
+    isCustom: false,
+    contextLength: 128000,
+    maxTokens: 8192,
+    vision: false,
+    functionCall: true,
+    reasoning: false
+  },
+  {
+    id: 'glm-4v-flash',
+    name: 'GLM-4V Flash',
+    group: 'cherryin',
+    providerId: 'cherryin',
+    isCustom: false,
+    contextLength: 2000,
+    maxTokens: 1000,
+    vision: true,
+    functionCall: false,
+    reasoning: false
+  },
+  // Add other CherryIn models as needed
+]
+
 export class CherryInProvider extends OpenAICompatibleProvider {
   constructor(provider: LLM_PROVIDER, configPresenter: IConfigPresenter) {
     super(provider, configPresenter)
   }

   protected async fetchOpenAIModels(options?: { timeout: number }): Promise<MODEL_META[]> {
     try {
       const models = await super.fetchOpenAIModels(options)
       if (models.length > 0) {
         return models.map((model) => ({
           ...model,
           group: model.group === 'default' ? 'cherryin' : model.group,
           providerId: this.provider.id
         }))
       }
     } catch (error) {
       console.warn(
         '[CherryInProvider] Failed to fetch models via API, falling back to defaults',
         error
       )
     }

-    return []
+    return CHERRYIN_FALLBACK_MODELS
   }
 }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 97688f9 and 9cffd66.

📒 Files selected for processing (1)
  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (11)
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English

**/*.{js,jsx,ts,tsx}: Use OxLint for JS/TS code; pre-commit hooks run lint-staged and typecheck
Use camelCase for variables and functions
Use PascalCase for types and classes
Use SCREAMING_SNAKE_CASE for constants

Files:

  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts
src/{main,renderer}/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging

Files:

  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts
src/main/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Use Electron's built-in APIs for file system and native dialogs

Files:

  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/error-logging.mdc)

**/*.{ts,tsx}: Always use try-catch to handle potential errors
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (where applicable), and relevant context
Log levels should include ERROR, WARN, INFO, and DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement error retry mechanisms
Avoid logging sensitive information
Use structured logging
Set appropriate log levels

Files:

  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts
src/main/presenter/llmProviderPresenter/providers/*.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (tool_call_start, tool_call_chunk, tool_call_end) in the standardized format.
Provider implementations should yield stop events with appropriate stop_reason in the standardized format.
Provider implementations should yield error events in the standardized format...

Files:

  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts
src/main/**/*.{ts,js,tsx,jsx}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Main-process code goes in src/main

Files:

  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts
**/*.{ts,tsx,js,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Use English for all logs and comments

Files:

  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts
**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Enable and adhere to strict TypeScript typing (avoid implicit any, prefer precise types)

Files:

  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts
src/main/**

📄 CodeRabbit inference engine (AGENTS.md)

Place all Electron main-process code under src/main/

Files:

  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts
src/main/presenter/**

📄 CodeRabbit inference engine (AGENTS.md)

src/main/presenter/**: Organize main-process presenters under src/main/presenter/ (Window/Tab/Thread/Mcp/Config/LLMProvider)
Follow the Presenter pattern for main-process modules

Files:

  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts
**/*.{js,jsx,ts,tsx,vue}

📄 CodeRabbit inference engine (AGENTS.md)

Apply Prettier formatting: single quotes, no semicolons, max width 100

Files:

  • src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts
🧬 Code graph analysis (1)
src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts (2)
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (1)
  • OpenAICompatibleProvider (69-1499)
src/shared/types/presenters/legacy.presenters.d.ts (1)
  • IConfigPresenter (378-540)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-check (x64)
🔇 Additional comments (1)
src/main/presenter/llmProviderPresenter/providers/cherryInProvider.ts (1)

13-17: CherryIn default group remapping
CherryIn provider remaps models with group === 'default' to 'cherryin', whereas other providers use static group values. Confirm this matches the intended grouping for CherryIn models.
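A minimal sketch of the remapping being discussed, assuming a simplified model shape (the real provider operates on the list returned by OpenAICompatibleProvider.fetchOpenAIModels):

```typescript
// Simplified model entry; names here are illustrative only
interface ModelEntry {
  id: string
  group: string
  providerId?: string
}

// Models whose group is the placeholder 'default' are moved into the
// 'cherryin' group, and every entry is tagged with the provider id
function remapCherryInModels(models: ModelEntry[]): ModelEntry[] {
  return models.map((m) => ({
    ...m,
    group: m.group === 'default' ? 'cherryin' : m.group,
    providerId: 'cherryin'
  }))
}

const remapped = remapCherryInModels([
  { id: 'gpt-4o-mini', group: 'default' },
  { id: 'claude-sonnet', group: 'anthropic' }
])
console.log(remapped.map((m) => m.group).join(',')) // → cherryin,anthropic
```

Models that already carry a non-default group keep it, which is why the static-group convention used by other providers still needs confirming here.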

Comment on lines +19 to +24
    } catch (error) {
      console.warn(
        '[CherryInProvider] Failed to fetch models via API, falling back to defaults',
        error
      )
    }
Contributor


🛠️ Refactor suggestion | 🟠 Major

Improve error logging to follow structured logging guidelines.

The current error handling uses basic console.warn, which does not follow the coding guidelines that require structured logging with timestamps, log levels, error codes, error descriptions, stack traces, and relevant context information.

As per coding guidelines, implement structured error logging:

     } catch (error) {
-      console.warn(
-        '[CherryInProvider] Failed to fetch models via API, falling back to defaults',
-        error
-      )
+      console.error('[CherryInProvider] Failed to fetch models via API', {
+        timestamp: new Date().toISOString(),
+        level: 'ERROR',
+        errorCode: 'CHERRYIN_FETCH_MODELS_FAILED',
+        message: 'Failed to fetch models from CherryIn API, falling back to defaults',
+        error: error instanceof Error ? error.message : String(error),
+        stack: error instanceof Error ? error.stack : undefined,
+        context: {
+          providerId: this.provider.id,
+          timeout: options?.timeout
+        }
+      })
     }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
    } catch (error) {
      console.warn(
        '[CherryInProvider] Failed to fetch models via API, falling back to defaults',
        error
      )
    }
    } catch (error) {
      console.error('[CherryInProvider] Failed to fetch models via API', {
        timestamp: new Date().toISOString(),
        level: 'ERROR',
        errorCode: 'CHERRYIN_FETCH_MODELS_FAILED',
        message: 'Failed to fetch models from CherryIn API, falling back to defaults',
        error: error instanceof Error ? error.message : String(error),
        stack: error instanceof Error ? error.stack : undefined,
        context: {
          providerId: this.provider.id,
          timeout: options?.timeout
        }
      })
    }
