
Conversation

@yyhhyyyyyy (Collaborator) commented Aug 14, 2025

  • Increase model fetch timeout from 5s to 12s for all providers to handle large model lists
  • Add unified timeout configuration in BaseLLMProvider for better maintainability
  • Fix model enable/disable logic in WelcomeView (was inverted)
  • Fix model list scrolling issue by removing conflicting overflow styles
  • Update timeout configuration in OpenAICompatibleProvider and OpenAIResponsesProvider

Summary by CodeRabbit

  • New Features

    • None.
  • Improvements

    • Unified model fetch timeout for more reliable model loading.
  • Bug Fixes

    • Fixed model enable/disable toggle applying the wrong state.
    • Clarified error message when updating model status.
  • UI

    • Adjusted model list container to prevent content clipping while preserving vertical scrolling.

@coderabbitai bot (Contributor) commented Aug 14, 2025

Walkthrough

Adds a centralized default model-fetch timeout in BaseLLMProvider and updates two OpenAI-compatible providers to use it. Fixes a state inversion bug in WelcomeView’s model enable/disable handler and tweaks container CSS by removing overflow-hidden.

Changes

Cohort / File(s): Summary

  • Base timeout centralization (src/main/presenter/llmProviderPresenter/baseProvider.ts)
    Introduces DEFAULT_MODEL_FETCH_TIMEOUT = 12000 and a protected getModelFetchTimeout() for reuse/override.
  • Providers use base timeout (src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts)
    Replaces hard-coded 5000 ms model-fetch timeouts with this.getModelFetchTimeout() in check() implementations.
  • Welcome view fixes (src/renderer/src/views/WelcomeView.vue)
    Corrects the enabled flag passed to updateModelStatus (removes the negation), removes overflow-hidden from the list wrapper, and generalizes the error message.
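For reference, a minimal sketch of the centralized timeout described above, assuming the shape implied by the summary (the actual baseProvider.ts contains many more members):

  // Sketch of src/main/presenter/llmProviderPresenter/baseProvider.ts (shape assumed)
  export abstract class BaseLLMProvider {
    // Single source of truth for the model-fetch timeout (12 s, sized for large model lists)
    protected static readonly DEFAULT_MODEL_FETCH_TIMEOUT = 12000

    // Providers call this from check(); subclasses may override to tune per provider
    protected getModelFetchTimeout(): number {
      return BaseLLMProvider.DEFAULT_MODEL_FETCH_TIMEOUT
    }
  }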

Sequence Diagram(s)

sequenceDiagram
  participant UI as User/Settings
  participant Provider as OpenAI*Provider.check()
  participant Base as BaseLLMProvider
  participant API as OpenAI API

  UI->>Provider: check()
  Provider->>Base: getModelFetchTimeout()
  Base-->>Provider: 12000 ms (default)
  Provider->>API: fetch models (timeout=12000)
  API-->>Provider: models / error
  Provider-->>UI: result
sequenceDiagram
  participant User
  participant View as WelcomeView.vue
  participant Store as updateModelStatus()

  User->>View: Toggle model enabled
  View->>Store: updateModelStatus(modelId, enabled)
  Store-->>View: success/failure
  View-->>User: UI updated / error message

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Poem

A rabbit taps the timeout key,
Twelve seconds set, consistent, free.
No flipsy switch to turn you round—
Enabled means it, solid ground.
Scrolls unclip, the models sing,
Hop-hop—unified fetching spring. 🐇⏳

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🔭 Outside diff range comments (3)
src/renderer/src/views/WelcomeView.vue (3)

363-364: Hardcoded user-facing text — replace with i18n

The renderer must not include hardcoded user-visible strings. Replace "同步模型失败..." with a translation key.

Apply this diff:

-              <div v-if="!providerModelLoading && providerModels.length <= 0">同步模型失败...</div>
+              <div v-if="!providerModelLoading && providerModels.length <= 0">
+                {{ t('welcome.modelSyncFailed') }}
+              </div>

Note: Please add the welcome.modelSyncFailed key to the locale files.


263-265: Placeholders must use i18n keys

User-visible placeholders should be localized. Bind the placeholder to an i18n key.

-                    <SelectValue placeholder="Select a provider" />
+                    <SelectValue :placeholder="t('welcome.provider.selectPlaceholder')" />

Note: Add welcome.provider.selectPlaceholder to locales.


288-301: Localize API URL and API Key placeholders

These placeholders are user-facing and should use i18n keys.

-                  <Input id="api-url" v-model="baseUrl" placeholder="Enter API URL" />
+                  <Input id="api-url" v-model="baseUrl" :placeholder="t('welcome.provider.apiUrlPlaceholder')" />
...
                   <Input
                     id="api-key"
                     v-model="apiKey"
                     type="password"
-                    placeholder="Enter API Key"
+                    :placeholder="t('welcome.provider.apiKeyPlaceholder')"
                   />

Note: Add welcome.provider.apiUrlPlaceholder and welcome.provider.apiKeyPlaceholder to locales.

🧹 Nitpick comments (4)
src/renderer/src/views/WelcomeView.vue (1)

191-205: Remove debug log or downgrade to debug-level

The console.log('newVal', newVal) call looks like leftover debug output. Either remove it or route it through a consistent debug logger behind a debug flag.

-      console.log('newVal', newVal)
src/main/presenter/llmProviderPresenter/baseProvider.ts (1)

63-70: Make timeout overridable via config (optional)

Providing a base getter is great. Consider allowing per-install override via ConfigPresenter so users can tune timeouts without a code change.

-  protected getModelFetchTimeout(): number {
-    return BaseLLMProvider.DEFAULT_MODEL_FETCH_TIMEOUT
-  }
+  protected getModelFetchTimeout(): number {
+    // Optional: allow override via app settings; falls back to default if unset/invalid
+    const value = Number(this.configPresenter.getSetting('modelFetchTimeoutMs'))
+    return Number.isFinite(value) && value > 0
+      ? value
+      : BaseLLMProvider.DEFAULT_MODEL_FETCH_TIMEOUT
+  }

If you prefer provider-specific config, expose a provider-scoped getter in ConfigPresenter and read from it here.

src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (1)

1219-1223: Consider i18n for notification title (renderer-visible)

Notifications shown in the renderer should use translation keys. If your notification UI supports i18n keys, send a key instead of a literal.

Potential change (if your notification system supports titleKey):

-      eventBus.sendToRenderer(NOTIFICATION_EVENTS.SHOW_ERROR, SendTarget.ALL_WINDOWS, {
-        title: 'API Check Failed', // More specific title
+      eventBus.sendToRenderer(NOTIFICATION_EVENTS.SHOW_ERROR, SendTarget.ALL_WINDOWS, {
+        titleKey: 'notifications.apiCheckFailed.title',
         message: errorMessage,
         id: `openai-check-error-${Date.now()}`,
         type: 'error'
       })

Alternatively, keep title but add a titleKey so renderer can prioritize i18n.

src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts (1)

1090-1095: Consider i18n for notification title (renderer-visible)

Same note as the compatible provider: prefer translation keys to literals for renderer notifications if supported.

-      eventBus.sendToRenderer(NOTIFICATION_EVENTS.SHOW_ERROR, SendTarget.ALL_WINDOWS, {
-        title: 'API Check Failed', // More specific title
+      eventBus.sendToRenderer(NOTIFICATION_EVENTS.SHOW_ERROR, SendTarget.ALL_WINDOWS, {
+        titleKey: 'notifications.apiCheckFailed.title',
         message: errorMessage,
         id: `openai-check-error-${Date.now()}`,
         type: 'error'
       })
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these settings in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 7174593 and 1d7028c.

📒 Files selected for processing (4)
  • src/main/presenter/llmProviderPresenter/baseProvider.ts (2 hunks)
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (1 hunks)
  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts (1 hunks)
  • src/renderer/src/views/WelcomeView.vue (2 hunks)
🧰 Additional context used
📓 Path-based instructions (16)
**/*.{ts,tsx,js,jsx,vue}

📄 CodeRabbit Inference Engine (CLAUDE.md)

Use English for logs and comments

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/llmProviderPresenter/baseProvider.ts
  • src/renderer/src/views/WelcomeView.vue
**/*.{ts,tsx}

📄 CodeRabbit Inference Engine (CLAUDE.md)

Strict type checking enabled for TypeScript

**/*.{ts,tsx}: Always use try-catch to handle potential errors
Provide meaningful error messages
Log detailed error information
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (if applicable), and relevant context
Log levels should include ERROR, WARN, INFO, and DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement an error retry mechanism
Avoid logging sensitive information
Use structured logging
Set appropriate log levels

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/llmProviderPresenter/baseProvider.ts
src/main/**/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

Main to Renderer: Use EventBus to broadcast events via mainWindow.webContents.send()

Use Electron's built-in APIs for file system and native dialogs

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/llmProviderPresenter/baseProvider.ts
src/main/presenter/**/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

One presenter per functional domain

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/llmProviderPresenter/baseProvider.ts
src/main/presenter/llmProviderPresenter/providers/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/main/presenter/llmProviderPresenter/providers/*.ts: Create provider file in src/main/presenter/llmProviderPresenter/providers/ when adding a new LLM provider
Implement coreStream method following standardized event interface in LLM provider files

src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (`tool_call_star...

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit Inference Engine (.cursor/rules/development-setup.mdc)

**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/llmProviderPresenter/baseProvider.ts
src/{main,renderer}/**/*.ts

📄 CodeRabbit Inference Engine (.cursor/rules/electron-best-practices.mdc)

src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/llmProviderPresenter/baseProvider.ts
src/main/**/*.{ts,js,tsx,jsx}

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

Main process code belongs in src/main

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/llmProviderPresenter/baseProvider.ts
src/renderer/src/**/*.vue

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/renderer/src/**/*.vue: Use Composition API for all Vue 3 components
Use Tailwind CSS with scoped styles for styling
Organize components by feature in src/renderer/src/
Follow existing component patterns in src/renderer/src/ when creating new UI components
Use Composition API with proper TypeScript typing for new UI components
Implement responsive design with Tailwind CSS for new UI components
Add proper error handling and loading states for new UI components

Use scoped styles to prevent CSS conflicts between components

Files:

  • src/renderer/src/views/WelcomeView.vue
src/renderer/src/**/*.{ts,tsx,vue}

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/renderer/src/**/*.{ts,tsx,vue}: Use Pinia for frontend state management
Renderer to Main: Use usePresenter.ts composable for direct presenter method calls

Files:

  • src/renderer/src/views/WelcomeView.vue
src/renderer/src/**/*

📄 CodeRabbit Inference Engine (.cursor/rules/i18n.mdc)

src/renderer/src/**/*: All user-facing strings must use i18n keys (avoid hardcoded user-visible text in code)
Use the 'vue-i18n' framework for all internationalization in the renderer
Ensure all user-visible text in the renderer uses the translation system

Files:

  • src/renderer/src/views/WelcomeView.vue
src/renderer/**/*.{vue,ts,js,tsx,jsx}

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

Renderer process code belongs in src/renderer

Files:

  • src/renderer/src/views/WelcomeView.vue
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-best-practices.mdc)

src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability
Implement proper state management with Pinia
Utilize Vue Router for navigation and route management
Leverage Vue's built-in reactivity system for efficient data handling

Files:

  • src/renderer/src/views/WelcomeView.vue
src/renderer/**/*.{ts,tsx,vue}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,tsx,vue}: Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
Use TypeScript for all code; prefer types over interfaces.
Avoid enums; use const objects instead.
Use arrow functions for methods and computed properties.
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements.

Files:

  • src/renderer/src/views/WelcomeView.vue
src/renderer/**/*.{vue,ts}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)

Implement lazy loading for routes and components.

Files:

  • src/renderer/src/views/WelcomeView.vue
src/renderer/**/*.{ts,vue}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching.
Implement SEO best practices using Nuxt's useHead and useSeoMeta.

Files:

  • src/renderer/src/views/WelcomeView.vue
🧠 Learnings (2)
📚 Learning: 2025-07-21T01:45:33.790Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add provider configuration in configPresenter/providers.ts when adding a new LLM provider

Applied to files:

  • src/main/presenter/llmProviderPresenter/baseProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield events asynchronously using the async generator pattern.

Applied to files:

  • src/main/presenter/llmProviderPresenter/baseProvider.ts
🔇 Additional comments (5)
src/renderer/src/views/WelcomeView.vue (2)

214-217: Fixed inverted toggle logic — correct and aligned with intent

Passing the actual enabled value to updateModelStatus (instead of negation) is the right fix. The updated error log message is also clearer.
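For illustration, the fix amounts to dropping the negation when calling the store (the variable names here are hypothetical; only the removed ! is confirmed by the diff):

  -      await updateModelStatus(model.id, !enabled)
  +      await updateModelStatus(model.id, enabled)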


342-343: Restoring scrollability by removing overflow-hidden is appropriate

Dropping overflow-hidden from the container resolves the model list scrolling issue while retaining overflow-y-auto. Good targeted fix.

src/main/presenter/llmProviderPresenter/baseProvider.ts (1)

33-34: Centralized default model fetch timeout (12s) — good

Introducing DEFAULT_MODEL_FETCH_TIMEOUT centralizes timeout policy and aligns providers on a sane default for large model lists.

src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (1)

1201-1203: Unified timeout usage — correct and consistent

Using this.getModelFetchTimeout() in check() removes the hard-coded 5s and aligns with the base provider’s policy.

src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts (1)

1073-1075: Unified timeout usage — correct and consistent

Replacing the hard-coded 5s with this.getModelFetchTimeout() is aligned with the new base configuration and improves reliability for large model lists.
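For context, a minimal sketch of how a check() implementation might apply the shared timeout with an AbortController; the real providers may instead pass the timeout to the OpenAI SDK, and the endpoint and helper name here are assumptions:

  // Hypothetical helper, not the actual check() body
  protected async fetchModelsWithTimeout(baseUrl: string, apiKey: string): Promise<unknown> {
    const controller = new AbortController()
    const timer = setTimeout(() => controller.abort(), this.getModelFetchTimeout())
    try {
      // Assumed OpenAI-compatible models endpoint
      const response = await fetch(`${baseUrl}/models`, {
        headers: { Authorization: `Bearer ${apiKey}` },
        signal: controller.signal
      })
      if (!response.ok) throw new Error(`Model fetch failed with status ${response.status}`)
      return await response.json()
    } finally {
      // Clear the timer so an early response does not leave a pending abort
      clearTimeout(timer)
    }
  }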

@zerob13 merged commit 8799ea7 into dev on Aug 15, 2025
2 checks passed
@zerob13 deleted the fix/provider-model-sync-and-ui-issues branch on November 23, 2025 13:15