
Conversation

Collaborator

@yyhhyyyyyy yyhhyyyyyy commented Oct 22, 2025

Parse reasoning.effort and reasoning.verbosity from Provider DB and sanitize into model schema

Summary by CodeRabbit

  • New Features
    • Added configurable reasoning effort (minimal, low, medium, high) and verbosity (low, medium, high) settings for compatible models.
    • Enhanced capability detection system ensures reasoning and verbosity parameters are only sent to models that explicitly support them.
    • Replaced model-specific capability detection with a generalized system for broader provider compatibility.

Contributor

coderabbitai bot commented Oct 22, 2025

Walkthrough

This change extends the application's model configuration system to support reasoning effort and verbosity capabilities. It adds effort and verbosity fields to the provider database schema, implements capability checking methods across presenter layers, gates OpenAI provider parameter assignment based on model support, and replaces GPT-5-specific UI logic with generalized capability checks.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Provider Database and Sanitization: resources/model-db/providers.json, scripts/fetch-provider-db.mjs | Extended the reasoning configuration with effort and verbosity fields; updated the sanitization script to extract and validate these fields from upstream provider models. |
| Type Definitions: src/shared/types/model-db.ts, src/shared/types/presenters/legacy.presenters.d.ts | Added ReasoningSchema fields for effort and verbosity with validation helpers; extended the IConfigPresenter interface with four new optional methods for capability and default queries. |
| Presenter Capabilities Layer: src/main/presenter/configPresenter/index.ts, src/main/presenter/configPresenter/modelCapabilities.ts, src/main/presenter/configPresenter/modelConfig.ts | Added ConfigPresenter methods delegating to modelCapabilities for effort/verbosity support checks and defaults; extended ModelCapabilities with capability detection and default retrieval; updated ModelConfigHelper to populate effort/verbosity from provider models. |
| OpenAI Provider Implementation: src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts, src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts | Added private helper methods to check model support for the effort and verbosity parameters; guarded assignment of reasoning_effort and verbosity in request payloads behind capability checks. |
| Renderer UI and Composables: src/renderer/src/components/settings/ModelConfigDialog.vue, src/renderer/src/composables/useChatConfigFields.ts | Replaced GPT-5-specific verbosity gating with generalized capabilitySupportsVerbosity checks; added capability state holders for effort and defaults; removed the isGPT5Model logic. |
| Tests: test/main/presenter/providerDbModelConfig.test.ts | Extended test expectations to verify the reasoningEffort and verbosity properties in the model configuration. |

Sequence Diagram(s)

sequenceDiagram
    participant PDB as Provider DB
    participant FPS as fetch-provider-db.mjs
    participant MC as ModelCapabilities
    participant OAI as OpenAI Provider
    participant UI as ModelConfigDialog

    PDB->>FPS: Extract effort/verbosity from reasoning
    FPS->>FPS: Validate & sanitize fields
    FPS->>MC: Store validated effort/verbosity
    
    UI->>MC: Check supportsEffort/Verbosity
    MC-->>UI: Return capability boolean
    
    UI->>MC: Get effort/verbosity defaults
    MC-->>UI: Return defaults
    UI->>UI: Render fields conditionally
    
    UI->>OAI: Send request with config
    OAI->>MC: Check supportsEffortParameter(modelId)
    MC-->>OAI: Return capability
    
    alt Capability supported
        OAI->>OAI: Include reasoning_effort param
    else Capability not supported
        OAI->>OAI: Omit reasoning_effort param
    end
    OAI-->>UI: Response

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

The changes follow a consistent pattern of adding capability-driven gating across multiple layers (types, presenters, providers, UI), but require separate verification in each domain due to different implementation contexts and integration points.
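The cross-layer gating pattern described above can be sketched in TypeScript. All names here (providerDb, supportsReasoningEffort, buildPayload) are illustrative stand-ins, not the actual DeepChat API; the real checks live in modelCapabilities.ts and the provider classes.

```typescript
// Illustrative sketch: a capability check derived from a provider-DB entry,
// and a payload builder that only attaches the parameter when supported.
type Effort = 'minimal' | 'low' | 'medium' | 'high'

interface ModelEntry {
  reasoning?: { effort?: Effort }
}

// Hypothetical provider DB slice
const providerDb: Record<string, ModelEntry> = {
  'gpt-5': { reasoning: { effort: 'medium' } },
  'gpt-4o': {}
}

function supportsReasoningEffort(modelId: string): boolean {
  // Support is declared by the presence of a string effort field in the DB
  return typeof providerDb[modelId]?.reasoning?.effort === 'string'
}

function buildPayload(modelId: string, effort: Effort): Record<string, unknown> {
  const payload: Record<string, unknown> = { model: modelId }
  // Gate the parameter so unsupported models never receive it
  if (supportsReasoningEffort(modelId)) {
    payload.reasoning_effort = effort
  }
  return payload
}
```

The same shape repeats for verbosity; the key property is that the request payload omits the field entirely (rather than sending a null) when the model lacks the capability.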

Possibly related PRs

Suggested reviewers

  • zerob13

Poem

🐰 With effort measured and verbosity controlled,
Reasoning now speaks in tones more bold,
Capabilities checked, smart gating unfolds,
From database deep to UI's gentle holds—
A rabbit's refactor, in patterns consoled! 🌟

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 16.67%, below the required threshold of 80.00%. | Run @coderabbitai generate docstrings to improve docstring coverage. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title "feat: support effort and verbosity" directly reflects the main objective: parsing and sanitizing reasoning effort and verbosity from the Provider DB into the model schema. It is concise, follows commit convention, and is specific enough for a teammate scanning commit history. |

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
src/shared/types/model-db.ts (1)

134-157: Consider consolidating the helper functions.

Both getEffortValue and getVerbosityValue follow the same pattern: type check then switch statement for validation. This could be simplified using a generic helper or by leveraging the Zod schema for validation.

Example generic helper:

+function getEnumValue<T extends string>(v: unknown, validValues: readonly T[]): T | undefined {
+  if (typeof v !== 'string') return undefined
+  return validValues.includes(v as T) ? (v as T) : undefined
+}
+
-function getEffortValue(v: unknown): ReasoningEffort | undefined {
-  if (typeof v !== 'string') return undefined
-  switch (v) {
-    case 'minimal':
-    case 'low':
-    case 'medium':
-    case 'high':
-      return v
-    default:
-      return undefined
-  }
-}
+const EFFORT_VALUES = ['minimal', 'low', 'medium', 'high'] as const
+const VERBOSITY_VALUES = ['low', 'medium', 'high'] as const
+
+function getEffortValue(v: unknown): ReasoningEffort | undefined {
+  return getEnumValue(v, EFFORT_VALUES)
+}
 
-function getVerbosityValue(v: unknown): Verbosity | undefined {
-  if (typeof v !== 'string') return undefined
-  switch (v) {
-    case 'low':
-    case 'medium':
-    case 'high':
-      return v
-    default:
-      return undefined
-  }
-}
+function getVerbosityValue(v: unknown): Verbosity | undefined {
+  return getEnumValue(v, VERBOSITY_VALUES)
+}
src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts (1)

75-81: Consider extracting duplicated capability helpers to base class.

These helper methods are identical to those in openAICompatibleProvider.ts (lines 85-91). Since both providers inherit from BaseLLMProvider, consider moving these methods to the base class to reduce duplication.

In baseProvider.ts:

protected supportsEffortParameter(modelId: string): boolean {
  return modelCapabilities.supportsReasoningEffort(this.provider.id, modelId)
}

protected supportsVerbosityParameter(modelId: string): boolean {
  return modelCapabilities.supportsVerbosity(this.provider.id, modelId)
}

Then remove the private implementations from both OpenAI provider classes.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f7527c1 and ea1b169.

📒 Files selected for processing (12)
  • resources/model-db/providers.json (11 hunks)
  • scripts/fetch-provider-db.mjs (1 hunks)
  • src/main/presenter/configPresenter/index.ts (1 hunks)
  • src/main/presenter/configPresenter/modelCapabilities.ts (1 hunks)
  • src/main/presenter/configPresenter/modelConfig.ts (1 hunks)
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (3 hunks)
  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts (4 hunks)
  • src/renderer/src/components/settings/ModelConfigDialog.vue (6 hunks)
  • src/renderer/src/composables/useChatConfigFields.ts (1 hunks)
  • src/shared/types/model-db.ts (3 hunks)
  • src/shared/types/presenters/legacy.presenters.d.ts (1 hunks)
  • test/main/presenter/providerDbModelConfig.test.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (30)
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English

Files:

  • test/main/presenter/providerDbModelConfig.test.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/renderer/src/composables/useChatConfigFields.ts
  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/configPresenter/index.ts
  • src/shared/types/model-db.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/configPresenter/modelConfig.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/error-logging.mdc)

**/*.{ts,tsx}: Always handle potential errors with try-catch
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (where applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement error retry mechanisms
Avoid logging sensitive information
Use structured logging
Set appropriate log levels

Files:

  • test/main/presenter/providerDbModelConfig.test.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/renderer/src/composables/useChatConfigFields.ts
  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/configPresenter/index.ts
  • src/shared/types/model-db.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/configPresenter/modelConfig.ts
**/*.{ts,tsx,js,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Use English for all logs and comments

Files:

  • test/main/presenter/providerDbModelConfig.test.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/renderer/src/composables/useChatConfigFields.ts
  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/configPresenter/index.ts
  • src/shared/types/model-db.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/main/presenter/configPresenter/modelConfig.ts
**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Enable and adhere to strict TypeScript typing (avoid implicit any, prefer precise types)

Use PascalCase for TypeScript types and classes

Files:

  • test/main/presenter/providerDbModelConfig.test.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/renderer/src/composables/useChatConfigFields.ts
  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/configPresenter/index.ts
  • src/shared/types/model-db.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/main/presenter/configPresenter/modelConfig.ts
test/**/*

📄 CodeRabbit inference engine (CLAUDE.md)

Place unit and integration tests under the test/ directory mirroring project structure

Files:

  • test/main/presenter/providerDbModelConfig.test.ts
test/{main,renderer}/**/*

📄 CodeRabbit inference engine (AGENTS.md)

Mirror source structure for tests under test/main/** and test/renderer/**

Files:

  • test/main/presenter/providerDbModelConfig.test.ts
test/{main,renderer}/**/*.{test,spec}.ts

📄 CodeRabbit inference engine (AGENTS.md)

Name test files with .test.ts or .spec.ts

Files:

  • test/main/presenter/providerDbModelConfig.test.ts
**/*.{ts,tsx,js,jsx,vue,css,scss,md,json,yml,yaml}

📄 CodeRabbit inference engine (AGENTS.md)

Prettier style: single quotes, no semicolons, print width 100; run pnpm run format

Files:

  • test/main/presenter/providerDbModelConfig.test.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/renderer/src/composables/useChatConfigFields.ts
  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/configPresenter/index.ts
  • src/shared/types/model-db.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • resources/model-db/providers.json
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/main/presenter/configPresenter/modelConfig.ts
**/*.{ts,tsx,js,jsx,vue}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx,js,jsx,vue}: Use OxLint for JS/TS code; keep lint clean
Use camelCase for variables and functions
Use SCREAMING_SNAKE_CASE for constants

Files:

  • test/main/presenter/providerDbModelConfig.test.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/renderer/src/composables/useChatConfigFields.ts
  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/configPresenter/index.ts
  • src/shared/types/model-db.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/main/presenter/configPresenter/modelConfig.ts
src/shared/**/*.{ts,tsx,d.ts}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Place shared type definitions in the shared directory

Files:

  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/shared/types/model-db.ts
src/shared/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Place shared types, utilities, constants, and IPC contract definitions under src/shared/

Files:

  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/shared/types/model-db.ts
src/shared/**

📄 CodeRabbit inference engine (AGENTS.md)

Put shared TypeScript types and utilities under src/shared

Files:

  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/shared/types/model-db.ts
src/{main,renderer}/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging

Files:

  • src/renderer/src/composables/useChatConfigFields.ts
  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/configPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/configPresenter/modelConfig.ts
src/renderer/src/**/*

📄 CodeRabbit inference engine (.cursor/rules/i18n.mdc)

src/renderer/src/**/*: All user-facing strings must use i18n keys (avoid hardcoded user-visible text in code)
Use the 'vue-i18n' framework for all internationalization in the renderer
Ensure all user-visible text in the renderer uses the translation system

Files:

  • src/renderer/src/composables/useChatConfigFields.ts
  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.{vue,ts,js,tsx,jsx}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Place renderer process code in src/renderer

Files:

  • src/renderer/src/composables/useChatConfigFields.ts
  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}

📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)

src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability
Implement proper state management with Pinia
Utilize Vue Router for navigation and route management
Leverage Vue's built-in reactivity system for efficient data handling

Files:

  • src/renderer/src/composables/useChatConfigFields.ts
  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,tsx,vue}: Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
Use TypeScript for all code; prefer types over interfaces.
Avoid enums; use const objects instead.
Use arrow functions for methods and computed properties.
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements.

Files:

  • src/renderer/src/composables/useChatConfigFields.ts
  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.{vue,ts}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

Implement lazy loading for routes and components.

Files:

  • src/renderer/src/composables/useChatConfigFields.ts
  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.{ts,vue}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching.
Implement SEO best practices using Nuxt's useHead and useSeoMeta.

Use Pinia for frontend state management (do not introduce alternative state libraries)

Files:

  • src/renderer/src/composables/useChatConfigFields.ts
  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/src/**

📄 CodeRabbit inference engine (AGENTS.md)

Place Vue 3 app source under src/renderer/src (components, stores, views, i18n, lib)

Files:

  • src/renderer/src/composables/useChatConfigFields.ts
  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/src/**/*.{vue,ts}

📄 CodeRabbit inference engine (AGENTS.md)

All user-facing strings must use vue-i18n ($t/keys) rather than hardcoded literals

Files:

  • src/renderer/src/composables/useChatConfigFields.ts
  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/main/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Use Electron's built-in APIs for file system and native dialogs

Files:

  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/configPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/configPresenter/modelConfig.ts
src/main/**/*.{ts,js,tsx,jsx}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Place main process code in src/main

Files:

  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/configPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/configPresenter/modelConfig.ts
src/main/presenter/**/*.ts

📄 CodeRabbit inference engine (AGENTS.md)

Place Electron main-process presenters under src/main/presenter/ (Window, Tab, Thread, Mcp, Config, LLMProvider)

Files:

  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/configPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/configPresenter/modelConfig.ts
src/main/presenter/llmProviderPresenter/providers/*.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (tool_call_start, tool_call_chunk, tool_call_end) in the standardized format.
Provider implementations should yield stop events with appropriate stop_reason in the standardized format.
Provider implementations should yield error events in the standardized format...

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
{build,resources,out,dist}/**

📄 CodeRabbit inference engine (AGENTS.md)

Treat build outputs and assets (build/, resources/, out/, dist/) as generated artifacts; do not hand-edit

Files:

  • resources/model-db/providers.json
src/renderer/src/**/*.vue

📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)

Use scoped styles to prevent CSS conflicts between components

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/{src,shell,floating}/**/*.vue

📄 CodeRabbit inference engine (CLAUDE.md)

src/renderer/{src,shell,floating}/**/*.vue: Use Vue 3 Composition API for all components
All user-facing strings must use i18n keys via vue-i18n (no hard-coded UI strings)
Use Tailwind CSS utilities and ensure styles are scoped in Vue components

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/src/components/**/*

📄 CodeRabbit inference engine (CLAUDE.md)

Organize UI components by feature within src/renderer/src/

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.vue

📄 CodeRabbit inference engine (AGENTS.md)

Name Vue component files in PascalCase (e.g., ChatInput.vue)

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
🧠 Learnings (1)
📚 Learning: 2025-09-06T03:07:23.817Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-06T03:07:23.817Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add provider configuration entries in src/main/presenter/configPresenter/providers.ts

Applied to files:

  • src/main/presenter/configPresenter/index.ts
🧬 Code graph analysis (4)
src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts (1)
src/main/presenter/configPresenter/modelCapabilities.ts (1)
  • modelCapabilities (177-177)
src/main/presenter/configPresenter/index.ts (1)
src/main/presenter/configPresenter/modelCapabilities.ts (1)
  • modelCapabilities (177-177)
src/shared/types/model-db.ts (1)
scripts/fetch-provider-db.mjs (2)
  • effort (80-80)
  • verbosity (84-84)
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (1)
src/main/presenter/configPresenter/modelCapabilities.ts (1)
  • modelCapabilities (177-177)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-check (x64)
🔇 Additional comments (22)
src/shared/types/presenters/legacy.presenters.d.ts (1)

399-405: LGTM! Well-structured capability interface additions.

The new methods follow established patterns and TypeScript best practices:

  • Naming is consistent with existing capability methods (supportsReasoningCapability, supportsSearchCapability)
  • Return types use precise literal unions that match field definitions throughout the file (lines 147-148, 740-741, 1388-1389)
  • Optional method signatures (?) maintain backward compatibility
  • Return type of | undefined for getters clearly indicates when a default isn't available
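A sketch of what such optional interface additions might look like. The method names below are inferred from the review discussion, not copied verbatim from legacy.presenters.d.ts:

```typescript
type ReasoningEffort = 'minimal' | 'low' | 'medium' | 'high'
type Verbosity = 'low' | 'medium' | 'high'

// Hypothetical shape of the four optional capability/default queries
interface ReasoningCapabilityQueries {
  supportsReasoningEffort?(providerId: string, modelId: string): boolean
  supportsVerbosity?(providerId: string, modelId: string): boolean
  getReasoningEffortDefault?(providerId: string, modelId: string): ReasoningEffort | undefined
  getVerbosityDefault?(providerId: string, modelId: string): Verbosity | undefined
}

// Optional methods keep older implementers compiling unchanged:
// a partial implementation is still a valid instance of the interface.
const partialImpl: ReasoningCapabilityQueries = {
  supportsReasoningEffort: () => true,
  getReasoningEffortDefault: () => 'medium'
}
```

Callers use optional chaining (impl.supportsVerbosity?.(...)), so a missing method simply reads as "capability unknown" rather than a runtime error.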
resources/model-db/providers.json (2)

3333-3369: Clarify file classification: is this a source or generated artifact?

The coding guidelines classify resources/ as a generated artifact that should not be hand-edited. However, this PR intentionally modifies resources/model-db/providers.json as a source configuration file. If this file is indeed meant to be manually maintained (as the PR changes suggest), the coding guidelines may need updating to clarify which resources are generated vs. source-maintained.

Can you confirm whether resources/model-db/providers.json is:

  1. A source configuration file that is manually maintained (in which case the guidelines should exclude it), or
  2. A generated artifact that should not be directly modified?

If it's a source file, I can then verify data consistency across all effort/verbosity assignments.


3333-69869: Verify JSON validity and consistency of the complete file.

While the provided code snippets show syntactically correct JSON structures, a full-file validation is needed to ensure:

  1. No malformed JSON (unclosed braces, missing commas)
  2. Consistent indentation and formatting per Prettier standards (print width 100)
  3. All modified reasoning objects have coherent and consistent effort/verbosity assignments
src/renderer/src/composables/useChatConfigFields.ts (1)

211-212: LGTM! Capability-based gating correctly implemented.

The change from isGPT5Model.value to checking options.verbosity.value !== undefined properly generalizes the verbosity field visibility to be capability-driven rather than model-specific. This aligns with the reasoning effort pattern above (line 160) and the broader capability checks introduced across the codebase.

src/main/presenter/configPresenter/modelConfig.ts (1)

67-73: LGTM! Type assertions are explicit and correct.

The reasoningEffort and verbosity fields are properly populated from the provider model with explicit type assertions. The use of ?? undefined clearly indicates optional nature, and the type casts to union types ('minimal' | 'low' | 'medium' | 'high' | undefined and 'low' | 'medium' | 'high' | undefined) ensure type safety downstream.

scripts/fetch-provider-db.mjs (1)

79-87: LGTM! Sanitization logic is robust.

The extraction and validation of effort and verbosity fields follows the established pattern for other reasoning properties. Type checking (typeof ... === 'string') and value validation (includes()) ensure only valid enum values are propagated to the sanitized schema.
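The validation pattern described here, a type check followed by a membership test, can be sketched as follows. The actual script is plain JavaScript in fetch-provider-db.mjs; the helper name and shapes below are illustrative:

```typescript
const EFFORT_VALUES = ['minimal', 'low', 'medium', 'high'] as const
type Effort = (typeof EFFORT_VALUES)[number]

// Extract and validate `effort` from an untrusted upstream reasoning object;
// anything that is not a known string value is dropped entirely.
function sanitizeEffort(reasoning: unknown): Effort | undefined {
  if (typeof reasoning !== 'object' || reasoning === null) return undefined
  const effort = (reasoning as { effort?: unknown }).effort
  return typeof effort === 'string' && (EFFORT_VALUES as readonly string[]).includes(effort)
    ? (effort as Effort)
    : undefined
}
```

Because invalid values collapse to undefined rather than passing through, downstream consumers of the sanitized schema never see out-of-range enum values.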

test/main/presenter/providerDbModelConfig.test.ts (2)

69-74: LGTM! Mock data correctly includes new fields.

The test mock properly includes effort and verbosity in the reasoning object, matching the extended schema structure.


100-101: LGTM! Test assertions validate new fields.

The test correctly asserts that reasoningEffort and verbosity are populated from the provider DB configuration.

src/shared/types/model-db.ts (2)

17-18: LGTM! Schema extensions are well-defined.

The effort and verbosity fields are properly added to ReasoningSchema with appropriate Zod enum validators and marked as optional, consistent with the schema design.


178-189: LGTM! getReasoning properly extracts new fields.

The function correctly extracts effort and verbosity using the helper functions and includes them in the returned object when any reasoning property exists.

src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (2)

85-91: LGTM! Helper methods properly delegate to capability layer.

The private helper methods cleanly encapsulate capability checks by delegating to the modelCapabilities singleton, improving code readability.


658-665: LGTM! Parameter assignment correctly guarded by capabilities.

Both reasoningEffort and verbosity parameters are properly gated behind capability checks, ensuring only supported parameters are sent to the API. The pattern is consistent and defensive.

src/main/presenter/configPresenter/modelCapabilities.ts (2)

117-125: LGTM! Capability checks are straightforward and correct.

The support check methods properly verify that the reasoning fields exist and are strings, aligning with the schema validation pattern.


127-140: LGTM! Default getters properly validate enum values.

The methods safely return default values with proper enum validation, using explicit OR conditions to ensure type safety. Returning undefined for invalid values is a safe fallback.
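The explicit-OR validation the review praises might look roughly like this (a hypothetical helper, not the actual modelCapabilities code):

```typescript
type Verbosity = 'low' | 'medium' | 'high'

// Validate an untrusted default value; the explicit OR of literal comparisons
// doubles as a TypeScript type guard, narrowing `raw` inside the branch.
function getVerbosityDefault(raw: unknown): Verbosity | undefined {
  if (raw === 'low' || raw === 'medium' || raw === 'high') {
    return raw // narrowed to Verbosity by the comparisons above
  }
  return undefined // safe fallback for missing or invalid values
}
```

The explicit comparisons avoid any casts: the compiler proves the return value is in the union, so a typo in the DB can only surface as undefined, never as a bad string.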

src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts (2)

242-253: LGTM! Capability guards correctly applied in completion method.

The reasoningEffort and verbosity parameters are properly guarded and nested in the correct request structure for this provider.


575-586: LGTM! Capability guards correctly applied in streaming method.

The guards consistently protect both the standard completion and streaming paths, ensuring unsupported parameters are never sent.

src/main/presenter/configPresenter/index.ts (1)

234-251: LGTM! Clean capability delegation.

The four new methods properly expose reasoning effort and verbosity capabilities by delegating to modelCapabilities. The implementation is consistent with existing capability methods and follows the established pattern.

src/renderer/src/components/settings/ModelConfigDialog.vue (5)

194-194: Good: Generalized comment aligns with capability-driven gating.

The comment update removes GPT-5 specific references and correctly describes the new capability-based visibility logic.


601-601: LGTM! Capability-driven computed properties.

The new supportsVerbosity and updated supportsReasoningEffort computed properties correctly derive UI visibility from capability state using strict boolean checks.

Also applies to: 610-610


627-630: Well-typed capability state refs.

The new refs for effort and verbosity capabilities use appropriate nullable types and follow the established pattern for capability state management.


640-668: Solid capability fetching implementation.

The fetchCapabilities function correctly:

  • Fetches effort and verbosity capabilities in parallel with existing checks
  • Populates the new state refs with results
  • Resets capability state on error to prevent stale data
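The parallel fetch with error reset can be sketched like this (a simplified stand-in; the `CapabilityApi` interface and state shape are assumptions for the example):

```typescript
interface CapabilityApi {
  supportsEffort(model: string): Promise<boolean>
  supportsVerbosity(model: string): Promise<boolean>
}

// Fetch both checks in parallel; on any failure, reset to null so the
// UI never renders controls from stale capability data.
async function fetchCapabilities(api: CapabilityApi, model: string) {
  const state = { effort: null as boolean | null, verbosity: null as boolean | null }
  try {
    const [effort, verbosity] = await Promise.all([
      api.supportsEffort(model),
      api.supportsVerbosity(model)
    ])
    state.effort = effort
    state.verbosity = verbosity
  } catch {
    state.effort = null
    state.verbosity = null
  }
  return state
}

// Stub API standing in for the real IPC-backed capability queries.
const okApi: CapabilityApi = {
  supportsEffort: async () => true,
  supportsVerbosity: async () => false
}
fetchCapabilities(okApi, 'example-model').then((s) => console.log(s))
```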

485-497: Correct default application logic.

The initialization properly applies capability defaults for effort and verbosity only when:

  1. The config is not user-defined (preserves user customization)
  2. The capability is supported
  3. No existing value is set

This ensures defaults are applied appropriately without overwriting user preferences.
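The three-condition gate can be sketched as a pure function (an illustration only; the `ModelConfig` shape and helper name are hypothetical):

```typescript
interface ModelConfig {
  isUserDefined: boolean
  reasoningEffort?: string
}

// Apply the provider default only when the config is not user-defined,
// the capability is supported, and no value is already set.
function applyEffortDefault(
  config: ModelConfig,
  supported: boolean,
  defaultEffort: string | undefined
): ModelConfig {
  if (!config.isUserDefined && supported && !config.reasoningEffort && defaultEffort) {
    return { ...config, reasoningEffort: defaultEffort }
  }
  return config
}

const applied = applyEffortDefault({ isUserDefined: false }, true, 'medium') // default applied
const kept = applyEffortDefault({ isUserDefined: true }, true, 'medium') // user config preserved
```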

@zerob13 zerob13 merged commit e4b12ae into dev Oct 22, 2025
2 checks passed
zerob13 added a commit that referenced this pull request Oct 22, 2025
* style(settings): format about page link handler (#1016)

* style(ollama): format model config handlers (#1018)

* fix: think content scroll issue (#1023)

* fix: remove shimmer for think content

* chore: update screen shot and fix scroll issue

* chore: update markdown renderer

* fix: import button bug and prevent backup overwriting during import (#1024)

* fix(sync): fix import button bug and prevent backup overwriting during import

* fix(sync): fix import button bug and prevent backup overwriting during import

* fix(sync): fix import button bug and prevent backup overwriting during import

* refactor(messageList): refactor message list ui components (#1026)

* feat: remove new thread button, add clean button.

* refactor(messageList): refactor message list ui components

* feat: add configurable fields for chat settings

- Introduced ConfigFieldHeader component for consistent field headers.
- Added ConfigInputField, ConfigSelectField, ConfigSliderField, and ConfigSwitchField components for various input types.
- Created types for field configurations in types.ts to standardize field definitions.
- Implemented useChatConfigFields composable to manage field configurations dynamically.
- Added useModelCapabilities and useModelTypeDetection composables for handling model-specific capabilities and requirements.
- Developed useSearchConfig and useThinkingBudget composables for managing search and budget configurations.

* feat: implement input history management in prompt input

- Added `useInputHistory` composable for managing input history and navigation.
- Implemented methods for setting, clearing, and confirming history placeholders.
- Integrated arrow key navigation for browsing through input history.

feat: enhance mention data handling in prompt input

- Created `useMentionData` composable to aggregate mention data from selected files and MCP resources.
- Implemented watchers to update mention data based on selected files, MCP resources, tools, and prompts.

feat: manage prompt input configuration with store synchronization

- Developed `usePromptInputConfig` composable for managing model configuration.
- Implemented bidirectional sync between local config and chat store.
- Added debounced watcher to reduce updates and improve performance.
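The debounced-watcher idea in this commit can be sketched with a minimal trailing-edge debounce (a generic illustration, not the composable's actual code):

```typescript
// Trailing-edge debounce: a burst of local edits produces a single
// store update once input pauses for the wait window.
function debounce<A extends unknown[]>(fn: (...args: A) => void, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined
  return (...args: A) => {
    if (timer) clearTimeout(timer)
    timer = setTimeout(() => fn(...args), waitMs)
  }
}

const updates: string[] = []
const pushToStore = debounce((value: string) => updates.push(value), 50)

pushToStore('a')
pushToStore('ab')
pushToStore('abc') // only this call survives the 50 ms window
setTimeout(() => console.log(updates), 150) // logs the single surviving value
```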

feat: streamline TipTap editor operations in prompt input

- Introduced `usePromptInputEditor` composable for managing TipTap editor lifecycle and content transformation.
- Implemented methods for handling mentions, pasting content, and clearing editor content.

feat: handle file operations in prompt input

- Created `usePromptInputFiles` composable for managing file selection, paste, and drag-drop operations.
- Implemented methods for processing files, handling dropped files, and clearing selected files.

feat: manage rate limit status in prompt input

- Developed `useRateLimitStatus` composable for displaying and polling rate limit status.
- Implemented methods for handling rate limit events and computing status icons, classes, and tooltips.

* refactor(artifacts): migrate component logic to composables and update documentation

- Refactor ArtifactDialog.vue to use composables for view mode, viewport size, code editor, and export functionality
- Simplify HTMLArtifact.vue by removing drag-resize logic and using fixed viewport dimensions
- Clean up MermaidArtifact.vue styling and structure
- Update component refactoring guide to reflect new patterns and best practices
- Adjust prompt input composable to allow delayed editor initialization
- Update internationalization files for new responsive label

* fix(lint): unused variables

* fix(format): format code

* CodeRabbit Generated Unit Tests: Add renderer unit tests for components and composables

* feat: implement input history management in chat input component

- Added `useInputHistory` composable for managing input history and placeholder navigation.
- Implemented methods for setting, clearing, and confirming history placeholders.
- Integrated arrow key navigation for cycling through input history.

feat: enhance mention data handling in chat input

- Created `useMentionData` composable to manage mention data aggregation.
- Implemented watchers for selected files and MCP resources/tools/prompts to update mention data.

feat: manage prompt input configuration and synchronization

- Developed `usePromptInputConfig` composable for managing model configuration.
- Implemented bidirectional sync between local config refs and chat store.
- Added debounced watcher to reduce updates to the store.

feat: manage prompt input editor operations

- Introduced `usePromptInputEditor` composable for handling TipTap editor operations.
- Implemented content transformation, mention insertion, and paste handling.
- Added methods for handling editor updates and restoring focus.

feat: handle prompt input files management

- Created `usePromptInputFiles` composable for managing file operations in prompt input.
- Implemented file selection, paste, drag-drop, and prompt files integration.

feat: implement rate limit status management

- Developed `useRateLimitStatus` composable for managing rate limit status display and polling.
- Added methods for retrieving rate limit status icon, class, tooltip, and wait time formatting.

* feat: enhance chat input component with context length management and settings integration

* feat: update model configuration and enhance error handling in providers

* feat: add MCP tools list component and integrate with chat settings
feat: enhance artifact dialog with improved error handling and localization
fix: update Mermaid artifact rendering error handling and localization
fix: improve input settings error handling and state management
fix: update drag and drop composable to handle drag events correctly
fix: update Vitest configuration for better project structure and alias resolution

* fix(i18n): add unknownError translation

---------

Co-authored-by: deepinsect <deepinsect@github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

* feat: add Poe provider integration and icon support  (#1028)

* feat: add Poe provider integration and icon support

* chore: format and lint

---------

Co-authored-by: zerob13 <zerob13@gmail.com>

* fix: make auto scroll works (#1030)

* fix: allow settings window links to open externally (#1029)

* fix(settings): allow target blank links

* fix: harden settings window link handling

* feat: enhance GitHub Copilot Device Flow with OAuth token management and API token retrieval (#1021)

* feat: enhance GitHub Copilot Device Flow with OAuth token management and API token retrieval

- Fixed request header for managing OAuth tokens and retrieving API tokens.
- Enhanced model definitions and added new models for better compatibility.

* fix: remove privacy related log

* fix: OAuth 2.0 for slow_down response

* fix: handle lint errors

* fix: provider fetched from publicdb

* fix(githubCopilotProvider): update request body logging format for clarity

* fix(githubCopilotProvider): improve error handling and logging in device flow

* feat(theme): fix message paragraph gap and toolcall block (#1031)

Co-authored-by: deepinsect <deepinsect@github.com>

* fix: scroll to bottom (#1034)

* fix: add debounce for renderer

* feat: add max wait for renderer

* chore(deps): upgrade markdown renderer add worker support

* chore: bump markdown version

* fix(build): use es module worker format (#1037)

* feat: remove function deleteOllamaModel (#1036)

* feat: remove function deleteOllamaModel

* fix(build): use es module worker format (#1037)

---------

Co-authored-by: duskzhen <zerob13@gmail.com>

* perf: update dependencies to use stream-monaco and bump vue-renderer-markdown version (#1038)

* feat(theme): add markdown layout style and table style (#1039)

* feat(theme): add markdown layout style and table style

* fix(lint): remove props

---------

Co-authored-by: deepinsect <deepinsect@github.com>

* feat: support effort and verbosity (#1040)

* chore: bump up version

* feat: add jiekou.ai as LLM provider (#1041)

* feat: add jiekou.ai as LLM provider

* fix: change api type to jiekou

---------

Co-authored-by: zerob13 <zerob13@gmail.com>

* chore: update provider db

---------

Co-authored-by: 韦伟 <xweimvp@gmail.com>
Co-authored-by: Happer <ericted8810us@gmail.com>
Co-authored-by: deepinsect <deepinsect@github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: cp90 <153345481+cp90-pixel@users.noreply.github.com>
Co-authored-by: Cedric <14017092+douyixuan@users.noreply.github.com>
Co-authored-by: Simon He <57086651+Simon-He95@users.noreply.github.com>
Co-authored-by: yyhhyyyyyy <yyhhyyyyyy8@gmail.com>
Co-authored-by: cnJasonZ <gbdzxalbb@qq.com>