feat: providerDB model capabilities #971
Conversation
Walkthrough

Introduces a provider DB–driven capability system and documentation. Adds and standardizes provider and model entries in providers.json. Implements a ModelCapabilities index with event-driven rebuilds and alias resolution. Updates modelConfig to normalize provider IDs. Refactors dashscopeProvider to consume centralized reasoning/search defaults and removes hardcoded lists.
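The capability index described in the walkthrough can be sketched as follows. This is a simplified illustration, not the PR's actual code: the type shapes, class name, and method names are assumptions based on the review's description of lowercased IDs and alias resolution.

```typescript
// Simplified sketch of a provider-DB capability index with lowercased keys
// and provider-alias resolution. Types and names are illustrative assumptions.
type ModelEntry = { id: string; reasoning?: { supported?: boolean } }
type ProviderDb = Record<string, { models: ModelEntry[] }>

const PROVIDER_ID_ALIASES: Record<string, string> = { dashscope: 'alibaba-cn' }

class CapabilityIndex {
  private index = new Map<string, Map<string, ModelEntry>>()

  // Rebuild the whole index; in the PR this runs on LOADED/UPDATED events.
  rebuild(db: ProviderDb): void {
    this.index.clear()
    for (const [providerId, provider] of Object.entries(db)) {
      const models = new Map<string, ModelEntry>()
      for (const model of provider.models) models.set(model.id.toLowerCase(), model)
      this.index.set(providerId.toLowerCase(), models)
    }
  }

  private resolveProviderId(providerId: string): string {
    const normalized = providerId.toLowerCase()
    return PROVIDER_ID_ALIASES[normalized] ?? normalized
  }

  getModel(providerId: string, modelId: string): ModelEntry | undefined {
    return this.index.get(this.resolveProviderId(providerId))?.get(modelId.toLowerCase())
  }

  supportsReasoning(providerId: string, modelId: string): boolean {
    return this.getModel(providerId, modelId)?.reasoning?.supported === true
  }
}
```

Looking up `supportsReasoning('dashscope', 'Qwen-Max')` would resolve the alias to `alibaba-cn` and normalize the model ID before the map lookup, matching the event-driven design described above.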
Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant App
  participant ProviderDbLoader as Provider DB Loader
  participant ModelCapabilities as ModelCapabilities (index)
  participant Presenter as DashscopeProvider
  participant ProviderAPI as Provider API
  rect rgb(245,248,255)
    note over ProviderDbLoader,ModelCapabilities: Initialization & Updates
    App->>ProviderDbLoader: load()
    ProviderDbLoader-->>ModelCapabilities: emit LOADED (db)
    ModelCapabilities->>ModelCapabilities: rebuild index (lowercased IDs, aliases)
    ProviderDbLoader-->>ModelCapabilities: emit UPDATED (db)
    ModelCapabilities->>ModelCapabilities: rebuild index
  end
  rect rgb(245,255,245)
    note over App,ProviderAPI: Request flow with centralized capabilities
    App->>Presenter: coreStream(request, providerId, modelId, overrides)
    Presenter->>ModelCapabilities: supportsReasoning(providerId, modelId)
    ModelCapabilities-->>Presenter: boolean
    Presenter->>ModelCapabilities: getThinkingBudgetRange(...)
    ModelCapabilities-->>Presenter: {min,max,default}
    Presenter->>ModelCapabilities: supportsSearch(...), getSearchDefaults(...)
    ModelCapabilities-->>Presenter: flags/defaults
    Presenter->>Presenter: compose params (apply DB defaults, apply overrides)
    Presenter->>ProviderAPI: stream(params)
    ProviderAPI-->>Presenter: chunks
    Presenter-->>App: response stream
  end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks and finishing touches: ✅ 3 checks passed
@codex review
Actionable comments posted: 2
🧹 Nitpick comments (1)
resources/model-db/providers.json (1)
6888-6889: Restore a human-readable provider name
`display_name` now shows `alibaba-cn`, so the UI will surface the raw id instead of the friendlier “DashScope”/“Alibaba Cloud” label users expect. Please keep `display_name` (and likely `name`) descriptive while keeping the id unchanged.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- resources/model-db/providers.json (8 hunks)
- src/main/presenter/configPresenter/modelCapabilities.ts (1 hunk)
- src/main/presenter/configPresenter/modelConfig.ts (3 hunks)
- src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (12)
{build,resources,out,dist}/**
📄 CodeRabbit inference engine (AGENTS.md)
Keep build outputs and assets within build/, resources/, out/, and dist/
Files:
resources/model-db/providers.json
**/*.{js,jsx,ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)
**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English
**/*.{js,jsx,ts,tsx}: Use OxLint for JS/TS code; pre-commit hooks run lint-staged and typecheck
Use camelCase for variables and functions
Use PascalCase for types and classes
Use SCREAMING_SNAKE_CASE for constants
Files:
src/main/presenter/configPresenter/modelConfig.ts
src/main/presenter/configPresenter/modelCapabilities.ts
src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
src/{main,renderer}/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)
src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging
Files:
src/main/presenter/configPresenter/modelConfig.ts
src/main/presenter/configPresenter/modelCapabilities.ts
src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
src/main/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)
Use Electron's built-in APIs for file system and native dialogs
Files:
src/main/presenter/configPresenter/modelConfig.ts
src/main/presenter/configPresenter/modelCapabilities.ts
src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/error-logging.mdc)
**/*.{ts,tsx}: Always use try-catch to handle potential errors
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (if applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement error retry mechanisms
Avoid logging sensitive information
Use structured logging
Set appropriate log levels
Files:
src/main/presenter/configPresenter/modelConfig.ts
src/main/presenter/configPresenter/modelCapabilities.ts
src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
src/main/**/*.{ts,js,tsx,jsx}
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
Keep main-process code in src/main
Files:
src/main/presenter/configPresenter/modelConfig.ts
src/main/presenter/configPresenter/modelCapabilities.ts
src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
**/*.{ts,tsx,js,vue}
📄 CodeRabbit inference engine (CLAUDE.md)
Use English for all logs and comments
Files:
src/main/presenter/configPresenter/modelConfig.ts
src/main/presenter/configPresenter/modelCapabilities.ts
src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
**/*.{ts,tsx,vue}
📄 CodeRabbit inference engine (CLAUDE.md)
Enable and adhere to strict TypeScript typing (avoid implicit any, prefer precise types)
Files:
src/main/presenter/configPresenter/modelConfig.ts
src/main/presenter/configPresenter/modelCapabilities.ts
src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
src/main/**
📄 CodeRabbit inference engine (AGENTS.md)
Place all Electron main-process code under src/main/
Files:
src/main/presenter/configPresenter/modelConfig.ts
src/main/presenter/configPresenter/modelCapabilities.ts
src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
src/main/presenter/**
📄 CodeRabbit inference engine (AGENTS.md)
src/main/presenter/**: Organize main-process presenters under src/main/presenter/ (Window/Tab/Thread/Mcp/Config/LLMProvider)
Follow the Presenter pattern for main-process modules
Files:
src/main/presenter/configPresenter/modelConfig.ts
src/main/presenter/configPresenter/modelCapabilities.ts
src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
**/*.{js,jsx,ts,tsx,vue}
📄 CodeRabbit inference engine (AGENTS.md)
Apply Prettier formatting: single quotes, no semicolons, max width 100
Files:
src/main/presenter/configPresenter/modelConfig.ts
src/main/presenter/configPresenter/modelCapabilities.ts
src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
src/main/presenter/llmProviderPresenter/providers/*.ts
📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)
src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (tool_call_start, tool_call_chunk, tool_call_end) in the standardized format.
Provider implementations should yield stop events with appropriate stop_reason in the standardized format.
Provider implementations should yield error events in the standardized format...
Files:
src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts
🧠 Learnings (1)
📚 Learning: 2025-09-06T03:07:23.817Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-06T03:07:23.817Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add provider configuration entries in src/main/presenter/configPresenter/providers.ts
Applied to files:
src/main/presenter/configPresenter/modelConfig.ts
🧬 Code graph analysis (2)
src/main/presenter/configPresenter/modelCapabilities.ts (3)
src/shared/types/model-db.ts (2)
ProviderModel (57-57), ProviderAggregate (75-75)
src/main/eventbus.ts (1)
eventBus (151-151)
src/main/presenter/configPresenter/providerDbLoader.ts (1)
providerDbLoader (225-225)
src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts (1)
src/main/presenter/configPresenter/modelCapabilities.ts (1)
modelCapabilities (103-103)
🔇 Additional comments (7)
src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts (2)
12-12: LGTM! Clean refactoring to centralized capabilities.

The refactoring from hardcoded model lists to modelCapabilities-based checks significantly improves maintainability by centralizing provider/model capability information.

Also applies to: 19-25
53-72: LGTM! Proper handling of thinking budget and search defaults.

The logic correctly:
- Retrieves defaults from the centralized modelCapabilities
- Falls back to modelConfig values when available
- Applies proper type guards (typeof budget === 'number', if (strategy)) to ensure only valid values are sent to the API

This aligns well with the centralized capability system introduced in this PR.
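Illustratively, that compose step could look like the sketch below. The wire parameter names (thinking_budget, search_strategy) are assumptions for illustration, not DashScope's actual request format.

```typescript
// Apply DB-derived defaults only when they pass a type guard, then let
// per-request overrides win. Parameter names are hypothetical.
type Params = Record<string, unknown>

function composeParams(
  defaults: { budget?: unknown; strategy?: unknown },
  overrides: Params
): Params {
  const params: Params = {}
  if (typeof defaults.budget === 'number') params.thinking_budget = defaults.budget
  if (defaults.strategy === 'turbo' || defaults.strategy === 'max') {
    params.search_strategy = defaults.strategy
  }
  // Overrides take precedence over DB defaults.
  return { ...params, ...overrides }
}
```

With this shape, a malformed DB value (e.g., a string budget) is silently dropped instead of being sent to the API.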
src/main/presenter/configPresenter/modelCapabilities.ts (5)
6-16: LGTM! Well-defined capability types.

The ThinkingBudgetRange and SearchDefaults types provide clear contracts for capability information with optional fields, allowing graceful handling of missing data.
18-49: LGTM! Solid index design with event-driven refresh.

The implementation correctly:
- Rebuilds the index from the provider DB on initialization
- Automatically refreshes on LOADED and UPDATED events
- Uses case-insensitive keys for consistent lookups
- Handles missing data gracefully with optional chaining

The nested Map<string, Map<string, ProviderModel>> structure provides efficient O(1) lookups for provider→model queries.

Note: The PROVIDER_ID_ALIASES duplication issue is covered in the review comment for modelConfig.ts.
51-64: LGTM! Robust model lookup with alias resolution.

The getModel method correctly:
- Normalizes both provider ID and model ID to lowercase
- Resolves provider aliases before lookup
- Returns undefined for missing or invalid inputs

The resolveProviderId method properly implements fallback logic.
66-100: LGTM! Safe capability accessors with proper type guards.

All public methods implement proper defensive programming:
- supportsReasoning and supportsSearch use strict equality (=== true) to avoid truthy values
- getThinkingBudgetRange validates each numeric field with typeof === 'number'
- getSearchDefaults validates booleans and restricts strategy to known values ('turbo' | 'max')
- All methods return safe defaults (empty objects or false) when data is missing
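The guard pattern praised here might look like the following sketch; the raw input shape and field names are assumptions, not the PR's exact signatures.

```typescript
// Only copy fields that pass a type guard, so malformed DB entries
// never leak invalid values to callers.
type SearchDefaults = { forced?: boolean; strategy?: 'turbo' | 'max' }

function getSearchDefaults(raw: { forced?: unknown; strategy?: unknown }): SearchDefaults {
  const defaults: SearchDefaults = {}
  if (typeof raw.forced === 'boolean') defaults.forced = raw.forced
  if (raw.strategy === 'turbo' || raw.strategy === 'max') defaults.strategy = raw.strategy
  return defaults
}
```

An unknown strategy string or a non-boolean flag simply yields an empty object, which is the safe default the review describes.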
103-103: Note: Singleton pattern creates immediate instance.

The singleton is instantiated immediately at module load time, which starts event listeners and builds the index. This is acceptable for application-level services, but be aware that it runs before explicit initialization.

If lazy initialization is preferred, consider exporting the class and instantiating the singleton in a separate initialization module.
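One minimal lazy-initialization pattern, sketched under the assumption that the class is exported (the accessor name is hypothetical):

```typescript
// Hypothetical lazy-singleton accessor: the instance (and its event
// listeners) is created on first use rather than at module load time.
class ModelCapabilities {
  constructor() {
    // In the real module this would subscribe to provider-DB events
    // and build the index; elided here.
  }
}

let instance: ModelCapabilities | undefined

export function getModelCapabilities(): ModelCapabilities {
  if (!instance) instance = new ModelCapabilities()
  return instance
}
```

Callers get the same instance every time, but nothing runs until the first call, which fits the lazy-loading guideline cited for the main process.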
```ts
private static readonly PROVIDER_ID_ALIASES: Record<string, string> = {
  dashscope: 'alibaba-cn'
}
```
🛠️ Refactor suggestion | 🟠 Major
Centralize the provider alias map.
The PROVIDER_ID_ALIASES map is duplicated in both modelConfig.ts (lines 23-25) and modelCapabilities.ts (lines 20-22). This violates the DRY principle and creates a maintenance risk—future alias additions would need to be synchronized across multiple files.
Consider extracting the alias map to a shared location (e.g., a constants file or the providerDbLoader module) and importing it from both files to ensure a single source of truth.
Example:
Create src/main/presenter/configPresenter/providerAliases.ts:
```ts
export const PROVIDER_ID_ALIASES: Record<string, string> = {
  dashscope: 'alibaba-cn'
}
```

Then import and use in both files:
```diff
+import { PROVIDER_ID_ALIASES } from './providerAliases'
+
 export class ModelConfigHelper {
   private modelConfigStore: ElectronStore<ModelConfigStoreSchema>
   private memoryCache: Map<string, IModelConfig> = new Map()
   private cacheInitialized: boolean = false
   private currentVersion: string
-  private static readonly PROVIDER_ID_ALIASES: Record<string, string> = {
-    dashscope: 'alibaba-cn'
-  }
   // ...
   private resolveProviderId(providerId: string | undefined): string | undefined {
     if (!providerId) return undefined
-    const alias = ModelConfigHelper.PROVIDER_ID_ALIASES[providerId]
+    const alias = PROVIDER_ID_ALIASES[providerId]
     return alias || providerId
   }
```

Also applies to: 47-51
🤖 Prompt for AI Agents
In src/main/presenter/configPresenter/modelConfig.ts around lines 23-25 (and
also lines 47-51) and in src/main/presenter/configPresenter/modelCapabilities.ts
around lines 20-22, the PROVIDER_ID_ALIASES map is duplicated; extract this map
to a single shared module (suggested path:
src/main/presenter/configPresenter/providerAliases.ts) that exports a const
PROVIDER_ID_ALIASES: Record<string,string>, replace the local maps in both files
with an import from that new module, and remove the duplicated definitions so
both files consume the single source of truth.
```ts
const resolvedProviderId = this.resolveProviderId(normProviderId)
const provider = db?.providers?.[resolvedProviderId!]
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Verify alias resolution across the codebase.
The alias resolution is correctly applied here, ensuring provider lookups use resolved IDs. However, ensure that all components that perform provider ID lookups consistently use the centralized alias resolution mechanism.
Run the following script to verify consistent provider ID usage:
🏁 Script executed:
```bash
#!/bin/bash
# Description: Check for direct references to 'dashscope' that might bypass alias resolution

# Search for hardcoded 'dashscope' references that might need alias resolution
rg -n --type=ts -C3 "['\"](dashscope)['\"]" \
  -g '!**/modelConfig.ts' \
  -g '!**/modelCapabilities.ts' \
  -g '!**/providerAliases.ts' \
  -g '!**/*.test.ts' \
  -g '!**/*.spec.ts'
```

Length of output: 2852
Replace hardcoded ‘dashscope’ checks with centralized alias resolution
- In src/main/presenter/llmProviderPresenter/index.ts, invoke resolveProviderId before the switch and match on the resolved ID instead of the literal 'dashscope'
- In openAICompatibleProvider.ts, replace this.provider.id.toLowerCase().includes('dashscope') with a strict comparison against the resolved provider ID
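A hedged sketch of that suggestion follows; the helper name isDashscope is hypothetical, introduced only to contrast the substring check with a strict comparison on the resolved ID.

```typescript
const PROVIDER_ID_ALIASES: Record<string, string> = { dashscope: 'alibaba-cn' }

function resolveProviderId(providerId: string): string {
  const normalized = providerId.toLowerCase()
  return PROVIDER_ID_ALIASES[normalized] ?? normalized
}

// Before: this.provider.id.toLowerCase().includes('dashscope')
// After: strict comparison against the resolved canonical id.
function isDashscope(providerId: string): boolean {
  return resolveProviderId(providerId) === 'alibaba-cn'
}
```

The strict comparison avoids false positives from provider IDs that merely contain the substring, and it keeps working for both the legacy alias and the canonical ID.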
Actionable comments posted: 3
🧹 Nitpick comments (6)
resources/model-db/providers.json (5)
6888-6890: Use human-friendly display_name/name; ensure alias mapping updated.

"display_name" and "name" mirror the provider id. Prefer a readable display name (e.g., "Alibaba Cloud (DashScope)") and keep the id URL-safe. Also confirm PROVIDER_ID_ALIASES maps legacy ids (e.g., "dashscope", "alibaba") to "alibaba-cn".
25696-25716: Temperature marked unsupported; verify.

Community/OpenRouter variants typically accept temperature. If supported, set temperature: true for “sao10k/l3.1-70b-hanami-x1”.
6887-6892: General consistency nits.
- Standardize display_name capitalization (e.g., “Qwen VL OCR Latest”).
- Where reasoning.budget is present, consider aligning default < max.
- If you intend cost/knowledge metadata, add across models or document optionality.
Also applies to: 7445-7471, 7606-7662, 7716-7743, 7745-7772, 7826-7853, 7884-7940, 7971-7998, 8000-8058, 8060-8125, 8127-8177
6885-6892: Automated validation helpers.

To reduce drift and regressions, consider adding a CI check to validate:
- unique model ids
- required fields present (id, name, display_name, modalities.input/output, limit.context)
- numeric fields are positive ints
- context/output not exceeding a configured ceiling per provider
I can provide a JSON Schema and a small CI script if helpful.
Also applies to: 6903-8179, 18325-18331, 20710-20716, 22079-22085, 23233-23260, 25692-25719, 32723-32747
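As a starting point for such a CI check, here is a small validation sketch. The field names follow the providers.json shape discussed in this review; the context ceiling is an assumed example value, not a documented limit.

```typescript
// Validate model entries: unique ids, required numeric limits, and a
// configurable per-provider context ceiling.
type Model = { id?: string; limit?: { context?: number; output?: number } }

function validateModels(models: Model[], maxContext = 262144): string[] {
  const errors: string[] = []
  const seen = new Set<string>()
  for (const model of models) {
    if (!model.id) {
      errors.push('model entry missing id')
      continue
    }
    if (seen.has(model.id)) errors.push(`duplicate model id: ${model.id}`)
    seen.add(model.id)
    const context = model.limit?.context
    if (typeof context !== 'number' || context <= 0) {
      errors.push(`${model.id}: limit.context missing or non-positive`)
    } else if (context > maxContext) {
      errors.push(`${model.id}: context ${context} exceeds ceiling ${maxContext}`)
    }
    if (typeof model.limit?.output !== 'number') errors.push(`${model.id}: limit.output missing`)
  }
  return errors
}
```

A CI step could parse providers.json, run this over every provider's model list, and fail the build when the returned error list is non-empty.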
23236-23257: Add missing limit.output and version metadata

- Add "output": <value> under limit for schema consistency; this model, like many others, currently omits it.
- Optionally include release or last_updated fields to capture version metadata.

src/main/presenter/configPresenter/modelCapabilities.ts (1)
20-22: Share provider alias map

The alias map here duplicates ModelConfigHelper.PROVIDER_ID_ALIASES (see src/main/presenter/configPresenter/modelConfig.ts, line 23). To avoid drift when new aliases are added, please extract a shared constant (e.g., export from a small helper module) and reuse it in both places.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- provider.md (1 hunk)
- resources/model-db/providers.json (8 hunks)
- src/main/presenter/configPresenter/modelCapabilities.ts (1 hunk)
- src/main/presenter/configPresenter/modelConfig.ts (3 hunks)
- src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts (2 hunks)
🧬 Code graph analysis (2)
src/main/presenter/llmProviderPresenter/providers/dashscopeProvider.ts (1)
src/main/presenter/configPresenter/modelCapabilities.ts (1)
modelCapabilities (103-103)
src/main/presenter/configPresenter/modelCapabilities.ts (4)
src/shared/types/model-db.ts (2)
ProviderModel (57-57), ProviderAggregate (75-75)
src/main/eventbus.ts (1)
eventBus (151-151)
src/main/presenter/configPresenter/providerDbLoader.ts (1)
providerDbLoader (225-225)
scripts/fetch-provider-db.mjs (5)
providers (29-29), pid (34-34), mid (41-41), out (31-31), s (84-84)
🔇 Additional comments (3)
resources/model-db/providers.json (3)
18328-18328: Output tokens raised to 131072; confirm provider limit.

Doubling/raising output equal to context is unusual. Please verify against upstream limits to avoid 4xx errors or truncation.
20713-20713: Temperature support toggled to true; confirm the API actually accepts it.

If the endpoint ignores/rejects temperature, the UI may enable a nonfunctional control.
22082-22082: Non-round context limit (96000); verify correctness.

Most providers advertise 8k/16k/32k/128k/256k. Ensure 96000 is documented.
```json
"context": 1000000,
"output": 65536
},
"temperature": true,
"tool_call": true,
"reasoning": {
  "supported": false
},
"attachment": false,
"open_weights": true,
"knowledge": "2025-04",
"release_date": "2025-07-23",
"last_updated": "2025-07-23",
"cost": {
  "input": 1,
  "output": 5
}
```
🧩 Analysis chain
Inflated token limits and inconsistent metadata; verify against provider docs.
Several models declare context up to 1,000,000 and large outputs. These may exceed provider hard limits and can cause 4xx failures or inefficient chunking. Only the first model includes knowledge/release_date/last_updated/cost; others omit them.
Run to spot outliers and missing fields:
Also applies to: 6982-6985, 7005-7007, 7106-7109, 7223-7225, 7258-7260, 7293-7295, 7386-7388
🏁 Script executed:
```bash
#!/bin/bash
# Contexts above a 262144 ceiling
jq -r '
  def pathstr(p): ([ ".[\"" + (p|map(tostring)|join("\"][\"")) + "\"]" ] | join(""));
  . as $root
  | .providers as $p
  | paths(.providers[]?.models[]?.limit?.context) as $ctxp
  | ($root | getpath($ctxp)) as $ctx
  | if ($ctx|tonumber) > 262144 then "CTX>262144\t" + pathstr($ctxp) + "\t" + ($ctx|tostring) else empty end
' resources/model-db/providers.json

# Missing limit or limit.output
jq -r '
  .providers[]?.models[] as $m
  | select(($m.limit|not) or ($m.limit.output|not))
  | "\($m.id)\tmissing:\(if $m.limit|not then "limit" else "limit.output" end)"
' resources/model-db/providers.json

# Duplicate model ids anywhere in file
jq -r '
  [ .providers[]?.models[]?.id ] as $ids
  | ($ids | group_by(.) | map(select(length>1) | .[0]))[]
' resources/model-db/providers.json
```

Length of output: 40517
Enforce valid limit definitions and unique IDs

- Define limit.context ≤ 262144 and limit.output for every model.
- Remove or consolidate duplicate id entries.
- Confirm outlier contexts (e.g., context: 1000000 at lines 6906-6922) against provider docs.
- Standardize metadata fields (knowledge, release_date, last_updated, cost) across all models.
🤖 Prompt for AI Agents
In resources/model-db/providers.json around lines 6906–6922, the model entry has
an outlier limit.context (1000000) and missing/unspecified limit.output plus
potentially duplicate id entries and inconsistent metadata; update the entry to
ensure limit.context ≤ 262144 and add a sensible limit.output value, remove or
consolidate any duplicate id records for this model, verify the context value
against the provider docs and correct it if the docs indicate a different
maximum, and normalize the metadata fields (knowledge, release_date,
last_updated, cost) to match the project's standard format used by other models.
```json
"id": "qvq-max-2025-05-15",
"name": "Qvq Max 2025 05 15",
"display_name": "Qvq Max 2025 05 15",
"modalities": {
  "input": [
    "text",
    "image"
  ],
  "output": [
    "text"
  ]
},
"limit": {
```
🧩 Analysis chain
Possible model naming typo ("qvq" vs "qwq").
"id/name/display_name" use "Qvq". Verify intended brand; Qwen’s “QwQ” series commonly uses “qwq”.
🏁 Script executed:
```bash
#!/bin/bash
file=$(fd -t f providers.json | head -n1)
echo "Using file: $file"
rg -n '"qwq' "$file" -C2
rg -n '"qvq' "$file" -C2
```

Length of output: 1171
Rename “qvq-max-2025-05-15” to “qwq-max-2025-05-15” in id, name, and display_name fields in resources/model-db/providers.json to match the QwQ series.
🤖 Prompt for AI Agents
In resources/model-db/providers.json around lines 7065 to 7077, the provider is
incorrectly named "qvq-max-2025-05-15"; update the id, name, and display_name
fields to "qwq-max-2025-05-15" so they match the QwQ series naming convention;
ensure all three fields are changed consistently and keep the rest of the object
unchanged.
```json
"id": "gpt-image-1-mini",
"name": "gpt-image-1-mini",
"display_name": "gpt-image-1-mini",
"modalities": {
  "input": [
    "text",
    "image"
  ],
  "output": [
    "text",
    "image"
  ]
},
"tool_call": false,
"reasoning": {
  "supported": false
}
},
```
🧩 Analysis chain
Missing limit for gpt-image-1-mini; likely required by capability index/UI.
Most models define limit.context/output. Add an appropriate limit to avoid downstream assumptions breaking.
Also confirm output modalities: image is correct; text output may not be returned by this endpoint.
🏁 Script executed:
```bash
#!/usr/bin/env bash
set -e
echo "=== All 'limit' definitions in providers.json ==="
rg -n '"limit":' -C2 resources/model-db/providers.json
echo
echo "=== Context for 'gpt-image-1-mini' entry ==="
rg -n '"id": "gpt-image-1-mini"' -C3 resources/model-db/providers.json
```

Length of output: 135925
🏁 Script executed:
```bash
#!/usr/bin/env bash
set -e
echo "=== Full entry for gpt-image-1-mini ==="
sed -n '32724,32750p' resources/model-db/providers.json
echo
echo "=== Base image model: gpt-image-1 ==="
rg -n '"id": "gpt-image-1"' -C10 resources/model-db/providers.json
```

Length of output: 1325
🏁 Script executed:
```bash
#!/usr/bin/env bash
set -e
echo "=== gpt-image-1 snippet (including limit) ==="
rg -n '"id": "gpt-image-1"' -A20 resources/model-db/providers.json
```

Length of output: 672
Add missing limit to gpt-image-1-mini
- resources/model-db/providers.json (32727-32744): no "limit" block is defined; add a limit section with appropriate context and output values consistent with similar models
- Confirm that "output": ["text","image"] accurately reflects this endpoint's responses
🤖 Prompt for AI Agents
In resources/model-db/providers.json around lines 32727 to 32744, the
gpt-image-1-mini entry is missing a "limit" block; add a "limit" object
consistent with similar image-capable models (include a "context" value
appropriate for small image models and an "output" array matching allowed output
sizes/types), for example mirror other image-text models' limits (e.g., context
token/window and output size limits) and ensure the existing "output":
["text","image"] accurately reflects the endpoint; update the JSON entry to
include this "limit" block with numeric context and output constraints
consistent with comparable model entries.
Codex Review: Didn't find any major issues. Another round soon, please!
If Codex has suggestions, it will comment; otherwise it will react with 👍.