@yyhhyyyyyy yyhhyyyyyy commented Aug 18, 2025

  • add comprehensive OpenAI reasoning models support for OpenRouter

Summary by CodeRabbit

  • New Features

    • Added support for the GPT-5 family across OpenAI and OpenRouter, plus new O1 and O3 series variants with extended token/context limits; the GPT-5 entries enable vision and function calling, and all of the new entries enable reasoning.
  • Improvements

    • Broadened GPT-5 detection to match any ID containing "gpt-5", ensuring correct token limits and verbosity.
    • UI: The temperature slider and GPT-5-specific verbosity options now show or hide correctly for a wider set of model IDs.

coderabbitai bot commented Aug 18, 2025

Walkthrough

Adds GPT‑5, O1, and O3 model entries to OpenAI and OpenRouter provider model settings and widens GPT‑5 detection from startsWith to includes across provider logic and UI, affecting token selection and verbosity handling. All edits are additive.
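For reference, a minimal sketch of what this widening means in practice, assuming OpenRouter-style IDs that carry an upstream-provider prefix (the IDs below are illustrative):

const isGpt5Before = (modelId: string): boolean => modelId.startsWith('gpt-5') // old check
const isGpt5After = (modelId: string): boolean => modelId.includes('gpt-5') // new check

console.log(isGpt5Before('openai/gpt-5-mini')) // false: the 'openai/' prefix defeats startsWith
console.log(isGpt5After('openai/gpt-5-mini')) // true: the substring matches anywhere in the ID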

Changes

• Model catalog expansion (src/main/presenter/configPresenter/providerModelSettings.ts): Added the GPT‑5 family (gpt-5-chat, gpt-5, gpt-5-mini, gpt-5-nano) plus O1 and O3 series entries for both OpenAI and OpenRouter, with match patterns and per-model token/context/feature settings.
• Provider logic: OpenAI-compatible (src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts): Replaced modelId.startsWith('gpt-5') with modelId.includes('gpt-5') when choosing max_completion_tokens vs max_tokens and when attaching verbosity, broadening GPT‑5 recognition.
• Provider logic: OpenAI responses (src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts): Switched GPT‑5 checks from startsWith to includes in openAICompletion and handleChatCompletion so verbosity/parameters apply to any modelId containing "gpt-5".
• Renderer/UI GPT‑5 gating (src/renderer/src/components/ChatConfig.vue, src/renderer/src/components/settings/ModelConfigDialog.vue): Updated the isGPT5Model computed property to use includes('gpt-5') (after lowercasing), widening the set of model IDs that show or hide GPT‑5-specific UI controls such as temperature and verbosity options.

Sequence Diagram(s)

sequenceDiagram
  participant User
  participant UI as Renderer (Chat/Settings)
  participant Presenter as Provider Presenter
  participant API as OpenAI/OpenRouter

  User->>UI: Select model / Send message
  UI->>Presenter: Build request (detect GPT‑5 via includes('gpt-5'))
  alt modelId includes "gpt-5"
    Presenter->>Presenter: Use max_completion_tokens, attach verbosity/reasoning flags
  else
    Presenter->>Presenter: Use max_tokens, default flags
  end
  Presenter->>API: POST chat/completions
  API-->>Presenter: Response
  Presenter-->>UI: Render response

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested reviewers

  • zerob13

Poem

I twitch my whiskers at models new,
GPT‑5 carrots in a richer brew.
Tokens stretched and flags set free,
I hop through configs, glee in me.
More models to nibble — let’s decree! 🐇🥕

@yyhhyyyyyy yyhhyyyyyy changed the title feat: add OpenRouter GPT-5 reasoning effort and verbosity support feat: add comprehensive OpenAI reasoning models support for OpenRouter Aug 18, 2025
@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🔭 Outside diff range comments (4)
src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts (2)

246-251: Temperature not removed for OpenRouter GPT‑5 IDs (may cause API errors)

Reasoning models (incl. GPT‑5) shouldn’t receive a temperature param. With startsWith() against raw modelId, openai/gpt-5-* won’t match, so temperature is left in the request. Normalize the ID before checking.

-OPENAI_REASONING_MODELS.forEach((noTempId) => {
-  if (modelId.startsWith(noTempId)) {
-    delete requestParams.temperature
-  }
-})
+const normalizedModelId2 = modelId.includes('/') ? modelId.split('/').pop()! : modelId
+OPENAI_REASONING_MODELS.forEach((noTempId) => {
+  if (normalizedModelId2.startsWith(noTempId)) {
+    delete requestParams.temperature
+  }
+})

589-591: Temperature still not removed for OpenRouter GPT‑5 in streaming path

Same issue as the non-streaming path: temperature won’t be deleted for IDs prefixed with openai/. Normalize before checking.

-OPENAI_REASONING_MODELS.forEach((noTempId) => {
-  if (modelId.startsWith(noTempId)) delete requestParams.temperature
-})
+const normalizedModelId2 = modelId.includes('/') ? modelId.split('/').pop()! : modelId
+OPENAI_REASONING_MODELS.forEach((noTempId) => {
+  if (normalizedModelId2.startsWith(noTempId)) delete requestParams.temperature
+})
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (2)

224-228: Temperature not removed for OpenRouter-prefixed reasoning models

Same normalization concern: temperature remains when modelId has a provider prefix.

-OPENAI_REASONING_MODELS.forEach((noTempId) => {
-  if (modelId.startsWith(noTempId)) {
-    delete requestParams.temperature
-  }
-})
+OPENAI_REASONING_MODELS.forEach((noTempId) => {
+  if (normalizedModelId.startsWith(noTempId)) {
+    delete requestParams.temperature
+  }
+})

576-579: Remove temperature using normalized ID for reasoning models

Ensure temperature is deleted when using OpenRouter GPT‑5 or other reasoning models.

-OPENAI_REASONING_MODELS.forEach((noTempId) => {
-  if (modelId.startsWith(noTempId)) delete requestParams.temperature
-})
+OPENAI_REASONING_MODELS.forEach((noTempId) => {
+  if (normalizedModelId.startsWith(noTempId)) delete requestParams.temperature
+})
🧹 Nitpick comments (6)
src/renderer/src/components/settings/ModelConfigDialog.vue (1)

492-495: Tighten GPT‑5 detection to avoid accidental matches

Switching to includes() broadens coverage for OpenRouter IDs like openai/gpt-5-..., which is good. To avoid false positives (e.g., a model name embedding "gpt-5" in the middle), normalize the suffix after the slash and check with startsWith.

Apply this diff:

-const isGPT5Model = computed(() => {
-  const modelId = props.modelId.toLowerCase()
-  return modelId.includes('gpt-5')
-})
+const isGPT5Model = computed(() => {
+  const id = props.modelId.toLowerCase()
+  const normalized = id.includes('/') ? id.split('/').pop()! : id
+  return normalized.startsWith('gpt-5')
+})
src/renderer/src/components/ChatConfig.vue (1)

87-90: Normalize model ID and prefer startsWith for GPT‑5 family checks

includes('gpt-5') will match more than intended. Normalize the trailing segment (after any provider prefix) and use startsWith to precisely target GPT‑5 variants while still supporting OpenRouter-style IDs.

-const isGPT5Model = computed(() => {
-  const modelId = props.modelId?.toLowerCase() || ''
-  return modelId.includes('gpt-5')
-})
+const isGPT5Model = computed(() => {
+  const id = (props.modelId ?? '').toLowerCase()
+  const normalized = id.includes('/') ? id.split('/').pop()! : id
+  return normalized.startsWith('gpt-5')
+})
src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts (2)

239-245: Use normalized model ID and startsWith for GPT‑5 verbosity gating

The intent is to enable verbosity for GPT‑5, including OpenRouter IDs like openai/gpt-5-*. Normalize the ID once and use startsWith to avoid unexpected matches.

-// verbosity 仅支持 GPT-5 系列模型
-if (modelId.includes('gpt-5') && modelConfig.verbosity) {
+// verbosity 仅支持 GPT-5 系列模型
+const normalizedModelId = modelId.includes('/') ? modelId.split('/').pop()! : modelId
+if (normalizedModelId.startsWith('gpt-5') && modelConfig.verbosity) {
   ;(requestParams as any).text = {
     verbosity: modelConfig.verbosity
   }
}

583-587: Align verbosity gating with normalized model ID in streaming path

Mirror the non-streaming path: normalize and use startsWith for GPT‑5 detection here as well.

-// verbosity 仅支持 GPT-5 系列模型
-if (modelId.includes('gpt-5') && modelConfig.verbosity) {
+// verbosity 仅支持 GPT-5 系列模型
+const normalizedModelId = modelId.includes('/') ? modelId.split('/').pop()! : modelId
+if (normalizedModelId.startsWith('gpt-5') && modelConfig.verbosity) {
   ;(requestParams as any).text = {
     verbosity: modelConfig.verbosity
   }
}
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (1)

571-573: Verbosity gating: prefer normalized startsWith to avoid over-matching

Keep behavior consistent and precise: use normalizedModelId.startsWith('gpt-5').

-if (modelId.includes('gpt-5') && modelConfig.verbosity) {
+if (normalizedModelId.startsWith('gpt-5') && modelConfig.verbosity) {
   ;(requestParams as any).verbosity = modelConfig.verbosity
 }
src/main/presenter/configPresenter/providerModelSettings.ts (1)

2340-2400: Model catalog addition looks consistent; consider future-proofing match patterns

The OpenRouter GPT‑5 entries use fully qualified matches (e.g., openai/gpt-5-mini-2025-08-07). Good. To reduce maintenance churn and false positives, consider adding both a stable prefix and versioned variants, or adopt a boundary-aware match convention in getProviderSpecificModelConfig (e.g., match end-segment with startsWith), instead of broad includes().

If you want, I can refactor getProviderSpecificModelConfig to:

  • normalize the tail segment (after the last slash),
  • match either exact IDs or segment startsWith for versioned models,
  • and keep provider-specific prefixes when needed.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 68854b7 and 56e3506.

📒 Files selected for processing (5)
  • src/main/presenter/configPresenter/providerModelSettings.ts (1 hunks)
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (3 hunks)
  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts (2 hunks)
  • src/renderer/src/components/ChatConfig.vue (1 hunks)
  • src/renderer/src/components/settings/ModelConfigDialog.vue (1 hunks)
🧰 Additional context used
📓 Path-based instructions (17)
**/*.{ts,tsx,js,jsx,vue}

📄 CodeRabbit Inference Engine (CLAUDE.md)

Use English for logs and comments

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/renderer/src/components/ChatConfig.vue
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/renderer/src/**/*.vue

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/renderer/src/**/*.vue: Use Composition API for all Vue 3 components
Use Tailwind CSS with scoped styles for styling
Organize components by feature in src/renderer/src/
Follow existing component patterns in src/renderer/src/ when creating new UI components
Use Composition API with proper TypeScript typing for new UI components
Implement responsive design with Tailwind CSS for new UI components
Add proper error handling and loading states for new UI components

Use scoped styles to prevent CSS conflicts between components

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/renderer/src/components/ChatConfig.vue
src/renderer/src/**/*.{ts,tsx,vue}

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/renderer/src/**/*.{ts,tsx,vue}: Use Pinia for frontend state management
Renderer to Main: Use usePresenter.ts composable for direct presenter method calls

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/renderer/src/components/ChatConfig.vue
src/renderer/src/**/*

📄 CodeRabbit Inference Engine (.cursor/rules/i18n.mdc)

src/renderer/src/**/*: All user-facing strings must use i18n keys (avoid hardcoded user-visible text in code)
Use the 'vue-i18n' framework for all internationalization in the renderer
Ensure all user-visible text in the renderer uses the translation system

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/renderer/src/components/ChatConfig.vue
src/renderer/**/*.{vue,ts,js,tsx,jsx}

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

Renderer process code goes in src/renderer

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/renderer/src/components/ChatConfig.vue
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-best-practices.mdc)

src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability
Implement proper state management with Pinia
Utilize Vue Router for navigation and route management
Leverage Vue's built-in reactivity system for efficient data handling

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/renderer/src/components/ChatConfig.vue
src/renderer/**/*.{ts,tsx,vue}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,tsx,vue}: Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
Use TypeScript for all code; prefer types over interfaces.
Avoid enums; use const objects instead.
Use arrow functions for methods and computed properties.
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements.

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/renderer/src/components/ChatConfig.vue
src/renderer/**/*.{vue,ts}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)

Implement lazy loading for routes and components.

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/renderer/src/components/ChatConfig.vue
src/renderer/**/*.{ts,vue}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching.
Implement SEO best practices using Nuxt's useHead and useSeoMeta.

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/renderer/src/components/ChatConfig.vue
**/*.{ts,tsx}

📄 CodeRabbit Inference Engine (CLAUDE.md)

Strict type checking enabled for TypeScript

**/*.{ts,tsx}: Always use try-catch to handle errors that may occur
Provide meaningful error messages
Log detailed error information
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (if applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement error retry mechanisms
Avoid logging sensitive information
Use structured logging
Set appropriate log levels

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/main/**/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

Main to Renderer: Use EventBus to broadcast events via mainWindow.webContents.send()

Use Electron's built-in APIs for file system and native dialogs

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/**/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

One presenter per functional domain

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/llmProviderPresenter/providers/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/main/presenter/llmProviderPresenter/providers/*.ts: Create provider file in src/main/presenter/llmProviderPresenter/providers/ when adding a new LLM provider
Implement coreStream method following standardized event interface in LLM provider files

src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (`tool_call_star...

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit Inference Engine (.cursor/rules/development-setup.mdc)

**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/{main,renderer}/**/*.ts

📄 CodeRabbit Inference Engine (.cursor/rules/electron-best-practices.mdc)

src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/main/**/*.{ts,js,tsx,jsx}

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

Main process code goes in src/main

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/configPresenter/**/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

Centralize configuration in configPresenter/

Files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
🧠 Learnings (1)
📚 Learning: 2025-07-21T01:45:33.790Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add provider configuration in configPresenter/providers.ts when adding a new LLM provider

Applied to files:

  • src/main/presenter/configPresenter/providerModelSettings.ts

Comment on lines 217 to 223
 ...(modelId.startsWith('o1') ||
   modelId.startsWith('o3') ||
   modelId.startsWith('o4') ||
-  modelId.startsWith('gpt-5')
+  modelId.includes('gpt-5')
   ? { max_completion_tokens: maxTokens }
   : { max_tokens: maxTokens })
 }

🛠️ Refactor suggestion

Select max_completion_tokens using normalized ID (covers OpenRouter IDs)

Use a normalizedModelId for o1/o3/o4/GPT‑5 detection. This fixes cases like openai/o3-... and openai/gpt-5-... coming from OpenRouter.

-const requestParams: OpenAI.Chat.ChatCompletionCreateParams = {
+const normalizedModelId = modelId.includes('/') ? modelId.split('/').pop()! : modelId
+const requestParams: OpenAI.Chat.ChatCompletionCreateParams = {
   messages: this.formatMessages(messages),
   model: modelId,
   stream: false,
   temperature: temperature,
-  ...(modelId.startsWith('o1') ||
-  modelId.startsWith('o3') ||
-  modelId.startsWith('o4') ||
-  modelId.includes('gpt-5')
+  ...(normalizedModelId.startsWith('o1') ||
+  normalizedModelId.startsWith('o3') ||
+  normalizedModelId.startsWith('o4') ||
+  normalizedModelId.startsWith('gpt-5')
     ? { max_completion_tokens: maxTokens }
     : { max_tokens: maxTokens })
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

const normalizedModelId = modelId.includes('/') ? modelId.split('/').pop()! : modelId
const requestParams: OpenAI.Chat.ChatCompletionCreateParams = {
  messages: this.formatMessages(messages),
  model: modelId,
  stream: false,
  temperature: temperature,
  ...(normalizedModelId.startsWith('o1') ||
    normalizedModelId.startsWith('o3') ||
    normalizedModelId.startsWith('o4') ||
    normalizedModelId.startsWith('gpt-5')
    ? { max_completion_tokens: maxTokens }
    : { max_tokens: maxTokens })
}
🤖 Prompt for AI Agents
In src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
around lines 217 to 223, the branch that chooses max_completion_tokens vs
max_tokens currently checks modelId directly and misses OpenRouter-style ids
like "openai/o3-..." or "openai/gpt-5-..."; normalize the id first (e.g., take
the segment after any '/' and lowercase it) into a variable like
normalizedModelId, then use normalizedModelId.startsWith('o1'|'o3'|'o4') or
normalizedModelId.includes('gpt-5') to decide to use max_completion_tokens,
otherwise use max_tokens.

Comment on lines +541 to 544
modelId.includes('gpt-5')
? { max_completion_tokens: maxTokens }
: { max_tokens: maxTokens })
}

🛠️ Refactor suggestion

Normalize ID for token-field selection in streaming path

Mirror the non-streaming path to cover OpenRouter IDs.

-...(modelId.startsWith('o1') ||
-  modelId.startsWith('o3') ||
-  modelId.startsWith('o4') ||
-  modelId.includes('gpt-5')
+const normalizedModelId = modelId.includes('/') ? modelId.split('/').pop()! : modelId
+...(normalizedModelId.startsWith('o1') ||
+  normalizedModelId.startsWith('o3') ||
+  normalizedModelId.startsWith('o4') ||
+  normalizedModelId.startsWith('gpt-5')
   ? { max_completion_tokens: maxTokens }
   : { max_tokens: maxTokens })

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
around lines 541 to 544, the streaming path chooses the token-field based on
modelId but doesn't normalize the ID like the non-streaming path does, so
OpenRouter-style IDs (provider/model) aren't matched; normalize the modelId
first (e.g., lowercase and strip any provider prefix by taking substring after
the last '/' or similar) and then use that normalized id when checking
includes('gpt-5') to decide between max_completion_tokens and max_tokens,
mirroring the non-streaming logic.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (4)
src/main/presenter/configPresenter/providerModelSettings.ts (4)

2341-2341: Use English for comments per repo guidelines

Per coding guidelines, logs and comments in TS files must be in English. Please translate this comment.

Apply this diff:

-      // GPT-5 系列模型配置
+      // GPT-5 family model configurations (OpenRouter)

2401-2401: Use English for comments per repo guidelines

Same note here.

-      // O1 系列模型配置
+      // O1 family model configurations (OpenRouter)

2459-2459: Use English for comments per repo guidelines

And here.

-      // O3 系列模型配置
+      // O3 family model configurations (OpenRouter)

2342-2400: Optional: Factor out GPT‑5 defaults to reduce duplication and drift

The four GPT‑5 entries share identical defaults (temperature, contextLength, reasoning flags, verbosity). Extracting a small helper or constant reduces maintenance and misalignment risk across providers.

Example:

const GPT5_DEFAULTS = {
  temperature: 0.7,
  contextLength: 400000,
  vision: true,
  functionCall: true,
  reasoning: true as const,
  reasoningEffort: 'medium' as const,
  verbosity: 'medium' as const,
  maxCompletionTokens: 128000,
  type: ModelType.Chat as const
}

// Then spread into each GPT-5 variant
{
  id: 'gpt-5-mini',
  name: 'OpenAI: GPT-5 Mini',
  maxTokens: 128000,
  match: ['openai/gpt-5-mini', 'openai/gpt-5-mini-2025-08-07'],
  ...GPT5_DEFAULTS
}
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 56e3506 and acc68c0.

📒 Files selected for processing (1)
  • src/main/presenter/configPresenter/providerModelSettings.ts (1 hunks)
🔇 Additional comments (3)
src/main/presenter/configPresenter/providerModelSettings.ts (3)

2342-2355: Double-check function/tool calling support for gpt‑5‑chat via OpenRouter

In the OpenAI provider block, gpt‑5‑chat had functionCall: false, but here it’s true. If OpenRouter’s OpenAI relay supports tool/function calling for gpt‑5‑chat, keep this; otherwise align for consistency.

Would you like me to scan the codebase for how functionCall affects request construction for OpenRouter and confirm the behavior paths?


2359-2363: Verify token limits consistency vs OpenAI provider block

Values here differ from the OpenAI provider section (e.g., contextLength 400000 vs 272000, functionCall true vs false for gpt‑5‑chat, maxTokens 128000 vs 16384 for gpt‑5‑chat). If this reflects OpenRouter’s plan caps, all good; otherwise consider aligning for predictability across providers.

Also applies to: 2369-2369


2353-2355: Confirm downstream usage of maxCompletionTokens for GPT‑5/O‑series

The intent is that GPT‑5 uses maxCompletionTokens instead of maxTokens. Ensure the request builders for OpenRouter models actually map this field to the correct OpenRouter API parameter and don’t also send max_tokens for these models.

I can trace request building in the providers and add unit coverage if helpful.

Also applies to: 2368-2369, 2383-2385, 2398-2400, 2412-2415, 2426-2429, 2440-2443, 2454-2457, 2470-2472, 2484-2486, 2498-2500, 2511-2514
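
For concreteness, a minimal sketch of the mapping being asked about, assuming a config shape like the entries above; the helper name and shape are illustrative, not the repository's actual request builder:

interface ModelConfigLike {
  maxTokens: number
  maxCompletionTokens?: number
}

// Reasoning-family models should receive max_completion_tokens and must not
// also be sent max_tokens; other models keep max_tokens.
const tokenParams = (modelId: string, config: ModelConfigLike): Record<string, number> => {
  const normalized = modelId.includes('/') ? modelId.split('/').pop()! : modelId
  const isReasoningFamily = ['o1', 'o3', 'o4', 'gpt-5'].some((p) => normalized.startsWith(p))
  return isReasoningFamily
    ? { max_completion_tokens: config.maxCompletionTokens ?? config.maxTokens }
    : { max_tokens: config.maxTokens }
}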

Comment on lines +2342 to +2400
id: 'gpt-5-chat',
name: 'OpenAI: GPT-5 Chat',
temperature: 0.7,
maxTokens: 128000,
contextLength: 400000,
match: ['openai/gpt-5-chat'],
vision: true,
functionCall: true,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 128000,
type: ModelType.Chat
},
{
id: 'gpt-5',
name: 'OpenAI: GPT-5',
temperature: 0.7,
maxTokens: 128000,
contextLength: 400000,
match: ['openai/gpt-5', 'openai/gpt-5-2025-08-07'],
vision: true,
functionCall: true,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 128000,
type: ModelType.Chat
},
{
id: 'gpt-5-mini',
name: 'OpenAI: GPT-5 Mini',
temperature: 0.7,
maxTokens: 128000,
contextLength: 400000,
match: ['openai/gpt-5-mini', 'openai/gpt-5-mini-2025-08-07'],
vision: true,
functionCall: true,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 128000,
type: ModelType.Chat
},
{
id: 'gpt-5-nano',
name: 'OpenAI: GPT-5 Nano',
temperature: 0.7,
maxTokens: 128000,
contextLength: 400000,
match: ['openai/gpt-5-nano', 'openai/gpt-5-nano-2025-08-07'],
vision: true,
functionCall: true,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 128000,
type: ModelType.Chat
},

💡 Verification agent

🧩 Analysis chain

Harden model ID matching to prevent future collisions

Even after reordering, substring matching with includes is brittle. A safer matcher reduces footguns across all providers and variants.

Consider updating the matcher (outside this diff hunk) to prefer exact IDs and only allow suffix date variants. Example:

// src/main/presenter/configPresenter/providerModelSettings.ts
// Replace the includes() check with:
if (config.match.some((matchStr) => {
  const ms = matchStr.toLowerCase()
  return (
    lowerModelId === ms ||
    lowerModelId.startsWith(ms + '-') || // e.g. openai/gpt-5-2025-08-07
    lowerModelId.startsWith(ms + ':')    // e.g. ...:free
  )
})) {
  // ...
}

This rules out loose substring hits (IDs that merely embed 'gpt-5' somewhere) while still allowing date-suffixed variants. Note that 'openai/gpt-5-mini' still begins with 'openai/gpt-5-', so entry ordering remains necessary to keep the generic entry from shadowing the variants.

If you want, I can open a follow-up PR to apply and validate this across the repository.

Also applies to: 2403-2457, 2460-2514


🏁 Script executed:

#!/bin/bash
# Locate where the providerModelSettings is referenced
echo "=== Searching for providerModelSettings usage ==="
rg -n "providerModelSettings" -C3

# Find where the `match` array is used to select or filter models
echo "=== Searching for config.match.some usage ==="
rg -nP "config\.match\.some" -C3

# Find any includes() calls within the presenter directory
echo "=== Searching for includes( usage in presenter folder ==="
rg -nP "includes\(" -g "src/main/presenter/**/*.ts" -C3

Length of output: 77074


Harden model-ID matching in both default and provider configs

Substring-based includes() allows collisions (e.g. “gpt-5” matching “gpt-5-mini”). Update the two matchers to require exact IDs or only valid suffixes:

• src/main/presenter/configPresenter/modelConfig.ts (around line 136)
• src/main/presenter/configPresenter/providerModelSettings.ts (around line 2634)

Replace:

if (config.match.some((matchStr) => lowerModelId.includes(matchStr.toLowerCase()))) {
  
}

With:

if (config.match.some((matchStr) => {
  const ms = matchStr.toLowerCase()
  return (
    lowerModelId === ms ||
    lowerModelId.startsWith(ms + '-') || // date or variant suffix
    lowerModelId.startsWith(ms + ':')    // tag suffix
  )
})) {
  
}

This stops loose substring hits while still supporting date- or tag-appended IDs. Note that "openai/gpt-5-mini" still begins with "openai/gpt-5-", so the specific entries must stay ordered ahead of the generic one.
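
To make the behavior concrete, a small hypothetical harness around that predicate (the match string and IDs below are illustrative):

const matchesModel = (lowerModelId: string, matchStr: string): boolean => {
  const ms = matchStr.toLowerCase()
  return (
    lowerModelId === ms ||
    lowerModelId.startsWith(ms + '-') || // date or variant suffix
    lowerModelId.startsWith(ms + ':') // tag suffix
  )
}

console.log(matchesModel('openai/gpt-5', 'openai/gpt-5')) // true: exact ID
console.log(matchesModel('openai/gpt-5-2025-08-07', 'openai/gpt-5')) // true: date suffix
console.log(matchesModel('openai/gpt-5:free', 'openai/gpt-5')) // true: tag suffix
console.log(matchesModel('custom-gpt-5x', 'openai/gpt-5')) // false: loose substring hits no longer match
console.log(matchesModel('openai/gpt-5-mini', 'openai/gpt-5')) // true: variant suffix, so entry order still matters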

🤖 Prompt for AI Agents
In src/main/presenter/configPresenter/providerModelSettings.ts around lines
2342-2400 (and also update src/main/presenter/configPresenter/modelConfig.ts
around line 136), the current substring-based matching using includes() can
cause collisions like "gpt-5" matching "gpt-5-mini"; change the matcher so each
match string is normalized to lowercase and only returns true when the
lowerModelId is exactly equal to the match, or when lowerModelId starts with the
match plus '-' (for date/variant suffixes) or starts with the match plus ':'
(for tag suffixes); replace the includes-based predicate with this stricter
equality/startsWith logic in both locations.

Comment on lines +2357 to +2400
id: 'gpt-5',
name: 'OpenAI: GPT-5',
temperature: 0.7,
maxTokens: 128000,
contextLength: 400000,
match: ['openai/gpt-5', 'openai/gpt-5-2025-08-07'],
vision: true,
functionCall: true,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 128000,
type: ModelType.Chat
},
{
id: 'gpt-5-mini',
name: 'OpenAI: GPT-5 Mini',
temperature: 0.7,
maxTokens: 128000,
contextLength: 400000,
match: ['openai/gpt-5-mini', 'openai/gpt-5-mini-2025-08-07'],
vision: true,
functionCall: true,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 128000,
type: ModelType.Chat
},
{
id: 'gpt-5-nano',
name: 'OpenAI: GPT-5 Nano',
temperature: 0.7,
maxTokens: 128000,
contextLength: 400000,
match: ['openai/gpt-5-nano', 'openai/gpt-5-nano-2025-08-07'],
vision: true,
functionCall: true,
reasoning: true,
reasoningEffort: 'medium',
verbosity: 'medium',
maxCompletionTokens: 128000,
type: ModelType.Chat
},

⚠️ Potential issue

Fix substring matching collisions for GPT‑5 entries (gpt‑5 matches gpt‑5-mini/nano)

getProviderSpecificModelConfig uses substring matching (includes). With the current order, 'openai/gpt-5' will match 'openai/gpt-5-mini' and 'openai/gpt-5-nano' before their specific blocks are reached, returning the wrong config. Reorder specific variants before the generic gpt‑5.

Apply this diff to reorder GPT‑5 entries:

       {
-        id: 'gpt-5',
-        name: 'OpenAI: GPT-5',
-        temperature: 0.7,
-        maxTokens: 128000,
-        contextLength: 400000,
-        match: ['openai/gpt-5', 'openai/gpt-5-2025-08-07'],
-        vision: true,
-        functionCall: true,
-        reasoning: true,
-        reasoningEffort: 'medium',
-        verbosity: 'medium',
-        maxCompletionTokens: 128000,
-        type: ModelType.Chat
-      },
-      {
         id: 'gpt-5-mini',
         name: 'OpenAI: GPT-5 Mini',
         temperature: 0.7,
         maxTokens: 128000,
         contextLength: 400000,
         match: ['openai/gpt-5-mini', 'openai/gpt-5-mini-2025-08-07'],
         vision: true,
         functionCall: true,
         reasoning: true,
         reasoningEffort: 'medium',
         verbosity: 'medium',
         maxCompletionTokens: 128000,
         type: ModelType.Chat
       },
       {
         id: 'gpt-5-nano',
         name: 'OpenAI: GPT-5 Nano',
         temperature: 0.7,
         maxTokens: 128000,
         contextLength: 400000,
         match: ['openai/gpt-5-nano', 'openai/gpt-5-nano-2025-08-07'],
         vision: true,
         functionCall: true,
         reasoning: true,
         reasoningEffort: 'medium',
         verbosity: 'medium',
         maxCompletionTokens: 128000,
         type: ModelType.Chat
-      },
+      },
+      {
+        id: 'gpt-5',
+        name: 'OpenAI: GPT-5',
+        temperature: 0.7,
+        maxTokens: 128000,
+        contextLength: 400000,
+        match: ['openai/gpt-5', 'openai/gpt-5-2025-08-07'],
+        vision: true,
+        functionCall: true,
+        reasoning: true,
+        reasoningEffort: 'medium',
+        verbosity: 'medium',
+        maxCompletionTokens: 128000,
+        type: ModelType.Chat
+      },

Additionally, consider hardening the matcher to avoid substring collisions (see separate suggestion below).

📝 Committable suggestion

Suggested change

{
  id: 'gpt-5-mini',
  name: 'OpenAI: GPT-5 Mini',
  temperature: 0.7,
  maxTokens: 128000,
  contextLength: 400000,
  match: ['openai/gpt-5-mini', 'openai/gpt-5-mini-2025-08-07'],
  vision: true,
  functionCall: true,
  reasoning: true,
  reasoningEffort: 'medium',
  verbosity: 'medium',
  maxCompletionTokens: 128000,
  type: ModelType.Chat
},
{
  id: 'gpt-5-nano',
  name: 'OpenAI: GPT-5 Nano',
  temperature: 0.7,
  maxTokens: 128000,
  contextLength: 400000,
  match: ['openai/gpt-5-nano', 'openai/gpt-5-nano-2025-08-07'],
  vision: true,
  functionCall: true,
  reasoning: true,
  reasoningEffort: 'medium',
  verbosity: 'medium',
  maxCompletionTokens: 128000,
  type: ModelType.Chat
},
{
  id: 'gpt-5',
  name: 'OpenAI: GPT-5',
  temperature: 0.7,
  maxTokens: 128000,
  contextLength: 400000,
  match: ['openai/gpt-5', 'openai/gpt-5-2025-08-07'],
  vision: true,
  functionCall: true,
  reasoning: true,
  reasoningEffort: 'medium',
  verbosity: 'medium',
  maxCompletionTokens: 128000,
  type: ModelType.Chat
},
🤖 Prompt for AI Agents
In src/main/presenter/configPresenter/providerModelSettings.ts around lines 2357
to 2400, the generic 'gpt-5' entry can be matched before its specific variants
due to substring matching; fix by moving the 'gpt-5-mini' and 'gpt-5-nano'
objects so they appear before the generic 'gpt-5' object in the models array
(ensure their full match strings remain unchanged), and as an additional
hardening step update the provider matching logic to prefer exact/anchored
matches or longest-match-first (e.g., exact equality or regex anchors) instead
of simple includes so specific model IDs are matched before generic prefixes.

Comment on lines +2417 to +2457
id: 'o1',
name: 'OpenAI: o1',
temperature: 1,
maxTokens: 100000,
contextLength: 200000,
match: ['openai/o1'],
vision: false,
functionCall: false,
reasoning: true,
reasoningEffort: 'medium',
maxCompletionTokens: 100000,
type: ModelType.Chat
},
{
id: 'o1-pro',
name: 'OpenAI: o1-pro',
temperature: 1,
maxTokens: 100000,
contextLength: 200000,
match: ['openai/o1-pro'],
vision: false,
functionCall: false,
reasoning: true,
reasoningEffort: 'medium',
maxCompletionTokens: 100000,
type: ModelType.Chat
},
{
id: 'o1-preview',
name: 'OpenAI: o1-preview',
temperature: 1,
maxTokens: 32768,
contextLength: 128000,
match: ['openai/o1-preview', 'openai/o1-preview-2024-09-12'],
vision: false,
functionCall: false,
reasoning: true,
reasoningEffort: 'medium',
maxCompletionTokens: 32768,
type: ModelType.Chat
},

⚠️ Potential issue

Fix substring matching collisions for O1 entries (o1 matches o1-preview/pro)

Because matching uses includes, 'openai/o1' will match 'openai/o1-preview' and 'openai/o1-pro' first. Reorder to list the specific variants before the generic 'o1'.

Apply this diff to reorder O1 entries:

       {
         id: 'o1-mini',
         name: 'OpenAI: o1-mini',
         temperature: 1,
         maxTokens: 65536,
         contextLength: 128000,
         match: ['openai/o1-mini', 'openai/o1-mini-2024-09-12'],
         vision: false,
         functionCall: false,
         reasoning: true,
         reasoningEffort: 'medium',
         maxCompletionTokens: 65536,
         type: ModelType.Chat
       },
-      {
-        id: 'o1',
-        name: 'OpenAI: o1',
-        temperature: 1,
-        maxTokens: 100000,
-        contextLength: 200000,
-        match: ['openai/o1'],
-        vision: false,
-        functionCall: false,
-        reasoning: true,
-        reasoningEffort: 'medium',
-        maxCompletionTokens: 100000,
-        type: ModelType.Chat
-      },
-      {
-        id: 'o1-pro',
-        name: 'OpenAI: o1-pro',
-        temperature: 1,
-        maxTokens: 100000,
-        contextLength: 200000,
-        match: ['openai/o1-pro'],
-        vision: false,
-        functionCall: false,
-        reasoning: true,
-        reasoningEffort: 'medium',
-        maxCompletionTokens: 100000,
-        type: ModelType.Chat
-      },
       {
         id: 'o1-preview',
         name: 'OpenAI: o1-preview',
         temperature: 1,
         maxTokens: 32768,
         contextLength: 128000,
         match: ['openai/o1-preview', 'openai/o1-preview-2024-09-12'],
         vision: false,
         functionCall: false,
         reasoning: true,
         reasoningEffort: 'medium',
         maxCompletionTokens: 32768,
         type: ModelType.Chat
       },
+      {
+        id: 'o1-pro',
+        name: 'OpenAI: o1-pro',
+        temperature: 1,
+        maxTokens: 100000,
+        contextLength: 200000,
+        match: ['openai/o1-pro'],
+        vision: false,
+        functionCall: false,
+        reasoning: true,
+        reasoningEffort: 'medium',
+        maxCompletionTokens: 100000,
+        type: ModelType.Chat
+      },
+      {
+        id: 'o1',
+        name: 'OpenAI: o1',
+        temperature: 1,
+        maxTokens: 100000,
+        contextLength: 200000,
+        match: ['openai/o1'],
+        vision: false,
+        functionCall: false,
+        reasoning: true,
+        reasoningEffort: 'medium',
+        maxCompletionTokens: 100000,
+        type: ModelType.Chat
+      },
📝 Committable suggestion

Suggested change

{
  id: 'o1-mini',
  name: 'OpenAI: o1-mini',
  temperature: 1,
  maxTokens: 65536,
  contextLength: 128000,
  match: ['openai/o1-mini', 'openai/o1-mini-2024-09-12'],
  vision: false,
  functionCall: false,
  reasoning: true,
  reasoningEffort: 'medium',
  maxCompletionTokens: 65536,
  type: ModelType.Chat
},
{
  id: 'o1-preview',
  name: 'OpenAI: o1-preview',
  temperature: 1,
  maxTokens: 32768,
  contextLength: 128000,
  match: ['openai/o1-preview', 'openai/o1-preview-2024-09-12'],
  vision: false,
  functionCall: false,
  reasoning: true,
  reasoningEffort: 'medium',
  maxCompletionTokens: 32768,
  type: ModelType.Chat
},
{
  id: 'o1-pro',
  name: 'OpenAI: o1-pro',
  temperature: 1,
  maxTokens: 100000,
  contextLength: 200000,
  match: ['openai/o1-pro'],
  vision: false,
  functionCall: false,
  reasoning: true,
  reasoningEffort: 'medium',
  maxCompletionTokens: 100000,
  type: ModelType.Chat
},
{
  id: 'o1',
  name: 'OpenAI: o1',
  temperature: 1,
  maxTokens: 100000,
  contextLength: 200000,
  match: ['openai/o1'],
  vision: false,
  functionCall: false,
  reasoning: true,
  reasoningEffort: 'medium',
  maxCompletionTokens: 100000,
  type: ModelType.Chat
},
🤖 Prompt for AI Agents
In src/main/presenter/configPresenter/providerModelSettings.ts around lines 2417
to 2457, the model matching uses substring includes so the generic entry with
match ['openai/o1'] will incorrectly match o1-preview and o1-pro; reorder the
array so the more specific entries ('openai/o1-preview', 'openai/o1-pro'
variants) appear before the generic 'openai/o1' entry to avoid collisions, i.e.,
move the o1-preview and o1-pro objects above the o1 object so specific matches
are checked first.

Comment on lines +2460 to 2514
id: 'o3',
name: 'OpenAI: o3',
temperature: 1,
maxTokens: 100000,
contextLength: 200000,
match: ['openai/o3'],
vision: false,
functionCall: false,
reasoning: true,
reasoningEffort: 'medium',
maxCompletionTokens: 100000,
type: ModelType.Chat
},
{
id: 'o3-mini',
name: 'OpenAI: o3-mini',
temperature: 1,
maxTokens: 65536,
contextLength: 128000,
match: ['openai/o3-mini'],
vision: false,
functionCall: false,
reasoning: true,
reasoningEffort: 'medium',
maxCompletionTokens: 65536,
type: ModelType.Chat
},
{
id: 'o3-mini-high',
name: 'OpenAI: o3-mini-high',
temperature: 1,
maxTokens: 65536,
contextLength: 128000,
match: ['openai/o3-mini-high'],
vision: false,
functionCall: false,
reasoning: true,
reasoningEffort: 'high',
maxCompletionTokens: 65536,
type: ModelType.Chat
},
{
id: 'o3-pro',
name: 'OpenAI: o3-pro',
temperature: 1,
maxTokens: 100000,
contextLength: 200000,
match: ['openai/o3-pro'],
vision: false,
functionCall: false,
reasoning: true,
reasoningEffort: 'medium',
maxCompletionTokens: 100000,
type: ModelType.Chat
}

⚠️ Potential issue

Fix substring matching collisions for O3 entries (o3 matches o3-mini/pro)

Same collision pattern: 'openai/o3' will match 'openai/o3-mini', 'openai/o3-mini-high', and 'openai/o3-pro'. Reorder specific variants before generic 'o3'.

Apply this diff:

-      {
-        id: 'o3',
-        name: 'OpenAI: o3',
-        temperature: 1,
-        maxTokens: 100000,
-        contextLength: 200000,
-        match: ['openai/o3'],
-        vision: false,
-        functionCall: false,
-        reasoning: true,
-        reasoningEffort: 'medium',
-        maxCompletionTokens: 100000,
-        type: ModelType.Chat
-      },
-      {
-        id: 'o3-mini',
-        name: 'OpenAI: o3-mini',
-        temperature: 1,
-        maxTokens: 65536,
-        contextLength: 128000,
-        match: ['openai/o3-mini'],
-        vision: false,
-        functionCall: false,
-        reasoning: true,
-        reasoningEffort: 'medium',
-        maxCompletionTokens: 65536,
-        type: ModelType.Chat
-      },
       {
         id: 'o3-mini-high',
         name: 'OpenAI: o3-mini-high',
         temperature: 1,
         maxTokens: 65536,
         contextLength: 128000,
         match: ['openai/o3-mini-high'],
         vision: false,
         functionCall: false,
         reasoning: true,
         reasoningEffort: 'high',
         maxCompletionTokens: 65536,
         type: ModelType.Chat
       },
+      {
+        id: 'o3-mini',
+        name: 'OpenAI: o3-mini',
+        temperature: 1,
+        maxTokens: 65536,
+        contextLength: 128000,
+        match: ['openai/o3-mini'],
+        vision: false,
+        functionCall: false,
+        reasoning: true,
+        reasoningEffort: 'medium',
+        maxCompletionTokens: 65536,
+        type: ModelType.Chat
+      },
       {
         id: 'o3-pro',
         name: 'OpenAI: o3-pro',
         temperature: 1,
         maxTokens: 100000,
         contextLength: 200000,
         match: ['openai/o3-pro'],
         vision: false,
         functionCall: false,
         reasoning: true,
         reasoningEffort: 'medium',
         maxCompletionTokens: 100000,
         type: ModelType.Chat
-      }
+      },
+      {
+        id: 'o3',
+        name: 'OpenAI: o3',
+        temperature: 1,
+        maxTokens: 100000,
+        contextLength: 200000,
+        match: ['openai/o3'],
+        vision: false,
+        functionCall: false,
+        reasoning: true,
+        reasoningEffort: 'medium',
+        maxCompletionTokens: 100000,
+        type: ModelType.Chat
+      }
📝 Committable suggestion

Suggested change

{
  id: 'o3-mini-high',
  name: 'OpenAI: o3-mini-high',
  temperature: 1,
  maxTokens: 65536,
  contextLength: 128000,
  match: ['openai/o3-mini-high'],
  vision: false,
  functionCall: false,
  reasoning: true,
  reasoningEffort: 'high',
  maxCompletionTokens: 65536,
  type: ModelType.Chat
},
{
  id: 'o3-mini',
  name: 'OpenAI: o3-mini',
  temperature: 1,
  maxTokens: 65536,
  contextLength: 128000,
  match: ['openai/o3-mini'],
  vision: false,
  functionCall: false,
  reasoning: true,
  reasoningEffort: 'medium',
  maxCompletionTokens: 65536,
  type: ModelType.Chat
},
{
  id: 'o3-pro',
  name: 'OpenAI: o3-pro',
  temperature: 1,
  maxTokens: 100000,
  contextLength: 200000,
  match: ['openai/o3-pro'],
  vision: false,
  functionCall: false,
  reasoning: true,
  reasoningEffort: 'medium',
  maxCompletionTokens: 100000,
  type: ModelType.Chat
},
{
  id: 'o3',
  name: 'OpenAI: o3',
  temperature: 1,
  maxTokens: 100000,
  contextLength: 200000,
  match: ['openai/o3'],
  vision: false,
  functionCall: false,
  reasoning: true,
  reasoningEffort: 'medium',
  maxCompletionTokens: 100000,
  type: ModelType.Chat
}
🤖 Prompt for AI Agents
In src/main/presenter/configPresenter/providerModelSettings.ts around lines 2460
to 2514, the generic 'o3' match ('openai/o3') will substring-match more specific
variants ('openai/o3-mini', 'openai/o3-mini-high', 'openai/o3-pro'); reorder the
model entries so the specific variants (o3-mini-high, o3-mini, o3-pro) appear
before the generic 'o3' entry to ensure exact matching precedence, preserving
all other fields as-is.

@zerob13 zerob13 merged commit b11d1a9 into dev Aug 18, 2025
2 checks passed
@zerob13 zerob13 deleted the feat/openrouter-gpt5-reasoning-effort-support branch November 23, 2025 13:15