
Conversation

@yyhhyyyyyy yyhhyyyyyy (Collaborator) commented Aug 17, 2025

Pass user reasoningEffort/verbosity settings to GPT-5 API requests

Summary by CodeRabbit

  • New Features
    • Added per-conversation controls for Reasoning Effort and Verbosity, enabling finer control over response depth and detail.
    • These settings are honored across initial generation, tool-continued replies, and permission restarts in both streaming and non-streaming modes.
    • Enhanced GPT‑5 compatibility: Reasoning Effort and Verbosity are applied when the model supports them.
    • Updated conversation settings to include the new options, ensuring prompts and token budgeting adapt to user-selected preferences.

@coderabbitai coderabbitai bot (Contributor) commented Aug 17, 2025

Walkthrough

Adds reasoningEffort and verbosity settings across the conversation pipeline: updates ThreadPresenter to read/pass them, extends LLMProviderPresenter.startStreamCompletion signature and modelConfig handling, and augments the OpenAI-compatible provider to include GPT‑5-specific request fields when applicable. Shared type declarations are updated accordingly.
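
For orientation, here is a minimal TypeScript sketch of the extended call surface. Only `thinkingBudget`, `reasoningEffort`, and `verbosity` are confirmed by this walkthrough; the surrounding parameters and the enum value sets (which mirror OpenAI's documented GPT‑5 request fields) are assumptions and may differ from the repo's actual typings.

```ts
type ReasoningEffort = 'minimal' | 'low' | 'medium' | 'high' // assumed value set
type Verbosity = 'low' | 'medium' | 'high'                   // assumed value set

interface ILlmProviderPresenterSketch {
  startStreamCompletion(
    providerId: string,
    messages: unknown[], // provider-specific chat messages (shape assumed)
    modelId: string,
    eventId: string,
    thinkingBudget?: number,           // existing optional setting
    reasoningEffort?: ReasoningEffort, // new: forwarded into modelConfig
    verbosity?: Verbosity              // new: forwarded into modelConfig
  ): void
}
```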

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Presenter API update**<br>`src/main/presenter/llmProviderPresenter/index.ts`, `src/shared/presenter.d.ts` | Adds optional `reasoningEffort` and `verbosity` params to `startStreamCompletion`; conditionally sets `modelConfig.reasoningEffort` and `modelConfig.verbosity`; updates interface/type declarations. |
| **ThreadPresenter propagation**<br>`src/main/presenter/threadPresenter/index.ts` | Reads `reasoningEffort` and `verbosity` from conversation settings; passes them (with `thinkingBudget`) through all streaming/continuation paths and prompt prep. |
| **OpenAI-compatible GPT‑5 fields**<br>`src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts` | When `modelId` starts with `gpt-5`, includes `reasoning_effort` and `verbosity` in both streaming and non‑streaming request params if present; retains existing temperature handling for reasoning models (see the sketch below). |
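
A hedged sketch of the provider-side behavior described in the last row. The `reasoning_effort` and `verbosity` field names come from the change summary; the helper name and the `modelConfig` shape are illustrative assumptions, not the repo's actual code.

```ts
// Attach GPT-5-only fields to an OpenAI-compatible request body.
// The same params object is used for streaming and non-streaming calls.
function applyGpt5RequestFields(
  params: Record<string, unknown>,
  modelId: string,
  modelConfig: { reasoningEffort?: string; verbosity?: string }
): void {
  if (!modelId.startsWith('gpt-5')) return
  // Include each field only when the user actually set it.
  if (modelConfig.reasoningEffort) params.reasoning_effort = modelConfig.reasoningEffort
  if (modelConfig.verbosity) params.verbosity = modelConfig.verbosity
}
```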

Sequence Diagram(s)

sequenceDiagram
  participant UI as ThreadPresenter
  participant LLM as LLMProviderPresenter
  participant OAI as OpenAICompatibleProvider
  participant API as OpenAI API

  UI->>LLM: startStreamCompletion(messages, modelId, ... , thinkingBudget, reasoningEffort?, verbosity?)
  LLM->>LLM: Apply to modelConfig if provided
  LLM->>OAI: handleChatCompletion(modelId, modelConfig, ...)
  alt modelId startsWith "gpt-5"
    OAI->>OAI: Add reasoning_effort / verbosity to request
  end
  OAI->>API: POST /chat/completions (stream or non-stream)
  API-->>OAI: Completion chunks / response
  OAI-->>LLM: Stream events
  LLM-->>UI: LLMAgentEvent stream

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes


Suggested reviewers

  • zerob13

Poem

I twitch my whiskers, configs in flight,
Reasoning turns from low to bright.
GPT‑5 hears my quiet plea,
“Be verbose? Or concise as me?”
Through streams I hop—budget in paw—
Carrots of tokens, code without flaw.
Thump! Another prompt—hooray! 🎉🐇


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
src/main/presenter/llmProviderPresenter/index.ts (1)

465-471: Avoid mutating shared modelConfig object in-place

Depending on how getModelConfig is implemented, in-place mutation may persist across calls. Consider cloning into a runtime object and passing that to coreStream to prevent accidental persistence.

Minimal adjustment (conceptual, outside this hunk):

  • const runtimeModelConfig = { ...modelConfig }
  • Apply overrides to runtimeModelConfig
  • Pass runtimeModelConfig to provider.coreStream

This keeps persistent config clean while honoring per-call overrides.
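
A minimal sketch of that suggestion, assuming a `ModelConfig` shape with the two new optional fields (all names here are illustrative):

```ts
interface ModelConfig {
  reasoningEffort?: string
  verbosity?: string
  [key: string]: unknown
}

// Shallow-clone so the object returned by getModelConfig is never
// mutated in place; per-call overrides live only on the clone.
function buildRuntimeModelConfig(
  modelConfig: ModelConfig,
  reasoningEffort?: string,
  verbosity?: string
): ModelConfig {
  const runtimeModelConfig: ModelConfig = { ...modelConfig }
  if (reasoningEffort !== undefined) runtimeModelConfig.reasoningEffort = reasoningEffort
  if (verbosity !== undefined) runtimeModelConfig.verbosity = verbosity
  return runtimeModelConfig // pass this, not modelConfig, to coreStream
}
```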

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR, between commits 97546cc and f9bc015.

📒 Files selected for processing (4)
  • src/main/presenter/llmProviderPresenter/index.ts (2 hunks)
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (1 hunks)
  • src/main/presenter/threadPresenter/index.ts (6 hunks)
  • src/shared/presenter.d.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (12)
**/*.{ts,tsx,js,jsx,vue}

📄 CodeRabbit Inference Engine (CLAUDE.md)

Use English for logs and comments

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/shared/presenter.d.ts
  • src/main/presenter/threadPresenter/index.ts
**/*.{ts,tsx}

📄 CodeRabbit Inference Engine (CLAUDE.md)

Strict type checking enabled for TypeScript

**/*.{ts,tsx}: Always use try-catch to handle potential errors
Provide meaningful error messages
Log detailed error information
Degrade gracefully
Logs should include a timestamp, log level, error code, error description, stack trace (if applicable), and relevant context
Log levels should include ERROR, WARN, INFO, and DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement an error retry mechanism
Avoid logging sensitive information
Use structured logging
Set appropriate log levels

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/shared/presenter.d.ts
  • src/main/presenter/threadPresenter/index.ts
src/main/**/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

Main to Renderer: Use EventBus to broadcast events via mainWindow.webContents.send()

Use Electron's built-in APIs for file system and native dialogs

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/index.ts
src/main/presenter/**/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

One presenter per functional domain

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/index.ts
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit Inference Engine (.cursor/rules/development-setup.mdc)

**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/shared/presenter.d.ts
  • src/main/presenter/threadPresenter/index.ts
src/{main,renderer}/**/*.ts

📄 CodeRabbit Inference Engine (.cursor/rules/electron-best-practices.mdc)

src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/index.ts
src/main/presenter/llmProviderPresenter/index.ts

📄 CodeRabbit Inference Engine (.cursor/rules/llm-agent-loop.mdc)

src/main/presenter/llmProviderPresenter/index.ts: src/main/presenter/llmProviderPresenter/index.ts should manage the overall Agent loop, conversation history, tool execution via McpPresenter, and frontend communication via eventBus.
The main Agent loop in llmProviderPresenter/index.ts should handle multi-round LLM calls and tool usage, maintaining conversation state and controlling the loop with needContinueConversation and toolCallCount.
The main Agent loop should send standardized STREAM_EVENTS (RESPONSE, END, ERROR) to the frontend via eventBus.
The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
src/main/**/*.{ts,js,tsx,jsx}

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

Main-process code goes in src/main

Files:

  • src/main/presenter/llmProviderPresenter/index.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
  • src/main/presenter/threadPresenter/index.ts
src/main/presenter/llmProviderPresenter/providers/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/main/presenter/llmProviderPresenter/providers/*.ts: Create provider file in src/main/presenter/llmProviderPresenter/providers/ when adding a new LLM provider
Implement coreStream method following standardized event interface in LLM provider files

src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (`tool_call_start`, `tool_call_chunk`, `tool_call_end`) in the standardized format.

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/shared/**/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

Shared types in src/shared/

Files:

  • src/shared/presenter.d.ts
src/shared/*.d.ts

📄 CodeRabbit Inference Engine (.cursor/rules/electron-best-practices.mdc)

The shared/*.d.ts files are used to define the types of objects exposed by the main process to the renderer process

Files:

  • src/shared/presenter.d.ts
src/shared/**/*.{ts,tsx,d.ts}

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

Shared type definitions go in the shared directory

Files:

  • src/shared/presenter.d.ts
🔇 Additional comments (5)
src/shared/presenter.d.ts (1)

569-571: Interface extension aligns with PR goal; signature looks correct

Adding reasoningEffort and verbosity (after thinkingBudget) to ILlmProviderPresenter.startStreamCompletion is consistent with the shared types and downstream usage.

src/main/presenter/llmProviderPresenter/index.ts (1)

446-449: Public API extended with reasoningEffort/verbosity — good, backward compatible

The added optional params keep call sites compatible and align with shared typings.

src/main/presenter/threadPresenter/index.ts (3)

1476-1491: Good: Propagates thinkingBudget, reasoningEffort, and verbosity into the main streaming call

Thread-level settings now flow into the provider for the initial stream start. This directly satisfies the PR goal.
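
Illustrative only — the settings shape and helper below are assumptions, not the repo's actual code; the point is that every call site forwards the same trio.

```ts
interface ConversationSettings {
  thinkingBudget?: number
  reasoningEffort?: 'minimal' | 'low' | 'medium' | 'high' // assumed value set
  verbosity?: 'low' | 'medium' | 'high'                   // assumed value set
}

// Extracting the trio once keeps the initial-stream, continuation,
// and permission-resume call sites consistent.
function pickGpt5Options(settings: ConversationSettings) {
  const { thinkingBudget, reasoningEffort, verbosity } = settings
  return { thinkingBudget, reasoningEffort, verbosity }
}
```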


1588-1597: Good: Continuation path also forwards GPT-5 options

Continuation after tool calls maintains parity by passing the same trio of settings.

Also applies to: 1666-1669


3676-3685: Good: Permission-resume path preserves GPT-5 options

Resuming after permission grant keeps reasoningEffort/verbosity in sync with conversation settings.

Also applies to: 3740-3743

@zerob13 merged commit 16854b2 into dev Aug 18, 2025
2 checks passed
@zerob13 deleted the fix/gpt5-reasoning-effort-verbosity-params branch November 23, 2025 13:15