
Conversation


@yyhhyyyyyy yyhhyyyyyy commented Aug 15, 2025

fix: resolve reasoningEffort parameter transmission issues

  • Add reasoningEffort and verbosity fields to CONVERSATION_SETTINGS type
  • Add database migration v6 for new reasoning_effort and verbosity columns (see the sketch after this list)
  • Fix Ollama provider to handle gpt-oss thinking field correctly
  • Move reasoning_effort parameter to top-level in Ollama API calls
  • Remove type assertions in TitleView component
  • Unify optional field handling in database operations
  • Add missing field initializations in chat store
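A minimal sketch of what that v6 migration could look like, assuming the per-table getMigrationSQL(version) convention that the review below confirms; the column names and nullable TEXT affinity come from the PR itself:

    // Hypothetical sketch, not the repo's actual code.
    class ConversationsTable {
      getMigrationSQL(version: number): string | null {
        if (version === 6) {
          return [
            'ALTER TABLE conversations ADD COLUMN reasoning_effort TEXT;',
            'ALTER TABLE conversations ADD COLUMN verbosity TEXT;'
          ].join('\n')
        }
        return null
      }
    }

Both columns are nullable, so existing rows keep working and absent settings simply read back as undefined.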

Summary by CodeRabbit

  • New Features
    • Added conversation settings for Reasoning Effort (minimal/low/medium/high) and Verbosity (low/medium/high), now persisted across sessions.
    • Enhanced responses to include model reasoning content when available; streaming chats can show live reasoning updates.
  • Chores
    • Database updated to store Reasoning Effort and Verbosity for conversations.
    • Internal configuration extended to support new settings across the app.


coderabbitai bot commented Aug 15, 2025

Walkthrough

Adds reasoning/thinking support across the stack: the Ollama provider now parses response.message.thinking and streams reasoning chunks, and reasoning_effort moves to the top level of the request. The conversation schema gains reasoning_effort and verbosity columns with migration v6 and end-to-end wiring. The renderer store and TitleView handle the new settings, and shared types are updated accordingly.

Changes

Cohort / File(s): Summary

Provider: Ollama reasoning support
src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
Parses response.message.thinking for both non-streaming and streaming responses; emits a reasoning event per chunk; retains <think> tag handling; moves reasoning_effort to the top level of the chat params; comment adjustments.

DB: Conversation settings + migration
src/main/presenter/sqlitePresenter/tables/conversations.ts
Adds reasoning_effort and verbosity columns (migration v6); extends ConversationRow; CRUD and mapping updated to surface settings.reasoningEffort and settings.verbosity.

Renderer: UI sync
src/renderer/src/components/TitleView.vue
Uses typed access for reasoningEffort/verbosity; watchers now assign from the store directly; potential undefined propagation due to unconditional assignment.

Renderer: Store config
src/renderer/src/stores/chat.ts
Extends CONVERSATION_SETTINGS in chatConfig with thinkingBudget, reasoningEffort, and verbosity (initialized to undefined).

Shared types
src/shared/presenter.d.ts
Adds optional reasoningEffort ('minimal' | 'low' | 'medium' | 'high') and verbosity ('low' | 'medium' | 'high') to CONVERSATION_SETTINGS.
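For reference, a sketch of those type additions; the union members come from the summary above, and the surrounding settings are elided:

    // Sketch only: optional reasoning fields added to CONVERSATION_SETTINGS.
    type ReasoningEffort = 'minimal' | 'low' | 'medium' | 'high'
    type Verbosity = 'low' | 'medium' | 'high'

    interface CONVERSATION_SETTINGS {
      // ...existing settings (temperature, contextLength, maxTokens, systemPrompt, ...)
      reasoningEffort?: ReasoningEffort
      verbosity?: Verbosity
    }

Keeping both fields optional is what makes the change non-breaking for existing conversations.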

Sequence Diagram(s)

sequenceDiagram
  participant UI as TitleView.vue
  participant Store as chat store
  participant Main as Presenter
  participant Prov as OllamaProvider
  participant API as Ollama API

  UI->>Store: Read/update chatConfig (reasoningEffort, verbosity)
  Store->>Main: Start chat(session settings)
  Main->>Prov: build chatParams(reasoning_effort top-level)
  Prov->>API: POST /chat (reasoning_effort, messages)
  API-->>Prov: response (thinking, content)
  alt Streaming
    loop chunks
      API-->>Prov: chunk(thinking/content)
      Prov-->>Main: reasoning event (thinking)
      Prov-->>Main: content delta
    end
  else Non-stream
    Prov-->>Main: result (reasoning_content from thinking or <think>, content)
  end
  Main-->>Store: deliver messages/events
  Store-->>UI: update view

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested reviewers

  • zerob13

Poem

A twitch of whiskers, thoughts take wing,
Little hops of “thinking” stream and sing.
Tables grow fields, tidy and terse,
Providers muse, then send their verse.
Effort and voice set just right—
Rabbit approves: logic light and bright. 🐇✨




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🔭 Outside diff range comments (1)
src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts (1)

496-509: Move reasoning_effort into options — Ollama expects it there

Ollama's chat API does not accept a top-level reasoning_effort; put it under options (or use top-level think to toggle thinking).

  • File: src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts (around lines 496–509)

Suggested replacement snippet:

      // Ollama chat parameters
      const chatParams = {
        model: modelId,
        messages: processedMessages,
        options: {
          temperature: temperature || 0.7,
          num_predict: maxTokens,
          ...(modelConfig?.reasoningEffort && { reasoning_effort: modelConfig.reasoningEffort })
        },
        stream: true as const,
        ...(supportsFunctionCall && ollamaTools && ollamaTools.length > 0
          ? { tools: ollamaTools }
          : {})
      }
🧹 Nitpick comments (4)
src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts (2)

719-721: Emit usage once per chunk, not per character

Yielding a usage event inside the per-character loop is noisy and inefficient. Emit one usage event after finishing a chunk instead.

Apply this minimal diff:

@@
-          if (usage) {
-            yield { type: 'usage', usage }
-          }
+          // Defer usage emission until the end of this chunk
@@
-        } // 字符循环结束
+        } // 字符循环结束
+        if (usage) {
+          yield { type: 'usage', usage }
+        }

Also applies to: 947-949


204-205: Use English for comments (codebase guideline)

Replace the Chinese comment with English to keep consistency across TS/JS/Vue files.

-        // 没有特殊格式,所有内容作为普通内容
+        // No special format detected; treat everything as normal content
src/main/presenter/sqlitePresenter/tables/conversations.ts (1)

134-160: Avoid storing the literal string 'NULL' for enabled_mcp_tools

The insert currently uses the string 'NULL' when enabledMcpTools is absent. This writes "NULL" (text) instead of SQL NULL, which is semantically wrong and complicates queries.

Apply this diff:

-      settings.enabledMcpTools ? JSON.stringify(settings.enabledMcpTools) : 'NULL',
+      settings.enabledMcpTools ? JSON.stringify(settings.enabledMcpTools) : null,

Note: better-sqlite3 will bind JS null as SQL NULL.
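To make the difference concrete, a quick standalone illustration (in-memory database, throwaway table name):

    import Database from 'better-sqlite3'

    const db = new Database(':memory:')
    db.exec('CREATE TABLE t (v TEXT)')
    db.prepare('INSERT INTO t (v) VALUES (?)').run(null)   // stored as SQL NULL
    db.prepare('INSERT INTO t (v) VALUES (?)').run('NULL') // stored as the text 'NULL'

    const rows = db.prepare('SELECT typeof(v) AS type FROM t').all()
    console.log(rows) // [ { type: 'null' }, { type: 'text' } ]

Queries like WHERE v IS NULL only match the first row, which is why binding real null matters.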

src/renderer/src/components/TitleView.vue (1)

261-274: Remove the as any when updating chat config

The payload conforms to Partial<CONVERSATION_SETTINGS>; the cast is unnecessary and hides type issues.

-      chatStore.updateChatConfig({
+      chatStore.updateChatConfig({
         temperature: newTemp,
         contextLength: newContext,
         maxTokens: newMaxTokens,
         systemPrompt: newSystemPrompt,
         artifacts: newArtifacts,
         thinkingBudget: newThinkingBudget,
         reasoningEffort: newReasoningEffort,
         verbosity: newVerbosity
-      } as any)
+      })
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between ae13c8a and 9bf53ad.

📒 Files selected for processing (5)
  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts (4 hunks)
  • src/main/presenter/sqlitePresenter/tables/conversations.ts (9 hunks)
  • src/renderer/src/components/TitleView.vue (3 hunks)
  • src/renderer/src/stores/chat.ts (1 hunks)
  • src/shared/presenter.d.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (20)
**/*.{ts,tsx,js,jsx,vue}

📄 CodeRabbit Inference Engine (CLAUDE.md)

Use English for logs and comments

Files:

  • src/shared/presenter.d.ts
  • src/renderer/src/components/TitleView.vue
  • src/renderer/src/stores/chat.ts
  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
  • src/main/presenter/sqlitePresenter/tables/conversations.ts
**/*.{ts,tsx}

📄 CodeRabbit Inference Engine (CLAUDE.md)

Strict type checking enabled for TypeScript

**/*.{ts,tsx}: Always wrap error-prone operations in try-catch
Provide meaningful error messages
Record detailed error logs
Degrade gracefully on failure
Logs should include timestamp, log level, error code, error description, stack trace (where applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement error retry mechanisms
Avoid logging sensitive information
Use structured logging
Set appropriate log levels

Files:

  • src/shared/presenter.d.ts
  • src/renderer/src/stores/chat.ts
  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
  • src/main/presenter/sqlitePresenter/tables/conversations.ts
src/shared/**/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

Shared types in src/shared/

Files:

  • src/shared/presenter.d.ts
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit Inference Engine (.cursor/rules/development-setup.mdc)

**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English

Files:

  • src/shared/presenter.d.ts
  • src/renderer/src/stores/chat.ts
  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
  • src/main/presenter/sqlitePresenter/tables/conversations.ts
src/shared/*.d.ts

📄 CodeRabbit Inference Engine (.cursor/rules/electron-best-practices.mdc)

The shared/*.d.ts files are used to define the types of objects exposed by the main process to the renderer process

Files:

  • src/shared/presenter.d.ts
src/shared/**/*.{ts,tsx,d.ts}

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

Shared type definitions go in the shared directory

Files:

  • src/shared/presenter.d.ts
src/renderer/src/**/*.vue

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/renderer/src/**/*.vue: Use Composition API for all Vue 3 components
Use Tailwind CSS with scoped styles for styling
Organize components by feature in src/renderer/src/
Follow existing component patterns in src/renderer/src/ when creating new UI components
Use Composition API with proper TypeScript typing for new UI components
Implement responsive design with Tailwind CSS for new UI components
Add proper error handling and loading states for new UI components

Use scoped styles to prevent CSS conflicts between components

Files:

  • src/renderer/src/components/TitleView.vue
src/renderer/src/**/*.{ts,tsx,vue}

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/renderer/src/**/*.{ts,tsx,vue}: Use Pinia for frontend state management
Renderer to Main: Use usePresenter.ts composable for direct presenter method calls

Files:

  • src/renderer/src/components/TitleView.vue
  • src/renderer/src/stores/chat.ts
src/renderer/src/**/*

📄 CodeRabbit Inference Engine (.cursor/rules/i18n.mdc)

src/renderer/src/**/*: All user-facing strings must use i18n keys (avoid hardcoded user-visible text in code)
Use the 'vue-i18n' framework for all internationalization in the renderer
Ensure all user-visible text in the renderer uses the translation system

Files:

  • src/renderer/src/components/TitleView.vue
  • src/renderer/src/stores/chat.ts
src/renderer/**/*.{vue,ts,js,tsx,jsx}

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

Renderer process code goes in src/renderer

Files:

  • src/renderer/src/components/TitleView.vue
  • src/renderer/src/stores/chat.ts
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-best-practices.mdc)

src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability
Implement proper state management with Pinia
Utilize Vue Router for navigation and route management
Leverage Vue's built-in reactivity system for efficient data handling

Files:

  • src/renderer/src/components/TitleView.vue
  • src/renderer/src/stores/chat.ts
src/renderer/**/*.{ts,tsx,vue}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,tsx,vue}: Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
Use TypeScript for all code; prefer types over interfaces.
Avoid enums; use const objects instead.
Use arrow functions for methods and computed properties.
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements.

Files:

  • src/renderer/src/components/TitleView.vue
  • src/renderer/src/stores/chat.ts
src/renderer/**/*.{vue,ts}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)

Implement lazy loading for routes and components.

Files:

  • src/renderer/src/components/TitleView.vue
  • src/renderer/src/stores/chat.ts
src/renderer/**/*.{ts,vue}

📄 CodeRabbit Inference Engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching.
Implement SEO best practices using Nuxt's useHead and useSeoMeta.

Files:

  • src/renderer/src/components/TitleView.vue
  • src/renderer/src/stores/chat.ts
src/{main,renderer}/**/*.ts

📄 CodeRabbit Inference Engine (.cursor/rules/electron-best-practices.mdc)

src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging

Files:

  • src/renderer/src/stores/chat.ts
  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
  • src/main/presenter/sqlitePresenter/tables/conversations.ts
src/renderer/src/stores/**/*.{vue,ts,tsx,js,jsx}

📄 CodeRabbit Inference Engine (.cursor/rules/pinia-best-practices.mdc)

src/renderer/src/stores/**/*.{vue,ts,tsx,js,jsx}: Use modules to organize related state and actions
Implement proper state persistence for maintaining data across sessions
Use getters for computed state properties
Utilize actions for side effects and asynchronous operations
Keep the store focused on global state, not component-specific data

Files:

  • src/renderer/src/stores/chat.ts
src/main/**/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

Main to Renderer: Use EventBus to broadcast events via mainWindow.webContents.send()

Use Electron's built-in APIs for file system and native dialogs

Files:

  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
  • src/main/presenter/sqlitePresenter/tables/conversations.ts
src/main/presenter/**/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

One presenter per functional domain

Files:

  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
  • src/main/presenter/sqlitePresenter/tables/conversations.ts
src/main/presenter/llmProviderPresenter/providers/*.ts

📄 CodeRabbit Inference Engine (CLAUDE.md)

src/main/presenter/llmProviderPresenter/providers/*.ts: Create provider file in src/main/presenter/llmProviderPresenter/providers/ when adding a new LLM provider
Implement coreStream method following standardized event interface in LLM provider files

src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (`tool_call_star...

Files:

  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
src/main/**/*.{ts,js,tsx,jsx}

📄 CodeRabbit Inference Engine (.cursor/rules/project-structure.mdc)

Main process code goes in src/main

Files:

  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
  • src/main/presenter/sqlitePresenter/tables/conversations.ts
🧠 Learnings (10)
📚 Learning: 2025-07-21T01:47:28.817Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/vue-best-practices.mdc:0-0
Timestamp: 2025-07-21T01:47:28.817Z
Learning: Applies to src/renderer/src/**/*.{vue,ts,tsx,js,jsx} : Leverage Vue's built-in reactivity system for efficient data handling

Applied to files:

  • src/renderer/src/components/TitleView.vue
📚 Learning: 2025-06-21T15:49:17.044Z
Learnt from: neoragex2002
PR: ThinkInAIXYZ/deepchat#550
File: src/renderer/src/stores/chat.ts:1011-1035
Timestamp: 2025-06-21T15:49:17.044Z
Learning: In src/renderer/src/stores/chat.ts, the user prefers to keep both `text` and `content` properties in the `handleMeetingInstruction` function's `sendMessage` call, even though they are redundant, rather than removing the `content` property.

Applied to files:

  • src/renderer/src/stores/chat.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield reasoning events in the standardized format when applicable.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield text events in the standardized format.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
📚 Learning: 2025-07-21T01:45:33.790Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-07-21T01:45:33.790Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Implement coreStream method following standardized event interface in LLM provider files

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should implement helper methods such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed for provider-specific logic.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : The `coreStream` method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
🔇 Additional comments (11)
src/shared/presenter.d.ts (1)

628-630: Types added to CONVERSATION_SETTINGS look consistent and non-breaking

Adding optional reasoningEffort and verbosity aligns with ModelConfig/DefaultModelSetting and downstream usage. No API breakage and good forward-compat.

src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts (1)

178-186: Non-streaming “thinking” handling is correct and preserves content

The logic cleanly prioritizes response.message.thinking, falls back to an embedded <think>…</think> block, and finally to plain content. Good preservation of normal content outside the think tag.

Also applies to: 187-206
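A condensed sketch of that priority order (standalone types, simplified from the actual provider code):

    // Sketch, not the repo's code: reasoning extraction for a non-streaming response.
    interface OllamaMessage {
      thinking?: string
      content: string
    }

    function extractReasoning(message: OllamaMessage): { reasoning: string; content: string } {
      // 1. Prefer the native thinking field when present
      if (message.thinking) {
        return { reasoning: message.thinking, content: message.content }
      }
      // 2. Fall back to an embedded <think>...</think> block
      const match = message.content.match(/<think>([\s\S]*?)<\/think>/)
      if (match) {
        return {
          reasoning: match[1].trim(),
          content: message.content.replace(match[0], '').trim()
        }
      }
      // 3. Otherwise everything is plain content
      return { reasoning: '', content: message.content }
    }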

src/main/presenter/sqlitePresenter/tables/conversations.ts (6)

22-24: Row schema extended correctly

Adding nullable TEXT columns reasoning_effort and verbosity at the row type level matches the new settings and DB schema.


114-115: Schema version bump acknowledged

Returning 6 here keeps the migration path aligned. Good.


183-186: Mapping logic from DB → settings is solid

thinkingBudget null-to-undefined normalization and union type narrowing for reasoningEffort/verbosity are correct.

Also applies to: 212-217


277-284: Update path covers new fields

Updates for reasoning_effort and verbosity are properly gated. One edge case: decide what passing undefined should mean. If "unset" should mean "leave the column unchanged," guard with a value check (value !== undefined) so undefined fields are skipped; if an explicitly present field should clear the column, guard with 'field' in data.settings and bind null (better-sqlite3 binds JS null as SQL NULL). Either semantic is acceptable as long as it is intentional.
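The two guard styles side by side, as a standalone sketch (names assumed):

    // Sketch: "absent or undefined means skip" vs. "present key always writes".
    type Settings = { reasoningEffort?: string; verbosity?: string }
    const settings: Settings = { verbosity: undefined } // key present, value undefined

    const updates: string[] = []
    const params: (string | null)[] = []

    // Style A: skip undefined values entirely (reasoningEffort is not written here).
    if (settings.reasoningEffort !== undefined) {
      updates.push('reasoning_effort = ?')
      params.push(settings.reasoningEffort)
    }

    // Style B: write whenever the key is present, binding null to clear the column.
    if ('verbosity' in settings) {
      updates.push('verbosity = ?')
      params.push(settings.verbosity ?? null)
    }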


330-333: List query/mapper include new columns

Listing includes and maps the new fields correctly, consistent with get().

Also applies to: 358-363


100-108: Migration v6 confirmed — migrations are applied sequentially

Verified: migrate() collects getMigrationSQL(version) from each table, groups SQL by version, sorts versions ascending, and executes each version's SQL inside a transaction. The conversations table implements version === 6 to add reasoning_effort and verbosity.

Files inspected:

  • src/main/presenter/sqlitePresenter/tables/conversations.ts — getMigrationSQL(version === 6) adds reasoning_effort and verbosity.
  • src/main/presenter/sqlitePresenter/index.ts — migrate() aggregates per-version SQL and executes versions in ascending order (see lines ~160–200).

No changes required; approve.
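A rough sketch of that sequencing, with structural types standing in for the real classes:

    // Sketch: apply each schema version's SQL atomically, in ascending order.
    interface Db {
      exec(sql: string): void
      transaction(fn: () => void): () => void
    }
    interface TableModule {
      getMigrationSQL(version: number): string | null
    }

    function migrate(db: Db, tables: TableModule[], from: number, to: number): void {
      for (let version = from + 1; version <= to; version++) {
        const statements = tables
          .map((t) => t.getMigrationSQL(version))
          .filter((sql): sql is string => sql !== null)
        if (statements.length === 0) continue
        // Run the whole version inside one transaction so a failed ALTER rolls back cleanly.
        db.transaction(() => {
          for (const sql of statements) db.exec(sql)
        })()
      }
    }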

src/renderer/src/stores/chat.ts (1)

60-64: Store initialization includes new optional settings

Initializing thinkingBudget, reasoningEffort, verbosity as undefined is fine, given TitleView fills defaults from model config.

src/renderer/src/components/TitleView.vue (2)

107-109: Local refs for reasoningEffort/verbosity are wired correctly

Refs mirror store config and are typed as intended.


288-290: Store → local sync for new fields is correct

Mirrors chatConfig changes into local refs and triggers default model config reload on model/provider change.
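A sketch of that mirroring pattern (Vue 3 Composition API with Pinia; the store path and export name are assumed):

    import { ref, watch } from 'vue'
    import { useChatStore } from '@/stores/chat' // path/export assumed

    const chatStore = useChatStore()
    const reasoningEffort = ref<string | undefined>(undefined)

    // Mirror store changes into the local ref that backs the UI control.
    watch(
      () => chatStore.chatConfig.reasoningEffort,
      (value) => {
        reasoningEffort.value = value
      },
      { immediate: true }
    )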

Comment on lines +610 to +615
        // 处理 thinking 字段
        const currentThinking = chunk.message?.thinking || ''
        if (currentThinking) {
          yield { type: 'reasoning', reasoning_content: currentThinking }
        }

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue

Potential duplication: streaming message.thinking is likely cumulative — emit only the delta

Many providers expose thinking as the full-so-far content. Appending currentThinking verbatim each chunk will duplicate previously emitted text, causing runaway growth in the renderer (which concatenates reasoning_content). Track and emit only the delta.

Apply this diff to track and emit incremental deltas:

@@
-      // --- 流处理循环 ---
+      // --- 流处理循环 ---
+      // Track last emitted reasoning for streaming `thinking`
+      let lastThinking = ''
       for await (const chunk of stream) {
@@
-        // 处理 thinking 字段
-        const currentThinking = chunk.message?.thinking || ''
-        if (currentThinking) {
-          yield { type: 'reasoning', reasoning_content: currentThinking }
-        }
+        // Handle the thinking field (emit only the delta)
+        const currentThinking = chunk.message?.thinking ?? ''
+        if (currentThinking && currentThinking !== lastThinking) {
+          const delta = currentThinking.startsWith(lastThinking)
+            ? currentThinking.slice(lastThinking.length)
+            : currentThinking
+          if (delta) {
+            yield { type: 'reasoning', reasoning_content: delta }
+          }
+          lastThinking = currentThinking
+        }

@zerob13 zerob13 merged commit a1b2f7f into dev Aug 15, 2025
6 checks passed
@zerob13 zerob13 deleted the fix/reasoning-effort-transmission branch November 23, 2025 13:15
