Conversation

@zerob13 (Collaborator) commented Dec 18, 2025

Summary by CodeRabbit

  • Chores

    • Updated development dependencies.
  • Bug Fixes

    • Improved error handling for tool calls in function-calling models to ensure conversation history remains properly structured during error scenarios.


@coderabbitai bot (Contributor) commented Dec 18, 2025

Walkthrough

Updates markstream-vue dependency from 0.0.3-beta.3 to 0.0.3-beta.6. Modifies tool call error handling in toolCallProcessor to ensure tool_call entries are paired with preceding assistant messages, creating synthetic messages if needed.

Changes

Dependency Update
  • package.json: Bumps markstream-vue devDependency from 0.0.3-beta.3 to 0.0.3-beta.6

Tool Call Error Handling
  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts: Ensures tool_call entries are paired with an assistant message when errors occur during tool calls, updating or inserting tool_calls on the last assistant message, or creating a synthetic one if absent (see the sketch below)
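
A minimal sketch of what this pairing logic could look like, assuming an OpenAI-style message shape; all names here are hypothetical and may differ from the actual toolCallProcessor.ts code:

```ts
// Hypothetical sketch only: names and message shape are assumptions,
// not the actual toolCallProcessor.ts code.
interface ToolCallRef {
  id: string
  type: 'function'
  function: { name: string; arguments: string }
}

interface ChatMessage {
  role: 'system' | 'user' | 'assistant' | 'tool'
  content: string
  tool_calls?: ToolCallRef[]
  tool_call_id?: string
}

function appendToolErrorMessage(
  messages: ChatMessage[],
  toolCall: ToolCallRef,
  errorText: string
): void {
  // Find the most recent assistant message in the history.
  const lastAssistant = [...messages].reverse().find((m) => m.role === 'assistant')

  if (lastAssistant) {
    // Declare the tool_call on that assistant message if it is not already there.
    if (!lastAssistant.tool_calls) lastAssistant.tool_calls = []
    if (!lastAssistant.tool_calls.some((c) => c.id === toolCall.id)) {
      lastAssistant.tool_calls.push(toolCall)
    }
  } else {
    // Defensive fallback: synthesize an assistant message when none exists.
    messages.push({ role: 'assistant', content: '', tool_calls: [toolCall] })
  }

  // The tool error message is now always preceded by an assistant message
  // that declares the same tool_call_id.
  messages.push({ role: 'tool', content: errorText, tool_call_id: toolCall.id })
}
```

The key invariant is that a tool message never appears in the history without a preceding assistant message declaring the same tool_call_id.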

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

  • toolCallProcessor.ts: Review the error-handling logic that pairs tool_call entries with assistant messages, and validate the synthetic message creation path
  • package.json: Verify dependency version compatibility and review the changelog for markstream-vue 0.0.3-beta.6

Poem

🐰 A tool call stumbled, lost without a pair,
So we crafted an assistant message fair,
Paired them snugly in the history stream,
While markstream-vue dreams its beta dream!
Errors handled, dependencies bright—
The conversation flows just right! ✨

Pre-merge checks

✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'fix: tool call error' is directly related to the main change in the PR, which fixes tool call error handling in the toolCallProcessor by ensuring tool_call entries are properly paired with assistant messages.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

📜 Recent review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR, between commits 606a3e6 and 50507a3.

📒 Files selected for processing (2)
  • package.json (1 hunk)
  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts (1 hunk)
🧰 Additional context used
📓 Path-based instructions (15)
**/*.{ts,tsx,js,jsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Use English for logs and comments (Chinese text exists in legacy code, but new code should use English)

Files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Enable and maintain strict TypeScript type checking for all files

**/*.{ts,tsx}: Always use try-catch to handle possible errors in TypeScript code
Provide meaningful error messages when catching errors
Log detailed error logs including error details, context, and stack traces
Distinguish and handle different error types (UserError, NetworkError, SystemError, BusinessError) with appropriate handlers in TypeScript
Use structured logging with logger.error(), logger.warn(), logger.info(), logger.debug() methods from logging utilities
Do not suppress errors (avoid empty catch blocks or silently ignoring errors)
Provide user-friendly error messages for user-facing errors in TypeScript components
Implement error retry mechanisms for transient failures in TypeScript
Avoid logging sensitive information (passwords, tokens, PII) in logs

Files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
src/main/presenter/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Organize core business logic into dedicated Presenter classes, with one presenter per functional domain

Files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
src/main/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Use EventBus from src/main/eventbus.ts for main-to-renderer communication, broadcasting events via mainWindow.webContents.send()

src/main/**/*.ts: Use EventBus pattern for inter-process communication within the main process to decouple modules
Use Electron's built-in APIs for file system and native dialogs instead of Node.js or custom implementations

src/main/**/*.ts: Electron main process code belongs in src/main/ with presenters in presenter/ (Window/Tab/Thread/Mcp/Config/LLMProvider) and eventbus.ts for app events
Use the Presenter pattern in the main process for UI coordination

Files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Do not include AI co-authoring information (e.g., 'Co-Authored-By: Claude') in git commits

Files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
**/*.{js,ts,jsx,tsx,mjs,cjs}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

Write logs and comments in English

Files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
{src/main/presenter/**/*.ts,src/renderer/**/*.ts}

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Implement proper inter-process communication (IPC) patterns using Electron's ipcRenderer and ipcMain APIs

Files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
src/main/presenter/llmProviderPresenter/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

Define the standardized LLMCoreStreamEvent interface with fields: type (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), content (for text), reasoning_content (for reasoning), tool_call_id, tool_call_name, tool_call_arguments_chunk (for streaming), tool_call_arguments_complete (for complete arguments), error_message, usage object with token counts, stop_reason (tool_use | max_tokens | stop_sequence | error | complete), and image_data object with Base64-encoded data and mimeType

Files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
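
Rendered as TypeScript, the interface this instruction describes would look roughly like the following; exact field optionality and the shape of the usage object are assumptions:

```ts
// Reconstructed from the field list above; the real declaration may differ.
interface LLMCoreStreamEvent {
  type:
    | 'text'
    | 'reasoning'
    | 'tool_call_start'
    | 'tool_call_chunk'
    | 'tool_call_end'
    | 'error'
    | 'usage'
    | 'stop'
    | 'image_data'
  content?: string // for text events
  reasoning_content?: string // for reasoning events
  tool_call_id?: string
  tool_call_name?: string
  tool_call_arguments_chunk?: string // streamed argument fragments
  tool_call_arguments_complete?: string // full argument string
  error_message?: string
  usage?: { prompt_tokens: number; completion_tokens: number; total_tokens: number } // assumed names
  stop_reason?: 'tool_use' | 'max_tokens' | 'stop_sequence' | 'error' | 'complete'
  image_data?: { data: string; mimeType: string } // Base64-encoded payload
}
```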
src/**/*

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

New features should be developed in the src directory

Files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
src/main/**/*.{js,ts}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Main process code for Electron should be placed in src/main

Files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
src/**/*.{ts,tsx,vue,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

Use Prettier with single quotes, no semicolons, and 100 character width

Files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
src/**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

Use OxLint for linting JavaScript and TypeScript files

Files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

src/**/*.{ts,tsx}: Use camelCase for variable and function names in TypeScript files
Use PascalCase for type and class names in TypeScript
Use SCREAMING_SNAKE_CASE for constant names

Files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
src/**/*.ts

📄 CodeRabbit inference engine (AGENTS.md)

Use EventBus for inter-process communication events

Files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
package.json

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

package.json: Node.js >= 22 required
pnpm >= 9 required

Files:

  • package.json
🧠 Learnings (13)
📓 Common learnings
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-11-25T05:27:39.200Z
Learning: Applies to **/*Provider**/index.ts : Tool call events: strictly follow sequence `tool_call_start → tool_call_chunk* → tool_call_end`, ensure `tool_call_id` is required and stable
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, handle native tool support by converting MCP tools to Provider format using `convertToProviderTools` and including them in the API request; for Providers without native function call support, prepare messages using `prepareFunctionCallPrompt` before making the API call

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Implement separation of concerns where `src/main/presenter/llmProviderPresenter/index.ts` manages the Agent loop and conversation history, while Provider files handle LLM API interactions, Provider-specific request/response formatting, tool definition conversion, and native vs non-native tool call mechanisms

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, listen for standardized events yielded by `coreStream` and handle them accordingly: buffer text content (`currentContent`), handle `tool_call_start/chunk/end` events by collecting tool details and calling `presenter.mcpPresenter.callTool`, send frontend events via `eventBus` with tool call status, format tool results for the next LLM call, and set `needContinueConversation = true`

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts` (`startStreamCompletion`), implement the Agent loop that manages the overall conversation flow, including multiple rounds of LLM calls and tool usage, maintaining `conversationMessages` history, calling `provider.coreStream()` on each iteration, and controlling the loop using `needContinueConversation` and `toolCallCount` (compared against `MAX_TOOL_CALLS`)

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
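
A rough outline of that loop, using only the identifiers the learning names; the remaining control details and the MAX_TOOL_CALLS value are assumptions:

```ts
// Hypothetical outline of the Agent loop: only the identifiers named in the
// learning come from the source, the control details are assumptions.
const MAX_TOOL_CALLS = 20 // assumed limit

interface StreamEvent {
  type: string
}

interface AgentProvider {
  coreStream(messages: unknown[]): AsyncIterable<StreamEvent>
}

async function startStreamCompletion(
  provider: AgentProvider,
  conversationMessages: unknown[]
): Promise<void> {
  let needContinueConversation = true
  let toolCallCount = 0

  while (needContinueConversation && toolCallCount < MAX_TOOL_CALLS) {
    needContinueConversation = false

    for await (const event of provider.coreStream(conversationMessages)) {
      if (event.type === 'tool_call_end') {
        toolCallCount++
        // ...call the tool, append the assistant and tool messages to history...
        needContinueConversation = true // another LLM round is required
      }
      // ...buffer text, forward status events to the renderer via eventBus...
    }
  }
}
```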
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, include helper methods for Provider-specific operations such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt`

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, handle `stop` events by checking `stop_reason`: if `'tool_use'`, add the buffered assistant message and prepare for the next loop iteration; otherwise, add the final assistant message and exit the loop

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
📚 Learning: 2025-11-25T05:27:39.200Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-11-25T05:27:39.200Z
Learning: Applies to **/*Provider**/index.ts : Tool call events: strictly follow sequence `tool_call_start → tool_call_chunk* → tool_call_end`, ensure `tool_call_id` is required and stable

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
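
For illustration, a well-formed sequence for a single streamed tool call could look like this (ids and arguments invented; field names follow the LLMCoreStreamEvent shape sketched earlier):

```ts
// Invented example of a well-formed event sequence for one tool call.
const events = [
  { type: 'tool_call_start', tool_call_id: 'call_abc123', tool_call_name: 'get_weather' },
  { type: 'tool_call_chunk', tool_call_id: 'call_abc123', tool_call_arguments_chunk: '{"city":' },
  { type: 'tool_call_chunk', tool_call_id: 'call_abc123', tool_call_arguments_chunk: '"Paris"}' },
  {
    type: 'tool_call_end',
    tool_call_id: 'call_abc123',
    tool_call_arguments_complete: '{"city":"Paris"}'
  },
  { type: 'stop', stop_reason: 'tool_use' }
]
```

Note that the same stable tool_call_id ties every chunk to its start and end events.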
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations (`src/main/presenter/llmProviderPresenter/providers/*.ts`), the `coreStream(messages, modelId, temperature, maxTokens)` method should perform a *single-pass* streaming API request for each conversation round without containing multi-turn tool call loop logic

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/**/*.ts : Define the standardized `LLMCoreStreamEvent` interface with fields: `type` (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), `content` (for text), `reasoning_content` (for reasoning), `tool_call_id`, `tool_call_name`, `tool_call_arguments_chunk` (for streaming), `tool_call_arguments_complete` (for complete arguments), `error_message`, `usage` object with token counts, `stop_reason` (tool_use | max_tokens | stop_sequence | error | complete), and `image_data` object with Base64-encoded data and mimeType

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Convert MCP tools to provider-specific formats and normalize streaming responses to standard events in each provider implementation

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts
📚 Learning: 2025-11-25T05:26:15.929Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/development-setup.mdc:0-0
Timestamp: 2025-11-25T05:26:15.929Z
Learning: Applies to package.json : Node.js >= 22 required

Applied to files:

  • package.json
📚 Learning: 2025-11-25T05:26:15.929Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/development-setup.mdc:0-0
Timestamp: 2025-11-25T05:26:15.929Z
Learning: Applies to package.json : pnpm >= 9 required

Applied to files:

  • package.json
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-check (x64)
🔇 Additional comments (2)
src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts (1)

329-357: LGTM! Excellent consistency improvement.

The new error handling logic correctly ensures that tool error messages follow the same OpenAI-compatible pattern as successful tool calls: every tool message (whether success or error) is now paired with a preceding assistant message that declares the tool_call in its tool_calls array.

Key improvements:

  • Consistent with appendNativeFunctionCallMessages (lines 235-274)
  • Defensive fallback creates a synthetic assistant message when none exists (rare edge case)
  • Maintains proper conversation history structure required by native function-calling models

Based on learnings, this aligns with the requirement that tool calls must follow proper sequences with stable tool_call_id references.
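
Concretely, the pairing described here leaves the conversation history in a shape like the following (values invented for illustration):

```ts
// Invented example: history after a failed tool call, with the error message
// still correctly paired to a declaring assistant message.
const history = [
  { role: 'user', content: 'What is the weather in Paris?' },
  {
    role: 'assistant',
    content: '',
    // The assistant message declares the tool_call with a stable id...
    tool_calls: [
      {
        id: 'call_abc123',
        type: 'function',
        function: { name: 'get_weather', arguments: '{"city":"Paris"}' }
      }
    ]
  },
  {
    // ...and the tool message references that same id, even though the call failed.
    role: 'tool',
    tool_call_id: 'call_abc123',
    content: 'Error: weather service timed out'
  }
]
```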

package.json (1)

150-150: The markstream-vue package version 0.0.3-beta.6 is verified to exist and is properly published on npm.

The version has been confirmed on the npm registry with proper integrity signatures. This is an incremental pre-release update from 0.0.3-beta.3 to 0.0.3-beta.6 within the same version line. For detailed information on any changes between beta versions, refer to the package's release notes or changelog.



@zerob13 merged commit 6040f3f into dev on Dec 18, 2025 (2 checks passed)
@zerob13 deleted the bugfix/prompt branch on December 18, 2025 at 07:16