Conversation

zerob13 (Collaborator) commented Nov 4, 2025

Summary by CodeRabbit

  • Improvements
    • Enhanced function call handling to support a broader range of AI provider configurations, improving compatibility and reliability across different model types.
    • Refined tool response message processing for more robust message formatting in multi-turn conversations involving function calls.

coderabbitai bot commented Nov 4, 2025

Walkthrough

Implements dual-mode tool-call handling for OpenAI-compatible providers: native function calls when supported, or mock non-native flows using a pendingToolCalls map. Updates message formatting in both the LLM provider and thread presenter to handle separate tool response messages with tool_call_id linking.
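
As an orientation aid, here is a minimal TypeScript sketch of the two message shapes described above. The helper name and types are illustrative assumptions, not the provider's actual code; the function_call_record wrapper mirrors the format shown in the diff excerpt below.

```ts
// Hypothetical sketch: how one tool interaction could be rendered in each mode
type ChatMsg = {
  role: 'assistant' | 'tool' | 'user'
  content: string
  tool_call_id?: string
  tool_calls?: unknown[]
}

function renderToolInteraction(supportsFunctionCall: boolean): ChatMsg[] {
  const call = { id: 'tool-123', name: 'search', arguments: '{"query":"weather"}' }
  const response = '{"result":"sunny"}'

  if (supportsFunctionCall) {
    // Native mode: the assistant message carries tool_calls, and the response
    // is a separate `tool` message linked back via tool_call_id
    return [
      {
        role: 'assistant',
        content: '',
        tool_calls: [
          { id: call.id, type: 'function', function: { name: call.name, arguments: call.arguments } }
        ]
      },
      { role: 'tool', tool_call_id: call.id, content: response }
    ]
  }

  // Mock mode: the call/response pair is folded into a user message as a function_call_record
  const record = {
    function_call_record: { name: call.name, arguments: JSON.parse(call.arguments), response }
  }
  return [{ role: 'user', content: `<function_call>${JSON.stringify(record)}</function_call>` }]
}
```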

Changes

Cohort / File(s) | Summary

OpenAI Provider Tool Call Modes
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
    Added a supportsFunctionCall boolean parameter to formatMessages() to branch between the native OpenAI tool_calls format and a mock non-native flow. Introduced a pendingToolCalls map to pair non-native tool calls with their subsequent responses. Modified assistant and tool message handling to emit either the native format (tool role with tool_call_id) or the mock format (function_call_record in user messages). Updated the parseFunctionCalls() signature to accept a fallbackIdPrefix for robust ID generation in non-native mode. Updated openAICompletion() to determine supportsFunctionCall from the model config and pass it through.

Thread Presenter Tool Response Messaging
  • src/main/presenter/threadPresenter/promptBuilder.ts
    Added a toolResponses collection to emit separate tool response messages with role "tool" and tool_call_id after assistant messages containing tool calls. Missing tool call IDs are generated using nanoid. Modified addContextMessages() and formatMessagesForCompletion() to separate tool responses from assistant content and emit distinct tool response messages.
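
A rough sketch of the tool-response splitting described for promptBuilder.ts follows; the types, helper shape, and the assistant-side tool_calls field are assumptions for illustration, while the nanoid fallback and the role/tool_call_id linking come from the summary above.

```ts
import { nanoid } from 'nanoid'

// Hypothetical sketch: split an assistant turn with tool calls into an assistant
// message plus separate `tool` messages linked by tool_call_id
interface ToolUse { id?: string; name: string; arguments: string; response: string }

function emitAssistantWithToolResponses(content: string, toolUses: ToolUse[]) {
  const toolCalls = toolUses.map((t) => {
    const id = t.id ?? `tool-${nanoid()}` // generate an ID when none was persisted
    return { id, type: 'function' as const, function: { name: t.name, arguments: t.arguments } }
  })

  const assistantMessage = { role: 'assistant' as const, content, tool_calls: toolCalls }

  // One separate tool response message per call, linked back via tool_call_id
  const toolResponses = toolUses.map((t, i) => ({
    role: 'tool' as const,
    tool_call_id: toolCalls[i].id,
    content: t.response
  }))

  return [assistantMessage, ...toolResponses]
}
```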

Sequence Diagram(s)

sequenceDiagram
    participant Provider as OpenAI Provider
    participant Formatter as formatMessages()
    participant Parser as parseFunctionCalls()
    
    rect rgb(200, 220, 255)
    note over Provider,Parser: Native Mode (supportsFunctionCall = true)
    Provider->>Formatter: formatMessages(..., true)
    Formatter->>Formatter: Process tool_calls natively
    Formatter-->>Provider: ChatCompletionMessageParam[]<br/>(tool_calls in assistant message)
    end
    
    rect rgb(255, 220, 200)
    note over Provider,Parser: Non-Native Mode (supportsFunctionCall = false)
    Provider->>Formatter: formatMessages(..., false)
    Formatter->>Formatter: Emit assistant content first<br/>Store tool_calls in pendingToolCalls
    Formatter->>Formatter: Emit function_call_record<br/>in user message
    Formatter-->>Provider: ChatCompletionMessageParam[]<br/>(mock function calls)
    end
    
    Provider->>Parser: parseFunctionCalls(response, fallbackIdPrefix)
    Parser->>Parser: Parse with fallback ID generation
    Parser-->>Provider: Tool call objects
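For the non-native path, the fallback ID generation shown in the diagram could be pictured roughly as below. The tag name, parsing details, and return shape are illustrative assumptions; only the fallbackIdPrefix parameter and the idea of fallback ID generation come from this PR.

```ts
// Hypothetical sketch of fallback ID assignment during parsing (not the actual parser)
function parseFunctionCallsSketch(
  response: string,
  fallbackIdPrefix: string
): Array<{ id: string; function: { name: string; arguments: string } }> {
  const calls: Array<{ id: string; function: { name: string; arguments: string } }> = []
  const pattern = /<function_call>([\s\S]*?)<\/function_call>/g
  let match: RegExpExecArray | null
  let index = 0

  while ((match = pattern.exec(response)) !== null) {
    try {
      const parsed = JSON.parse(match[1])
      calls.push({
        // Reuse a model-provided id when present, otherwise derive a positional fallback id
        id: parsed.id ?? `${fallbackIdPrefix}-${index}`,
        function: {
          name: parsed.name ?? 'unknown',
          arguments:
            typeof parsed.arguments === 'string'
              ? parsed.arguments
              : JSON.stringify(parsed.arguments ?? {})
        }
      })
    } catch {
      // Skip malformed blocks instead of failing the whole parse
    }
    index++
  }
  return calls
}
```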

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45–75 minutes

  • Areas requiring extra attention:
    • Dual-mode branching logic in formatMessages() — verify correct message structure for both native and mock paths
    • pendingToolCalls map lifecycle — ensure proper pairing of non-native tool calls with responses and cleanup
    • parseFunctionCalls() signature and fallback ID generation logic — confirm ID collision handling and parsing robustness
    • Integration between openAICompletion() and message formatting — verify supportsFunctionCall flag correctly propagates through completion flow
    • Tool response message emission in promptBuilder.ts — check tool_call_id linking correctness and interaction with the provider's pending map
    • Message role and format consistency across native vs. non-native paths

Possibly related PRs

  • PR #515: Both PRs modify non-native function-call handling in openAICompatibleProvider.ts, specifically parseFunctionCalls ID generation and message/tool-call formatting.
  • PR #1079: Both PRs modify formatMessages and parseFunctionCalls in openAICompatibleProvider.ts to change how assistant/tool messages and non-native function calls are formatted and parsed.
  • PR #526: Both PRs modify function-call message formatting in the presenter (threadPresenter and LLM provider), including function_call_record blocks and tool call/response message emission.

Poem

🐰 Hop hop, the tools now split in two,
Native flows or mock—we'll see them through!
Pending calls are stored with care,
Response messages float through air,
One path true, one path fake,
Both shall work for goodness' sake!

Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
  • Title check: ❓ Inconclusive
    Explanation: The title is incomplete and vague; it ends with 'conv' (likely truncated) and doesn't clearly describe the main changes to formatMessages, parseFunctionCalls, or tool call handling.
    Resolution: Complete and clarify the title to accurately describe the primary change, such as 'Add support for non-native tool call handling in OpenAI-compatible provider' or 'Improve tool call context with native and mock formats'.
✅ Passed checks (1 passed)
  • Description check: ✅ Passed
    Explanation: Check skipped because CodeRabbit's high-level summary is enabled.

coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0bee52f and c3b2911.

📒 Files selected for processing (2)
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (4 hunks)
  • src/main/presenter/threadPresenter/promptBuilder.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (11)
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English

Files:

  • src/main/presenter/threadPresenter/promptBuilder.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/{main,renderer}/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging

Files:

  • src/main/presenter/threadPresenter/promptBuilder.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Use Electron's built-in APIs for file system and native dialogs

Files:

  • src/main/presenter/threadPresenter/promptBuilder.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/error-logging.mdc)

**/*.{ts,tsx}: Always use try-catch to handle potential errors
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (if applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement an error retry mechanism
Avoid logging sensitive information
Use structured logging
Set appropriate log levels

Files:

  • src/main/presenter/threadPresenter/promptBuilder.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/**/*.{ts,js,tsx,jsx}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Place main-process code in src/main

Files:

  • src/main/presenter/threadPresenter/promptBuilder.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.{ts,tsx,js,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Use English for all logs and comments

Files:

  • src/main/presenter/threadPresenter/promptBuilder.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Enable and adhere to strict TypeScript typing (avoid implicit any, prefer precise types)

Use PascalCase for TypeScript types and classes

Files:

  • src/main/presenter/threadPresenter/promptBuilder.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/presenter/**/*.ts

📄 CodeRabbit inference engine (AGENTS.md)

Place Electron main-process presenters under src/main/presenter/ (Window, Tab, Thread, Mcp, Config, LLMProvider)

Files:

  • src/main/presenter/threadPresenter/promptBuilder.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.{ts,tsx,js,jsx,vue,css,scss,md,json,yml,yaml}

📄 CodeRabbit inference engine (AGENTS.md)

Prettier style: single quotes, no semicolons, print width 100; run pnpm run format

Files:

  • src/main/presenter/threadPresenter/promptBuilder.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.{ts,tsx,js,jsx,vue}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx,js,jsx,vue}: Use OxLint for JS/TS code; keep lint clean
Use camelCase for variables and functions
Use SCREAMING_SNAKE_CASE for constants

Files:

  • src/main/presenter/threadPresenter/promptBuilder.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/presenter/llmProviderPresenter/providers/*.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (tool_call_start, tool_call_chunk, tool_call_end) in the standardized format.
Provider implementations should yield stop events with appropriate stop_reason in the standardized format.
Provider implementations should yield error events in the standardized format...

Files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
🧠 Learnings (12)
📓 Common learnings
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield tool call events (`tool_call_start`, `tool_call_chunk`, `tool_call_end`) in the standardized format.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using `convertToProviderTools`) and included in the API request.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., `prepareFunctionCallPrompt`) before making the API call.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop in `llmProviderPresenter/index.ts` should handle multi-round LLM calls and tool usage, maintaining conversation state and controlling the loop with `needContinueConversation` and `toolCallCount`.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-09-04T11:03:30.184Z
Learning: Tool calls must follow tool_call_start → tool_call_chunk* → tool_call_end; tool_call_id is required and stable
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should implement helper methods such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed for provider-specific logic.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : The `coreStream` method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Learnt from: zerob13
Repo: ThinkInAIXYZ/deepchat PR: 804
File: src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts:153-156
Timestamp: 2025-08-28T05:55:31.482Z
Learning: TokenFlux models generally support function calling by default, so it's reasonable to assume hasFunctionCalling = true for TokenFlux provider implementations in src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield tool call events (`tool_call_start`, `tool_call_chunk`, `tool_call_end`) in the standardized format.

Applied to files:

  • src/main/presenter/threadPresenter/promptBuilder.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should implement helper methods such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed for provider-specific logic.

Applied to files:

  • src/main/presenter/threadPresenter/promptBuilder.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.

Applied to files:

  • src/main/presenter/threadPresenter/promptBuilder.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop in `llmProviderPresenter/index.ts` should handle multi-round LLM calls and tool usage, maintaining conversation state and controlling the loop with `needContinueConversation` and `toolCallCount`.

Applied to files:

  • src/main/presenter/threadPresenter/promptBuilder.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.

Applied to files:

  • src/main/presenter/threadPresenter/promptBuilder.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-09-06T03:07:23.817Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-09-06T03:07:23.817Z
Learning: Applies to src/main/presenter/mcpPresenter/index.ts : Register new MCP tools in src/main/presenter/mcpPresenter/index.ts

Applied to files:

  • src/main/presenter/threadPresenter/promptBuilder.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using `convertToProviderTools`) and included in the API request.

Applied to files:

  • src/main/presenter/threadPresenter/promptBuilder.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., `prepareFunctionCallPrompt`) before making the API call.

Applied to files:

  • src/main/presenter/threadPresenter/promptBuilder.ts
  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : The `coreStream` method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield text events in the standardized format.

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
🧬 Code graph analysis (1)
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (1)
src/shared/types/presenters/legacy.presenters.d.ts (1)
  • ChatMessage (1384-1384)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-check (x64)

Comment on lines +329 to +386
for (const toolCall of msg.tool_calls) {
const toolCallId = toolCall.id || `tool-${Date.now()}-${Math.random()}`
pendingToolCalls.set(toolCallId, {
name: toolCall.function?.name || 'unknown',
arguments:
typeof toolCall.function?.arguments === 'string'
? toolCall.function.arguments
: JSON.stringify(toolCall.function?.arguments || {}),
assistantContent: baseMessage.content as string | undefined
})
}
contentParts.push(`${toolCallText}\nArguments:\n\`\`\`json\n${argsText}\n\`\`\``)
}

// Create merged assistant message
result.push({
role: 'assistant',
content: contentParts.join('\n\n')
} as ChatCompletionMessageParam)
continue
}

- // Handle tool messages - append to previous message (should be assistant)
+ // Handle tool messages
if (msg.role === 'tool') {
const toolContent =
typeof msg.content === 'string' ? msg.content : JSON.stringify(msg.content)
const toolResultText = `[Tool Result]\n${toolContent}`

// Find the last message in result and append
if (result.length > 0) {
const lastMessage = result[result.length - 1]
// Ensure lastMessage.content is a string before appending
const currentContent =
typeof lastMessage.content === 'string'
? lastMessage.content
: JSON.stringify(lastMessage.content || '')
lastMessage.content = currentContent
? `${currentContent}\n\n${toolResultText}`
: toolResultText
} else {
// If no previous message, create a new assistant message
if (supportsFunctionCall) {
// Standard OpenAI format - preserve role:tool with tool_call_id
result.push({
- role: 'assistant',
- content: toolResultText
+ role: 'tool',
+ content: typeof msg.content === 'string' ? msg.content : JSON.stringify(msg.content),
+ tool_call_id: msg.tool_call_id || ''
} as ChatCompletionMessageParam)
} else {
// Mock format: Create user message with function_call_record
const toolCallId = msg.tool_call_id || ''
const pendingCall = pendingToolCalls.get(toolCallId)

if (pendingCall) {
// Parse arguments to JSON if it's a string
let argsObj
try {
argsObj =
typeof pendingCall.arguments === 'string'
? JSON.parse(pendingCall.arguments)
: pendingCall.arguments
} catch {
argsObj = {}
}

// Format as function_call_record in user message
const mockRecord = {
function_call_record: {
name: pendingCall.name,
arguments: argsObj,
response:
typeof msg.content === 'string' ? msg.content : JSON.stringify(msg.content)
}
}

result.push({
role: 'user',
content: `<function_call>${JSON.stringify(mockRecord)}</function_call>`
} as ChatCompletionMessageParam)

pendingToolCalls.delete(toolCallId)
} else {

⚠️ Potential issue | 🟠 Major

Ensure tool responses without IDs stay paired with their calls.

When the model lacks native function calling we generate fallback IDs inside pendingToolCalls, but tool messages that were persisted before IDs existed still arrive with tool_call_id === undefined. In that case we hit the fallback branch and emit <function_call> with name: 'unknown' and empty args, so the next round loses both the tool name and arguments. This breaks every legacy transcript that’s replayed through the non-native flow because the model can no longer reason over prior tool output.

Please make sure we reuse the generated fallback ID when the response comes back without one (e.g. track insertion order and shift a queued fallback ID). One way to fix it:

@@
-    const pendingToolCalls: Map<
-      string,
-      { name: string; arguments: string; assistantContent?: string }
-    > = new Map()
+    const pendingToolCalls: Map<
+      string,
+      { name: string; arguments: string; assistantContent?: string }
+    > = new Map()
+    const pendingToolCallQueue: string[] = []
@@
-          const toolCallId = toolCall.id || `tool-${Date.now()}-${Math.random()}`
+          const toolCallId = toolCall.id || `tool-${Date.now()}-${Math.random()}`
+          pendingToolCallQueue.push(toolCallId)
           pendingToolCalls.set(toolCallId, {
@@
-          const toolCallId = msg.tool_call_id || ''
-          const pendingCall = pendingToolCalls.get(toolCallId)
+          let toolCallId = msg.tool_call_id || ''
+          if (!toolCallId && pendingToolCallQueue.length > 0) {
+            toolCallId = pendingToolCallQueue.shift()!
+          }
+          const pendingCall = toolCallId ? pendingToolCalls.get(toolCallId) : undefined
@@
-            pendingToolCalls.delete(toolCallId)
+            if (toolCallId) pendingToolCalls.delete(toolCallId)

This keeps legacy conversations intact while still supporting native IDs.

Committable suggestion skipped: line range outside the PR's diff.
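
For readers skimming the suggestion, here is a small self-contained sketch of the proposed FIFO pairing; the identifier names follow the suggested diff, while the surrounding helper functions are illustrative only.

```ts
// Sketch of the reviewer's FIFO pairing: fallback IDs are queued when tool calls are
// recorded, and consumed in order when a tool response arrives without a tool_call_id
const pendingToolCalls = new Map<string, { name: string; arguments: string }>()
const pendingToolCallQueue: string[] = []

function recordToolCall(id: string | undefined, name: string, args: string) {
  const toolCallId = id || `tool-${Date.now()}-${Math.random()}`
  pendingToolCallQueue.push(toolCallId)
  pendingToolCalls.set(toolCallId, { name, arguments: args })
  return toolCallId
}

function resolveToolResponse(toolCallIdFromMessage: string | undefined) {
  // Reuse the oldest queued fallback ID when the persisted message carries none
  let toolCallId = toolCallIdFromMessage || ''
  if (!toolCallId && pendingToolCallQueue.length > 0) {
    toolCallId = pendingToolCallQueue.shift()!
  }
  const pendingCall = toolCallId ? pendingToolCalls.get(toolCallId) : undefined
  if (toolCallId) pendingToolCalls.delete(toolCallId)
  return pendingCall // undefined only if nothing was recorded for this response
}
```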

zerob13 merged commit 9215de7 into dev Nov 4, 2025
2 checks passed
zerob13 added a commit that referenced this pull request Nov 10, 2025
* Merge pull request #1079 from ThinkInAIXYZ/bugfix/refresh-model

fix: custom provider add refresh-model

* fix: add tool call context for better conv (#1081)

* fix: update `tag_name` for release artifact urls (#1084)

Signed-off-by: Rui Chen <rui@chenrui.dev>

* refactor: standardize image block data structure (#1082)

* refactor: standardize image block data structure with backward compatibility

Normalize image_data structure in ThreadPresenter to ensure consistent mimeType handling. Update MessageBlockImage component to support legacy data formats (content as object/string/data URI) while maintaining compatibility with new image_data field.

* fix: properly normalize image data URIs before persistence

Extract base64 content and mime type from data URIs (data:image/jpeg;base64,...) to prevent double-encoding in renderer. This fixes image display errors where data:image/png;base64,data:image/jpeg;base64,... was being constructed.

- Parse data URIs to extract real mime type and base64 content
- Force URL schemes (http://, https://, imgcache://) to deepchat/image-url
- Preserve provided mime types when available
- Fallback to image/png only for raw base64 without metadata

* fix: normalize legacy data URIs in renderer to prevent double-encoding

Handle historical image_data records that may still contain full data:image/...;base64,... URIs. Extract base64 content and mime type before template binding to prevent constructing invalid data:image/png;base64,data:image/png;base64,... URIs.

- Parse data URIs in both new image_data and legacy content formats
- Always provide mimeType fallback for historical records
- Ensure normalized data format before template consumption

* feat: add request trace for llm (#1085)

* feat: add trace support wip

* feat: add trace dialog with monaco

* feat: add i18n for trace dialog

* feat: add config for trace params

* fix: prevent stale previews when messageId changes

* fix: toggle model config refresh (#1086)

* release: 0.4.5

---------

Signed-off-by: Rui Chen <rui@chenrui.dev>
Co-authored-by: Rui Chen <rui@chenrui.dev>
Co-authored-by: 韦伟 <xweimvp@gmail.com>