fix: add tool call context for better conv #1081
Conversation
Walkthrough

Implements dual-mode tool-call handling for OpenAI-compatible providers: native function calls when supported, or mock non-native flows using a pendingToolCalls map. Updates message formatting in both the LLM provider and thread presenter to handle separate tool response messages with tool_call_id linking.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Provider as OpenAI Provider
    participant Formatter as formatMessages()
    participant Parser as parseFunctionCalls()
    rect rgb(200, 220, 255)
        note over Provider,Parser: Native Mode (supportsFunctionCall = true)
        Provider->>Formatter: formatMessages(..., true)
        Formatter->>Formatter: Process tool_calls natively
        Formatter-->>Provider: ChatCompletionMessageParam[]<br/>(tool_calls in assistant message)
    end
    rect rgb(255, 220, 200)
        note over Provider,Parser: Non-Native Mode (supportsFunctionCall = false)
        Provider->>Formatter: formatMessages(..., false)
        Formatter->>Formatter: Emit assistant content first<br/>Store tool_calls in pendingToolCalls
        Formatter->>Formatter: Emit function_call_record<br/>in user message
        Formatter-->>Provider: ChatCompletionMessageParam[]<br/>(mock function calls)
    end
    Provider->>Parser: parseFunctionCalls(response, fallbackIdPrefix)
    Parser->>Parser: Parse with fallback ID generation
    Parser-->>Provider: Tool call objects
```
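To make the two modes concrete, here is a minimal sketch of the message shapes each branch produces (illustrative only: the tool name `search` and all payload values are invented; the `function_call_record` envelope follows the diff shown later in this review):

```ts
// Native mode: tool calls ride on the assistant message; each result is a
// separate `role: 'tool'` message linked back via tool_call_id.
const nativeMessages = [
  {
    role: 'assistant',
    content: null,
    tool_calls: [
      { id: 'call_1', type: 'function', function: { name: 'search', arguments: '{"q":"weather"}' } }
    ]
  },
  { role: 'tool', tool_call_id: 'call_1', content: '{"result":"sunny"}' }
]

// Non-native (mock) mode: the call is rendered as assistant text, and the
// result comes back as a user message wrapping a function_call_record.
const mockMessages = [
  { role: 'assistant', content: 'Calling search with {"q":"weather"}' },
  {
    role: 'user',
    content:
      '<function_call>' +
      JSON.stringify({
        function_call_record: {
          name: 'search',
          arguments: { q: 'weather' },
          response: '{"result":"sunny"}'
        }
      }) +
      '</function_call>'
  }
]
```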
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45–75 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
✅ Passed checks (1 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (4 hunks)
src/main/presenter/threadPresenter/promptBuilder.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (11)
**/*.{js,jsx,ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)
**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English
Files:
src/main/presenter/threadPresenter/promptBuilder.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/{main,renderer}/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)
src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging
Files:
src/main/presenter/threadPresenter/promptBuilder.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)
Use Electron's built-in APIs for file system and native dialogs
Files:
src/main/presenter/threadPresenter/promptBuilder.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/error-logging.mdc)
**/*.{ts,tsx}: Always use try-catch to handle potential errors
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (if applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement an error retry mechanism
Avoid logging sensitive information
Use structured logging
Set appropriate log levels
Files:
src/main/presenter/threadPresenter/promptBuilder.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
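As a minimal illustration of these logging rules (a sketch only: the `USER_LOAD_FAILED` code and `fetchData` helper are invented, not from the repo):

```ts
// Invented helper for the sketch; any async data source would do.
declare function fetchData(url: string): Promise<unknown>

async function loadUserData(userId: string): Promise<unknown> {
  try {
    return await fetchData(`/api/users/${userId}`)
  } catch (error) {
    // Structured log with timestamp, level, code, description, stack, context.
    console.error(
      JSON.stringify({
        timestamp: new Date().toISOString(),
        level: 'ERROR',
        code: 'USER_LOAD_FAILED',
        message: 'Failed to load user data',
        stack: error instanceof Error ? error.stack : undefined,
        context: { userId } // avoid logging sensitive fields
      })
    )
    // Graceful degradation: the error is logged, not swallowed silently.
    return null
  }
}
```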
src/main/**/*.{ts,js,tsx,jsx}
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
Keep main-process code in src/main
Files:
src/main/presenter/threadPresenter/promptBuilder.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.{ts,tsx,js,vue}
📄 CodeRabbit inference engine (CLAUDE.md)
Use English for all logs and comments
Files:
src/main/presenter/threadPresenter/promptBuilder.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.{ts,tsx,vue}
📄 CodeRabbit inference engine (CLAUDE.md)
Enable and adhere to strict TypeScript typing (avoid implicit any, prefer precise types)
Use PascalCase for TypeScript types and classes
Files:
src/main/presenter/threadPresenter/promptBuilder.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/presenter/**/*.ts
📄 CodeRabbit inference engine (AGENTS.md)
Place Electron main-process presenters under src/main/presenter/ (Window, Tab, Thread, Mcp, Config, LLMProvider)
Files:
src/main/presenter/threadPresenter/promptBuilder.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.{ts,tsx,js,jsx,vue,css,scss,md,json,yml,yaml}
📄 CodeRabbit inference engine (AGENTS.md)
Prettier style: single quotes, no semicolons, print width 100; run pnpm run format
Files:
src/main/presenter/threadPresenter/promptBuilder.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
**/*.{ts,tsx,js,jsx,vue}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx,js,jsx,vue}: Use OxLint for JS/TS code; keep lint clean
Use camelCase for variables and functions
Use SCREAMING_SNAKE_CASE for constants
Files:
src/main/presenter/threadPresenter/promptBuilder.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
src/main/presenter/llmProviderPresenter/providers/*.ts
📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)
src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a `coreStream` method that yields standardized stream events to decouple the main loop from provider-specific details.
The `coreStream` method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., `prepareFunctionCallPrompt`) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using `convertToProviderTools`) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (`tool_call_start`, `tool_call_chunk`, `tool_call_end`) in the standardized format.
Provider implementations should yield stop events with an appropriate `stop_reason` in the standardized format.
Provider implementations should yield error events in the standardized format...
Files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
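To illustrate the stream contract these guidelines describe, here is a hedged sketch of a provider `coreStream` yielding the standardized events (the `LLMCoreStreamEvent` union below is an assumption made for this sketch, not the repo's actual type; the event names come from the guidelines above):

```ts
// Hypothetical event union for the sketch; field names are assumptions.
type LLMCoreStreamEvent =
  | { type: 'text'; content: string }
  | { type: 'reasoning'; content: string }
  | { type: 'tool_call_start'; tool_call_id: string; tool_call_name: string }
  | { type: 'tool_call_chunk'; tool_call_id: string; tool_call_arguments_chunk: string }
  | { type: 'tool_call_end'; tool_call_id: string }
  | { type: 'usage'; usage: { prompt_tokens: number; completion_tokens: number } }
  | { type: 'stop'; stop_reason: 'complete' | 'tool_use' | 'error' }

// One streaming API request per round; the multi-round tool loop lives in the
// main Agent loop, never in here.
async function* coreStream(): AsyncGenerator<LLMCoreStreamEvent> {
  yield { type: 'text', content: 'Let me look that up.' }
  yield { type: 'tool_call_start', tool_call_id: 'call_1', tool_call_name: 'search' }
  yield { type: 'tool_call_chunk', tool_call_id: 'call_1', tool_call_arguments_chunk: '{"q":"weather"}' }
  yield { type: 'tool_call_end', tool_call_id: 'call_1' }
  yield { type: 'usage', usage: { prompt_tokens: 42, completion_tokens: 7 } }
  yield { type: 'stop', stop_reason: 'tool_use' }
}
```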
🧠 Learnings (12)
📓 Common learnings
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield tool call events (`tool_call_start`, `tool_call_chunk`, `tool_call_end`) in the standardized format.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using `convertToProviderTools`) and included in the API request.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., `prepareFunctionCallPrompt`) before making the API call.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop in `llmProviderPresenter/index.ts` should handle multi-round LLM calls and tool usage, maintaining conversation state and controlling the loop with `needContinueConversation` and `toolCallCount`.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-09-04T11:03:30.184Z
Learning: Tool calls must follow tool_call_start → tool_call_chunk* → tool_call_end; tool_call_id is required and stable
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should implement helper methods such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed for provider-specific logic.
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : The `coreStream` method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Learnt from: zerob13
Repo: ThinkInAIXYZ/deepchat PR: 804
File: src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts:153-156
Timestamp: 2025-08-28T05:55:31.482Z
Learning: TokenFlux models generally support function calling by default, so it's reasonable to assume hasFunctionCalling = true for TokenFlux provider implementations in src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield tool call events (`tool_call_start`, `tool_call_chunk`, `tool_call_end`) in the standardized format.
Applied to files:
src/main/presenter/threadPresenter/promptBuilder.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should implement helper methods such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed for provider-specific logic.
Applied to files:
src/main/presenter/threadPresenter/promptBuilder.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop should buffer text content, handle tool call events, format tool results for the next LLM call, and manage conversation continuation logic.
Applied to files:
src/main/presenter/threadPresenter/promptBuilder.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : The main Agent loop in `llmProviderPresenter/index.ts` should handle multi-round LLM calls and tool usage, maintaining conversation state and controlling the loop with `needContinueConversation` and `toolCallCount`.
Applied to files:
src/main/presenter/threadPresenter/promptBuilder.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Applied to files:
src/main/presenter/threadPresenter/promptBuilder.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-09-06T03:07:23.817Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-09-06T03:07:23.817Z
Learning: Applies to src/main/presenter/mcpPresenter/index.ts : Register new MCP tools in src/main/presenter/mcpPresenter/index.ts
Applied to files:
src/main/presenter/threadPresenter/promptBuilder.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using `convertToProviderTools`) and included in the API request.
Applied to files:
src/main/presenter/threadPresenter/promptBuilder.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., `prepareFunctionCallPrompt`) before making the API call.
Applied to files:
src/main/presenter/threadPresenter/promptBuilder.ts
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
Applied to files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : The `coreStream` method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Applied to files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider implementations should yield text events in the standardized format.
Applied to files:
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
🧬 Code graph analysis (1)
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts (1)
src/shared/types/presenters/legacy.presenters.d.ts (1)
ChatMessage(1384-1384)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build-check (x64)
src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts

```diff
           for (const toolCall of msg.tool_calls) {
             const toolCallId = toolCall.id || `tool-${Date.now()}-${Math.random()}`
             pendingToolCalls.set(toolCallId, {
               name: toolCall.function?.name || 'unknown',
               arguments:
                 typeof toolCall.function?.arguments === 'string'
                   ? toolCall.function.arguments
                   : JSON.stringify(toolCall.function?.arguments || {}),
               assistantContent: baseMessage.content as string | undefined
             })
           }
           contentParts.push(`${toolCallText}\nArguments:\n\`\`\`json\n${argsText}\n\`\`\``)
         }

         // Create merged assistant message
         result.push({
           role: 'assistant',
           content: contentParts.join('\n\n')
         } as ChatCompletionMessageParam)
         continue
       }

-      // Handle tool messages - append to previous message (should be assistant)
+      // Handle tool messages
       if (msg.role === 'tool') {
-        const toolContent =
-          typeof msg.content === 'string' ? msg.content : JSON.stringify(msg.content)
-        const toolResultText = `[Tool Result]\n${toolContent}`
-
-        // Find the last message in result and append
-        if (result.length > 0) {
-          const lastMessage = result[result.length - 1]
-          // Ensure lastMessage.content is a string before appending
-          const currentContent =
-            typeof lastMessage.content === 'string'
-              ? lastMessage.content
-              : JSON.stringify(lastMessage.content || '')
-          lastMessage.content = currentContent
-            ? `${currentContent}\n\n${toolResultText}`
-            : toolResultText
-        } else {
-          // If no previous message, create a new assistant message
-          result.push({
-            role: 'assistant',
-            content: toolResultText
-          } as ChatCompletionMessageParam)
-        }
+        if (supportsFunctionCall) {
+          // Standard OpenAI format - preserve role:tool with tool_call_id
+          result.push({
+            role: 'tool',
+            content: typeof msg.content === 'string' ? msg.content : JSON.stringify(msg.content),
+            tool_call_id: msg.tool_call_id || ''
+          } as ChatCompletionMessageParam)
+        } else {
+          // Mock format: Create user message with function_call_record
+          const toolCallId = msg.tool_call_id || ''
+          const pendingCall = pendingToolCalls.get(toolCallId)
+
+          if (pendingCall) {
+            // Parse arguments to JSON if it's a string
+            let argsObj
+            try {
+              argsObj =
+                typeof pendingCall.arguments === 'string'
+                  ? JSON.parse(pendingCall.arguments)
+                  : pendingCall.arguments
+            } catch {
+              argsObj = {}
+            }
+
+            // Format as function_call_record in user message
+            const mockRecord = {
+              function_call_record: {
+                name: pendingCall.name,
+                arguments: argsObj,
+                response:
+                  typeof msg.content === 'string' ? msg.content : JSON.stringify(msg.content)
+              }
+            }
+
+            result.push({
+              role: 'user',
+              content: `<function_call>${JSON.stringify(mockRecord)}</function_call>`
+            } as ChatCompletionMessageParam)
+
+            pendingToolCalls.delete(toolCallId)
+          } else {
```
Ensure tool responses without IDs stay paired with their calls.
When the model lacks native function calling, we generate fallback IDs inside pendingToolCalls, but tool messages that were persisted before IDs existed still arrive with tool_call_id === undefined. In that case we hit the fallback branch and emit <function_call> with name: 'unknown' and empty args, so the next round loses both the tool name and the arguments. This breaks every legacy transcript replayed through the non-native flow, because the model can no longer reason over prior tool output.
Please make sure we reuse the generated fallback ID when the response comes back without one (e.g. track insertion order and shift a queued fallback ID). One way to fix it:
```diff
@@
-    const pendingToolCalls: Map<
-      string,
-      { name: string; arguments: string; assistantContent?: string }
-    > = new Map()
+    const pendingToolCalls: Map<
+      string,
+      { name: string; arguments: string; assistantContent?: string }
+    > = new Map()
+    const pendingToolCallQueue: string[] = []
@@
-          const toolCallId = toolCall.id || `tool-${Date.now()}-${Math.random()}`
+          const toolCallId = toolCall.id || `tool-${Date.now()}-${Math.random()}`
+          pendingToolCallQueue.push(toolCallId)
           pendingToolCalls.set(toolCallId, {
@@
-          const toolCallId = msg.tool_call_id || ''
-          const pendingCall = pendingToolCalls.get(toolCallId)
+          let toolCallId = msg.tool_call_id || ''
+          if (!toolCallId && pendingToolCallQueue.length > 0) {
+            toolCallId = pendingToolCallQueue.shift()!
+          }
+          const pendingCall = toolCallId ? pendingToolCalls.get(toolCallId) : undefined
@@
-          pendingToolCalls.delete(toolCallId)
+          if (toolCallId) pendingToolCalls.delete(toolCallId)
```

This keeps legacy conversations intact while still supporting native IDs.
Committable suggestion skipped: line range outside the PR's diff.
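And a quick sketch of how the queued fallback IDs restore that pairing (same invented values as the reproduction above; not the committed implementation):

```ts
const pendingToolCalls = new Map<string, { name: string; arguments: string }>()
const pendingToolCallQueue: string[] = []

// Format time: remember insertion order alongside the map entry.
const fallbackId = `tool-${Date.now()}-${Math.random()}`
pendingToolCallQueue.push(fallbackId)
pendingToolCalls.set(fallbackId, { name: 'search', arguments: '{"q":"weather"}' })

// Replay time: a tool message with no ID shifts the oldest queued fallback ID,
// so the response is paired with the call it actually answers.
let toolCallId = '' // msg.tool_call_id was undefined
if (!toolCallId && pendingToolCallQueue.length > 0) {
  toolCallId = pendingToolCallQueue.shift()!
}
console.log(pendingToolCalls.get(toolCallId)) // { name: 'search', arguments: '{"q":"weather"}' }
```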
* Merge pull request #1079 from ThinkInAIXYZ/bugfix/refresh-model
  fix: custom provider add refresh-model
* fix: add tool call context for better conv (#1081)
* fix: update `tag_name` for release artifact urls (#1084)
  Signed-off-by: Rui Chen <rui@chenrui.dev>
* refactor: standardize image block data structure (#1082)
* refactor: standardize image block data structure with backward compatibility
  Normalize image_data structure in ThreadPresenter to ensure consistent mimeType handling. Update MessageBlockImage component to support legacy data formats (content as object/string/data URI) while maintaining compatibility with the new image_data field.
* fix: properly normalize image data URIs before persistence
  Extract base64 content and mime type from data URIs (data:image/jpeg;base64,...) to prevent double-encoding in the renderer. This fixes image display errors where data:image/png;base64,data:image/jpeg;base64,... was being constructed.
  - Parse data URIs to extract the real mime type and base64 content
  - Force URL schemes (http://, https://, imgcache://) to deepchat/image-url
  - Preserve provided mime types when available
  - Fall back to image/png only for raw base64 without metadata
* fix: normalize legacy data URIs in renderer to prevent double-encoding
  Handle historical image_data records that may still contain full data:image/...;base64,... URIs. Extract base64 content and mime type before template binding to prevent constructing invalid data:image/png;base64,data:image/png;base64,... URIs.
  - Parse data URIs in both new image_data and legacy content formats
  - Always provide a mimeType fallback for historical records
  - Ensure normalized data format before template consumption
* feat: add request trace for llm (#1085)
* feat: add trace support (wip)
* feat: add trace dialog with monaco
* feat: add i18n for trace dialog
* feat: add config for trace params
* fix: prevent stale previews when messageId changes
* fix: toggle model config refresh (#1086)
* release: 0.4.5

Signed-off-by: Rui Chen <rui@chenrui.dev>
Co-authored-by: Rui Chen <rui@chenrui.dev>
Co-authored-by: 韦伟 <xweimvp@gmail.com>
Summary by CodeRabbit