fix: context trim remove tools first #1196
Conversation
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough
Reworks the prompt builder's context selection to be token-aware: adds token-calculation helpers, removes/compresses assistant tool_call blocks, and iteratively prunes user–assistant pairs while supporting function-call and vision flags. Adds tests for these flows and a single UI icon swap.

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Possibly related PRs
Suggested labels
Poem
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 1
🧹 Nitpick comments (2)
test/main/presenter/threadPresenter/promptBuilder.test.ts (1)
83-83: Consider adding tests for additional edge cases. The current tests cover the main scenarios well. For more comprehensive coverage, consider adding tests for:
- remainingContextLength <= 0 returns an empty array (a Vitest sketch of this case follows the list)
- Orphaned assistant message removal (assistant without a corresponding user)
- is_variant prioritization when multiple assistants match the same user
- vision=true parameter behavior
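A minimal Vitest sketch of the shape such an edge-case test could take. The pruneContext stand-in below is hypothetical and only mimics the expected behavior; a real test would import selectContextMessages and the existing message fixtures with the project's actual signature.

```ts
// Vitest sketch of the first suggested case. pruneContext is a hypothetical stand-in;
// a real test would call selectContextMessages with the project's fixtures.
import { describe, expect, it } from 'vitest'

type Msg = { role: 'user' | 'assistant'; content: string }

const pruneContext = (messages: Msg[], remainingContextLength: number): Msg[] =>
  remainingContextLength <= 0 ? [] : messages

describe('context selection edge cases (sketch)', () => {
  it('returns an empty context when remainingContextLength <= 0', () => {
    const messages: Msg[] = [
      { role: 'user', content: 'hello' },
      { role: 'assistant', content: 'hi there' }
    ]
    expect(pruneContext(messages, 0)).toEqual([])
  })
})
```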
src/main/presenter/threadPresenter/utils/promptBuilder.ts (1)
506-511: Minor: Redundant token recalculation. Line 508 calculates totalTokens = Math.max(0, totalTokens - removedTokens), but line 510 immediately recalculates totalTokens = calculateMessagesTokens(chatMessages). The line 508 calculation is redundant.

🔎 Proposed simplification

```diff
  if (supportsFunctionCall) {
    const { removedTokens } = compressToolCallsFromContext(
      selectedMessages,
      excessTokens,
      supportsFunctionCall
    )
-   totalTokens = Math.max(0, totalTokens - removedTokens)
    chatMessages = addContextMessages(selectedMessages, vision, supportsFunctionCall)
    totalTokens = calculateMessagesTokens(chatMessages)
  }
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
src/main/presenter/threadPresenter/utils/promptBuilder.ts (3 hunks)
src/renderer/shell/components/AppBar.vue (1 hunks)
test/main/presenter/threadPresenter/promptBuilder.test.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (21)
**/*.{ts,tsx,js,jsx,vue}
📄 CodeRabbit inference engine (CLAUDE.md)
Use English for logs and comments (Chinese text exists in legacy code, but new code should use English)
Files:
src/renderer/shell/components/AppBar.vue
test/main/presenter/threadPresenter/promptBuilder.test.ts
src/main/presenter/threadPresenter/utils/promptBuilder.ts
**/*.vue
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.vue: Use Vue 3 Composition API for all components instead of Options API
Use Tailwind CSS with scoped styles for component styling
Files:
src/renderer/shell/components/AppBar.vue
src/renderer/**/*.vue
📄 CodeRabbit inference engine (CLAUDE.md)
src/renderer/**/*.vue: All user-facing strings must use i18n keys via vue-i18n for internationalization
Ensure proper error handling and loading states in all UI components
Implement responsive design using Tailwind CSS utilities for all UI components
src/renderer/**/*.vue: Use composition API and declarative programming patterns; avoid options API
Structure files: exported component, composables, helpers, static content, types
Use PascalCase for component names (e.g., AuthWizard.vue)
Use Vue 3 with TypeScript, leveraging defineComponent and PropType
Use template syntax for declarative rendering
Use Shadcn Vue, Radix Vue, and Tailwind for components and styling
Implement responsive design with Tailwind CSS; use a mobile-first approach
Use Suspense for asynchronous components
Use <script setup> syntax for concise component definitions
Prefer 'lucide:' icon family as the primary choice for Iconify icons
Import Icon component from '@iconify/vue' and use with lucide icons following pattern '{collection}:{icon-name}'
Files:
src/renderer/shell/components/AppBar.vue
src/**/*
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
New features should be developed in the src directory
Files:
src/renderer/shell/components/AppBar.vue
src/main/presenter/threadPresenter/utils/promptBuilder.ts
src/renderer/**/*.{vue,js,ts}
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
Renderer process code should be placed in src/renderer (Vue 3 application)
Files:
src/renderer/shell/components/AppBar.vue
src/renderer/**/*.{ts,tsx,vue}
📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)
src/renderer/**/*.{ts,tsx,vue}: Write concise, technical TypeScript code with accurate examples
Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError)
Avoid enums; use const objects instead
Use arrow functions for methods and computed properties
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements
Vue 3 app code in src/renderer/src should be organized into components/, stores/, views/, i18n/, lib/ directories with shell UI in src/renderer/shell/
Files:
src/renderer/shell/components/AppBar.vue
src/renderer/**
📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)
Use lowercase with dashes for directories (e.g., components/auth-wizard)
Files:
src/renderer/shell/components/AppBar.vue
src/renderer/**/*.{ts,vue}
📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)
src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching
Leverage ref, reactive, and computed for reactive state management
Use provide/inject for dependency injection when appropriate
Use Iconify/Vue for icon implementation
Files:
src/renderer/shell/components/AppBar.vue
src/**/*.{ts,tsx,vue,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use Prettier with single quotes, no semicolons, and 100 character width
Files:
src/renderer/shell/components/AppBar.vue
src/main/presenter/threadPresenter/utils/promptBuilder.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Enable and maintain strict TypeScript type checking for all files
**/*.{ts,tsx}: Always use try-catch to handle possible errors in TypeScript code
Provide meaningful error messages when catching errors
Log detailed error logs including error details, context, and stack traces
Distinguish and handle different error types (UserError, NetworkError, SystemError, BusinessError) with appropriate handlers in TypeScript
Use structured logging with logger.error(), logger.warn(), logger.info(), logger.debug() methods from logging utilities
Do not suppress errors (avoid empty catch blocks or silently ignoring errors)
Provide user-friendly error messages for user-facing errors in TypeScript components
Implement error retry mechanisms for transient failures in TypeScript
Avoid logging sensitive information (passwords, tokens, PII) in logs
Files:
test/main/presenter/threadPresenter/promptBuilder.test.ts
src/main/presenter/threadPresenter/utils/promptBuilder.ts
test/**/*.{test,spec}.{ts,tsx,js}
📄 CodeRabbit inference engine (CLAUDE.md)
Use Vitest framework for unit and integration tests
Files:
test/main/presenter/threadPresenter/promptBuilder.test.ts
**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Do not include AI co-authoring information (e.g., 'Co-Authored-By: Claude') in git commits
Files:
test/main/presenter/threadPresenter/promptBuilder.test.ts
src/main/presenter/threadPresenter/utils/promptBuilder.ts
**/*.{js,ts,jsx,tsx,mjs,cjs}
📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)
Write logs and comments in English
Files:
test/main/presenter/threadPresenter/promptBuilder.test.ts
src/main/presenter/threadPresenter/utils/promptBuilder.ts
test/**/*.{test,spec}.ts
📄 CodeRabbit inference engine (AGENTS.md)
test/**/*.{test,spec}.ts: Test files should follow the same directory structure as source code under test/main and test/renderer directories
Use Vitest and Vue Test Utils for testing with jsdom configuration
Test files must be named with .test.ts or .spec.ts extension
Files:
test/main/presenter/threadPresenter/promptBuilder.test.ts
src/main/presenter/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Organize core business logic into dedicated Presenter classes, with one presenter per functional domain
Files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
src/main/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Use EventBus from src/main/eventbus.ts for main-to-renderer communication, broadcasting events via mainWindow.webContents.send()
src/main/**/*.ts: Use EventBus pattern for inter-process communication within the main process to decouple modules
Use Electron's built-in APIs for file system and native dialogs instead of Node.js or custom implementations
src/main/**/*.ts: Electron main process code belongs in src/main/ with presenters in presenter/ (Window/Tab/Thread/Mcp/Config/LLMProvider) and eventbus.ts for app events
Use the Presenter pattern in the main process for UI coordination
Files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
{src/main/presenter/**/*.ts,src/renderer/**/*.ts}
📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)
Implement proper inter-process communication (IPC) patterns using Electron's ipcRenderer and ipcMain APIs
Files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
src/main/**/*.{js,ts}
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
Main process code for Electron should be placed in
src/main
Files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
src/**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use OxLint for linting JavaScript and TypeScript files
Files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
src/**/*.{ts,tsx}: Use camelCase for variable and function names in TypeScript files
Use PascalCase for type and class names in TypeScript
Use SCREAMING_SNAKE_CASE for constant names
Files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
src/**/*.ts
📄 CodeRabbit inference engine (AGENTS.md)
Use EventBus for inter-process communication events
Files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
🧠 Learnings (13)
📚 Learning: 2025-11-25T05:28:04.454Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.454Z
Learning: Applies to src/renderer/**/*.vue : Prefer 'lucide:' icon family as the primary choice for Iconify icons
Applied to files:
src/renderer/shell/components/AppBar.vue
📚 Learning: 2025-11-25T05:28:04.454Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.454Z
Learning: Applies to src/renderer/**/*.{ts,vue} : Use Iconify/Vue for icon implementation
Applied to files:
src/renderer/shell/components/AppBar.vue
📚 Learning: 2025-11-25T05:28:04.454Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.454Z
Learning: Applies to src/renderer/**/*.vue : Import Icon component from '@iconify/vue' and use with lucide icons following pattern '{collection}:{icon-name}'
Applied to files:
src/renderer/shell/components/AppBar.vue
📚 Learning: 2025-11-25T05:28:04.454Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.454Z
Learning: Applies to src/renderer/**/*.vue : Use PascalCase for component names (e.g., AuthWizard.vue)
Applied to files:
src/renderer/shell/components/AppBar.vue
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, include helper methods for Provider-specific operations such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt`
Applied to files:
test/main/presenter/threadPresenter/promptBuilder.test.ts
src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts` (`startStreamCompletion`), implement the Agent loop that manages the overall conversation flow, including multiple rounds of LLM calls and tool usage, maintaining `conversationMessages` history, calling `provider.coreStream()` on each iteration, and controlling the loop using `needContinueConversation` and `toolCallCount` (compared against `MAX_TOOL_CALLS`)
Applied to files:
test/main/presenter/threadPresenter/promptBuilder.test.ts
src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, handle `stop` events by checking `stop_reason`: if `'tool_use'`, add the buffered assistant message and prepare for the next loop iteration; otherwise, add the final assistant message and exit the loop
Applied to files:
test/main/presenter/threadPresenter/promptBuilder.test.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Implement separation of concerns where `src/main/presenter/llmProviderPresenter/index.ts` manages the Agent loop and conversation history, while Provider files handle LLM API interactions, Provider-specific request/response formatting, tool definition conversion, and native vs non-native tool call mechanisms
Applied to files:
test/main/presenter/threadPresenter/promptBuilder.test.ts
src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, handle native tool support by converting MCP tools to Provider format using `convertToProviderTools` and including them in the API request; for Providers without native function call support, prepare messages using `prepareFunctionCallPrompt` before making the API call
Applied to files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, listen for standardized events yielded by `coreStream` and handle them accordingly: buffer text content (`currentContent`), handle `tool_call_start/chunk/end` events by collecting tool details and calling `presenter.mcpPresenter.callTool`, send frontend events via `eventBus` with tool call status, format tool results for the next LLM call, and set `needContinueConversation = true`
Applied to files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Applies to src/main/presenter/mcpPresenter/**/*.ts : Register new MCP tools in `mcpPresenter/index.ts` after implementing them in `inMemoryServers/`
Applied to files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Convert MCP tools to provider-specific formats and normalize streaming responses to standard events in each provider implementation
Applied to files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Applies to src/renderer/**/*.ts : Use the `usePresenter.ts` composable for renderer-to-main IPC communication to call presenter methods directly
Applied to files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
🧬 Code graph analysis (1)
test/main/presenter/threadPresenter/promptBuilder.test.ts (2)
src/main/presenter/threadPresenter/utils/messageContent.ts (1)
buildUserMessageContext (156-165)
src/main/presenter/threadPresenter/utils/promptBuilder.ts (1)
selectContextMessages (477-584)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build-check (x64)
🔇 Additional comments (6)
src/renderer/shell/components/AppBar.vue (1)
93-93: LGTM! Icon change from lucide:globe to lucide:compass follows the project's Iconify/lucide pattern. Based on learnings, using the lucide: icon family is the preferred choice.

test/main/presenter/threadPresenter/promptBuilder.test.ts (1)
83-176: LGTM! Good test coverage for the context pruning scenarios. The three test cases effectively cover:
- Basic pair dropping without tool calls
- Tool call block removal before pair dropping
- Combined tool call removal and pair dropping
The tests correctly validate that the last user/assistant pair is preserved even when over the token limit, which aligns with the safeguard in the implementation (lines 523-526 in promptBuilder.ts).

src/main/presenter/threadPresenter/utils/promptBuilder.ts (4)
364-408: LGTM! The token calculation helpers are well-structured with clear separation of concerns (a minimal sketch of the pattern follows the list):
- calculateToolCallTokens for ChatMessage tool calls
- calculateToolCallBlockTokens for AssistantMessageBlock tool calls
- Correctly handles the distinction between message formats
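A minimal sketch of how such helpers can be split, assuming a rough estimateTokens placeholder and simplified message/block shapes; the project's actual helpers use the tokenx library and its own Message types.

```ts
// Simplified shapes for illustration only; not the project's types.
type ToolCall = { id: string; name: string; arguments: string; response?: string }
type ChatMessage = { role: string; content: string; tool_calls?: ToolCall[] }
type AssistantMessageBlock = { type: string; content?: string; tool_call?: ToolCall }

// Rough character-based estimate standing in for the tokenx-based calculation.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4)

// Tokens contributed by tool calls attached to a ChatMessage.
const calculateToolCallTokens = (message: ChatMessage): number =>
  (message.tool_calls ?? []).reduce(
    (sum, call) => sum + estimateTokens(call.name + call.arguments + (call.response ?? '')),
    0
  )

// Tokens contributed by a single tool_call block inside an assistant message's block array.
const calculateToolCallBlockTokens = (block: AssistantMessageBlock): number =>
  block.type === 'tool_call' && block.tool_call
    ? estimateTokens(block.tool_call.name + block.tool_call.arguments + (block.tool_call.response ?? ''))
    : 0
```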
517-561: LGTM on pair dropping logic. The algorithm correctly (a simplified sketch follows the list):
- Finds oldest user message (first in reversed array)
- Matches it with corresponding assistant via parentId
- Prefers non-variant assistants when multiple match
- Preserves at least one user/assistant pair
- Handles orphaned messages in a cleanup pass
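A self-contained sketch of the pair-dropping loop under assumed message shapes. It is not the project's selectContextMessages implementation, which also handles variant prioritization, vision flags, and tool-call compression.

```ts
// Simplified shapes; the real code uses the project's Message type and token counting.
type Msg = { id: string; parentId?: string; role: 'user' | 'assistant'; tokens: number }

// Drop the oldest user message plus its assistant replies until the budget fits,
// always keeping at least the last user/assistant pair.
function prunePairs(messages: Msg[], budget: number): Msg[] {
  const selected = [...messages]
  let total = selected.reduce((sum, m) => sum + m.tokens, 0)

  while (total > budget) {
    const userIndexes = selected.flatMap((m, i) => (m.role === 'user' ? [i] : []))
    if (userIndexes.length <= 1) break // preserve the final pair even if over budget

    const oldestUser = selected[userIndexes[0]]
    // Remove the oldest user and any assistants that answer it via parentId.
    const toDrop = selected.filter((m) => m === oldestUser || m.parentId === oldestUser.id)
    for (const msg of toDrop) {
      total -= msg.tokens
      selected.splice(selected.indexOf(msg), 1)
    }
  }
  return selected
}
```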
568-584: LGTM! The orphaned assistant cleanup correctly filters out assistants whose parentId doesn't correspond to a user in the selected set. The final reverse ensures chronological order for downstream consumers.
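A minimal sketch of that cleanup step, using the same simplified shape as the pruning sketch above; the real implementation works on the project's Message type and its parentId field.

```ts
// Same simplified shape as the earlier pruning sketch.
type Msg = { id: string; parentId?: string; role: 'user' | 'assistant'; tokens: number }

// Keep only assistants whose parentId matches a surviving user, then restore
// chronological order (the selection loop worked on a reversed list).
function dropOrphanedAssistants(selected: Msg[]): Msg[] {
  const userIds = new Set(selected.filter((m) => m.role === 'user').map((m) => m.id))
  return selected
    .filter((m) => m.role !== 'assistant' || (m.parentId !== undefined && userIds.has(m.parentId)))
    .reverse()
}
```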
477-483: All call sites of selectContextMessages have been properly updated with the new function signature. The function now correctly requires supportsFunctionCall and vision parameters, and all usages in both the main code (line 139) and test files (lines 94, 128, 166) pass the required five arguments in the correct order.
💡 Codex Review
Here are some automated review suggestions for this pull request.
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```ts
const lastUserIndex = selectedMessages.findLastIndex((msg) => msg.role === 'user')
if (userIndex === lastUserIndex) {
  break
```
Avoid returning context that still exceeds limits
The trimming loop breaks when only one user message remains (userIndex === lastUserIndex), even if totalTokens still exceeds remainingContextLength. This can leave an over‑budget context (e.g., a single long user/assistant pair) and cause the final prompt to exceed the model’s context window, which the previous implementation avoided by skipping oversized messages entirely. You likely need a fallback that drops the remaining pair or truncates content when the last pair alone is too large.
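One possible fallback, sketched with a simplified message shape. Whether to drop the pair, truncate its content, or return an empty context is a design decision for the PR author; this is not the project's code.

```ts
// Simplified shape for the sketch; the real code uses the project's Message type.
type Msg = { role: 'user' | 'assistant'; tokens: number }

// If the lone remaining user/assistant pair is still over budget, return an empty
// context instead of an oversized prompt; the caller can fall back to the system prompt.
function enforceBudget(selected: Msg[], budget: number): Msg[] {
  const total = selected.reduce((sum, m) => sum + m.tokens, 0)
  return total > budget ? [] : selected
}
```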
Useful? React with 👍 / 👎.
Actionable comments posted: 0
♻️ Duplicate comments (1)
src/main/presenter/threadPresenter/utils/promptBuilder.ts (1)
548-592: Edge case: Context may still exceed token limit when final pair is too large. The loop exits when only one user message remains (lines 554-556), but doesn't handle the case where this final user/assistant pair still exceeds remainingContextLength. This could result in returning an over-budget context to downstream consumers. Consider adding a fallback to one of the following (a truncation sketch follows the list):
- Truncate the last message content
- Return an empty array with a warning
- Skip the oversized pair entirely
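A rough sketch of the first option (content truncation), assuming a crude character-per-token estimate and a flat string content field; the project's structured Message content blocks would need more careful handling.

```ts
// Simplified flat-string message shape for the sketch.
type Msg = { role: 'user' | 'assistant'; content: string }

// Crude character-based token estimate used only for this sketch.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4)

// Trim the final pair's content, oldest message first, until the pair fits the budget.
function truncateFinalPair(pair: Msg[], budget: number): Msg[] {
  const result = pair.map((msg) => ({ ...msg }))
  for (const msg of result) {
    const total = result.reduce((sum, m) => sum + estimateTokens(m.content), 0)
    if (total <= budget) break
    const excessChars = (total - budget) * 4 // invert the 4-chars-per-token estimate
    msg.content = msg.content.slice(0, Math.max(0, msg.content.length - excessChars))
  }
  return result
}
```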
🧹 Nitpick comments (3)
src/main/presenter/threadPresenter/utils/promptBuilder.ts (3)
410-428: Type safety concern: Array content casting assumes AssistantMessageBlock type. The function casts array elements to AssistantMessageBlock (line 415), but it's called on all message types including user messages. User messages may have different content structures (UserMessageContent), which could lead to incorrect cloning behavior. Consider adding a role check or handling different content structures:

🔎 Proposed fix

```diff
 function cloneMessageWithContent(message: Message): Message {
   const cloned: Message = { ...message }
   if (Array.isArray(message.content)) {
-    cloned.content = message.content.map((block) => {
-      const clonedBlock: AssistantMessageBlock = { ...(block as AssistantMessageBlock) }
-      if (block.type === 'tool_call' && block.tool_call) {
-        clonedBlock.tool_call = { ...block.tool_call }
-      }
-      return clonedBlock
-    })
+    if (message.role === 'assistant') {
+      cloned.content = message.content.map((block) => {
+        const clonedBlock: AssistantMessageBlock = { ...(block as AssistantMessageBlock) }
+        if (block.type === 'tool_call' && block.tool_call) {
+          clonedBlock.tool_call = { ...block.tool_call }
+        }
+        return clonedBlock
+      })
+    } else {
+      // For user or other roles, deep clone the array
+      cloned.content = JSON.parse(JSON.stringify(message.content))
+    }
   } else if (message.content && typeof message.content === 'object') {
     cloned.content = JSON.parse(JSON.stringify(message.content))
   } else {
     cloned.content = message.content
   }
   return cloned
 }
```
460-503: Minor: In-place array mutation could be clarified. The function mutates the input messages array directly (line 489) then returns it as compressedMessages. While this is safe because selectContextMessages creates a cloned array at line 518, the function signature suggests immutability. Consider either documenting this behavior or making the return explicit:

🔎 Suggestion for clarity

```diff
+/**
+ * Compresses tool calls from context messages to reduce token count.
+ * Note: This function mutates the input messages array in place.
+ */
 function compressToolCallsFromContext(
   messages: Message[],
   excessTokens: number,
   supportsFunctionCall: boolean
 ): { compressedMessages: Message[]; removedTokens: number } {
```
528-528: Use console.debug for diagnostic logging. Per coding guidelines, use structured logging with appropriate log levels. This diagnostic log should use console.debug instead of console.log.

🔎 Proposed fix

```diff
- console.log('totalTokens', totalTokens, 'remainingContextLength', remainingContextLength)
+ console.debug('totalTokens', totalTokens, 'remainingContextLength', remainingContextLength)
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/main/presenter/threadPresenter/utils/promptBuilder.ts (3 hunks)
🧰 Additional context used
📓 Path-based instructions (13)
**/*.{ts,tsx,js,jsx,vue}
📄 CodeRabbit inference engine (CLAUDE.md)
Use English for logs and comments (Chinese text exists in legacy code, but new code should use English)
Files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Enable and maintain strict TypeScript type checking for all files
**/*.{ts,tsx}: Always use try-catch to handle possible errors in TypeScript code
Provide meaningful error messages when catching errors
Log detailed error logs including error details, context, and stack traces
Distinguish and handle different error types (UserError, NetworkError, SystemError, BusinessError) with appropriate handlers in TypeScript
Use structured logging with logger.error(), logger.warn(), logger.info(), logger.debug() methods from logging utilities
Do not suppress errors (avoid empty catch blocks or silently ignoring errors)
Provide user-friendly error messages for user-facing errors in TypeScript components
Implement error retry mechanisms for transient failures in TypeScript
Avoid logging sensitive information (passwords, tokens, PII) in logs
Files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
src/main/presenter/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Organize core business logic into dedicated Presenter classes, with one presenter per functional domain
Files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
src/main/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Use EventBus from src/main/eventbus.ts for main-to-renderer communication, broadcasting events via mainWindow.webContents.send()
src/main/**/*.ts: Use EventBus pattern for inter-process communication within the main process to decouple modules
Use Electron's built-in APIs for file system and native dialogs instead of Node.js or custom implementations
src/main/**/*.ts: Electron main process code belongs in src/main/ with presenters in presenter/ (Window/Tab/Thread/Mcp/Config/LLMProvider) and eventbus.ts for app events
Use the Presenter pattern in the main process for UI coordination
Files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Do not include AI co-authoring information (e.g., 'Co-Authored-By: Claude') in git commits
Files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
**/*.{js,ts,jsx,tsx,mjs,cjs}
📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)
Write logs and comments in English
Files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
{src/main/presenter/**/*.ts,src/renderer/**/*.ts}
📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)
Implement proper inter-process communication (IPC) patterns using Electron's ipcRenderer and ipcMain APIs
Files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
src/**/*
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
New features should be developed in the src directory
Files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
src/main/**/*.{js,ts}
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
Main process code for Electron should be placed in
src/main
Files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
src/**/*.{ts,tsx,vue,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use Prettier with single quotes, no semicolons, and 100 character width
Files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
src/**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use OxLint for linting JavaScript and TypeScript files
Files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
src/**/*.{ts,tsx}: Use camelCase for variable and function names in TypeScript files
Use PascalCase for type and class names in TypeScript
Use SCREAMING_SNAKE_CASE for constant names
Files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
src/**/*.ts
📄 CodeRabbit inference engine (AGENTS.md)
Use EventBus for inter-process communication events
Files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
🧠 Learnings (10)
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, include helper methods for Provider-specific operations such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt`
Applied to files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, handle native tool support by converting MCP tools to Provider format using `convertToProviderTools` and including them in the API request; for Providers without native function call support, prepare messages using `prepareFunctionCallPrompt` before making the API call
Applied to files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, listen for standardized events yielded by `coreStream` and handle them accordingly: buffer text content (`currentContent`), handle `tool_call_start/chunk/end` events by collecting tool details and calling `presenter.mcpPresenter.callTool`, send frontend events via `eventBus` with tool call status, format tool results for the next LLM call, and set `needContinueConversation = true`
Applied to files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Implement separation of concerns where `src/main/presenter/llmProviderPresenter/index.ts` manages the Agent loop and conversation history, while Provider files handle LLM API interactions, Provider-specific request/response formatting, tool definition conversion, and native vs non-native tool call mechanisms
Applied to files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Applies to src/main/presenter/configPresenter/**/*.ts : Store and retrieve custom prompts via `configPresenter.getCustomPrompts()` for config-based data source management
Applied to files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts` (`startStreamCompletion`), implement the Agent loop that manages the overall conversation flow, including multiple rounds of LLM calls and tool usage, maintaining `conversationMessages` history, calling `provider.coreStream()` on each iteration, and controlling the loop using `needContinueConversation` and `toolCallCount` (compared against `MAX_TOOL_CALLS`)
Applied to files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-06-21T15:49:17.044Z
Learnt from: neoragex2002
Repo: ThinkInAIXYZ/deepchat PR: 550
File: src/renderer/src/stores/chat.ts:1011-1035
Timestamp: 2025-06-21T15:49:17.044Z
Learning: In src/renderer/src/stores/chat.ts, the user prefers to keep both `text` and `content` properties in the `handleMeetingInstruction` function's `sendMessage` call, even though they are redundant, rather than removing the `content` property.
Applied to files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Applies to src/main/presenter/mcpPresenter/**/*.ts : Register new MCP tools in `mcpPresenter/index.ts` after implementing them in `inMemoryServers/`
Applied to files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Convert MCP tools to provider-specific formats and normalize streaming responses to standard events in each provider implementation
Applied to files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Applies to src/renderer/**/*.ts : Use the `usePresenter.ts` composable for renderer-to-main IPC communication to call presenter methods directly
Applied to files:
src/main/presenter/threadPresenter/utils/promptBuilder.ts
🧬 Code graph analysis (1)
src/main/presenter/threadPresenter/utils/promptBuilder.ts (1)
src/shared/types/presenters/legacy.presenters.d.ts (1)
ChatMessage (1709-1709)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build-check (x64)
🔇 Additional comments (3)
src/main/presenter/threadPresenter/utils/promptBuilder.ts (3)
364-408: LGTM! Token calculation helpers are well-structured. The helper functions correctly handle different content types and provide consistent token estimation using the tokenx library.
430-458: Previous mutation concern has been addressed. The function now properly clones the message via cloneMessageWithContent (line 439) before modifying, preventing side effects on the original contextMessages.
599-614: LGTM! Orphan cleanup and ordering are well-implemented. The orphan assistant removal logic correctly ensures consistency by filtering assistants whose parentId doesn't correspond to a user in the selected messages. The final reverse restores chronological order for downstream consumers.
Summary by CodeRabbit
New Features
Style
Tests
✏️ Tip: You can customize this high-level summary in your review settings.