✨ feat: add message queue for agent runtime #13343
Conversation
Implement a per-context message queue that allows users to send messages while the agent is executing. Messages are queued and consumed via two paths: injected at step boundaries during execution (Path A), or triggering a new sendMessage after completion (Path B).

- Add QueuedMessage type and queuedMessages state in operation store
- Add enqueue/drain/remove/clear actions and selectors
- Modify sendMessage to enqueue when execAgentRuntime is running
- Add queue checkpoint in step loop (streamingExecutor)
- Add Path B: drain remaining queue after completion → new sendMessage
- Keep input enabled during agent execution (remove isInputLoading guard)
- Add QueueTray component showing "N Queued" above ChatInput
- Add electron-testing skill for agent-browser CLI automation

Fixes LOBE-6001

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
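The enqueue/drain/remove semantics described above can be sketched in plain TypeScript (a simplified stand-in for the zustand operation-store slice; the `QueuedMessage` fields shown here are illustrative, not the exact lobe-chat type):

```typescript
// Minimal sketch of the per-context message queue semantics.
interface QueuedMessage {
  content: string;
  files?: string[];
  id: string;
}

class MessageQueue {
  private queues = new Map<string, QueuedMessage[]>();

  enqueue(contextKey: string, msg: QueuedMessage): void {
    const list = this.queues.get(contextKey) ?? [];
    list.push(msg);
    this.queues.set(contextKey, list);
  }

  // Drain returns everything queued for the context and empties it in one step.
  drain(contextKey: string): QueuedMessage[] {
    const list = this.queues.get(contextKey) ?? [];
    this.queues.delete(contextKey);
    return list;
  }

  remove(contextKey: string, id: string): void {
    const list = this.queues.get(contextKey)?.filter((m) => m.id !== id);
    if (list) this.queues.set(contextKey, list);
  }

  // Backing logic for the "N Queued" badge in QueueTray.
  count(contextKey: string): number {
    return this.queues.get(contextKey)?.length ?? 0;
  }
}
```

Keyed per context, one conversation's queued messages never leak into another's, which is what lets sendMessage decide per conversation whether to enqueue or start a new operation.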
🐛 fix: Path B deferred execution to avoid recursive internal_execAgentRuntime

Use setTimeout(0) to break out of the current execution context when triggering a new agent runtime for queued messages after completion. Direct recursive calls caused issues with zustand state batching.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
🐛 fix: remove premature Path A drain, fix Path B with fresh store ref

Path A (step checkpoint injection) was draining the queue before the last LLM step, leaving nothing for Path B. For agents without tool calls, this meant queued messages were consumed but never acted upon. Fix: remove Path A for now (will be re-added for tool-call scenarios), and use useChatStore.getState() in the Path B setTimeout to get a fresh store reference instead of a stale closure capture.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
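The two fixes above (setTimeout deferral, fresh store reference) can be illustrated together. This is a hedged sketch, not lobe-chat code: `State` and `replayQueued` are invented names, and the real Path B calls sendMessage on the chat store.

```typescript
// Sketch of the deferred Path B trigger. setTimeout(0) pushes the replay to a
// new macrotask so the current runtime fully unwinds first, and the state is
// re-read inside the callback instead of being captured by the closure.
interface State {
  queued: string[];
  send: (m: string) => void;
}

const replayQueued = (getState: () => State): Promise<number> =>
  new Promise((resolve) => {
    setTimeout(() => {
      const fresh = getState(); // fresh reference, not a stale snapshot
      for (const m of fresh.queued) fresh.send(m);
      resolve(fresh.queued.length);
    }, 0);
  });
```

If the callback captured `state` directly instead of calling `getState()`, any messages enqueued between scheduling and firing would be missed, which is exactly the stale-closure bug the commit describes.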
📝 docs: update electron-testing skill with real-world patterns

Based on lessons learned during message queue testing:

- Must cd to apps/desktop before npx electron-vite dev
- Use a polling loop for startup detection
- snapshot -i -C required for contenteditable (chat input)
- Use sleep + screenshot instead of agent-browser wait for long ops
- Access store via window.__LOBE_STORES.chat()
- Add error interceptor and store inspection patterns
- Document all gotchas (HMR, daemon blocking, fill vs type)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
✨ feat: add Path A - early handoff to Path B at tool completion

When tools finish and the queue has messages, break the step loop early and let Path B create the user message and start a new operation. The new LLM call sees the full context, including tool results and the new user message.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
🐛 fix: Path B use sendMessage for proper message creation

Use sendMessage instead of optimisticCreateMessage + internal_execAgentRuntime. sendMessage handles the full lifecycle correctly: creates the user message on the server, creates the assistant message placeholder, and triggers internal_execAgentRuntime, ensuring both messages are visible in the UI.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
💄 style: redesign QueueTray to Codex-style card layout

Each queued message shows as a card with icon, text preview, and delete button. Uses antd CSS variables for consistent theming.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
💄 style: connect QueueTray with ChatInput as unified container

QueueTray and ChatInput now share a connected border:

- QueueTray has top-rounded corners, no bottom border
- ChatInput gets bottom-only rounded corners when the queue has items
- Uses cssVar for proper theme token styling
- Zero gap between tray and input

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 1b9d40ec60
```ts
const currentContextKey = messageMapKey(operationContext);
const contextOpIds = this.#get().operationsByContext[currentContextKey] || [];
const runningAgentOp = contextOpIds
```
Use indexed context key when checking for running operations
This lookup builds currentContextKey from the full operationContext, but operationsByContext is indexed elsewhere using only agentId/groupId/topicId; in thread or scoped contexts (for example with threadId or scope), this key mismatch means runningAgentOp is not found, so a second sendMessage starts instead of being queued. That leads to overlapping generations in the same conversation. Please derive this key with the same shape as the operation index (or reuse a selector that already filters by context).
```ts
setTimeout(() => {
  useChatStore
    .getState()
    .sendMessage({ message: mergedContent, context: execContext })
```
Forward queued file attachments when replaying queued messages
Path B merges queued messages (including collected files) but then schedules sendMessage with only message and context, so queued messages that included uploaded files are replayed as text-only prompts. Because the UI clears attachments immediately on send, this silently drops user input and changes model behavior. The replay call should include the merged file payload instead of discarding it.
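The fix the reviewer asks for amounts to merging files alongside text before the replay. A hedged sketch (the `content`/`files` field names and `mergeQueuedMessages` shape are assumptions based on this comment, not the library's actual API):

```typescript
// Illustrative merge of queued messages before replay: text is concatenated
// and file references are collected so attachments are not silently dropped.
interface QueuedMsg {
  content: string;
  files?: string[];
}

const mergeQueuedMessages = (msgs: QueuedMsg[]) => ({
  files: msgs.flatMap((m) => m.files ?? []),
  message: msgs.map((m) => m.content).join('\n\n'),
});
```

The replay call would then pass both `message` and `files` to sendMessage instead of only the text.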
```ts
// ━━━ Message Queue Post-Completion (Path B) ━━━
// If the loop finished (done) and queue still has messages (enqueued during the
// last step), complete this operation and trigger a new sendMessage.
const remainingQueued = this.#get().drainQueuedMessages(contextKey);
```
Drain queued messages only after successful runtime completion
The queue is drained before checking state.status === 'done', so if runtime exits in a non-done state (for example error), queued messages are removed and never retried because the resend branch is skipped. This causes silent loss of user-queued prompts after failed generations. Drain should happen only in the success path (or be restored for non-done outcomes).
Codecov Report

❌ Patch coverage is …

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           canary   #13343      +/-   ##
==========================================
- Coverage   66.83%   66.78%   -0.06%
==========================================
  Files        1891     1891
  Lines      151326   151483     +157
  Branches    17982    15233    -2749
==========================================
+ Hits       101144   101170      +26
- Misses      50063    50194     +131
  Partials      119      119
```
Flags with carried forward coverage won't be shown.
@ONLY-yours @canisminor1990 - This PR adds a message queue mechanism to the agent runtime, touching conversation lifecycle, streaming executor, operation store slices, and ChatInput UI components. Please take a look.
♻️ refactor: move queue check logic into GeneralChatAgent

Move the "finish early when queue has messages" decision from streamingExecutor into GeneralChatAgent.runner(). The agent now checks stepContext.hasQueuedMessages at the tools_batch_result phase and returns a finish instruction, which is architecturally cleaner.

- Add hasQueuedMessages to RuntimeStepContext and computeStepContext
- GeneralChatAgent returns finish when tools complete + queue non-empty
- Remove Path A/B labels from comments
- streamingExecutor just passes hasQueuedMessages via stepContext

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
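The agent-side decision described above can be sketched as follows. The phase and instruction shapes are assumptions based on the commit message, not lobe-chat's actual runner contract:

```typescript
// Sketch of the early-finish decision in the agent runner.
interface StepContext {
  hasQueuedMessages: boolean;
  phase: 'llm_text' | 'tools_batch_result';
}

type Instruction = { reason: string; type: 'finish' } | { type: 'continue' };

const nextInstruction = (ctx: StepContext): Instruction => {
  // Tools are done and the user queued new input: finish now instead of doing
  // a final summarizing LLM call, so the replayed sendMessage starts a new
  // operation that sees tool results and the new user message together.
  if (ctx.phase === 'tools_batch_result' && ctx.hasQueuedMessages) {
    return { reason: 'queued_message_interrupt', type: 'finish' };
  }
  return { type: 'continue' };
};
```

Keeping the decision in the agent rather than the executor means the executor only reports state (`hasQueuedMessages`) and the agent owns the control-flow policy.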
🐛 fix: forward queued files in sendMessage and drain only on success

- Forward merged file attachments when replaying queued messages (sendMessage now receives files from the merged queue)
- Move drainQueuedMessages inside the status === 'done' branch so queued messages are preserved on error/interrupted states
- Add queued_message_interrupt to the FinishReason enum
- Add hasQueuedMessages check to tool_result and tasks_batch_result phases in GeneralChatAgent (not just tools_batch_result)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
♻️ refactor: use full operationContext for context key indexing

- operationsByContext index now uses messageMapKey(context) with the full context (including threadId, scope, etc.) instead of a stripped key
- Fixes key mismatch where thread/scoped contexts couldn't find running operations, causing overlapping generations
- Move mergeQueuedMessages from services/messageQueue.ts into operation/types.ts alongside the QueuedMessage type
- Delete services/messageQueue.ts

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
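The key-mismatch bug this refactor fixes is easy to see with a toy key function. This is purely illustrative: the real messageMapKey signature and field set in lobe-chat may differ.

```typescript
// Illustrative context key derived from the FULL context, so the lookup key in
// sendMessage matches the key used when the operation was indexed, even in
// threaded or scoped conversations.
interface OperationContext {
  agentId?: string;
  groupId?: string;
  scope?: string;
  threadId?: string;
  topicId?: string;
}

const contextKey = (ctx: OperationContext): string =>
  [ctx.agentId, ctx.groupId, ctx.topicId, ctx.threadId, ctx.scope]
    .map((v) => v ?? '_')
    .join('::');
```

With a stripped key (agentId/groupId/topicId only), a thread's running operation would be indexed under one key but looked up under another, so the running check failed and a second generation started in the same conversation.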
❤️ Great PR @arvinxx ❤️ The growth of the project is inseparable from user feedback and contributions, thanks for your contribution! If you are interested in the lobehub developer community, please join our Discord and then DM @arvinxx or @canisminor1990. They will invite you to our private developer channel, where we talk about lobe-chat development and share AI news from around the world.
* ✨ feat: add message queue for agent runtime (soft interrupt)
* 🐛 fix: Path B deferred execution to avoid recursive internal_execAgentRuntime
* 🐛 fix: remove premature Path A drain, fix Path B with fresh store ref
* 📝 docs: update electron-testing skill with real-world patterns
* ✨ feat: add Path A - early handoff to Path B at tool completion
* 🐛 fix: Path B use sendMessage for proper message creation
* 💄 style: redesign QueueTray to Codex-style card layout
* 💄 style: connect QueueTray with ChatInput as unified container
* ♻️ refactor: move queue check logic into GeneralChatAgent
* 🐛 fix: forward queued files in sendMessage and drain only on success
* ♻️ refactor: use full operationContext for context key indexing

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary

Implement a message queue mechanism for the Agent Runtime that lets users keep sending messages while the agent is executing. Messages enter a per-context queue and are consumed along different paths depending on agent state.

Core mechanism

While execAgentRuntime is running, sendMessage automatically enqueues the message instead of creating a new operation.

Consumption paths

Path A (tool-call scenario): after tool execution completes, if the queue has messages, end the current operation early and hand off to Path B. The agent skips the final LLM summary so that the new operation sees the full context (tool results + the user's new message).

Path B (post-completion trigger): after the agent finishes, drain the queue and call sendMessage to create the proper user message and assistant message, triggering a new execAgentRuntime.

Changed files

- operation/types.ts: QueuedMessage type
- operation/initialState.ts: queuedMessages state (per-context)
- operation/actions.ts: enqueueMessage, drainQueuedMessages, removeQueuedMessage, clearMessageQueue
- operation/selectors.ts: queuedMessageCount, getQueuedMessages
- services/messageQueue.ts: mergeQueuedMessages utility
- conversationLifecycle.ts: sendMessage enqueues while running
- streamingExecutor.ts
- ChatInput/index.tsx
- ChatInput/QueueTray.tsx
- electron-testing/SKILL.md

Test Plan
Fixes LOBE-6001
🤖 Generated with Claude Code