✨ feat: support client-side function tool execution in Response API#13414
Conversation
@nekomeowww — This PR implements client-side function tool execution in the Response API, touching backend server services and the agent runtime. Please take a look.
**Codecov Report**

❌ Patch coverage is

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##           canary   #13414      +/-   ##
==========================================
- Coverage   66.95%   66.91%   -0.05%
==========================================
  Files        1930     1930
  Lines      156582   156708     +126
  Branches    15137    18774    +3637
==========================================
+ Hits       104837   104856      +19
- Misses      51625    51732     +107
  Partials      120      120
```

Flags with carried forward coverage won't be shown.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: fbbd9715a5
Implement LOBE-6543: when the Response API receives tools with `type='function'`, inject them into the LLM and pause execution when the LLM calls them, allowing the client to provide results via `function_call_output` input items.

Key changes:
- Add `client` to the `ToolSource` type
- Inject function tools into the LLM via `execAgent` with `source='client'` in the sourceMap
- Pause the agent loop (interrupt) when the LLM calls a client function tool
- Handle the `function_call_output` resume flow via `previous_response_id`
- Add the `response.function_call_arguments.done` streaming event
- Emit `response.incomplete` when interrupted for client tool execution
- Use the original function name for client tools instead of `identifier/apiName`
- Simplify the response ID to use `topicId` directly (includes the LOBE-6536 fix)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
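The injection step described above can be sketched in TypeScript. This is a hypothetical illustration, not the actual lobe-chat code: only `UniformTool`, `ToolSource`, and `source='client'` come from the PR description, while `ResponseApiFunctionTool`, `injectClientTools`, and the `Map`-based sourceMap are assumed names.

```typescript
// Sketch only: names marked below are assumptions, not lobe-chat source.
type ToolSource = 'builtin' | 'plugin' | 'client';

// Assumed shape of an incoming Response API `type: 'function'` tool.
interface ResponseApiFunctionTool {
  type: 'function';
  name: string;
  description?: string;
  parameters?: Record<string, unknown>;
}

// Simplified stand-in for the UniformTool mentioned in the PR.
interface UniformTool {
  name: string;
  description?: string;
  parameters?: Record<string, unknown>;
}

// Convert incoming function tools and record their source so the agent
// loop can later interrupt instead of executing them server-side.
function injectClientTools(
  tools: ResponseApiFunctionTool[],
  sourceMap: Map<string, ToolSource>,
): UniformTool[] {
  return tools.map((t) => {
    // Client tools keep their original name (e.g. `get_weather`)
    // rather than the internal identifier/apiName format.
    sourceMap.set(t.name, 'client');
    return { name: t.name, description: t.description, parameters: t.parameters };
  });
}
```

With a sourceMap populated this way, a tool executor only has to check `sourceMap.get(name) === 'client'` to decide between running the tool and pausing the loop.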
Force-pushed fbbd971 to fd22e09.
`MessageModel` is not exported from the `@lobechat/database` package. Replace direct DB writes with a prompt-based approach for the tool result resume. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…o ToolSource

`CLIENT_FN_IDENTIFIER` `__fn__` caused ambiguous splits with `PLUGIN_SCHEMA_SEPARATOR` `____`, breaking tool name resolution. Renamed it to `lobe-client-fn` and added `client` to the `ToolSource` union in `@lobechat/types` to match context-engine's definition. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
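The ambiguity described in this commit can be reproduced with a small sketch. The constant values come from the commit message; the `joinToolName`/`splitToolName` helpers are assumed, simplified stand-ins for the real resolution logic.

```typescript
// `____` (four underscores) separates identifier from apiName,
// per the commit message. The helpers below are illustrative.
const PLUGIN_SCHEMA_SEPARATOR = '____';

function joinToolName(identifier: string, apiName: string): string {
  return `${identifier}${PLUGIN_SCHEMA_SEPARATOR}${apiName}`;
}

function splitToolName(full: string): [string, string] {
  // split() matches the separator greedily left-to-right, so an
  // identifier that itself ends in underscores shifts the boundary.
  const [identifier, ...rest] = full.split(PLUGIN_SCHEMA_SEPARATOR);
  return [identifier, rest.join(PLUGIN_SCHEMA_SEPARATOR)];
}
```

With the old identifier, `joinToolName('__fn__', 'get_weather')` produces `__fn______get_weather`; splitting on `____` then recovers `__fn`, not `__fn__`, so the identifier lookup fails. `lobe-client-fn` contains no run of underscores, so the round trip is exact.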
❤️ Great PR @arvinxx ❤️ The growth of the project is inseparable from user feedback and contributions; thanks for your contribution! If you are interested in the LobeHub developer community, please join our Discord and then DM @arvinxx or @canisminor1990. They will invite you to our private developer channel, where we talk about lobe-chat development and share AI newsletters from around the world.
# 🚀 release: 20260407

This release includes **148 commits**. Key updates are below.

- **Response API tool execution is more capable and reliable** — Added hosted builtin tools + client-side function tools and improved tool-call streaming/completion behavior. [#13406](#13406) [#13414](#13414) [#13506](#13506) [#13555](#13555)
- **Input and composition UX upgraded** — Added AI input auto-completion and multiple chat-input stability fixes. [#13458](#13458) [#13551](#13551) [#13481](#13481)
- **Model/provider compatibility improved** — Better Gemini/Google tool schema handling and additional model updates. [#13429](#13429) [#13465](#13465) [#13613](#13613)
- **Desktop and CLI reliability improved** — Gateway WebSocket support and desktop runtime upgrades. [#13608](#13608) [#13550](#13550) [#13557](#13557)
- **Security hardening continued** — Fixed auth and sanitization risks and upgraded vulnerable dependencies. [#13535](#13535) [#13529](#13529) [#13479](#13479)

### Models & Providers

- Added/updated support for `glm-5v-turbo`, GLM-5.1 updates, and the qwen3.5-omni series. [#13487](#13487) [#13405](#13405) [#13422](#13422)
- Added additional ImageGen providers/models (Wanxiang 2.7 and Keling from Qwen). [#13478](#13478)
- Improved Gemini/Google tool schema and compatibility handling across runtime paths. [#13429](#13429) [#13465](#13465) [#13613](#13613)

### Response API & Runtime

- Added hosted builtin tools in the Response API and client-side function tool execution support. [#13406](#13406) [#13414](#13414)
- Improved stream tool-call argument handling and `response.completed` output correctness. [#13506](#13506) [#13555](#13555)
- Improved runtime error/context handling for intervention and provider edge cases. [#13420](#13420) [#13607](#13607)

### Desktop App

- Bumped desktop dependencies and runtime integrations (`agent-browser`, `electron`). [#13550](#13550) [#13557](#13557)
- Simplified desktop release channel setup by removing the nightly release flow. [#13480](#13480)

### CLI

- Added OpenClaw migration command. [#13566](#13566)
- Added local device binding support for `lh agent run`. [#13277](#13277)
- Added WebSocket gateway support and reconnect reliability improvements. [#13608](#13608) [#13418](#13418)

### Security

- Removed risky `apiKey` fallback behavior in the webapi auth path to prevent bypass risk. [#13535](#13535)
- Sanitized HTML artifact rendering and iframe sandboxing to reduce XSS-to-RCE risk. [#13529](#13529)
- Upgraded nodemailer to v8 to address an SMTP command injection advisory. [#13479](#13479)

### Bug Fixes

- Fixed image generation model default switch issues. [#13587](#13587)
- Fixed subtopic re-fork message scope behavior and agent panel reset edge cases. [#13606](#13606) [#13556](#13556)
- Fixed chat-input freeze on paste and mention plugin behavior. [#13551](#13551) [#13415](#13415)
- Fixed auth/social sign-in and settings UX edge cases. [#13368](#13368) [#13392](#13392) [#13338](#13338)

### Credits

Huge thanks to these contributors: @chriszf @hardy-one @Innei @lijian @neko @OctopusNote @rdmclin2 @rivertwilight @RylanCai @suyua9 @sxjeru @Tsuki @wangyk @WindSpiritSR @yizhuo @YuTengjing @hezhijie0327 @arvinxx
## Summary

Implement client-side function tool execution for the Response API (LOBE-6543):

- `type: 'function'` tools are converted to `UniformTool` and injected into the LLM with `source='client'` in the tool sourceMap
- `RuntimeExecutors.call_tool` / `call_tools_batch` detects `source='client'` and interrupts the agent loop instead of executing
- `function_call_output` input items write tool results to the topic and resume the agent via `previous_response_id`
- Adds the `response.function_call_arguments.done` event; emits `response.incomplete` when paused for a client tool
- Client tools use the original function name (e.g. `get_weather`) instead of the internal `identifier/apiName` format

## Flow

1. 1st request → LLM calls client function → stream emits `function_call` events → `response.incomplete`
2. 2nd request with `function_call_output` → tool results injected → LLM generates final response → `response.completed`

Fixes LOBE-6543
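The two-request flow above can be sketched as payload builders. The field names `previous_response_id` and `function_call_output` come from the PR description and follow the OpenAI-style Responses API shape; the helper functions, endpoint-agnostic object layout, and the `get_weather` schema are illustrative assumptions.

```typescript
// Assumed shape of a tool-result input item in the resume request.
interface FunctionCallOutputItem {
  type: 'function_call_output';
  call_id: string;
  output: string;
}

// 1st request: declare a client-side function tool alongside the input.
// The `get_weather` schema is a hypothetical example.
function firstRequest(input: string) {
  return {
    input,
    tools: [
      {
        type: 'function',
        name: 'get_weather',
        parameters: {
          type: 'object',
          properties: { city: { type: 'string' } },
          required: ['city'],
        },
      },
    ],
  };
}

// 2nd request: after receiving `response.incomplete`, the client runs the
// tool locally and resumes the paused response with its result.
function resumeRequest(previousResponseId: string, callId: string, result: unknown) {
  const item: FunctionCallOutputItem = {
    type: 'function_call_output',
    call_id: callId,
    output: JSON.stringify(result),
  };
  return { previous_response_id: previousResponseId, input: [item] };
}
```

The client pairs `call_id` with the `function_call` event it received in the first stream, which is how the server knows which paused tool call the output resolves.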
## Test plan

- Verify `response.incomplete` status when the agent pauses for a client tool
- Verify the `function_call_output` resume flow produces the final response

🤖 Generated with Claude Code