
✨ feat: support client-side function tool execution in Response API#13414

Merged
arvinxx merged 3 commits into canary from feat/client-function-tool-execution
Mar 31, 2026
Conversation

Member

@arvinxx arvinxx commented Mar 30, 2026

Summary

Implement client-side function tool execution for the Response API (LOBE-6543):

  • Function tool injection: type: 'function' tools are converted to UniformTool and injected into the LLM with source='client' in the toolSourceMap
  • Pause mechanism: When the LLM calls a client function tool, RuntimeExecutors.call_tool / call_tools_batch detects source='client' and interrupts the agent loop instead of executing
  • Resume flow: Second request with function_call_output input items writes tool results to the topic and resumes the agent via previous_response_id
  • Streaming events: Adds response.function_call_arguments.done event; emits response.incomplete when paused for client tool
  • Tool name format: Client function tools use their original name (e.g., get_weather) instead of internal identifier/apiName format
  • Response ID simplification: Uses topicId directly as response ID (includes 🔧 refactor: simplify response ID to use topicId directly #13410)
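
The pause mechanism above can be sketched as a source check in the tool executor. This is a minimal illustration based on the summary, not the actual LobeChat code; the helper name `shouldInterrupt` is hypothetical, while `ToolSource` and the `toolSourceMap` shape follow the PR description.

```typescript
// Hypothetical sketch of the client-tool pause check described above.
type ToolSource = 'hosted' | 'client';

interface ToolCall {
  name: string;
  arguments: string;
}

// Client-sourced tools are never executed server-side: the agent loop
// interrupts and hands the call back to the client instead.
function shouldInterrupt(call: ToolCall, toolSourceMap: Record<string, ToolSource>): boolean {
  return toolSourceMap[call.name] === 'client';
}

const toolSourceMap: Record<string, ToolSource> = {
  get_weather: 'client', // injected type: 'function' tool
  web_search: 'hosted', // hosted builtin tool
};

console.log(shouldInterrupt({ name: 'get_weather', arguments: '{"city":"Berlin"}' }, toolSourceMap)); // true
```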

Flow

1st request → LLM calls client function → stream emits function_call events → response.incomplete
2nd request with function_call_output → tool results injected → LLM generates final response → response.completed
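
The two requests above might look roughly like this, assuming the Response API mirrors the OpenAI Responses API shape for function tools; the tool name, ids, and field values are illustrative examples only.

```typescript
// First request: declare the client-side function tool.
const firstRequest = {
  input: 'What is the weather in Berlin?',
  tools: [
    {
      type: 'function',
      name: 'get_weather',
      parameters: {
        type: 'object',
        properties: { city: { type: 'string' } },
        required: ['city'],
      },
    },
  ],
};

// The first response pauses with response.incomplete and a function_call item.
// After executing get_weather locally, the client resumes:
const secondRequest = {
  previous_response_id: 'resp_abc123', // example; per this PR, the topicId is used directly
  input: [
    {
      type: 'function_call_output',
      call_id: 'call_xyz789', // example call id echoed from the function_call item
      output: JSON.stringify({ temperature: '12°C', condition: 'cloudy' }),
    },
  ],
};

console.log(firstRequest.tools[0].name, secondRequest.input[0].type);
```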

Fixes LOBE-6543

Test plan

  • Verify function tool definitions are passed to LLM and callable
  • Verify streaming emits correct function_call event sequence (added → delta → done → item.done)
  • Verify response.incomplete status when agent pauses for client tool
  • Verify function_call_output resume flow produces final response
  • Verify mixed hosted + client tools work correctly in batch
  • Run openresponses compliance tests (tool-calling, tool-calling-streaming)
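
The event sequence in the second bullet can be checked with an ordered-subsequence assertion. This is a test sketch only; the event names assume the OpenAI-style streaming names that `added → delta → done → item.done` abbreviates.

```typescript
// Expected function_call streaming order per the test plan (assumed names).
const expectedSequence = [
  'response.output_item.added',
  'response.function_call_arguments.delta',
  'response.function_call_arguments.done',
  'response.output_item.done',
];

// True if every expected event appears in the observed stream, in order
// (other events and repeated deltas may be interleaved).
function isOrderedSubsequence(events: string[], expected: string[]): boolean {
  let i = 0;
  for (const e of events) {
    if (i < expected.length && e === expected[i]) i += 1;
  }
  return i === expected.length;
}

const observed = [
  'response.created',
  'response.output_item.added',
  'response.function_call_arguments.delta',
  'response.function_call_arguments.delta',
  'response.function_call_arguments.done',
  'response.output_item.done',
  'response.incomplete',
];

console.log(isOrderedSubsequence(observed, expectedSequence)); // true
```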

🤖 Generated with Claude Code


vercel bot commented Mar 30, 2026

The latest updates on your projects.

Project: lobehub · Deployment: Error · Updated (UTC): Mar 31, 2026 7:05am

Contributor

@sourcery-ai sourcery-ai bot left a comment


We've reviewed this pull request using the Sourcery rules engine.

@lobehubbot
Member

@nekomeowww - This PR implements client-side function tool execution in the Response API, touching backend server services and agent runtime. Please take a look.

@codecov

codecov bot commented Mar 30, 2026

Codecov Report

❌ Patch coverage is 19.23077% with 105 lines in your changes missing coverage. Please review.
✅ Project coverage is 66.91%. Comparing base (6402656) to head (4f21f9c).
⚠️ Report is 14 commits behind head on canary.

Additional details and impacted files
@@            Coverage Diff             @@
##           canary   #13414      +/-   ##
==========================================
- Coverage   66.95%   66.91%   -0.05%     
==========================================
  Files        1930     1930              
  Lines      156582   156708     +126     
  Branches    15137    18774    +3637     
==========================================
+ Hits       104837   104856      +19     
- Misses      51625    51732     +107     
  Partials      120      120              
Flag Coverage Δ
app 58.68% <19.23%> (-0.05%) ⬇️
database 96.66% <ø> (ø)
packages/agent-runtime 89.11% <ø> (ø)
packages/context-engine 86.51% <ø> (ø)
packages/conversation-flow 92.36% <ø> (ø)
packages/file-loaders 87.02% <ø> (ø)
packages/memory-user-memory 66.68% <ø> (ø)
packages/model-bank 99.85% <ø> (ø)
packages/model-runtime 84.48% <ø> (ø)
packages/prompts 67.76% <ø> (ø)
packages/python-interpreter 92.90% <ø> (ø)
packages/ssrf-safe-fetch 0.00% <ø> (ø)
packages/utils 90.41% <ø> (ø)
packages/web-crawler 88.82% <ø> (ø)

Flags with carried forward coverage won't be shown.

Components Coverage Δ
Store 67.08% <ø> (ø)
Services 49.21% <ø> (ø)
Server 67.27% <19.23%> (-0.17%) ⬇️
Libs 51.03% <ø> (ø)
Utils 89.08% <ø> (ø)


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: fbbd9715a5


Implement LOBE-6543: when the Response API receives tools with type='function',
inject them into the LLM and pause execution when the LLM calls them, allowing
the client to provide results via function_call_output input items.

Key changes:
- Add 'client' to ToolSource type
- Inject function tools into LLM via execAgent with source='client' in sourceMap
- Pause agent loop (interrupt) when LLM calls a client function tool
- Handle function_call_output resume flow via previous_response_id
- Add response.function_call_arguments.done streaming event
- Emit response.incomplete when interrupted for client tool execution
- Use original function name for client tools instead of identifier/apiName
- Simplify response ID to use topicId directly (includes LOBE-6536 fix)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@arvinxx arvinxx force-pushed the feat/client-function-tool-execution branch from fbbd971 to fd22e09 on March 30, 2026 15:29
MessageModel is not exported from @lobechat/database package.
Replace direct DB writes with prompt-based approach for tool result resume.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…o ToolSource

CLIENT_FN_IDENTIFIER `__fn__` caused ambiguous splits with PLUGIN_SCHEMA_SEPARATOR `____`,
breaking tool name resolution. Renamed to `lobe-client-fn` and added `client` to the
ToolSource union in @lobechat/types to match context-engine's definition.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
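
The separator ambiguity described in that commit can be demonstrated directly. The constant names follow the commit message, but the exact name-composition logic shown here is an assumption for illustration.

```typescript
// Internal tool names are composed as identifier + separator + apiName.
const PLUGIN_SCHEMA_SEPARATOR = '____';

const buildToolName = (identifier: string, apiName: string) =>
  `${identifier}${PLUGIN_SCHEMA_SEPARATOR}${apiName}`;

// Old identifier: '__fn__' bleeds into the '____' separator, so the split
// recovers the wrong identifier/apiName pair.
const oldName = buildToolName('__fn__', 'get_weather');
console.log(oldName.split(PLUGIN_SCHEMA_SEPARATOR)); // ['__fn', '__get_weather'] — wrong on both sides

// New identifier: 'lobe-client-fn' contains no underscores, so the split is unambiguous.
const newName = buildToolName('lobe-client-fn', 'get_weather');
console.log(newName.split(PLUGIN_SCHEMA_SEPARATOR)); // ['lobe-client-fn', 'get_weather']
```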
@arvinxx arvinxx merged commit 674c849 into canary Mar 31, 2026
32 of 34 checks passed
@arvinxx arvinxx deleted the feat/client-function-tool-execution branch March 31, 2026 08:24
@lobehubbot
Member

❤️ Great PR @arvinxx ❤️

The growth of the project is inseparable from user feedback and contributions; thank you for your contribution! If you are interested in the lobehub developer community, please join our Discord and DM @arvinxx or @canisminor1990. They will invite you to our private developer channel, where we discuss lobe-chat development and share AI news from around the world.

arvinxx added a commit that referenced this pull request Apr 7, 2026
# 🚀 release: 20260407

This release includes **148 commits**. Key updates are below.

- **Response API tool execution is more capable and reliable** — Added
hosted builtin tools + client-side function tools and improved tool-call
streaming/completion behavior.
[#13406](#13406)
[#13414](#13414)
[#13506](#13506)
[#13555](#13555)
- **Input and composition UX upgraded** — Added AI input auto-completion
and multiple chat-input stability fixes.
[#13458](#13458)
[#13551](#13551)
[#13481](#13481)
- **Model/provider compatibility improved** — Better Gemini/Google tool
schema handling and additional model updates.
[#13429](#13429)
[#13465](#13465)
[#13613](#13613)
- **Desktop and CLI reliability improved** — Gateway WebSocket support
and desktop runtime upgrades.
[#13608](#13608)
[#13550](#13550)
[#13557](#13557)
- **Security hardening continued** — Fixed auth and sanitization risks
and upgraded vulnerable dependencies.
[#13535](#13535)
[#13529](#13529)
[#13479](#13479)

### Models & Providers

- Added/updated support for `glm-5v-turbo`, GLM-5.1 updates, and
qwen3.5-omni series.
[#13487](#13487)
[#13405](#13405)
[#13422](#13422)
- Added additional ImageGen providers/models (Wanxiang 2.7 and Keling
from Qwen). [#13478](#13478)
- Improved Gemini/Google tool schema and compatibility handling across
runtime paths. [#13429](#13429)
[#13465](#13465)
[#13613](#13613)

### Response API & Runtime

- Added hosted builtin tools in Response API and client-side function
tool execution support.
[#13406](#13406)
[#13414](#13414)
- Improved stream tool-call argument handling and `response.completed`
output correctness.
[#13506](#13506)
[#13555](#13555)
- Improved runtime error/context handling for intervention and provider
edge cases. [#13420](#13420)
[#13607](#13607)

### Desktop App

- Bumped desktop dependencies and runtime integrations (`agent-browser`,
`electron`). [#13550](#13550)
[#13557](#13557)
- Simplified desktop release channel setup by removing nightly release
flow. [#13480](#13480)

### CLI

- Added OpenClaw migration command.
[#13566](#13566)
- Added local device binding support for `lh agent run`.
[#13277](#13277)
- Added WebSocket gateway support and reconnect reliability
improvements. [#13608](#13608)
[#13418](#13418)

### Security

- Removed risky `apiKey` fallback behavior in webapi auth path to
prevent bypass risk.
[#13535](#13535)
- Sanitized HTML artifact rendering and iframe sandboxing to reduce
XSS-to-RCE risk. [#13529](#13529)
- Upgraded nodemailer to v8 to address SMTP command injection advisory.
[#13479](#13479)

### Bug Fixes

- Fixed image generation model default switch issues.
[#13587](#13587)
- Fixed subtopic re-fork message scope behavior and agent panel reset
edge cases. [#13606](#13606)
[#13556](#13556)
- Fixed chat-input freeze on paste and mention plugin behavior.
[#13551](#13551)
[#13415](#13415)
- Fixed auth/social sign-in and settings UX edge cases.
[#13368](#13368)
[#13392](#13392)
[#13338](#13338)

### Credits

Huge thanks to these contributors:

@chriszf @hardy-one @Innei @lijian @neko @OctopusNote @rdmclin2
@rivertwilight @RylanCai @suyua9 @sxjeru @Tsuki @wangyk @WindSpiritSR
@yizhuo @YuTengjing @hezhijie0327 @arvinxx
