
✨ feat(openapi): support hosted builtin tools in Response API#13406

Merged
arvinxx merged 3 commits into canary from feat/openapi-hosted-tools
Mar 30, 2026

Conversation


@arvinxx arvinxx commented Mar 30, 2026

Summary

  • Add HostedToolSchema ({ type: 'lobe-xxx' }) to the Response API's tool type system alongside existing FunctionToolSchema
  • Extract hosted tool identifiers from the tools array and pass them as additionalPluginIds to execAgent
  • The existing ToolsEngine automatically handles manifest resolution, tool generation, and source mapping
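The extraction step can be sketched roughly as below. The `extractHostedToolIds` name comes from the commit messages in this PR; the `Tool` type here is a simplified stand-in for the real request schema, not the actual implementation:

```typescript
// Minimal sketch of the hosted-tool extraction described above.
// `Tool` is a simplified stand-in for the real request schema types.
type Tool = { type: string; name?: string };

// The real `tools` field is `.nullish()` in zod, so accept null/undefined too.
export const extractHostedToolIds = (tools?: Tool[] | null): string[] =>
  (tools ?? [])
    .filter((t) => t.type.startsWith('lobe-'))
    .map((t) => t.type);
```

The resulting ids are what get forwarded as `additionalPluginIds` to `execAgent`.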

Usage

POST /api/v1/responses
{
  "model": "agent-id",
  "input": "Search for recent AI news",
  "tools": [{ "type": "lobe-web-browsing" }],
  "stream": true
}

Closes LOBE-6535

Test plan

  • Call Response API with tools: [{ type: 'lobe-web-browsing' }] and verify agent enables web browsing
  • Call with tools: [{ type: 'lobe-cloud-sandbox' }] and verify sandbox tool is available
  • Call with mixed tools [{ type: 'lobe-web-browsing' }, { type: 'function', name: '...', ... }] and verify both work
  • Call without tools param and verify existing behavior unchanged
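The mixed-tools case above boils down to partitioning the array into hosted plugin ids and plain function tools. A hedged sketch (the `partitionTools` name and `Tool` type are assumptions for illustration, not the repo's actual helpers):

```typescript
// Sketch: split a mixed tools array into hosted plugin ids and plain
// function tools, mirroring the mixed-tools test case above.
type Tool = { type: string; name?: string };

export const partitionTools = (tools: Tool[]) => {
  const hostedIds: string[] = [];
  const functionTools: Tool[] = [];
  for (const t of tools) {
    // Hosted builtin tools are declared as { type: 'lobe-xxx' }.
    if (t.type.startsWith('lobe-')) hostedIds.push(t.type);
    else functionTools.push(t);
  }
  return { hostedIds, functionTools };
};
```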

🤖 Generated with Claude Code

Allow declaring builtin tools via { type: 'lobe-xxx' } syntax in the
tools array of POST /api/v1/responses. Hosted tool identifiers are
extracted and passed as additionalPluginIds to execAgent, where the
existing ToolsEngine handles manifest resolution automatically.

LOBE-6535

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@vercel

vercel bot commented Mar 30, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project    Deployment   Actions            Updated (UTC)
lobehub    Ready        Preview, Comment   Mar 30, 2026 1:19pm

Contributor

@sourcery-ai sourcery-ai bot left a comment

We've reviewed this pull request using the Sourcery rules engine.

@lobehubbot
Member

@nekomeowww - This is a backend API (openapi) feature that adds hosted builtin tool support to the Response API. Please take a look.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 7c2a7be69d



export const ToolSchema = FunctionToolSchema;
export const HostedToolSchema = z.object({
  type: z.string().startsWith('lobe-'),
});

P2: Reject unknown hosted tool identifiers

The new hosted-tool schema accepts any type that starts with lobe-, so typos like lobe-web-browsnig now pass request validation and get forwarded as plugin IDs. In the execution path, unknown IDs are only filtered as not_found by the tools engine and the request still succeeds, which silently drops the requested tool and can cause clients to believe a hosted capability is enabled when it is not. Please validate hosted IDs against the actually available manifest identifiers (or return invalid_request_error when any requested hosted tool is unresolved).
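One way to implement the strict validation suggested here is an up-front check against the set of resolvable hosted ids. This is only a sketch: the `KNOWN_HOSTED_IDS` list and the error shape are assumptions, not the actual ToolsEngine manifest API:

```typescript
// Sketch: reject unknown hosted tool ids up front instead of letting the
// tools engine silently drop them as not_found.
// Assumed list for illustration; the real set would come from the
// available manifest identifiers.
const KNOWN_HOSTED_IDS = new Set(['lobe-web-browsing', 'lobe-cloud-sandbox']);

export const assertHostedIdsKnown = (ids: string[]): void => {
  const unknown = ids.filter((id) => !KNOWN_HOSTED_IDS.has(id));
  if (unknown.length > 0) {
    // Error shape loosely follows OpenAI-style payloads; adjust to the real one.
    throw Object.assign(new Error(`Unknown hosted tools: ${unknown.join(', ')}`), {
      type: 'invalid_request_error',
    });
  }
};
```

With this in place, a typo like `lobe-web-browsnig` fails request validation instead of being silently filtered downstream.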


@codecov

codecov bot commented Mar 30, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 66.87%. Comparing base (2f5a31f) to head (0f605bb).
⚠️ Report is 4 commits behind head on canary.

Additional details and impacted files
@@           Coverage Diff           @@
##           canary   #13406   +/-   ##
=======================================
  Coverage   66.87%   66.87%           
=======================================
  Files        1907     1907           
  Lines      154473   154473           
  Branches    15622    15620    -2     
=======================================
  Hits       103303   103303           
  Misses      51050    51050           
  Partials      120      120           
Flag Coverage Δ
app 58.46% <ø> (ø)
database 96.66% <ø> (ø)
packages/agent-runtime 89.26% <ø> (ø)
packages/context-engine 86.51% <ø> (ø)
packages/conversation-flow 92.36% <ø> (ø)
packages/file-loaders 87.02% <ø> (ø)
packages/memory-user-memory 66.68% <ø> (ø)
packages/model-bank 99.85% <ø> (ø)
packages/model-runtime 84.48% <ø> (ø)
packages/prompts 67.76% <ø> (ø)
packages/python-interpreter 92.90% <ø> (ø)
packages/ssrf-safe-fetch 0.00% <ø> (ø)
packages/utils 90.41% <ø> (ø)
packages/web-crawler 88.82% <ø> (ø)

Flags with carried forward coverage won't be shown.

Components Coverage Δ
Store 67.03% <ø> (ø)
Services 49.30% <ø> (ø)
Server 67.19% <ø> (ø)
Libs 51.03% <ø> (ø)
Utils 91.01% <ø> (ø)

✨ feat(openapi): stream tool calls and results in Response API
Add full streaming support for tool execution events in the Response
API. Previously only text deltas were streamed; tool calls and results
were only visible in the final response.completed event.

Now emits:
- response.output_item.added (function_call) when LLM invokes a tool
- response.function_call_arguments.delta for tool arguments
- response.output_item.done (function_call) when tool call is complete
- response.output_item.added/done (function_call_output) when tool
  execution finishes with results
- Proper text message lifecycle (added/delta/done) across multi-step
  agent loops
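The event lifecycle above can be sketched as a generator that yields events in order. Payloads are reduced to type tags here; the real events carry item ids, indexes, and structured deltas. Only the ordering reflects the list above:

```typescript
// Sketch of the streamed event order for one tool call. Payloads are
// simplified; the real events carry item ids, deltas, and outputs.
type StreamEvent = { type: string; data?: string };

export function* streamToolCall(
  argChunks: string[],
  output: string,
): Generator<StreamEvent> {
  // function_call item opens when the LLM invokes a tool
  yield { type: 'response.output_item.added' };
  // tool arguments stream as deltas
  for (const chunk of argChunks) {
    yield { type: 'response.function_call_arguments.delta', data: chunk };
  }
  // function_call item closes when the call is complete
  yield { type: 'response.output_item.done' };
  // function_call_output item wraps the execution result
  yield { type: 'response.output_item.added' };
  yield { type: 'response.output_item.done', data: output };
}
```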

LOBE-6535

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
🐛 fix(openapi): handle nullable tools param in extractHostedToolIds
The tools field from CreateResponseRequest uses .nullish() in zod,
so it can be null. Accept null in the parameter type.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@arvinxx arvinxx merged commit 689d5a5 into canary Mar 30, 2026
33 of 34 checks passed
@arvinxx arvinxx deleted the feat/openapi-hosted-tools branch March 30, 2026 12:37
@lobehubbot
Member

❤️ Great PR @arvinxx ❤️

The growth of the project is inseparable from user feedback and contributions. Thanks for your contribution! If you are interested in the LobeHub developer community, please join our Discord and DM @arvinxx or @canisminor1990; they will invite you to our private developer channel, where we discuss lobe-chat development and share AI news from around the world.

cy948 pushed a commit to cy948/lobe-chat that referenced this pull request Mar 31, 2026
…b#13406)

* ✨ feat(openapi): support hosted builtin tools in Response API

Allow declaring builtin tools via { type: 'lobe-xxx' } syntax in the
tools array of POST /api/v1/responses. Hosted tool identifiers are
extracted and passed as additionalPluginIds to execAgent, where the
existing ToolsEngine handles manifest resolution automatically.

LOBE-6535

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* ✨ feat(openapi): stream tool calls and results in Response API

Add full streaming support for tool execution events in the Response
API. Previously only text deltas were streamed; tool calls and results
were only visible in the final response.completed event.

Now emits:
- response.output_item.added (function_call) when LLM invokes a tool
- response.function_call_arguments.delta for tool arguments
- response.output_item.done (function_call) when tool call is complete
- response.output_item.added/done (function_call_output) when tool
  execution finishes with results
- Proper text message lifecycle (added/delta/done) across multi-step
  agent loops

LOBE-6535

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* 🐛 fix(openapi): handle nullable tools param in extractHostedToolIds

The tools field from CreateResponseRequest uses .nullish() in zod,
so it can be null. Accept null in the parameter type.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
arvinxx added a commit that referenced this pull request Apr 7, 2026
# 🚀 release: 20260407

This release includes **148 commits**. Key updates are below.

- **Response API tool execution is more capable and reliable** — Added
hosted builtin tools + client-side function tools and improved tool-call
streaming/completion behavior.
[#13406](#13406)
[#13414](#13414)
[#13506](#13506)
[#13555](#13555)
- **Input and composition UX upgraded** — Added AI input auto-completion
and multiple chat-input stability fixes.
[#13458](#13458)
[#13551](#13551)
[#13481](#13481)
- **Model/provider compatibility improved** — Better Gemini/Google tool
schema handling and additional model updates.
[#13429](#13429)
[#13465](#13465)
[#13613](#13613)
- **Desktop and CLI reliability improved** — Gateway WebSocket support
and desktop runtime upgrades.
[#13608](#13608)
[#13550](#13550)
[#13557](#13557)
- **Security hardening continued** — Fixed auth and sanitization risks
and upgraded vulnerable dependencies.
[#13535](#13535)
[#13529](#13529)
[#13479](#13479)

### Models & Providers

- Added/updated support for `glm-5v-turbo`, GLM-5.1 updates, and
qwen3.5-omni series.
[#13487](#13487)
[#13405](#13405)
[#13422](#13422)
- Added additional ImageGen providers/models (Wanxiang 2.7 and Keling
from Qwen). [#13478](#13478)
- Improved Gemini/Google tool schema and compatibility handling across
runtime paths. [#13429](#13429)
[#13465](#13465)
[#13613](#13613)

### Response API & Runtime

- Added hosted builtin tools in Response API and client-side function
tool execution support.
[#13406](#13406)
[#13414](#13414)
- Improved stream tool-call argument handling and `response.completed`
output correctness.
[#13506](#13506)
[#13555](#13555)
- Improved runtime error/context handling for intervention and provider
edge cases. [#13420](#13420)
[#13607](#13607)

### Desktop App

- Bumped desktop dependencies and runtime integrations (`agent-browser`,
`electron`). [#13550](#13550)
[#13557](#13557)
- Simplified desktop release channel setup by removing nightly release
flow. [#13480](#13480)

### CLI

- Added OpenClaw migration command.
[#13566](#13566)
- Added local device binding support for `lh agent run`.
[#13277](#13277)
- Added WebSocket gateway support and reconnect reliability
improvements. [#13608](#13608)
[#13418](#13418)

### Security

- Removed risky `apiKey` fallback behavior in webapi auth path to
prevent bypass risk.
[#13535](#13535)
- Sanitized HTML artifact rendering and iframe sandboxing to reduce
XSS-to-RCE risk. [#13529](#13529)
- Upgraded nodemailer to v8 to address SMTP command injection advisory.
[#13479](#13479)

### Bug Fixes

- Fixed image generation model default switch issues.
[#13587](#13587)
- Fixed subtopic re-fork message scope behavior and agent panel reset
edge cases. [#13606](#13606)
[#13556](#13556)
- Fixed chat-input freeze on paste and mention plugin behavior.
[#13551](#13551)
[#13415](#13415)
- Fixed auth/social sign-in and settings UX edge cases.
[#13368](#13368)
[#13392](#13392)
[#13338](#13338)

### Credits

Huge thanks to these contributors:

@chriszf @hardy-one @Innei @lijian @neko @OctopusNote @rdmclin2
@rivertwilight @RylanCai @suyua9 @sxjeru @Tsuki @wangyk @WindSpiritSR
@yizhuo @YuTengjing @hezhijie0327 @arvinxx
