
feat(anthropic): add web_fetch_20260209 and web_search_20260209#12668

Merged
felixarntz merged 16 commits into vercel:main from MehediH:main
Mar 3, 2026

Conversation

@MehediH
Contributor

@MehediH MehediH commented Feb 18, 2026

Background

Anthropic introduced new web tool versions (web_search_20260209 and web_fetch_20260209) that include dynamic filtering capabilities, but those versions were not yet available in AI SDK.

Summary

This PR adds support for web_search_20260209 and web_fetch_20260209 in the Anthropic provider.
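To make the shape concrete, here is a rough sketch (not the SDK source) of the server-tool blocks such provider tools map to on the Anthropic API, following the pattern of the earlier web_search_20250305 block. The `max_uses` field on the new versions is an assumption carried over from the older tools.

```typescript
// Illustrative sketch of the wire-format server-tool blocks; only `type` and
// `name` come from the PR, `max_uses` is assumed from the older tool versions.
type ServerToolBlock = { type: string; name: string; max_uses?: number };

function webSearch20260209(maxUses?: number): ServerToolBlock {
  return { type: 'web_search_20260209', name: 'web_search', max_uses: maxUses };
}

function webFetch20260209(maxUses?: number): ServerToolBlock {
  return { type: 'web_fetch_20260209', name: 'web_fetch', max_uses: maxUses };
}

console.log(JSON.stringify([webSearch20260209(3), webFetch20260209(1)]));
```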

Checklist

  • Tests have been added / updated (for bug fixes / features)
  • Documentation has been added / updated (for bug fixes / features)
  • A patch changeset for relevant packages has been added (for bug fixes / features - run pnpm changeset in the project root)
  • I have reviewed this pull request (self-review)

@tigent tigent bot added the ai/provider, feature, minor, and provider/anthropic labels on Feb 18, 2026
Collaborator

@felixarntz felixarntz left a comment


@MehediH Thank you, this looks great!

Just one thing we should add to the Bedrock provider.

If possible, could you add examples for using these tools in examples/ai-functions? Not a blocker (since we don't have examples for the old versions of the tools either), but would be great to have for quick manual verification and for the future.

could be 4 examples:

  • 1 for new web_fetch, 1 for new web_search in examples/ai-functions/src/generate-text
  • 1 for new web_fetch, 1 for new web_search in examples/ai-functions/src/stream-text

@felixarntz
Collaborator

@MehediH Hmm, when I run the new anthropic-web-fetch-tool-* example, I see an error part in the output:

{
  type: 'tool-error',
  toolCallId: 'srvtoolu_01SRKEgo9u5sQkvT3PNrfzZP',
  toolName: 'code_execution',
  input: {
    type: 'programmatic-tool-call',
    code: '\n' +
      'import json\n' +
      'result = await web_fetch({"url": "https://en.wikipedia.org/wiki/Maglemosian_culture"})\n' +
      'parsed = json.loads(result)\n' +
      'if isinstance(parsed, list):\n' +
      '    print(parsed[0]["title"])\n' +
      '    print(parsed[0]["content"][:4000])\n' +
      'else:\n' +
      '    print(parsed)\n'
  },
  error: "Model tried to call unavailable tool 'code_execution'. Available tools: web_fetch.",
  dynamic: true
}

I was wondering whether that's a model problem, but not entirely sure. It's strange it tries to use code_execution when we have web_fetch available.

Collaborator

@felixarntz felixarntz left a comment


OK, after further investigation, I've found the caveat with the examples not working as expected.

With examples/ai-functions/src/stream-text/anthropic/anthropic-web-fetch-tool-20260209.ts, the tool call results in errors like this:

Model tried to call unavailable tool 'code_execution'. Available tools: web_fetch.

This happens because the new web_fetch_2026* and web_search_2026* tools internally rely on Anthropic's code_execution_* tool. On the API level this is solved automatically, but the AI SDK has a validation mechanism that ensures only tools that are explicitly provided are considered valid, as part of packages/ai/src/generate-text/parse-tool-call.ts (line 16).

The model ends up making a code_execution tool call which is rejected with an error by the AI SDK.
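The rejection described above can be sketched as a simplified validation check (this is illustrative, not the actual parse-tool-call.ts source): a tool call naming a tool that was not explicitly provided is turned into an error.

```typescript
// Simplified sketch of the AI SDK's tool-call validation behavior described
// above; the function name and shape are illustrative.
type ToolCall = { toolName: string; providerExecuted?: boolean; dynamic?: boolean };

function validateToolCall(call: ToolCall, availableTools: string[]): string | undefined {
  if (!availableTools.includes(call.toolName)) {
    return `Model tried to call unavailable tool '${call.toolName}'. Available tools: ${availableTools.join(', ')}.`;
  }
  return undefined; // call is valid
}

// The injected call that triggers the error in the example above:
console.log(validateToolCall({ toolName: 'code_execution', providerExecuted: true }, ['web_fetch']));
// → Model tried to call unavailable tool 'code_execution'. Available tools: web_fetch.
```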

It then makes a web_fetch tool call, which results in this error:

Invalid input for tool web_fetch: Type validation failed: Value: {}.
Error message: [
  {
    "expected": "string",
    "code": "invalid_type",
    "path": ["url"],
    "message": "Invalid input: expected string, received undefined"
  }
]

At the end, it somehow works and the content from the Wikipedia page is successfully fetched. But there are a few problems happening on the way, which are probably bugs that we need to fix.

This will require further investigation. I'll look into it.

…0260209

Adds generate-text and stream-text examples for the new webFetch_20260209 and webSearch_20260209 provider tools in the anthropic subfolder following the updated examples structure.
…t-4-6 and add missing schema types

Updates examples to use claude-sonnet-4-6 model (required for 20260209 web tools) and adds
encrypted_code_execution_result and server_tool_use caller schemas needed for programmatic
tool calling responses.
@felixarntz felixarntz added the backport label and removed the minor label on Mar 2, 2026
Collaborator

@felixarntz felixarntz left a comment


@MehediH This looks almost good to go, but the original problem of the AI SDK blocking those code_execution tool calls that Anthropic automatically injects still persists.

Since the validation happens centrally in the ai package, we can't easily bypass it just for this particular Anthropic consideration.

I can think of two options here:

  • Either, for now we require explicitly passing code_execution: anthropic.tools.codeExecution_20260120() every time you use the new web_fetch or web_search tools.
    • This would obviously be a workaround, not an actual fix. Not ideal, but would allow us to unblock this and think about a proper solution decoupled from this PR.
  • Or, we somehow relax the conditions in the parseToolCall function in the ai package.
    • Currently, we only allow tool calls to go through that are providerExecuted and dynamic. However, the code_execution tool calls returned here are only marked with providerExecuted, but not dynamic. This is where the validation fails.
    • We could allow providerExecuted tool calls to go through in a similar way, even when they're not dynamic. That would fix this problem.
    • But this needs careful consideration, because reducing the strictness of validation increases the risk of a security gap.
    • For reference: This logic was originally implemented in 81d4308 (cc @lgrammel).
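The two validation policies from option 2 can be contrasted in a small sketch (predicate names are made up for this example):

```typescript
// Illustrative comparison of the current vs. relaxed validation conditions
// for tool calls naming tools that were not explicitly provided.
type UnknownToolCall = { providerExecuted?: boolean; dynamic?: boolean };

// Current behavior: an unknown tool call only passes when it is both
// providerExecuted and dynamic.
const passesCurrent = (c: UnknownToolCall) => c.providerExecuted === true && c.dynamic === true;

// Relaxed behavior: any providerExecuted call passes, dynamic or not.
const passesRelaxed = (c: UnknownToolCall) => c.providerExecuted === true;

// The code_execution call Anthropic injects is providerExecuted but not dynamic,
// so it fails the current check and would pass the relaxed one:
const injected: UnknownToolCall = { providerExecuted: true };
console.log(passesCurrent(injected), passesRelaxed(injected)); // → false true
```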

Please feel free to share your thoughts. I hope we can get to a decision to unblock this PR by tomorrow.

…les and warn when missing

the 20260209 web tools (web_search, web_fetch) require code_execution_20260120
to work correctly in streaming mode. added the tool to stream-text examples
and a warning when users use web tools without it.
@MehediH
Contributor Author

MehediH commented Mar 3, 2026

hey @felixarntz thank you for looking into this! as discussed, I went ahead and opted for the workaround for now 28855c8 to unblock this PR; added a warning so users know the need for the new code execution tool when using these new web tools. Lmk what you think
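The warning added in the workaround might look roughly like this sketch (not the actual implementation; function name and message wording are illustrative):

```typescript
// Sketch of the workaround's warning: flag configurations that use the new
// 20260209 web tools without also providing a code_execution tool.
function checkWebToolRequirements(toolTypes: string[]): string | undefined {
  const usesNewWebTool = toolTypes.some(
    t => t === 'web_search_20260209' || t === 'web_fetch_20260209',
  );
  const hasCodeExecution = toolTypes.some(t => t.startsWith('code_execution_'));
  if (usesNewWebTool && !hasCodeExecution) {
    return 'web_search_20260209 and web_fetch_20260209 require a code_execution tool to be provided.';
  }
  return undefined;
}

console.log(checkWebToolRequirements(['web_fetch_20260209'])); // warning string
console.log(checkWebToolRequirements(['web_fetch_20260209', 'code_execution_20260120'])); // undefined
```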

@felixarntz
Collaborator

@MehediH I found a cleaner solution that doesn't require the workaround of passing code_execution explicitly: We annotate the code_execution tool call conditionally for just this situation so that we can bypass the validation here. This keeps the regular validation safe as is, but handles this particular case where Anthropic injects the tool and we want to allow it even though it's implicit. See MehediH@c0e0b63

I also found then another bug where we were ignoring server_tool_use input in streaming if it wasn't provided in chunks - that was also relevant here because the new web_fetch and web_search send their input right away, not via individual JSON deltas. See MehediH@b750896

I'm going to add some more testing here, mostly focused on E2E including UI, and then we should be good to go 🎉

Collaborator

@felixarntz felixarntz left a comment


I added UI examples for this, which surface a few follow up errors which affect UI specifically:

  1. UI tool validation does not support dynamic tools. This leads to an error when sending any follow-up messages. While problematic here, this is an existing bug that is best fixed in its own PR.
    • The error is: No tool schema found for tool part code_execution
    • While this is somewhat expected because we don't include code_execution explicitly in the agent, it should be possible to bypass this because we treat it as an internal tool.
  2. The types in the UI break because they infer tools from the agent, but that doesn't work because the code_execution tool is not explicitly set.

Since this implementation works for a non-interactive example, I think this is acceptable to merge as a first pass. We will need to follow up on the two remaining problems, neither of which is trivial enough to address as part of this PR.

@felixarntz felixarntz removed the backport label on Mar 3, 2026
@felixarntz
Collaborator

OK, I found one more fix (see 9df88f3): we were not yet consistently marking the injected code_execution tool calls as dynamic: true.

Doing that allows us to avoid the UI type errors, by properly treating the injected calls as dynamic tool calls, since that's what they are in these circumstances.
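The idea can be sketched roughly like this (not the actual 9df88f3 diff; the function name and condition are illustrative):

```typescript
// Sketch: an injected, provider-executed code_execution call that the caller
// did not provide explicitly gets annotated as dynamic, so the UI treats it
// as a dynamic tool call.
type ToolCallPart = { toolName: string; providerExecuted?: boolean; dynamic?: boolean };

function markInjectedAsDynamic(part: ToolCallPart, providedTools: string[]): ToolCallPart {
  if (
    part.toolName === 'code_execution' &&
    part.providerExecuted === true &&
    !providedTools.includes('code_execution')
  ) {
    return { ...part, dynamic: true };
  }
  return part;
}

const injected = { toolName: 'code_execution', providerExecuted: true };
console.log(markInjectedAsDynamic(injected, ['web_fetch']).dynamic); // → true
```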

What remains is the error that there's no tool result included for the code_execution tool call, which happens once you send another follow-up message. Example: messages.1: code_execution tool use with id srvtoolu_01HopAaXMHo9To3RavUYwqy7 was found without a corresponding code_execution_tool_result block (request_id: req_011CYgrdHD1W1jtxevekpz2N)

I find this one confusing, because what would even that result be? As far as I understand, that code just calls the web_fetch or web_search tool. In the official Anthropic docs, I also don't see a tool result for the code_execution call, only for the underlying call, which we have too: https://platform.claude.com/docs/en/agents-and-tools/tool-use/programmatic-tool-calling#step-3-provide-tool-result

Let's look into this one separately.

@felixarntz felixarntz merged commit 56c67d5 into vercel:main Mar 3, 2026
17 of 18 checks passed
@vercel-ai-sdk
Contributor

vercel-ai-sdk bot commented Mar 3, 2026

🚀 Published in:

Package Version
@ai-sdk/amazon-bedrock 4.0.73
@ai-sdk/anthropic 3.0.54
@ai-sdk/google-vertex 4.0.73
