Tool errors silently lost for tools with needsApproval (approval continuation flow uses errorMode 'json' on Error objects) #13048

@jaimalchohan

Description

When a tool with needsApproval throws an error during execution, the error message is silently lost. The model receives "{}" instead of the actual error text, causing it to hallucinate success.

This only affects tools that go through the approval continuation flow. Tools without needsApproval (or with readOnlyHint: true) work correctly because they use the normal multi-step flow.

Root Cause (2 issues)

Issue 1 — errorMode: "json" produces "{}" for Error objects

The approval continuation path (streamText's streamStep → approval continuation at line ~6882 in ai/dist/index.mjs) uses:

errorMode: output2.type === "tool-error" ? "json" : "none"

This flows to createToolModelOutput which does:

if (errorMode === "json") {
  return { type: "error-json", value: toJSONValue(output) };
}

toJSONValue passes the Error object through unchanged. The provider (e.g., @ai-sdk/amazon-bedrock) then does JSON.stringify(output.value) on the Error object. Since Error properties (message, stack, etc.) are non-enumerable, JSON.stringify(new Error("foo")) returns "{}". The model sees an empty object instead of the error message.

The normal flow (non-approval) at line ~3882 uses errorMode: "text" for tool results, which correctly calls getErrorMessage(error) to extract the message string.
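The underlying JavaScript behavior can be checked directly. The `toSafeJSON` helper below is a hypothetical illustration of why a message-extracting mode works where raw JSON serialization does not; it is not SDK code:

```typescript
// Error properties like `message` and `stack` are own properties,
// but non-enumerable, so JSON.stringify skips them entirely.
const err = new Error("No valid token for plugin");
console.log(Object.keys(err)); // non-enumerable props are invisible here
console.log(JSON.stringify(err)); // "{}"

// Hypothetical helper: replace Error instances with a plain,
// enumerable shape before serialization.
function toSafeJSON(value: unknown): unknown {
  return value instanceof Error
    ? { name: value.name, message: value.message }
    : value;
}
console.log(JSON.stringify(toSafeJSON(err)));
// {"name":"Error","message":"No valid token for plugin"}
```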

Issue 2 — Approval flow tool-error events not captured in step results

The approval continuation flow executes tools via a toolExecutionStepStream (line ~6814-6819) that is added to the main stream. tool-error events from this stream are enqueued but aren't wrapped in start-step/finish-step boundaries. The next start-step from streamStep() clears recordedContent, losing the tool-error entries. As a result, step.content never contains the tool-error for approved tools that fail.

Reproduction

import { streamText, tool } from "ai";
import { z } from "zod";

const result = streamText({
  model: bedrockModel, // or any provider
  tools: {
    create_page: tool({
      description: "Create a page",
      inputSchema: z.object({ title: z.string() }),
      needsApproval: true,
      execute: async () => {
        throw new Error("No valid token for plugin");
      },
    }),
  },
  // ... prompt that triggers the tool
});

// After user approves the tool call:
// Expected: model sees error message "No valid token for plugin"
// Actual: model sees "{}" and hallucinates success

Verification

// This is the core issue:
JSON.stringify(new Error("No valid token")) // → "{}"

// The normal flow correctly uses:
function getErrorMessage(error) {
  if (error instanceof Error) return error.message;
  // ...
}
getErrorMessage(new Error("No valid token")) // → "No valid token"

Suggested Fix

In the approval continuation path (~line 6882), use errorMode: "text" instead of "json" for tool errors, matching the normal flow:

// Before (broken):
errorMode: output2.type === "tool-error" ? "json" : "none"

// After (fixed):
errorMode: output2.type === "tool-error" ? "text" : "none"

This ensures getErrorMessage extracts the message string from the Error object, which providers can then serialize correctly.

For Issue 2, the tool execution events from the approval continuation should be captured within the step's recordedContent so they appear in step.content.
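Until a fix ships, a user-level workaround (a sketch, not an official SDK API) is to catch errors inside `execute` and return the message as an ordinary, serializable result, so the broken error-serialization path is never taken:

```typescript
// Hypothetical wrapper: converts a thrown Error into a plain result
// object whose properties survive JSON.stringify.
function withErrorAsResult<Args, Result>(
  execute: (args: Args) => Promise<Result>
): (args: Args) => Promise<Result | { ok: false; error: string }> {
  return async (args) => {
    try {
      return await execute(args);
    } catch (error) {
      const message = error instanceof Error ? error.message : String(error);
      return { ok: false, error: message };
    }
  };
}

// Usage in a tool definition (assumed shape):
//   execute: withErrorAsResult(async ({ title }) => { /* ... */ })
const wrapped = withErrorAsResult(async () => {
  throw new Error("No valid token for plugin");
});
wrapped({}).then((result) => {
  console.log(JSON.stringify(result));
  // {"ok":false,"error":"No valid token for plugin"}
});
```

Because the tool never throws, the model always receives the error text as a normal tool result, regardless of which `errorMode` the continuation path uses.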

AI SDK Version

  • ai: 6.0.105
  • @ai-sdk/amazon-bedrock: 4.0.69

Code of Conduct

  • I agree to follow this project's Code of Conduct

Labels

  • ai/core: core functions like generateText, streamText, etc.; provider utils and provider spec
  • bug: Something isn't working as documented
  • provider/amazon-bedrock: Issues related to the @ai-sdk/amazon-bedrock provider
  • reproduction provided
