Description
Background
In multi-step conversations involving tool use with Gemini 3 Flash Preview, the @ai-sdk/google-vertex provider fails during the second step with a 500 Internal Server Error.
Reference: per the Google Cloud Vertex AI documentation, Gemini 3 models require the `thought_signature` from the previous step to be passed back. If it is missing on a tool-result turn, the API rejects the request.
Error Message:
```
Gemini Step 2 Error: {"error":{"message":"Unable to submit request because function call `get_weather` in the 2. content block is missing a `thought_signature`. Learn more: https://docs.cloud.google.com/vertex-ai/generative-ai/docs/thought-signatures","type":"server_error","code":"INTERNAL_SERVER_ERROR"}}
```
Step-by-Step Reproduction & Root Cause
The issue is a "sticky variable" bug in the provider's fallback logic:
Step 1: The Output (Generation)
When the Gemini 3 model generates a tool call, the Vertex provider captures the `thought_signature` and stores it under the `vertex` namespace in the metadata.
- State: `providerMetadata: { vertex: { thoughtSignature: '...' } }`
Step 2: The Input (Follow-up)
Following standard Google AI conventions, the conversation history is often passed back with the signature under the `google` namespace to ensure cross-compatibility between the Google and Vertex AI providers.
- Problem 1 (The Fallback): In `google-generative-ai-language-model.ts#L100-L115`, the code detects that `vertex` options are missing and falls back to `google`. However, the variable `providerOptionsName` stays set to `vertex`.
- Problem 2 (The Disconnect): Consequently, the conversion logic at lines 136-139 still looks up the signature under the `vertex` key because of the stale `providerOptionsName`.
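The sticky-variable pattern can be shown in isolation. The following is a hypothetical simplification of the provider's fallback logic, not the actual source; the function names and the `ProviderOptions` shape are invented for illustration:

```typescript
type ProviderOptions = Record<string, Record<string, unknown> | undefined>;

// Buggy shape: the fallback reads the `google` options, but the
// namespace variable used later for extraction is never reassigned.
function resolveSignatureBuggy(options: ProviderOptions): unknown {
  const providerOptionsName = "vertex"; // set once, never updated
  // Fallback: when `vertex` options are absent, use `google`...
  const providerOptions = options["vertex"] ?? options["google"];
  void providerOptions;
  // ...but extraction still keys off the sticky variable:
  return options[providerOptionsName]?.thoughtSignature;
}

// Fixed shape: the namespace itself follows the fallback.
function resolveSignatureFixed(options: ProviderOptions): unknown {
  const providerOptionsName =
    options["vertex"] !== undefined ? "vertex" : "google";
  return options[providerOptionsName]?.thoughtSignature;
}

const history: ProviderOptions = {
  google: { thoughtSignature: "sig_abc" },
};

console.log(resolveSignatureBuggy(history)); // undefined -> signature dropped
console.log(resolveSignatureFixed(history)); // "sig_abc"
```

With only `google` options present, the buggy version silently returns `undefined`, which is exactly the state that later produces the 500 from Vertex AI.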
Code example:
```typescript
import { createVertex } from "@ai-sdk/google-vertex";
import { generateText, tool } from "ai";
import { z } from "zod";

// Config: Replace with your actual project details
const vertex = createVertex({
  project: process.env.GOOGLE_CLOUD_PROJECT,
  location: process.env.GOOGLE_CLOUD_LOCATION || "us-central1",
});

const model = vertex("gemini-3-flash-preview");

// Placeholder: in a real run this value comes from step 1's
// providerMetadata (vertex.thoughtSignature).
const step1Signature = "<signature-from-step-1>";

async function reproduceIssue() {
  console.log("--- Initializing Reproduction for Namespace Desync ---");

  /**
   * STEP 1: Scenario Simulation
   * We simulate a message history where Step 1 was already completed.
   * Following the standard Google AI provider convention, the thoughtSignature
   * is provided back in the 'google' namespace.
   */
  const messages = [
    { role: "user", content: "What is the weather in London?" },
    {
      role: "assistant",
      content: [
        { type: "text", text: "" },
        {
          type: "tool-call",
          toolCallId: "call_123",
          toolName: "getWeather",
          input: { location: "London" },
          providerOptions: {
            google: { thoughtSignature: step1Signature },
          },
        },
      ],
    },
    {
      role: "tool",
      content: [
        {
          type: "tool-result",
          toolCallId: "call_123",
          toolName: "getWeather",
          output: {
            type: "json",
            value: { temperature: 15, unit: "celsius" },
          },
        },
      ],
    },
  ];

  try {
    /**
     * STEP 2: The Follow-up
     * This execution triggers convertToGoogleGenerativeAIMessages.
     * Due to the bug in google-generative-ai-language-model.ts (L100-L115),
     * providerOptionsName remains 'vertex', so 'google.thoughtSignature'
     * is never found.
     */
    const result = await generateText({
      model,
      tools: {
        getWeather: tool({
          description: "Get the weather",
          inputSchema: z.object({ location: z.string() }),
        }),
      },
      messages: messages as any,
    });
    console.log("Response:", result.text);
  } catch (error: any) {
    console.error("❌ Reproduction Error caught:");
    console.error(error.message);
  }
}

reproduceIssue();
```
Proposed Fix
The `providerOptionsName` should be dynamically updated to `google` when the `vertex` key is missing and the `google` key is present. Alternatively, the extraction logic for `thoughtSignature` should check both namespaces as a fallback, so that mandatory Gemini 3 state tokens are never dropped during conversion to the wire format.
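The second option (checking both namespaces) can be sketched as a small helper. This is an illustrative suggestion, not the provider's actual code; `extractThoughtSignature` and the `ProviderOptions` shape are hypothetical names:

```typescript
type ProviderOptions = Record<string, Record<string, unknown> | undefined>;

// Sketch: read the signature from whichever namespace carries it,
// preferring `vertex` (this provider's own namespace) over `google`.
function extractThoughtSignature(
  options: ProviderOptions | undefined,
): string | undefined {
  const sig =
    options?.vertex?.thoughtSignature ?? options?.google?.thoughtSignature;
  return typeof sig === "string" ? sig : undefined;
}

console.log(extractThoughtSignature({ google: { thoughtSignature: "sig" } })); // "sig"
console.log(extractThoughtSignature({ vertex: { thoughtSignature: "sig" } })); // "sig"
console.log(extractThoughtSignature(undefined)); // undefined
```

Either variant would make the wire-format conversion tolerant of histories produced by the Google provider, the Vertex provider, or both.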
AI SDK Version
ai: ^6.0.67
@ai-sdk/google-vertex: ^4.0.37
Code of Conduct