Bug: Vertex AI Gemini function calls fail with "Function call is missing a thought_signature in functionCall parts"
Summary
When OpenClaw routes a tool-using request through LiteLLM to Vertex AI Gemini (`gemini-flash`, `gemini-2.x`, etc.), Vertex returns HTTP 400 with the error message "Function call is missing a thought_signature in functionCall parts. This is required for tools to work correctly, and missing thought_signature may lead to degraded model performance." The function call payload OpenClaw constructs for Vertex/Gemini doesn't include the `thought_signature` field that Vertex's beta API now requires for tool use.
The error surfaces immediately on any agent run that:
- Has tool definitions registered (built-in OR MCP)
- Falls through to a Gemini model in the fallback chain
Environment
- OpenClaw 2026.4.5 (3e72c03)
- Ubuntu 24.04.4 LTS, Node v24.14.1 via nvm
- LiteLLM proxy with Gemini configured via a `gemini-flash` model alias
- MCP server: `@softeria/ms-365-mcp-server` (110+ tool definitions, but the issue is general — it happens with any tool-using request)
Steps to reproduce
- Configure a LiteLLM model entry for Gemini Flash and a fallback chain that points to it. Example:

```yaml
model_list:
  - model_name: gemini-flash
    litellm_params:
      model: gemini/gemini-flash-latest
      api_key: os.environ/GEMINI_API_KEY

router_settings:
  fallbacks:
    - claude-haiku-4-5: [claude-sonnet-4-6, gemini-flash]
```
- Configure an OpenClaw agent that uses `litellm/claude-haiku-4-5` and has access to any tool (built-in or MCP).
- Trigger an agent run that requires the agent to call a tool. (Easiest: register an MCP server with a tool the agent will plausibly use, then send a Teams/Signal/etc. message that triggers the tool call.)
- Force fallback to Gemini by either rate-limiting Anthropic temporarily or by setting Gemini as the primary model directly (see the note after the error log below for a way to force the fallback deterministically).
- Observed: HTTP 400 from Vertex with the `thought_signature` error. Full error from our logs:
```
FailoverError: HTTP 400: litellm.BadRequestError: Vertex_ai_betaException BadRequestError - b'{
  "error": {
    "code": 400,
    "message": "Function call is missing a thought_signature in functionCall parts. This is required for tools to work correctly, and missing thought_signature may lead to degraded model performance. Additional data, function call `default_api:ms365__list-calendar-events`, position 37. Please refer to https://ai.google.dev/gemini-api/docs/thought-signatures for more details.",
    "status": "INVALID_ARGUMENT"
  }
}'
```
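For step 4, rate-limiting Anthropic on demand is awkward. If your LiteLLM version supports it, the router's fallback-testing flag can push the request down the fallback chain without a real upstream failure. A minimal sketch, assuming the documented `mock_testing_fallbacks` request parameter, the proxy's OpenAI-compatible `/chat/completions` endpoint on localhost:4000, and a placeholder tool definition; verify the flag against your LiteLLM version:

```typescript
// Sketch: force the LiteLLM router down its fallback chain so the request
// lands on gemini-flash. Assumes the proxy honours `mock_testing_fallbacks`
// (check your LiteLLM version) and runs on localhost:4000; the tool schema
// below is a placeholder, since any tool definition reproduces the failure.
const resp = await fetch("http://localhost:4000/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.LITELLM_MASTER_KEY}`,
  },
  body: JSON.stringify({
    model: "claude-haiku-4-5",
    mock_testing_fallbacks: true, // LiteLLM's fallback-testing flag (verify for your version)
    messages: [{ role: "user", content: "List my calendar events for today." }],
    tools: [
      {
        type: "function",
        function: {
          name: "ms365__list-calendar-events", // placeholder tool name
          description: "List calendar events",
          parameters: { type: "object", properties: {} },
        },
      },
    ],
  }),
});
console.log(resp.status, await resp.text()); // expect the Vertex 400 once Gemini handles a tool call
```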
Expected behavior
OpenClaw's Gemini provider adapter (or the LiteLLM-passed payload) should include a `thought_signature` field in the `functionCall` parts of any request that includes tool definitions. The required format is documented at https://ai.google.dev/gemini-api/docs/thought-signatures.
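The linked docs describe the signature as an opaque token that Gemini attaches to its own `functionCall` parts; a client cannot synthesize it and is expected to echo it back unchanged when the conversation history (the model's function-call turn plus the `functionResponse`) is replayed. So the fix is to preserve the field rather than invent it. A rough sketch of a replayed model turn with the signature carried through, assuming the REST-style camelCase field name `thoughtSignature` (the proto name in the error is `thought_signature`) and illustrative values:

```typescript
// Illustrative shape of a replayed model turn in a Gemini generateContent
// request. The thoughtSignature value is an opaque string returned by Gemini
// on its own functionCall part; it must be copied back verbatim.
// Names and values are examples, not taken from OpenClaw's actual payloads.
const modelTurn = {
  role: "model",
  parts: [
    {
      functionCall: {
        name: "ms365__list-calendar-events",
        args: { calendarId: "primary" },
      },
      // This is the field Vertex reports as missing in the replayed payload.
      thoughtSignature: "<opaque-token-from-the-previous-model-response>",
    },
  ],
};
```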
Actual behavior
The function call payload OpenClaw constructs lacks the thought_signature field. Vertex returns HTTP 400. The agent run fails. With retries enabled in the fallback chain, the error compounds (multiple failed retries before giving up).
Severity
Medium-high. Anyone who lists Gemini in their LiteLLM fallback chain will hit this the moment a tool call routes through Gemini. For OpenClaw deployments using LiteLLM with mixed-provider fallback chains (a common pattern), this is a hidden trap that only surfaces when the primary provider fails.
Workaround
Remove `gemini-flash` (and any other Gemini variant) from any LiteLLM fallback chain that a tool-using agent can reach:
```yaml
router_settings:
  fallbacks:
    - claude-haiku-4-5: [claude-sonnet-4-6]  # was: [claude-sonnet-4-6, gemini-flash]
    - routellm-auto: [gpt-5-mini]            # was: [gemini-flash, gpt-5-mini]
```
Gemini can still be used as a primary model for non-tool-using requests, but should not be reached via fallback for any agent that has tools enabled.
Suggested fix paths
- Add `thought_signature` to the function call payload in OpenClaw's Gemini/Vertex provider adapter. The Vertex API docs at https://ai.google.dev/gemini-api/docs/thought-signatures specify the format (a rough sketch follows this list).
- Detect Gemini in the LiteLLM target and either include the field automatically OR refuse to send tool-using requests to Gemini until support lands.
- Document the limitation in the OpenClaw docs page for `litellm` provider configuration so operators know to exclude Gemini from tool-using fallback chains.
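A minimal sketch of the first option, with hypothetical types and function names (OpenClaw's real adapter will differ; the point is carrying the signature from Gemini's response through to the replayed request):

```typescript
// Hypothetical shapes for illustration only; not OpenClaw's actual types.
// The signature captured from Gemini's response must ride along whenever the
// stored tool call is converted back into a Gemini functionCall part.
interface StoredToolCall {
  name: string;
  args: Record<string, unknown>;
  thoughtSignature?: string; // captured verbatim from the model's response part
}

interface GeminiFunctionCallPart {
  functionCall: { name: string; args: Record<string, unknown> };
  thoughtSignature?: string;
}

function toGeminiFunctionCallPart(call: StoredToolCall): GeminiFunctionCallPart {
  const part: GeminiFunctionCallPart = {
    functionCall: { name: call.name, args: call.args },
  };
  // Preserve, never synthesize: attach the signature only if the model sent one.
  if (call.thoughtSignature) {
    part.thoughtSignature = call.thoughtSignature;
  }
  return part;
}
```

The flip side is that the response-parsing path has to capture and store the field alongside each tool call in the first place, before the call is handed to the tool runner.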
Where this bit us
Synap deployment, 2026-04-08. We were wiring `@softeria/ms-365-mcp-server` to give pleres and family agents access to Microsoft 365 calendar/email/SharePoint. When Anthropic Haiku hit rate limits under heavy testing, the LiteLLM fallback chain dispatched to Gemini, which returned the Vertex 400 on every tool call. Once Gemini was removed from the fallback chain, the immediate issue went away — but Gemini is now effectively unusable for any tool-using request in this deployment until this is fixed.
Related upstream issues