[Security] Model catalog with resolved apiKey values injected into LLM prompt context on every turn #11202

@rplakas

Description

Summary

The runtime model catalog (resolved from openclaw.json providers) is serialized into the LLM request payload as part of the system prompt context on every turn. Because environment variable references (${VAR}) are resolved to plaintext before serialization, all provider API keys are sent to whichever LLM provider handles the request.

This means every provider sees every other provider's keys on every single turn.

Important: This happens even when keys are NOT hardcoded in openclaw.json. Using the recommended ${ENV_VAR} syntax does not help — the variable references are resolved to plaintext at runtime, and the resolved values are what get serialized into the LLM context. Moving keys to environment variables, systemd EnvironmentFile=, or .env files does not mitigate this issue.
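To make the failure mode concrete, here is a minimal sketch (not openclaw's actual code; function and field names are hypothetical) of how a naive recursive `${VAR}` resolver over the whole provider config produces plaintext keys that then ride along when the catalog is serialized into prompt context:

```python
import json
import os
import re

def resolve_env_refs(node):
    """Recursively replace ${VAR} references with environment values.

    Unresolved references are left as-is. This mirrors the behavior
    described in this report; it is an illustrative sketch, not
    openclaw source.
    """
    if isinstance(node, dict):
        return {k: resolve_env_refs(v) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve_env_refs(v) for v in node]
    if isinstance(node, str):
        return re.sub(r"\$\{(\w+)\}",
                      lambda m: os.environ.get(m.group(1), m.group(0)),
                      node)
    return node

# The bug: the *resolved* catalog -- apiKey values included -- is what
# gets serialized into the system context sent to every provider.
os.environ["OPENROUTER_API_KEY"] = "sk-or-example-secret"
catalog = {"providers": {"openrouter": {
    "apiKey": "${OPENROUTER_API_KEY}",   # recommended env-var syntax
    "models": ["some-model"],
}}}
prompt_context = json.dumps(resolve_env_refs(catalog))
print("sk-or-example-secret" in prompt_context)  # → True: key is in the payload
```

The point of the sketch: env-var indirection only changes *where* the plaintext lives before resolution; once the resolver runs, the serialized catalog contains the raw key either way.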

Reproduction

  1. Configure multiple providers in openclaw.json with apiKey: "${ENV_VAR}" syntax
  2. Enable request logging on any provider (e.g., OpenRouter, or use a local LLM proxy)
  3. Send a message to any agent
  4. Inspect the request payload — the full model catalog including resolved apiKey fields appears in the system context
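For step 1, a config along these lines triggers the issue (field names follow the `providers`/`apiKey` shape described above; the exact openclaw.json schema may differ in other respects):

```json
{
  "providers": {
    "openrouter": { "apiKey": "${OPENROUTER_API_KEY}" },
    "google":     { "apiKey": "${GEMINI_API_KEY}" }
  }
}
```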

Safe local reproduction: Point the main agent at a local LLM (e.g., Ollama) and inspect the incoming request payload. The resolved apiKey fields for all configured cloud providers will be visible in the system context. This confirms the bug without actually leaking keys to a third party.
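Once you have a captured payload from the local LLM, confirming the leak is just a substring scan for the plaintext values of your configured secrets. A small helper (illustrative, with made-up key values):

```python
def find_leaked_secrets(payload_text, secrets):
    """Return the names of secrets whose plaintext values appear in a
    captured request payload. `secrets` maps env-var name -> value."""
    return [name for name, value in secrets.items()
            if value and value in payload_text]

# Made-up values standing in for real keys:
secrets = {
    "OPENROUTER_API_KEY": "sk-or-abc123",
    "GEMINI_API_KEY": "AIza-def456",
}
captured = '{"system": "...model catalog... \\"apiKey\\": \\"sk-or-abc123\\" ..."}'
print(find_leaked_secrets(captured, secrets))  # → ['OPENROUTER_API_KEY']
```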

Impact

  • Cross-provider key leakage: OpenRouter sees your NVIDIA key, Anthropic sees your Google key, etc.
  • Leakage scales linearly with conversation length: Keys are sent on every turn, not just once
  • Affects all users with multiple providers configured
  • Keys persist in provider logs — even after rotation, historical logs retain old keys

Expected Behavior

apiKey fields (and any other secret fields) should be stripped from the model catalog before it is serialized into LLM prompt context. The agent does not need provider credentials to function — it only needs model names, capabilities, and cost info.
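One possible shape for the fix, sketched below: recursively drop secret-bearing fields before the catalog is serialized. The field-name set is an assumption for illustration; the real fix should cover whatever secret fields openclaw's config schema actually defines.

```python
# Assumed secret field names -- adjust to the real config schema.
SECRET_FIELDS = {"apiKey", "apiToken", "secret", "password"}

def strip_secrets(node):
    """Return a copy of the catalog with secret-bearing fields removed,
    suitable for serialization into LLM prompt context."""
    if isinstance(node, dict):
        return {k: strip_secrets(v) for k, v in node.items()
                if k not in SECRET_FIELDS}
    if isinstance(node, list):
        return [strip_secrets(v) for v in node]
    return node

catalog = {"providers": {"openrouter": {
    "apiKey": "sk-or-example-secret",
    "models": ["some-model"],
}}}
print(strip_secrets(catalog))
# → {'providers': {'openrouter': {'models': ['some-model']}}}
```

The sanitized catalog still carries everything the agent needs (model names, capabilities, cost info) while credentials stay confined to the HTTP client that actually authenticates to each provider.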

Environment

  • openclaw v2026.2.3-1 (confirmed still present in v2026.2.6-3)
  • Gateway mode, systemd service
  • Multiple providers (Google, OpenRouter, NVIDIA, X.AI)

Labels: bug (Something isn't working), security (Security documentation)