Feature: Support OpenRouter as an embedding provider #1878

@zeph-ai-dev

Description

Summary

I'd like to use OpenRouter as the provider for memory-search embeddings (e.g. Google's text-embedding-004 or similar models routed through OpenRouter).

Current Behavior

When configuring memorySearch with OpenRouter:

agents: {
  defaults: {
    memorySearch: {
      provider: "openai",
      model: "google/text-embedding-004",
      remote: {
        baseUrl: "https://openrouter.ai/api/v1/",
        apiKey: "sk-or-v1-..."
      }
    }
  }
}

The CLI commands (clawdbot memory index and clawdbot memory search) honor this configuration, but the in-session memory_search tool ignores remote.baseUrl and calls OpenAI's API directly.

Expected Behavior

The memory_search tool should use the remote.baseUrl and remote.apiKey from config, allowing OpenRouter (or any OpenAI-compatible endpoint) to work for embeddings.
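To make the expected behavior concrete, here is a minimal sketch of the resolution logic the tool should apply. The function and field names are assumptions for illustration, not the actual clawdbot source:

```typescript
// Hypothetical sketch: resolve the embeddings endpoint from config,
// falling back to OpenAI's public API only when no remote override exists.
interface RemoteConfig { baseUrl?: string; apiKey?: string }
interface MemorySearchConfig { provider: string; model: string; remote?: RemoteConfig }

function resolveEmbeddingEndpoint(cfg: MemorySearchConfig) {
  return {
    // remote.baseUrl wins; OpenAI is only the default, never an override.
    baseUrl: cfg.remote?.baseUrl ?? "https://api.openai.com/v1/",
    // Real code would fall back to the OPENAI_API_KEY env var here.
    apiKey: cfg.remote?.apiKey ?? "",
  };
}

const cfg: MemorySearchConfig = {
  provider: "openai",
  model: "google/text-embedding-004",
  remote: { baseUrl: "https://openrouter.ai/api/v1/", apiKey: "sk-or-v1-..." },
};
console.log(resolveEmbeddingEndpoint(cfg).baseUrl);
// "https://openrouter.ai/api/v1/"
```

With this shape, any OpenAI-compatible endpoint (OpenRouter, Ollama, a proxy) works without code changes, since the override is purely configuration-driven.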

Workaround

Using Ollama locally works since it exposes an OpenAI-compatible /v1/embeddings endpoint:

memorySearch: {
  provider: "openai",
  model: "nomic-embed-text",
  remote: {
    baseUrl: "http://localhost:11434/v1/",
    apiKey: "ollama"
  }
}
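One detail worth noting about the workaround config: the trailing slash on baseUrl matters when the client joins it with the endpoint path using WHATWG URL resolution. A small illustrative sketch (not the actual clawdbot client code):

```typescript
// Build the embeddings URL from a configured base URL.
function embeddingsUrl(baseUrl: string): string {
  // new URL() resolves relative to the base; without a trailing slash,
  // the last path segment ("v1") would be dropped during resolution.
  return new URL("embeddings", baseUrl).toString();
}

console.log(embeddingsUrl("http://localhost:11434/v1/"));
// "http://localhost:11434/v1/embeddings"
console.log(embeddingsUrl("http://localhost:11434/v1"));
// "http://localhost:11434/embeddings"  (the /v1 segment is lost)
```

This is why both config examples above end baseUrl with "/".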

Environment

  • Clawdbot version: 2026.1.23-1
  • macOS Darwin 24.1.0 (arm64)

Possible Cause

Looking at the code, embeddings-openai.ts reads remote.baseUrl and remote.apiKey correctly, so this may be a caching issue: the INDEX_CACHE in manager.ts uses JSON.stringify(settings) as part of its cache key, so a config change produces a new key, but sessions that are already running keep the index instance they captured at start-up and never re-resolve it.
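The staleness pattern described above can be sketched as follows. This is a hypothetical reconstruction of the caching shape, not the actual manager.ts code:

```typescript
// Hypothetical sketch of a settings-keyed cache (names assumed).
type Settings = { provider: string; model: string; baseUrl?: string };

const INDEX_CACHE = new Map<string, { settings: Settings }>();

function getIndex(settings: Settings) {
  // JSON.stringify(settings) as cache key: new settings -> new entry.
  const key = `${settings.provider}:${JSON.stringify(settings)}`;
  let entry = INDEX_CACHE.get(key);
  if (!entry) {
    entry = { settings };
    INDEX_CACHE.set(key, entry);
  }
  return entry;
}

// A session captures its index once at start-up...
const session = { index: getIndex({ provider: "openai", model: "a" }) };
// ...later the user adds remote.baseUrl, which maps to a fresh cache entry...
const updated = getIndex({
  provider: "openai", model: "a", baseUrl: "https://openrouter.ai/api/v1/",
});
// ...but the running session still holds its original, stale reference.
console.log(session.index === updated); // false
```

So the cache key itself distinguishes configs correctly; the bug would be that a long-lived session never asks the cache again after the config changes.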
