
memory-lancedb: Support custom baseUrl + arbitrary embedding models (Ollama, local providers) #21811

@reptarwins-commits

Description

Summary

The memory-lancedb plugin currently restricts embedding configuration to OpenAI models only (text-embedding-3-small / text-embedding-3-large). The configSchema in openclaw.plugin.json rejects both a baseUrl field and any model name outside that enum.

This prevents using local embedding providers like Ollama (which serves an OpenAI-compatible /v1/embeddings endpoint) or other self-hosted solutions.

Related

Issue #17650 requests adding gemini-embedding-001 and baseUrl to the schema. This request extends that to support any OpenAI-compatible embedding endpoint, including local ones.

Use Case

Many users run Ollama locally with embedding models like nomic-embed-text. Ollama exposes an OpenAI-compatible API at http://localhost:11434/v1. The memory-lancedb plugin's Embeddings class already uses the openai npm package, which accepts a baseURL constructor parameter — so the runtime code is nearly ready.

The memorySearch config in agents.defaults already supports custom baseUrl + arbitrary models for the core memory search. It would be consistent for memory-lancedb to support the same.

Proposed Changes

  1. Add baseUrl to the embedding config schema in openclaw.plugin.json
  2. Remove the enum restriction on model, or add an escape hatch for arbitrary model names, since different providers use different model IDs.
  3. Pass baseUrl to the OpenAI client constructor in index.ts: new OpenAI({ apiKey, baseURL: baseUrl })
  4. Make vector dimensions configurable (or auto-detect) since local models have varying dimensions (e.g., nomic-embed-text = 768).
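
For change (1), the schema addition might look roughly like the fragment below. This is a guess at the shape, since the actual configSchema layout in openclaw.plugin.json isn't reproduced here; the key point is adding baseUrl and relaxing model from an enum to a plain string:

```json
{
  "embedding": {
    "type": "object",
    "properties": {
      "apiKey": { "type": "string" },
      "model": { "type": "string" },
      "baseUrl": { "type": "string" }
    }
  }
}
```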

Example Config (Ollama)

```json
{
  "plugins": {
    "entries": {
      "memory-lancedb": {
        "enabled": true,
        "config": {
          "embedding": {
            "apiKey": "ollama",
            "model": "nomic-embed-text",
            "baseUrl": "http://localhost:11434/v1"
          },
          "autoCapture": true,
          "autoRecall": true
        }
      }
    },
    "slots": { "memory": "memory-lancedb" }
  }
}
```
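
Changes (3) and (4) from the proposal could be sketched as below. The helper names (resolveDimensions, toClientOptions) and the config shape are illustrative assumptions, not the plugin's actual internals:

```typescript
// Sketch only: hypothetical helpers showing how baseUrl and dimensions
// could flow from plugin config into the embedding client setup.
interface EmbeddingConfig {
  apiKey: string;
  model: string;
  baseUrl?: string;     // proposed: forwarded to the openai client as baseURL
  dimensions?: number;  // proposed: explicit override for unknown models
}

// Dimensions for the models the schema allows today, plus a common local one.
const KNOWN_DIMENSIONS: Record<string, number> = {
  "text-embedding-3-small": 1536,
  "text-embedding-3-large": 3072,
  "nomic-embed-text": 768,
};

function resolveDimensions(cfg: EmbeddingConfig): number {
  const dims = cfg.dimensions ?? KNOWN_DIMENSIONS[cfg.model];
  if (dims === undefined) {
    // Alternative: auto-detect by embedding one probe string at startup
    // and reading the returned vector's length.
    throw new Error(`Unknown model "${cfg.model}": set "dimensions" explicitly`);
  }
  return dims;
}

function toClientOptions(cfg: EmbeddingConfig): { apiKey: string; baseURL?: string } {
  // The openai npm package accepts a baseURL constructor option; when it is
  // undefined the client falls back to the default OpenAI endpoint.
  return { apiKey: cfg.apiKey, baseURL: cfg.baseUrl };
}
```

With the Ollama config above, resolveDimensions would yield 768 and the client would be created as new OpenAI(toClientOptions(cfg)), leaving the OpenAI-only default behavior unchanged when baseUrl is omitted.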

Environment

  • OpenClaw: 2026.2.17
  • OS: Windows 11
  • Ollama: nomic-embed-text on local network

This would make memory-lancedb provider-agnostic, consistent with how the rest of OpenClaw handles OpenAI-compatible endpoints.
