Summary
The memory-lancedb plugin currently restricts embedding configuration to OpenAI models only (text-embedding-3-small / text-embedding-3-large). The configSchema in openclaw.plugin.json rejects both a custom baseUrl and arbitrary model names.
This prevents using local embedding providers such as Ollama (which serves an OpenAI-compatible /v1/embeddings endpoint) or other self-hosted solutions.
Related
Issue #17650 requests adding gemini-embedding-001 and baseUrl to the schema. This request extends that to support any OpenAI-compatible embedding endpoint, including local ones.
Use Case
Many users run Ollama locally with embedding models like nomic-embed-text. Ollama exposes an OpenAI-compatible API at http://localhost:11434/v1. The memory-lancedb plugin's Embeddings class already uses the openai npm package, which accepts a baseURL constructor parameter — so the runtime code is nearly ready.
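To illustrate how little is needed at runtime, here is a minimal sketch of building the OpenAI client options with an optional base URL. The `EmbeddingConfig` shape and the `buildClientOptions` helper are hypothetical names for this sketch, not the plugin's actual API; the `baseURL` constructor option itself is part of the real `openai` npm package.

```typescript
// Hypothetical helper: turns the plugin's embedding config into options
// for the `openai` client. Names and shapes are illustrative only.
interface EmbeddingConfig {
  apiKey: string;
  model: string;
  baseUrl?: string; // e.g. "http://localhost:11434/v1" for a local Ollama
}

function buildClientOptions(
  config: EmbeddingConfig
): { apiKey: string; baseURL?: string } {
  // When baseUrl is omitted, the openai client defaults to api.openai.com,
  // so existing OpenAI-only setups keep working unchanged.
  return config.baseUrl
    ? { apiKey: config.apiKey, baseURL: config.baseUrl }
    : { apiKey: config.apiKey };
}

// Usage sketch: new OpenAI(buildClientOptions(config)), then
// client.embeddings.create({ model: config.model, input: text })
const opts = buildClientOptions({
  apiKey: "ollama",
  model: "nomic-embed-text",
  baseUrl: "http://localhost:11434/v1",
});
console.log(opts.baseURL);
```

Because the fallback path is identical to today's behavior, this change would be backward compatible for existing OpenAI users.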
The memorySearch config in agents.defaults already supports custom baseUrl + arbitrary models for the core memory search. It would be consistent for memory-lancedb to support the same.
Proposed Changes
- Add `baseUrl` to the embedding config schema in `openclaw.plugin.json`
- Remove the `enum` restriction on `model`, or add an escape hatch for arbitrary model names, since different providers use different model IDs
- Pass `baseUrl` to the OpenAI client constructor in `index.ts`: `new OpenAI({ apiKey, baseURL: baseUrl })`
- Make vector dimensions configurable (or auto-detect), since local models have varying dimensions (e.g., `nomic-embed-text` = 768)
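The dimensions point could be handled as a lookup with an explicit override. The sketch below is a hypothetical resolver, not existing plugin code; the table covers commonly documented defaults (text-embedding-3-small = 1536, text-embedding-3-large = 3072, nomic-embed-text = 768).

```typescript
// Hypothetical dimension resolver: known model defaults plus a config
// override for models not in the table. The LanceDB table schema would
// use the resolved value when creating the vector column.
const KNOWN_DIMENSIONS: Record<string, number> = {
  "text-embedding-3-small": 1536,
  "text-embedding-3-large": 3072,
  "nomic-embed-text": 768,
};

function resolveDimensions(model: string, override?: number): number {
  if (override !== undefined) return override;
  const known = KNOWN_DIMENSIONS[model];
  if (known !== undefined) return known;
  // Unknown model and no override: fail loudly rather than guess,
  // since a wrong dimension would corrupt the vector index.
  throw new Error(
    `Unknown embedding dimensions for "${model}"; set "dimensions" in the config.`
  );
}

console.log(resolveDimensions("nomic-embed-text")); // 768
console.log(resolveDimensions("my-custom-model", 1024)); // 1024
```

An alternative is auto-detection: embed a probe string once at startup and read the returned vector's length, at the cost of one extra API call.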
Example Config (Ollama)
```json
{
  "plugins": {
    "entries": {
      "memory-lancedb": {
        "enabled": true,
        "config": {
          "embedding": {
            "apiKey": "ollama",
            "model": "nomic-embed-text",
            "baseUrl": "http://localhost:11434/v1"
          },
          "autoCapture": true,
          "autoRecall": true
        }
      }
    },
    "slots": { "memory": "memory-lancedb" }
  }
}
```
Environment
- OpenClaw: 2026.2.17
- OS: Windows 11
- Ollama: nomic-embed-text on local network
This would make memory-lancedb provider-agnostic, consistent with how the rest of OpenClaw handles OpenAI-compatible endpoints.