🐛 Describe the bug
I have never done this before, so bear with me.
First, OpenMemory does not work out of the box on Windows or Linux when using Ollama with an embedding model whose dimension differs from the default of 1536. I spent a week trying to get mem0 connected to Open WebUI and Ollama. I found the main culprits and propose some fixes below; they are a bit opinionated.
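For context, the core of the problem is that the vector store collection must be created with the same dimension that the Ollama embedding model outputs. A minimal sketch of the kind of mem0 config involved (the model name and dimension here are illustrative examples, not what OpenMemory ships with):

```python
# Hypothetical example: an Ollama embedder with 768-d output, so the
# vector store must be created with embedding_model_dims=768 instead of
# the default, which assumes OpenAI's 1536-d embeddings.
config = {
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text",  # example model with 768-d output
        },
    },
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "openmemory",
            "embedding_model_dims": 768,  # must match the embedder's output size
        },
    },
}

# If these two dimensions disagree, inserts fail at runtime.
print(config["vector_store"]["config"]["embedding_model_dims"])
```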
Linux
1 - run.sh
There is a reference to
curl -fsS -X PUT "${NEXT_PUBLIC_API_URL}/api/v1/config/mem0/vector_store"  # line 280, and again for each vector store
But that API route is not defined. Also, OPENAI_API_KEY is required; this requirement should either be documented up front or relaxed.
# Line 8
OPENAI_API_KEY="${OPENAI_API_KEY:-}"
USER="${USER:-$(whoami)}"
NEXT_PUBLIC_API_URL="${NEXT_PUBLIC_API_URL:-http://localhost:8765}"
if [ -z "$OPENAI_API_KEY" ]; then
  echo "❌ OPENAI_API_KEY not set. Please run with: curl -sL https://raw.githubusercontent.com/mem0ai/mem0/main/openmemory/run.sh | OPENAI_API_KEY=your_api_key bash"
  echo "❌ OPENAI_API_KEY not set. You can also set it as global environment variable: export OPENAI_API_KEY=your_api_key"
  exit 1
fi
It should not be required unless the user actually chooses OpenAI as the provider.
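For reference, here is a sketch of the JSON body that PUT call would need to send, mirroring the VectorStoreProvider/VectorStoreConfig shape shown later in this issue (the field values are illustrative only):

```python
import json

# Illustrative payload for PUT /api/v1/config/mem0/vector_store.
# Keys mirror the VectorStoreProvider and VectorStoreConfig models
# proposed later in this issue; values are examples, not defaults.
payload = {
    "provider": "qdrant",
    "config": {
        "collection_name": "openmemory",
        "host": "localhost",
        "port": 6333,
        "embedding_model_dims": 768,
    },
}

body = json.dumps(payload)
print(body)
```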
Windows
1 - run.ps1 a conversion
I converted run.sh to a PowerShell script with AI assistance. I reviewed it, but only tested Ollama on Windows. It includes a hotfix for non-default embedding model dimensions by defining them before the UI loads, though it would be easier if the UI had settings for the vector store.
I won't go through everything here, but the usage is very similar.
.\run.ps1 -VectorStore "qdrant" -EmbeddingDims "768"
Args
param(
    [string]$VectorStore = "qdrant",
    [string]$EmbeddingDims = "768",
    [switch]$UseLocalBuild = $false
)
-UseLocalBuild - skips regenerating docker-compose.yml so an existing local build can be reused during development.
# Create docker-compose.yml based on the selected vector store (unless using a local build)
if (-not $UseLocalBuild) {
    Write-Host "📝 Creating docker-compose.yml..." -ForegroundColor Cyan
    New-ComposeFile $VectorStore
} else {
    Write-Host "🔧 Using existing docker-compose.yml with local build..." -ForegroundColor Cyan
    if (-not (Test-Path "docker-compose.yml")) {
        Write-Host "❌ docker-compose.yml not found. Please run without -UseLocalBuild flag first." -ForegroundColor Red
        exit 1
    }
}
API
Config Keys
I added all of the Python configuration keys from the documentation; they are mostly self-documenting. Here is the main model, from api/app/routers/config.py.
class VectorStoreConfig(BaseModel):
    collection_name: Optional[str] = Field(None, description="Name of the collection")
    embedding_model_dims: Optional[int] = Field(None, description="Dimensions of the embedding model")
    client: Optional[str] = Field(None, description="Custom client for the database")
    path: Optional[str] = Field(None, description="Path for the database")
    host: Optional[str] = Field(None, description="Host where the server is running")
    port: Optional[int] = Field(None, description="Port where the server is running")
    user: Optional[str] = Field(None, description="Username for database connection")
    password: Optional[str] = Field(None, description="Password for database connection")
    dbname: Optional[str] = Field(None, description="Name of the database")
    url: Optional[str] = Field(None, description="Full URL for the server")
    api_key: Optional[str] = Field(None, description="API key for the server")
    on_disk: Optional[bool] = Field(None, description="Enable persistent storage")
    endpoint_id: Optional[str] = Field(None, description="Endpoint ID (vertex_ai_vector_search)")
    index_id: Optional[str] = Field(None, description="Index ID (vertex_ai_vector_search)")
    deployment_index_id: Optional[str] = Field(None, description="Deployment index ID (vertex_ai_vector_search)")
    project_id: Optional[str] = Field(None, description="Project ID (vertex_ai_vector_search)")
    project_number: Optional[str] = Field(None, description="Project number (vertex_ai_vector_search)")
    vector_search_api_endpoint: Optional[str] = Field(None, description="Vector search API endpoint (vertex_ai_vector_search)")
    connection_string: Optional[str] = Field(None, description="PostgreSQL connection string (for Supabase/PGVector)")
    index_method: Optional[str] = Field(None, description="Vector index method (for Supabase)")
    index_measure: Optional[str] = Field(None, description="Distance measure for similarity search (for Supabase)")

class VectorStoreProvider(BaseModel):
    provider: str = Field(..., description="Vector store provider name")
    config: VectorStoreConfig

# ...

class Mem0Config(BaseModel):
    llm: Optional[LLMProvider] = None
    embedder: Optional[EmbedderProvider] = None
    vector_store: Optional[VectorStoreProvider] = None  # Added to implement
I don't know whether every key works, but at least they are there for others to fix.
Vector Store Route
This follows the same form as the other routes. It reads the config from the database and saves it back after setting the vector store.
@router.get("/mem0/vector_store", response_model=VectorStoreProvider)
async def get_vector_store_configuration(db: Session = Depends(get_db)):
    """Get only the Vector Store configuration."""
    config = get_config_from_db(db)
    vector_store_config = config.get("mem0", {}).get("vector_store", {})
    return vector_store_config

@router.put("/mem0/vector_store", response_model=VectorStoreProvider)
async def update_vector_store_configuration(vector_store_config: VectorStoreProvider, db: Session = Depends(get_db)):
    """Update only the Vector Store configuration."""
    current_config = get_config_from_db(db)

    # Ensure the mem0 key exists
    if "mem0" not in current_config:
        current_config["mem0"] = {}

    # Update the vector store configuration
    current_config["mem0"]["vector_store"] = vector_store_config.dict(exclude_none=True)

    # Save the configuration to the database
    save_config_to_db(db, current_config)
    reset_memory_client()
    return current_config["mem0"]["vector_store"]
Default Config
I noticed that config.json is never used. I suggest we either rename it to an example file or make it an actual dependency so the hardcoded default can be removed. In my version I added config_loader.py under utils and used it in get_default_memory_config.
def get_default_memory_config():
    """Get default memory client configuration from default_config.json file."""
    from app.utils.config_loader import get_mem0_config

    # Load the mem0 config from the JSON file
    base_config = get_mem0_config()

    # Use the configs directly from JSON - environment variables are applied later
    vector_store_config = base_config.get("vector_store", {})
    llm_config = base_config.get("llm", {})
    embedder_config = base_config.get("embedder", {})

    return {
        "vector_store": vector_store_config,
        "llm": llm_config,
        "embedder": embedder_config,
        "version": "v1.1",
    }
Config loader
This loads the JSON file, or falls back to the hardcoded config if loading fails. This is the main function.
def load_default_config() -> Dict[str, Any]:
    """
    Load the default configuration from the default_config.json file.

    Returns:
        Dict containing the configuration loaded from the JSON file.
        Falls back to hardcoded defaults if the file is not found or invalid.
    """
    # Path to default_config.json relative to this file
    config_path = os.path.join(
        os.path.dirname(__file__),
        "..",
        "..",
        "default_config.json",
    )
    try:
        with open(config_path, "r") as f:
            config = json.load(f)
        # Ensure the openmemory section exists
        if "openmemory" not in config:
            config["openmemory"] = {
                "custom_instructions": None,
            }
        return config
    except (FileNotFoundError, json.JSONDecodeError) as e:
        print(f"Warning: Could not load default_config.json: {e}")
        return get_fallback_config()
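For completeness, get_fallback_config just returns the old hardcoded defaults as a dict. A rough sketch of the shape (the provider and model values here are placeholders, not the shipped defaults):

```python
from typing import Any, Dict


def get_fallback_config() -> Dict[str, Any]:
    """Hardcoded defaults used when default_config.json is missing or invalid.

    Placeholder values; the real fallback should mirror whatever
    default_config.json ships with.
    """
    return {
        "mem0": {
            "llm": {"provider": "openai", "config": {"model": "gpt-4o-mini"}},
            "embedder": {"provider": "openai", "config": {"model": "text-embedding-3-small"}},
            "vector_store": {
                "provider": "qdrant",
                "config": {"collection_name": "openmemory", "embedding_model_dims": 1536},
            },
        },
        "openmemory": {"custom_instructions": None},
    }


print(sorted(get_fallback_config().keys()))
```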
UI
I have only suggestions here. I am not a designer, but I think vector store settings are a must, even if only for changing the embedding dimensions.
Conclusion
This is mainly a Windows embedding fix. I hope it is useful.