Persistent memory for AI agents
Hosted Version • MCP Server • Dashboard • Local Embeddings • TypeScript SDK • API Docs
Engram is a memory layer for AI agents — store, recall, and evolve memories with semantic search, knowledge graphs, and autonomous consolidation. It gives your agents persistent, structured memory so they never wake up blank again.
An engram is a hypothetical permanent change in the brain accounting for the existence of memory — a memory trace.
- 🧠 Semantic memory storage with vector embeddings — find memories by meaning, not keywords
- 🔍 Ensemble search (4 models) — Reciprocal Rank Fusion eliminates single-model blind spots
- 🌙 Dream Cycle — autonomous memory consolidation inspired by sleep neuroscience
- 🕸️ Knowledge graph extraction — entities and relationships visualized with D3
- 🔒 Multi-tenant with API key auth — cryptographic user isolation
- 💳 SaaS-ready — usage tracking and cloud features built in
- 🐳 Docker Compose for easy self-hosting — up and running in 3 commands
- 🔗 Hybrid mode — self-hosted + cloud link for backup, sync, and cloud ensemble models
- 🏠 Self-hosted setup wizard — first-run detection, guided setup, zero config
- 📡 Webhooks with HMAC signing — real-time event notifications
- 🛡️ Safety-critical detection — 16 patterns for allergies, medications, legal directives
- ⏰ Temporal reasoning — understands "yesterday," "last week," natural language time
- 📊 Fog Index — cognitive health scoring to monitor memory drift
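Verifying an HMAC-signed webhook is straightforward on the receiving side. A minimal TypeScript sketch, assuming the signature arrives as a hex-encoded HMAC-SHA256 digest in a header (the header name `X-Engram-Signature` and the exact signing scheme are illustrative assumptions, not confirmed by the docs):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify a raw webhook body against its HMAC-SHA256 signature.
// `secret` is the shared webhook secret; `signature` is the hex digest
// taken from the (assumed) X-Engram-Signature header.
function verifyWebhook(body: string, signature: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(body).digest("hex");
  const a = Buffer.from(expected, "hex");
  const b = Buffer.from(signature, "hex");
  // timingSafeEqual throws on length mismatch, so guard first.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Always compare digests with a constant-time function; a plain `===` leaks timing information an attacker can exploit to forge signatures.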
```bash
git clone https://github.com/heybeaux/engram && cd engram
cp .env.example .env
docker compose up -d
```

API at `localhost:3001` · Dashboard at `localhost:3000`
On first run, the setup wizard walks you through creating an admin account and choosing your mode (local-only or linked to OpenEngram Cloud). No manual config needed — just open the dashboard.
Hosted cloud coming soon — join the waitlist at openengram.ai.
Run self-hosted with full local features, then link to OpenEngram Cloud from Settings to unlock cloud ensemble models, backup, and cross-device sync. Best of both worlds — your data stays local, premium features from the cloud.
See the Getting Started Guide for detailed walkthroughs.
Store a memory:
```bash
curl -X POST https://api.openengram.ai/v1/memories \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"content": "The user prefers dark mode and is allergic to peanuts", "metadata": {}}'
```

Search memories:

```bash
curl -X POST https://api.openengram.ai/v1/memories/search \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"query": "What are the user preferences?", "limit": 5}'
```

Hosted cloud coming soon — join the waitlist at openengram.ai.
Self-hosting is fully supported today with no feature limits.
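The same calls work from TypeScript with plain `fetch`. A minimal sketch mirroring the curl examples above; the endpoints and auth header come from those examples, while the `buildRequest` and `demo` helper names are our own:

```typescript
// Build the POST options used by both endpoints, mirroring the curl
// examples. Kept pure so it is easy to unit-test.
function buildRequest(apiKey: string, payload: unknown) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(payload),
  };
}

// Store a memory, then search it back.
async function demo(apiKey: string): Promise<unknown> {
  const base = "https://api.openengram.ai/v1";
  await fetch(`${base}/memories`, buildRequest(apiKey, {
    content: "The user prefers dark mode and is allergic to peanuts",
    metadata: {},
  }));
  const res = await fetch(`${base}/memories/search`, buildRequest(apiKey, {
    query: "What are the user preferences?",
    limit: 5,
  }));
  return res.json();
}
```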
- Getting Started — Self-hosted, cloud, and hybrid setup
- API Reference — Full endpoint documentation
- Deployment Architecture — Mode detection, feature gating, cloud link, sync
- Configuration — All environment variables and deployment modes
- Swagger UI — Interactive API explorer (when running locally)
- Online Docs — Hosted documentation
See QUICKSTART.md for detailed self-hosting instructions including:
- Docker Compose setup
- Building from source
- Fully local mode (Ollama + engram-embed, zero cloud dependency)
- Environment configuration
Engram is built on NestJS with PostgreSQL + pgvector for storage. The system includes:
- Core API — CRUD, search, context generation, 120+ endpoints
- Ensemble Search — 4 embedding models fused via Reciprocal Rank Fusion
- Dream Cycle — 4-stage consolidation: dedup → staleness → patterns → report
- engram-embed — Local Rust embedding server with Metal GPU acceleration (~10ms per vector)
- Dashboard — Next.js app for memory browsing, knowledge graph visualization, and system monitoring
See the Architecture Documentation for the full technical breakdown.
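The fusion step can be sketched in a few lines. Standard Reciprocal Rank Fusion scores each result by summing 1/(k + rank) across the ranked lists the four models return; this sketch uses k = 60, the constant from the original RRF paper, since Engram's internal value is not stated here:

```typescript
// Fuse several ranked result lists with Reciprocal Rank Fusion.
// Each inner array is one model's ranking, best match first.
function reciprocalRankFusion(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, rank) => {
      // rank is 0-based, so rank + 1 is the 1-based position.
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}
```

A result ranked well by several models outscores one ranked first by a single model, which is how fusion covers any one model's blind spots.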
```bash
npm install -g @engram/mcp-server
```

Six tools: `engram_remember`, `engram_recall`, `engram_search`, `engram_context`, `engram_observe`, `engram_forget`
Point any AI agent at the API. Works with OpenAI, Anthropic, Ollama, LM Studio — swap LLM providers with one env var.
```bash
npm install @engram/client
```

| Feature | Engram | Mem0 | Zep | LangMem |
|---|---|---|---|---|
| Self-hosted | ✅ | ✅ | ✅ | ✅ |
| Local embeddings (zero cost) | ✅ Metal GPU | ❌ | ❌ | ❌ |
| Multi-model ensemble search | ✅ 4 models | ❌ | ❌ | ❌ |
| Dream Cycle (consolidation) | ✅ 4-stage | ❌ | ❌ | ❌ |
| Safety-critical detection | ✅ 16 patterns | ❌ | ❌ | ❌ |
| Knowledge graph | ✅ | ❌ | ✅ | ❌ |
| Temporal reasoning | ✅ | ❌ | ❌ | ❌ |
| SaaS-ready (billing, limits) | ✅ | ❌ | ❌ | ❌ |
| License | Apache 2.0 | Apache 2.0 | Apache 2.0 | MIT |

Dashboard — Memory stats, Fog Index, API volume

Knowledge Graph — Entities and relationships visualized with D3

Memory Browser — Semantic search, layer filtering, importance scores
We'd love your help! See CONTRIBUTING.md for guidelines.
High-impact areas:
- Python SDK
- Integration adapters (LangChain, CrewAI, AutoGen)
- New embedding/LLM providers
- Documentation and examples
Every agent deserves to remember.

