Engram

Persistent memory for AI agents


Hosted Version · MCP Server · Dashboard · Local Embeddings · TypeScript SDK · API Docs


What is Engram?

Engram is a memory layer for AI agents — store, recall, and evolve memories with semantic search, knowledge graphs, and autonomous consolidation. It gives your agents persistent, structured memory so they never wake up blank again.

An engram is a hypothetical permanent change in the brain accounting for the existence of memory — a memory trace.

Key Features

  • 🧠 Semantic memory storage with vector embeddings — find memories by meaning, not keywords
  • 🔍 Ensemble search (4 models) — Reciprocal Rank Fusion eliminates single-model blind spots
  • 🌙 Dream Cycle — autonomous memory consolidation inspired by sleep neuroscience
  • 🕸️ Knowledge graph extraction — entities and relationships visualized with D3
  • 🔒 Multi-tenant with API key auth — cryptographic user isolation
  • 💳 SaaS-ready — usage tracking and cloud features built in
  • 🐳 Docker Compose for easy self-hosting — up and running in 3 commands
  • 🔗 Hybrid mode — self-hosted + cloud link for backup, sync, and cloud ensemble models
  • 🏠 Self-hosted setup wizard — first-run detection, guided setup, zero config
  • 📡 Webhooks with HMAC signing — real-time event notifications
  • 🛡️ Safety-critical detection — 16 patterns for allergies, medications, legal directives
  • Temporal reasoning — understands "yesterday," "last week," and other natural-language time expressions
  • 📊 Fog Index — cognitive health scoring to monitor memory drift

Quick Start

Self-Hosted

git clone https://github.com/heybeaux/engram && cd engram
cp .env.example .env
docker compose up -d

API at localhost:3001 · Dashboard at localhost:3000

On first run, the setup wizard walks you through creating an admin account and choosing your mode (local-only or linked to OpenEngram Cloud). No manual config needed — just open the dashboard.

Cloud

Hosted cloud coming soon — join the waitlist at openengram.ai.

Hybrid Mode

Run self-hosted with full local features, then link to OpenEngram Cloud from Settings to unlock cloud ensemble models, backup, and cross-device sync. Your data stays local while premium features come from the cloud.

See the Getting Started Guide for detailed walkthroughs.

API Example

Store a memory:

curl -X POST https://api.openengram.ai/v1/memories \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"content": "The user prefers dark mode and is allergic to peanuts", "metadata": {}}'

Search memories:

curl -X POST https://api.openengram.ai/v1/memories/search \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{"query": "What are the user preferences?", "limit": 5}'
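The same calls can be made from code. Below is a minimal TypeScript sketch of the two endpoints shown above; Node 18+ global `fetch` and a valid API key are assumed, and since the response shape is not documented here, the JSON is returned as-is:

```typescript
// Base URL and endpoint paths taken from the curl examples above.
const BASE = "https://api.openengram.ai/v1";

async function callEngram(path: string, body: object, apiKey: string) {
  const res = await fetch(`${BASE}${path}`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`Engram API error: ${res.status}`);
  return res.json();
}

// Usage (with a real key):
// await callEngram("/memories", { content: "The user prefers dark mode", metadata: {} }, key);
// await callEngram("/memories/search", { query: "What are the user preferences?", limit: 5 }, key);
```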

Self-hosting is fully supported today with no feature limits.

Documentation

Self-Hosting

See QUICKSTART.md for detailed self-hosting instructions including:

  • Docker Compose setup
  • Building from source
  • Fully local mode (Ollama + engram-embed, zero cloud dependency)
  • Environment configuration

Architecture

Engram Architecture

Engram is built on NestJS with PostgreSQL + pgvector for storage. The system includes:

  • Core API — CRUD, search, context generation, 120+ endpoints
  • Ensemble Search — 4 embedding models fused via Reciprocal Rank Fusion
  • Dream Cycle — 4-stage consolidation: dedup → staleness → patterns → report
  • engram-embed — Local Rust embedding server with Metal GPU acceleration (~10ms per vector)
  • Dashboard — Next.js app for memory browsing, knowledge graph visualization, and system monitoring
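For illustration, the Reciprocal Rank Fusion step can be sketched in a few lines of TypeScript. The constant k = 60 is the value from the original RRF paper; Engram's actual implementation and parameters may differ:

```typescript
// Reciprocal Rank Fusion: each model returns a ranked list of memory IDs;
// the fused score for an ID is the sum of 1 / (k + rank) across all models.
function reciprocalRankFusion(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, index) => {
      const rank = index + 1; // ranks are 1-based
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank));
    });
  }
  // Sort IDs by fused score, highest first.
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}

// An ID ranked well by several models beats one ranked first by only one:
const fused = reciprocalRankFusion([
  ["a", "b", "c"],
  ["b", "a", "d"],
  ["b", "c", "a"],
]);
// fused[0] === "b"
```

This is why an ensemble avoids single-model blind spots: a result only one model likes cannot outrank a result that every model places near the top.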

See the Architecture Documentation for the full technical breakdown.

Integration

MCP (Claude Desktop, Cursor, etc.)

npm install -g @engram/mcp-server

6 tools: engram_remember, engram_recall, engram_search, engram_context, engram_observe, engram_forget
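After installing, the server is registered in the MCP client's configuration. A sketch of what a Claude Desktop `claude_desktop_config.json` entry might look like; the command name `engram-mcp-server` and the `ENGRAM_API_KEY` variable are assumptions, so check the package's own docs for the exact values:

```json
{
  "mcpServers": {
    "engram": {
      "command": "engram-mcp-server",
      "env": {
        "ENGRAM_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}
```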

REST API

Point any AI agent at the API. Works with OpenAI, Anthropic, Ollama, LM Studio — swap LLM providers with one env var.

TypeScript SDK

npm install @engram/client

Comparison

| Feature | Engram | Mem0 | Zep | LangMem |
|---|---|---|---|---|
| Self-hosted | ✅ | | | |
| Local embeddings (zero cost) | ✅ Metal GPU | | | |
| Multi-model ensemble search | ✅ 4 models | | | |
| Dream Cycle (consolidation) | ✅ 4-stage | | | |
| Safety-critical detection | ✅ 16 patterns | | | |
| Knowledge graph | ✅ | | | |
| Temporal reasoning | ✅ | | | |
| SaaS-ready (billing, limits) | ✅ | | | |
| License | Apache 2.0 | Apache 2.0 | Apache 2.0 | MIT |

Screenshots

Dashboard Overview
Dashboard — Memory stats, Fog Index, API volume

Knowledge Graph
Knowledge Graph — Entities and relationships visualized with D3

Memory Browser
Memory Browser — Semantic search, layer filtering, importance scores

Contributing

We'd love your help! See CONTRIBUTING.md for guidelines.

High-impact areas:

  • Python SDK
  • Integration adapters (LangChain, CrewAI, AutoGen)
  • New embedding/LLM providers
  • Documentation and examples

License

Apache License 2.0


Every agent deserves to remember.
