English | δΈζ | ζ₯ζ¬θͺ | νκ΅μ΄ | TΓΌrkΓ§e | EspaΓ±ol | PortuguΓͺs
Give your AI agent evidence-backed memory for your entire codebase.
Every claim grounded in code. Every context window optimized. Every session drift-resistant.
AI coding agents hallucinate about your code. They lose context between sessions. They can't prove their claims. AtlasMemory solves all three.
| | Feature | Others | AtlasMemory |
|---|---|---|---|
| π― | Claims about code | "Trust me" | Evidence-backed (line + hash) |
| π | Session continuity | Start from scratch | Drift-detecting contracts |
| π¦ | Context window | Stuff everything in | Token-budgeted packs |
| π | Dependencies | Cloud API keys | Local-first, zero config |
| π | Language support | Varies | 11 languages (TS/JS/Py/Go/Rust/Java/C#/C/C++/Ruby/PHP) |
| π₯ | Impact analysis | Manual | Automatic (reverse reference graph) |
| π§ | Session memory | None | Cross-session learning |
Important: AtlasMemory works out of the box, but enrichment unlocks its full potential. Without enrichment, search is keyword-based. With enrichment, search understands concepts.
```shell
# After indexing, run enrichment for maximum AI readiness:
npx atlasmemory index .        # Step 1: Index (automatic)
npx atlasmemory enrich --all   # Step 2: AI-enhance all files
npx atlasmemory generate       # Step 3: Generate AI instructions
npx atlasmemory status         # Check your AI Readiness Score
```

Do all of these and AtlasMemory becomes a beast. Each step unlocks more capability:
| | Step | What it unlocks | Command |
|---|---|---|---|
| β | Index your project | Symbol extraction, anchors, basic search | `npx atlasmemory index .` |
| β | Enrich files | Semantic search, concept-level understanding | `npx atlasmemory enrich --all` |
| β | Generate AI instructions | AI agents auto-use AtlasMemory (5 formats) | `npx atlasmemory generate` |
| β | Add MCP config | Zero-config connection for your AI tool | See configs below |
| β | Use `log_decision` after changes | Cross-session memory, institutional knowledge | AI agent calls it automatically |
| β | Use `remember_project` for milestones | Project-level memory persists forever | AI agent calls it automatically |
| AI Readiness | Search Quality | What to do |
|---|---|---|
| 0-50 (Fair) | Keyword only | Run `atlasmemory enrich` β dramatically improves results |
| 50-80 (Good) | Partial semantic | Run `atlasmemory enrich --all` for full coverage |
| 80-100 (Excellent) | Full semantic + concept search | You're at maximum power! π |
What it does: Enrichment analyzes each file and adds semantic tags β "authentication", "middleware", "error handling", "database query", etc. Without enrichment, search is keyword-based. With enrichment, search understands concepts β you can search "how does authentication work?" and get the right files even if they don't contain the word "authentication".
How it works: AtlasMemory uses Claude CLI or OpenAI Codex (running locally) to analyze files. Requires an active Claude or OpenAI subscription with CLI access.
Estimated enrichment time by project size:
| Project Size | Files | Enrichment Time | What happens |
|---|---|---|---|
| Small | ~50 files | ~2 minutes | Instant boost β search quality jumps to 80+ |
| Medium | ~200 files | ~8 minutes | Full semantic coverage in one coffee break |
| Large (Coolify-scale) | ~1400 files | ~45 minutes | Use --batch 50 for controlled enrichment |
| Monorepo (Next.js-scale) | ~4000+ files | ~2 hours | Spread across sessions: enrich --batch 100 |
π‘ Tip: Run `atlasmemory enrich --dry-run` first to see the token estimate before starting.
π Don't worry β enrichment is a one-time cost. You enrich your project once, and it's done. After that, only new or changed files need re-enrichment (a few seconds). Think of it like building an index β you do it once, then it stays up to date incrementally.
No CLI? No problem. Your AI agent can enrich files directly via MCP. Just paste this into your AI chat:
```
Please enrich my project with AtlasMemory for maximum AI readiness.
Run enrich_files(limit=100) to enhance all files with semantic tags.
Then check ai_readiness to verify the score improved.
```
After handshake, if enrichment is low, AtlasMemory will suggest: "π‘ X files can be enriched for better search."
> "With just `index_repo` and `enrich_files`, you can turn an entire codebase into an AI-readable neural map β optimized for any AI agent." β Google Antigravity, after enriching 73 files in a single call
```shell
npx atlasmemory demo                      # See it in action
npx atlasmemory index .                   # Index your project
npx atlasmemory search "authentication"   # Search with FTS5 + graph
npx atlasmemory generate                  # Auto-generate CLAUDE.md
```

That's it. No API key, no cloud, no config files. AtlasMemory runs entirely on your machine.
π£ Claude Desktop / Claude Code β add to `claude_desktop_config.json`:

```json
{ "mcpServers": { "atlasmemory": { "command": "npx", "args": ["-y", "atlasmemory"] } } }
```

π΅ Cursor β add to `.cursor/mcp.json`:

```json
{ "mcpServers": { "atlasmemory": { "command": "npx", "args": ["-y", "atlasmemory"] } } }
```

π’ VS Code / GitHub Copilot β add to settings or `.vscode/mcp.json`:

```json
{ "mcp": { "servers": { "atlasmemory": { "command": "npx", "args": ["-y", "atlasmemory"] } } } }
```

π Google Antigravity β add to MCP settings:

```json
{ "mcpServers": { "atlasmemory": { "command": "npx", "args": ["-y", "atlasmemory"] } } }
```

π OpenAI Codex β add to MCP config:

```json
{ "mcpServers": { "atlasmemory": { "command": "npx", "args": ["-y", "atlasmemory"] } } }
```

One config, all tools. Auto-indexes on first query. Works with any MCP-compatible AI tool.
Install AtlasMemory for VS Code for a visual dashboard right in your editor:
- AI Readiness Dashboard β see your score (0-100) with four metrics at a glance
- Atlas Explorer Sidebar β browse files, symbols, anchors, flows, cards directly
- Status Bar β always-visible readiness score, click to open dashboard
- Auto-Index on Save β files re-indexed automatically when you save
- Quick Actions β one-click index, generate CLAUDE.md, search, health check
Works alongside MCP β extension gives you the visual interface, MCP server gives AI agents the tools. Install both for the full experience.
A feature no other tool has. Every claim is linked to an anchor β a specific line range and content hash.
```diff
+ Claim: "handleLogin() validates credentials before creating a session"
+ Evidence:
+   src/auth.ts:42-58 [hash:5cde2a1f] β validateCredentials() call
+   src/auth.ts:60-72 [hash:a3b7c9d1] β createSession() after validation
+ Status: PROVEN β (2 anchors, hashes match current code)

- β οΈ Someone edited auth.ts...
- Hash 5cde2a1f no longer matches lines 42-58
- Status: DRIFT DETECTED β β AI knows context is stale BEFORE hallucinating
```

You ask your AI agent a question. Behind the scenes, this happens:
```mermaid
flowchart LR
    subgraph YOU["π§βπ» You"]
        Q["'Fix auth bug'"]
    end
    subgraph ATLAS["β‘ AtlasMemory"]
        direction TB
        A["π Search\nFTS5 + Graph"]
        B["π Prove\nClaims β code anchors"]
        C["π¦ Pack\nFit within token budget"]
        D["π‘οΈ Contract\nDetect drift"]
    end
    subgraph AI["π€ AI Agent"]
        R["Knows exactly where to look\nβ no hallucination"]
    end
    Q --> A
    A -->|"Best files\nranked by relevance"| B
    B -->|"Every claim has\nline:hash proof"| C
    C -->|"2000 tokens instead\nof reading 50 files"| D
    D -->|"β Context is fresh\nno stale data"| R
    style YOU fill:#1a1a3e,stroke:#00e5ff,color:#fff
    style ATLAS fill:#0a1628,stroke:#00bcd4,color:#fff
    style AI fill:#1a1a3e,stroke:#00e5ff,color:#fff
    style Q fill:#162447,stroke:#00e5ff,color:#fff
    style A fill:#0d2137,stroke:#00bcd4,color:#00e5ff
    style B fill:#0d2137,stroke:#00bcd4,color:#00e5ff
    style C fill:#0d2137,stroke:#00bcd4,color:#00e5ff
    style D fill:#0d2137,stroke:#00bcd4,color:#00e5ff
    style R fill:#162447,stroke:#00e5ff,color:#fff
```
```mermaid
flowchart TB
    subgraph WITHOUT["β Without AtlasMemory"]
        direction TB
        W1["AI reads file 1"] --> W2["AI reads file 2"]
        W2 --> W3["AI reads file 3..."]
        W3 --> W4["...AI reads file 47"]
        W4 --> W5["π₯ Context full!\nStarting over..."]
        W5 -.->|"β loop"| W1
    end
    subgraph WITH["β With AtlasMemory"]
        direction TB
        A1["AI asks: 'fix auth bug'"]
        A1 --> A2["AtlasMemory returns:\n2000 tokens\nevidence-backed context"]
        A2 --> A3["AI fixes the bug\n85% of context still free"]
    end
    style WITHOUT fill:#1a0a0a,stroke:#ff4444,color:#fff
    style WITH fill:#0a1a0a,stroke:#00ff88,color:#fff
    style W5 fill:#330000,stroke:#ff4444,color:#ff6666
    style A3 fill:#003300,stroke:#00ff88,color:#00ff88
```
| | Pillar | What it does |
|---|---|---|
| π | Evidence-Backed | Every claim is linked to an anchor (line range + content hash). If the code changes, the anchor is marked stale. Hallucination is impossible. |
| π‘οΈ | Drift-Resistant | SHA-256 snapshot of database state + git HEAD. If the repo changes during a session, AtlasMemory detects and warns. |
| π¦ | Token-Budgeted | Greedy-optimized packs that fit your budget. Priority order: objectives > folders > cards > flows > code snippets. |
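The drift-resistant contract can be sketched in a few lines. This is a hypothetical illustration, not AtlasMemory's actual internals: `contractFingerprint` and `isDrifted` are invented names, and the index state is stubbed as a plain string.

```typescript
import { createHash } from "crypto";

// A session "contract" is a SHA-256 fingerprint of the index state
// plus the git HEAD captured when the session starts.
function contractFingerprint(gitHead: string, indexState: string): string {
  return createHash("sha256").update(gitHead + "\n" + indexState).digest("hex");
}

// Drift means the repo or index no longer matches the fingerprint.
function isDrifted(contract: string, gitHead: string, indexState: string): boolean {
  return contract !== contractFingerprint(gitHead, indexState);
}

const contract = contractFingerprint("abc123", "files=120;symbols=954");
console.log(isDrifted(contract, "abc123", "files=120;symbols=954")); // false: nothing changed
console.log(isDrifted(contract, "def456", "files=120;symbols=954")); // true: repo moved to a new commit
```

The key property is that any change to either input invalidates the contract, so a stale session is detected before the agent acts on old context.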
All 11 languages use precise AST parsing via Tree-sitter β no regex, no guessing.
| Language | What's extracted |
|---|---|
| TypeScript / JavaScript | functions, classes, methods, interfaces, types, imports, calls |
| Python | functions, classes, decorators, imports, calls |
| Go | functions, methods, structs, interfaces, imports, calls |
| Rust | functions, impl blocks, structs, traits, enums, use, calls |
| Java | methods, classes, interfaces, enums, imports, calls |
| C# | methods, classes, interfaces, structs, enums, using, calls |
| C / C++ | functions, classes, structs, enums, #include, calls |
| Ruby | methods, classes, modules, calls |
| PHP | functions, methods, classes, interfaces, use, calls |
Core β tools your AI agent uses every session:

| Tool | Description |
|---|---|
| π `search_repo` | Full-text + graph-powered codebase search |
| π¦ `build_context` | Unified context builder β task, project, delta, or session mode |
| β `prove` | Prove claims with evidence anchors in your codebase |
| π `index_repo` | Full or incremental indexing |
| π€ `handshake` | Start agent session with project summary + memory |
Intelligence Tools

| Tool | Description |
|---|---|
| π₯ `analyze_impact` | Who depends on this symbol/file? Reverse reference graph |
| π `smart_diff` | Semantic git diff β symbol-level changes + breaking changes |
| π§ `remember` | Save decisions, constraints, insights for the session |
| π `session_context` | View accumulated context + related past sessions |
| β¨ `enrich_files` | AI-enrich file cards with semantic tags |
Agent Memory Tools

| Tool | Description |
|---|---|
| π `log_decision` | Record what you changed and why (persists across sessions) |
| π `get_file_history` | See what past AI agents changed in a file |
| πΎ `remember_project` | Store project-level knowledge (milestones, gaps, lessons) |
Utility Tools

| Tool | Description |
|---|---|
| ποΈ `generate_claude_md` | Auto-generate CLAUDE.md / .cursorrules / copilot-instructions |
| π `ai_readiness` | Calculate AI Readiness Score (0-100) |
| π‘οΈ `get_context_contract` | Check drift status with suggested actions |
| π `acknowledge_context` | Confirm that the context is understood |
AtlasMemory works with zero configuration. Optional settings:
| Setting | Default | Description |
|---|---|---|
| `ATLAS_DB_PATH` | `.atlas/atlas.db` | Database location |
| `ATLAS_LLM_API_KEY` | β | API key for LLM-enriched card descriptions (experimental β will be strengthened in future releases) |
| `ATLAS_CONTRACT_ENFORCE` | `warn` | Contract mode: `strict` / `warn` / `off` |
| `.atlasignore` | β | Custom file/directory exclusion rules (like `.gitignore`) |
```mermaid
block-beta
    columns 4
    block:ENTRY:4
        CLI["β¬ CLI"]
        MCP["π£ MCP Server"]
        VSCODE["π’ VS Code"]
    end
    space:4
    block:ENGINE:4
        columns 4
        INDEXER["π§ Indexer\n11 languages"]:1
        SEARCH["π Search\nFTS5 + Graph"]:1
        CARDS["π Cards\nSummaries"]:1
        TASKPACK["π¦ TaskPack\nProof + Budget"]:1
    end
    space:4
    block:INTEL:4
        columns 4
        IMPACT["π₯ Impact"]:1
        MEMORY["π§ Memory"]:1
        LEARNER["π Learner"]:1
        ENRICH["β¨ Enrichment"]:1
    end
    space:4
    block:DATA:4
        DB["ποΈ SQLite + FTS5 β Single file, ~394KB bundle"]
    end
    ENTRY --> ENGINE
    ENGINE --> INTEL
    INTEL --> DATA
    style ENTRY fill:#1a1a3e,stroke:#00e5ff,color:#fff
    style ENGINE fill:#0a1628,stroke:#00bcd4,color:#fff
    style INTEL fill:#0d2137,stroke:#00bcd4,color:#fff
    style DATA fill:#162447,stroke:#00e5ff,color:#fff
```
What is the AI Readiness Score?
A score from 0-100 that measures how ready your codebase is for AI agents. Calculated from 4 metrics:
| Metric | Weight | What it measures |
|---|---|---|
| Code Coverage | 25% | Percentage of source files indexed by Tree-sitter |
| Description Quality | 25% | Percentage of files with AI descriptions enriched via `enrich` |
| Flow Analysis | 25% | Percentage of files with cross-file data flow cards |
| Evidence Anchors | 25% | Percentage of claims linked to code anchors (line + hash) |
Run `atlasmemory status` to see your score. Use `atlasmemory enrich` to improve it.
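The equal 25% weighting above can be expressed as a tiny function. This is a sketch with invented names (`aiReadinessScore`, the metric fields); the real calculation may differ in detail.

```typescript
// Each metric is a 0..1 ratio; the score is their equally weighted
// average scaled to 0-100, matching the 4 Γ 25% table above.
interface ReadinessMetrics {
  codeCoverage: number;       // source files indexed by Tree-sitter
  descriptionQuality: number; // files with enriched AI descriptions
  flowAnalysis: number;       // files with cross-file flow cards
  evidenceAnchors: number;    // claims linked to code anchors
}

function aiReadinessScore(m: ReadinessMetrics): number {
  const avg = (m.codeCoverage + m.descriptionQuality + m.flowAnalysis + m.evidenceAnchors) / 4;
  return Math.round(avg * 100);
}

console.log(aiReadinessScore({
  codeCoverage: 1.0,
  descriptionQuality: 0.6,
  flowAnalysis: 0.4,
  evidenceAnchors: 0.8,
})); // 70
```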
What are Symbols, Anchors, Flows, Cards, Imports, and References?
| Term | What it is | Example |
|---|---|---|
| Symbol | A named code entity extracted by Tree-sitter | `function handleLogin()`, `class UserService`, `interface AuthConfig` |
| Anchor | Line range + content hash β the "proof" of the evidence-backed system | `src/auth.ts:42-58 [hash:5cde2a1f]` |
| Flow | Cross-file data path (A calls B, B calls C) | `login() β validateToken() β createSession()` |
| File Card | Evidence-linked summary of what a file does | Purpose, public API, dependencies, side effects |
| Import | Cross-file dependency relationship | `import { Store } from './store'` |
| Reference | Call/usage reference between symbols | `handleLogin()` calls `validateToken()` |
All of these are automatically extracted by atlasmemory index. No manual work required.
Is there auto-indexing? Do I need to run the index command manually?
MCP mode (Claude/Cursor/VS Code): Yes, fully automatic. AtlasMemory checks git HEAD on every tool call. If files have changed since the last index, it incrementally re-indexes only the changed files. Zero manual work.
CLI mode: Run `atlasmemory index .` manually, or use `atlasmemory index --incremental` for quick updates.
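The incremental check can be sketched as a pure function over the output of `git diff --name-only <lastIndexedHead> <currentHead>`. Everything here (`filesToReindex`, the extension set) is hypothetical; the real indexer's filtering logic may differ.

```typescript
// Only source files in supported languages need re-indexing;
// docs, lockfiles, etc. are skipped. Subset of extensions for brevity.
const INDEXED_EXTENSIONS = new Set([".ts", ".js", ".py", ".go", ".rs"]);

function filesToReindex(gitDiffNameOnly: string): string[] {
  return gitDiffNameOnly
    .split("\n")
    .map((line) => line.trim())
    .filter((f) => f.length > 0)
    .filter((f) => INDEXED_EXTENSIONS.has(f.slice(f.lastIndexOf("."))));
}

// Example diff output: two source files changed, one doc file ignored.
const diff = "src/auth.ts\nREADME.md\nsrc/store/index.ts\n";
console.log(filesToReindex(diff)); // only the two .ts files
```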
Does it require an API key or cloud service?
No. AtlasMemory is 100% local-first. Core features (indexing, search, proving, context packs) work offline without depending on external services.
The optional enrich command uses Claude CLI or OpenAI Codex (running locally) to enhance file descriptions. Requires an active subscription with CLI access. If neither is installed, it falls back to deterministic AST-based descriptions β or your AI agent can enrich files directly via MCP tools.
How does the proof system prevent hallucinations?
Every claim AtlasMemory makes is linked to an anchor β a specific line range with a SHA-256 content hash.
- AI says: "handleLogin validates credentials" β linked to `auth.ts:42-58 [hash:5cde2a1f]`
- If someone edits `auth.ts` lines 42-58, the hash changes
- AtlasMemory marks the claim as DRIFT DETECTED
- The AI agent knows its understanding is stale before hallucinating
No other tool does this. RAG-based tools retrieve text but can't prove it matches current code.
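The anchor mechanism can be sketched in a few lines. This is a hypothetical illustration of the idea (invented names, a truncated 8-hex hash for readability), not AtlasMemory's actual code:

```typescript
import { createHash } from "crypto";

// An anchor pins a claim to a line range plus a hash of those exact lines.
interface Anchor { file: string; startLine: number; endLine: number; hash: string; }

// Hash an inclusive, 1-indexed line range of a source string.
function hashLines(source: string, start: number, end: number): string {
  const slice = source.split("\n").slice(start - 1, end).join("\n");
  return createHash("sha256").update(slice).digest("hex").slice(0, 8);
}

function makeAnchor(file: string, source: string, start: number, end: number): Anchor {
  return { file, startLine: start, endLine: end, hash: hashLines(source, start, end) };
}

// Drift check: re-hash the same lines in the current code and compare.
function isStale(anchor: Anchor, currentSource: string): boolean {
  return anchor.hash !== hashLines(currentSource, anchor.startLine, anchor.endLine);
}

const v1 = "function handleLogin() {\n  validateCredentials();\n  createSession();\n}";
const anchor = makeAnchor("src/auth.ts", v1, 2, 3);
console.log(isStale(anchor, v1));                                        // false: code unchanged
console.log(isStale(anchor, v1.replace("createSession", "destroySession"))); // true: drift detected
```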
Which languages are supported?
11 languages via Tree-sitter: TypeScript, JavaScript, Python, Go, Rust, Java, C#, C, C++, Ruby, PHP. All extract functions, classes, methods, imports, and call references.
How does token budgeting work?
When you call `build_context({mode: "task", objective: "fix auth bug", budget: 8000})`, AtlasMemory:
- Searches for relevant files (FTS5 + graph ranking)
- Scores each file by relevance to your objective
- Uses a greedy algorithm to fit the most relevant context into your budget
- Priority order: objectives > folder summaries > file cards > flow traces > code snippets
- Returns exactly as much context as your token budget allows β no overflow
Result: Instead of reading 50 files (filling your context window), you get 2000 tokens of evidence-backed context and 85% of your context window remains free for actual work.
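The greedy packing step can be sketched as follows. Names, tier numbers, and token counts are invented for illustration; the real ranking is more sophisticated:

```typescript
// An item competing for context space: lower tier = higher priority
// (0 = objective, ... 4 = code snippet), score = relevance.
interface ContextItem { name: string; tier: number; score: number; tokens: number; }

function packContext(items: ContextItem[], budget: number): ContextItem[] {
  // Sort by priority tier first, then by relevance within a tier.
  const sorted = [...items].sort((a, b) => a.tier - b.tier || b.score - a.score);
  const pack: ContextItem[] = [];
  let used = 0;
  for (const item of sorted) {
    if (used + item.tokens <= budget) { // greedy: take it while it still fits
      pack.push(item);
      used += item.tokens;
    }
  }
  return pack; // never exceeds the budget
}

const items: ContextItem[] = [
  { name: "objective",   tier: 0, score: 1.0, tokens: 50 },
  { name: "auth card",   tier: 2, score: 0.9, tokens: 400 },
  { name: "login flow",  tier: 3, score: 0.8, tokens: 600 },
  { name: "big snippet", tier: 4, score: 0.7, tokens: 1500 },
];
console.log(packContext(items, 1200).map((i) => i.name)); // objective, auth card, login flow
```

The oversized snippet is skipped rather than truncated, which is why the result never overflows the budget.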
What happens when I run `atlasmemory generate`?
It generates AI instruction files (CLAUDE.md, .cursorrules, copilot-instructions.md) containing:
- Project architecture and key files
- Tech stack and conventions
- AI Readiness Score
- AtlasMemory MCP tool usage instructions β so your AI agent uses AtlasMemory automatically
If you have a hand-written CLAUDE.md, it merges the AtlasMemory section at the top without overwriting your content.
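The non-destructive merge can be sketched with sentinel comments. The sentinel markers and function name here are assumptions for illustration; the actual file format may differ:

```typescript
// Sentinels delimit the generated section so it can be replaced later
// without touching hand-written content around it.
const BEGIN = "<!-- atlasmemory:begin -->";
const END = "<!-- atlasmemory:end -->";

function mergeGeneratedSection(existing: string, generated: string): string {
  const block = `${BEGIN}\n${generated}\n${END}`;
  const start = existing.indexOf(BEGIN);
  const end = existing.indexOf(END);
  if (start !== -1 && end !== -1) {
    // An earlier generated block exists: replace it in place.
    return existing.slice(0, start) + block + existing.slice(end + END.length);
  }
  // No generated block yet: prepend, leaving hand-written text intact.
  return block + "\n\n" + existing;
}

const handwritten = "# My Project\nAlways use tabs.";
const merged = mergeGeneratedSection(handwritten, "Use search_repo first.");
console.log(merged.startsWith(BEGIN));            // true
console.log(merged.includes("Always use tabs.")); // true
```

Running the merge again replaces only the sentinel-delimited block, so regeneration is idempotent with respect to your own edits.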
How is it different from Cursor's built-in indexing?
| Feature | Cursor Indexing | AtlasMemory |
|---|---|---|
| Proof system | None | Yes β every claim has line:hash proof |
| Drift detection | None | Yes β SHA-256 contract system |
| Token budgeting | None | Yes β greedy-optimized context packs |
| Cross-session memory | None | Yes β decisions persist across sessions |
| Impact analysis | None | Yes β reverse reference graph |
| Works with any AI tool | No (Cursor only) | Yes β MCP standard |
| Local-first | Partially | 100% |
```shell
git clone https://github.com/Bpolat0/atlasmemory.git
cd atlasmemory
npm install
npm run build:all        # Build all packages + bundle
npm test                 # Run unit tests (147 tests, Vitest)
npm run eval:synth100    # Quick evaluation suite
npm run eval             # Full evaluation (synth-100 + synth-500 + real-repo)
```

- v1.0 β Core engine, proof system, MCP server, CLI, OpenAI Codex support
- Interactive dependency graph β visual topology of your codebase (like the screenshot below)
- VS Code extension improvements β enrich button, card browser, inline proof viewer
- Semantic search with embedding vectors
- Multi-repo support (monorepo + microservices)
- GitHub Actions integration (auto-index on push)
- Web dashboard with live graph visualization
See Discussions to view planned features and vote.
We welcome your contributions! Bug reports, feature requests, or pull requests β all are appreciated.
- CONTRIBUTING.md β Setup guide, PR process, commit format, testing
- CLAUDE.md β Project architecture and conventions
```shell
git clone https://github.com/Bpolat0/atlasmemory.git
cd atlasmemory
npm install && npm run build && npm test   # 147 tests should pass
```

If AtlasMemory saves you time, consider giving it a star β it helps others discover the project.
Powered by automiflow

