Chat with your documents, email, and the web — entirely on your own machine.
No API keys. No cloud. No subscription. Everything runs locally via Ollama.
Every other "chat with your docs" tool sends your data to OpenAI, Anthropic, or some other cloud. VaultMind doesn't. The LLM runs on your hardware. The vector database lives on your disk. Nothing is transmitted anywhere.
| Feature | VaultMind | ChatGPT / Claude | PrivateGPT | Obsidian Copilot |
|---|---|---|---|---|
| 100% local | ✅ | ❌ | ✅ | ❌ |
| Gmail integration | ✅ | ❌ | ❌ | ❌ |
| Live web search | ✅ | ✅ | ❌ | ❌ |
| One-command setup | ✅ | ✅ | ❌ | ❌ |
| No API key needed | ✅ | ❌ | ✅ | ❌ |
| Electron Mac app | ✅ | — | ❌ | ❌ |
Prerequisite: Ollama installed and running.
```bash
git clone https://github.com/airblackbox/VaultMind.git
cd VaultMind
bash start.sh
```

That's it. start.sh pulls the models (~4.5 GB, one-time), installs Python deps, starts the backend, and opens http://localhost:8000.
```bash
docker compose up
```

Then open http://localhost:8000. Docker handles everything including Ollama.
Drop in any file — PDF, DOCX, TXT, Markdown, CSV — and ask questions across all of it.
Paste any URL — VaultMind fetches and indexes it instantly. Paste a job posting, a competitor's pricing page, a research paper.
Connect Gmail — OAuth into your inbox and VaultMind indexes your emails locally. Ask "what did my lawyer say about the contract?" or "summarize my inbox".
Connect Notion — Paste your integration token and your workspace syncs automatically on a configurable schedule.
Agent mode — Toggle 🌐 Agent and VaultMind combines your private vault with live web search for questions your docs can't answer alone.
```
Files / URLs / Gmail / Notion
            │
            ▼
      Text extraction
 (pypdf · python-docx · BS4)
            │
            ▼
150-word chunks, 20-word overlap
            │
            ▼
nomic-embed-text (local, via Ollama)
            │
            ▼
      ChromaDB on disk
            │
            ▼
Query → embed → top-k similarity search
            │
            ▼
Mistral / Llama / Phi / Gemma (your choice)
            │
            ▼
      Streamed answer
```
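The chunking step above (150-word chunks with a 20-word overlap) can be sketched in a few lines of Python. This is an illustrative sketch of the technique, not VaultMind's actual implementation; the function name and defaults are assumptions:

```python
def chunk_words(text: str, size: int = 150, overlap: int = 20) -> list[str]:
    """Split text into word-based chunks, each sharing `overlap` words
    with the previous chunk so context isn't lost at boundaries."""
    words = text.split()
    step = size - overlap  # advance 130 words per chunk
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):  # final chunk reached the end
            break
    return chunks
```

Each chunk is then embedded independently, so the overlap keeps sentences that straddle a boundary retrievable from either side.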
100% local. The API is FastAPI on localhost:8000. The UI is a single HTML file — no framework, no build step.
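"Top-k similarity search" means comparing the query embedding against every stored chunk embedding and keeping the closest matches. ChromaDB does this with an optimized index; a minimal stdlib-only sketch of the underlying idea (cosine similarity, names are illustrative):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product of the vectors over their norms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query: list[float], store: dict[str, list[float]], k: int = 3) -> list[str]:
    """Return ids of the k stored chunks most similar to the query embedding."""
    ranked = sorted(store, key=lambda cid: cosine(query, store[cid]), reverse=True)
    return ranked[:k]
```

The retrieved chunks are what get stuffed into the LLM prompt as context.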
| Layer | Tool |
|---|---|
| LLM + embeddings | Ollama |
| Vector store | ChromaDB |
| Backend | FastAPI + streaming SSE |
| Frontend | Vanilla JS — zero dependencies |
| Document parsing | pypdf, python-docx, BeautifulSoup |
| Gmail | Google OAuth 2.0 (readonly) |
| Desktop app | Electron (Mac) |
Switch models any time from the sidebar dropdown. All run locally via Ollama.
- Mistral 7B — fast, good all-rounder (default)
- Llama 3.2 — strong reasoning
- Phi-3 Mini — lightweight, great on older hardware
- Gemma 2 — Google's open model
- Qwen 2.5 — strong on technical content
- DeepSeek R1 — best for complex analysis
VaultMind ships as a native Electron app — no Terminal required.
```bash
npm install
npm start          # dev mode
bash build-app.sh  # builds distributable .dmg
```

First launch automatically creates a Python virtualenv and installs dependencies. Subsequent launches skip straight to the app. User data lives in ~/Library/Application Support/VaultMind/data — safe across app updates.
VaultMind runs in any browser. On your local network:
```bash
ipconfig getifaddr en0   # find your Mac's IP
# open http://192.168.x.x:8000 on your phone
```

From anywhere via Tailscale (free, 5 min setup):
- Install Tailscale on your Mac and phone, sign in with the same account
- Open http://100.x.x.x:8000 from anywhere — stays completely private
Tap Add to Home Screen in Safari to install as a PWA.
- PDF, DOCX, TXT, MD, CSV upload
- URL ingestion (scrape any page)
- Gmail OAuth — index inbox locally
- Notion sync — auto-polls on configurable schedule
- Agent mode — vault + live web search
- Inbox digest — AI-ranked email summary
- 6 local model choices
- Electron Mac app
- Docker support
- Mobile-responsive PWA
- Slack integration
- WhatsApp conversation export
- Bulk URL ingestion
- Timeline view — "what happened in March?"
- MCP server — use VaultMind as context inside Cursor / VS Code
Apache 2.0. PRs welcome.
```bash
# Backend dev mode (hot reload)
cd backend && uvicorn main:app --reload --port 8000
# Frontend is at http://localhost:8000 — edit frontend/index.html directly
```

Open an issue for bugs. Open a discussion for feature ideas.
Built by Jason Shotwell. Part of the AIR Blackbox ecosystem.
