AI-powered smart glasses assistant. Streams video from Meta Ray-Ban glasses for real-time face recognition, records conversations via iPhone mic, and surfaces contextual reminders when you see people again.
Pipeline:

```
Ray-Ban Glasses → iOS App → FastAPI Backend → InsightFace (faces) + Whisper (audio) + Claude (reminders)
```
```
copration/
├── backend/                       Python FastAPI server
│   ├── app/
│   │   ├── main.py                All endpoints (WebSocket + REST)
│   │   ├── config.py              Settings loaded from .env
│   │   ├── database.py            SQLite setup
│   │   ├── models.py              SQLAlchemy ORM models
│   │   └── services/
│   │       ├── face_service.py        InsightFace recognition
│   │       ├── enrollment_service.py  Enroll new faces into face_db.npz
│   │       ├── claude_service.py      Claude reminder extraction
│   │       └── reminder_service.py    DB CRUD for people + reminders
│   ├── known_faces/               Photos for enrollment (one subdir per person)
│   ├── face_db.npz                Compiled face embeddings (auto-generated)
│   ├── copration.db               SQLite database (auto-generated)
│   ├── .env                       Secrets — copy from .env.example
│   └── requirements.txt
└── ios/                           Swift iOS app (Xcode project)
    └── MetaGlassesRelay/
        ├── AppState.swift         Central state + auto-record logic
        ├── GlassesManager.swift   Meta DAT SDK wrapper
        ├── AudioManager.swift     Mic recording + TTS playback
        ├── WebSocketManager.swift WebSocket client (video + audio)
        └── ContentView.swift      SwiftUI UI
```
- Mac (required for iOS development)
- Python 3.11 — install via pyenv or python.org
- Xcode 15+ — install from the Mac App Store
- iPhone running iOS 26+
- Meta Ray-Ban glasses + the Meta AI app installed on the iPhone
- iPhone and Mac on the same Wi-Fi network
```
cd backend
python3.11 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

InsightFace requires a C compiler. If `pip install` fails on `insightface`, run `xcode-select --install`, then retry.
```
cp .env.example .env
```

Edit `backend/.env` and fill in:

```
ANTHROPIC_API_KEY=sk-ant-...   # Claude API key — get from console.anthropic.com
OPENAI_API_KEY=sk-...          # OpenAI API key — get from platform.openai.com
```

The other values can stay at their defaults:

```
FACE_DB_PATH=./face_db.npz
KNOWN_FACES_PATH=./known_faces
SQLITE_URL=sqlite:///./copration.db
SIMILARITY_THRESHOLD=0.3
```
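The contents of `config.py` aren't shown here, but a minimal stdlib-only sketch of loading these settings (the real module may well use pydantic-settings or similar instead) could look like:

```python
import os
from dataclasses import dataclass, field


# Hypothetical sketch of config.py: read each setting from the
# environment (populated from .env), falling back to the defaults above.
@dataclass
class Settings:
    face_db_path: str = field(
        default_factory=lambda: os.getenv("FACE_DB_PATH", "./face_db.npz"))
    known_faces_path: str = field(
        default_factory=lambda: os.getenv("KNOWN_FACES_PATH", "./known_faces"))
    sqlite_url: str = field(
        default_factory=lambda: os.getenv("SQLITE_URL", "sqlite:///./copration.db"))
    similarity_threshold: float = field(
        default_factory=lambda: float(os.getenv("SIMILARITY_THRESHOLD", "0.3")))


settings = Settings()
```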
```
ipconfig getifaddr en0
```

You'll need this IP in the next step. Example: `192.168.1.114`

Open `ios/MetaGlassesRelay/WebSocketManager.swift` and update line 14:

```swift
static let serverHost = "YOUR_MAC_IP:8000"
```

Add photos of people you want to recognize:
```
backend/known_faces/
├── Alice/
│   ├── photo1.jpg
│   └── photo2.jpg
└── Bob/
    └── photo1.png
```
Then build the face database:

```
cd backend
curl -X POST http://localhost:8000/face-db/enroll
```

Or run it before the server starts:

```
source .venv/bin/activate
python -c "from app.services.enrollment_service import build_face_database; build_face_database()"
```

Each enrolled person also needs a row in the SQLite `people` table for reminders to work:
```
curl -X POST http://localhost:8000/people \
  -H "Content-Type: application/json" \
  -d '{"full_name": "Alice"}'
```

The `full_name` must exactly match the folder name in `known_faces/` (case-sensitive).
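Conceptually, enrollment averages each person's photo embeddings into one normalized template and saves them all to `face_db.npz`. The sketch below passes embedding vectors in directly as a stand-in for running InsightFace over the photos, so the real `enrollment_service.py` may differ in detail:

```python
import numpy as np


def build_face_db(per_person_embeddings: dict, path: str = "face_db.npz") -> dict:
    """Sketch of enrollment: average each person's embeddings into a
    single L2-normalized template and persist one array per person.

    per_person_embeddings maps name -> list of embedding vectors; in the
    real service these would come from InsightFace over known_faces/<Name>/.
    """
    db = {}
    for name, embeddings in per_person_embeddings.items():
        mean = np.mean(np.stack(embeddings), axis=0)
        db[name] = mean / np.linalg.norm(mean)  # unit-length template
    np.savez(path, **db)  # keys become the recognized names
    return db
```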
```
cd backend
PYTHONUNBUFFERED=1 .venv/bin/uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
```

You should see:

```
INFO:     Application startup complete.
```

The server prints face detections, transcripts, and reminders in real time, so keep the uvicorn terminal visible or tail its output to watch live logs.

In Xcode: File → Open → `copration/ios/MetaGlassesRelay.xcodeproj`
- Select the `MetaGlassesRelay` target in the project navigator
- Go to Signing & Capabilities
- Set Team to your Apple Developer account
- Plug in your iPhone
- Select it as the run target in the top bar
- Hit ⌘R
On first launch, iOS will ask for:
- Microphone permission — tap Allow (required for conversation recording)
- Bluetooth permission — tap Allow (required for glasses connection)
- Make sure the Meta AI app is installed and you're logged in
- Tap "Start Glasses Stream" in the app
- The app will open the Meta AI app for pairing — approve it
- Once approved, video streaming starts automatically
Normal flow:
- Wear the glasses and open the app on your iPhone
- Tap "Start Glasses Stream"
- Point the glasses at a recognized person → mic icon appears, recording starts automatically
- Have a conversation (e.g. "Remind me to send you the slides")
- Look away for 5 seconds → recording stops, Whisper transcribes, Claude extracts reminders
- Next time you see that person → reminder plays through glasses speakers (via Bluetooth)
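The look-away timeout in the flow above can be sketched as a small state machine. This is a hedged illustration in Python (the real logic lives in `AppState.swift`); only the 5-second value is taken from the flow above:

```python
class AutoRecorder:
    """Sketch of the auto-record rule: start recording while a known
    face is in view, stop once no face has been seen for `timeout` s."""

    def __init__(self, timeout: float = 5.0):
        self.timeout = timeout
        self.recording = False
        self.last_seen = None  # timestamp of the most recent face sighting

    def on_frame(self, face_visible: bool, now: float) -> bool:
        if face_visible:
            self.last_seen = now
            self.recording = True  # a recognized face (re)starts capture
        elif self.recording and now - self.last_seen >= self.timeout:
            self.recording = False  # look-away timeout elapsed
        return self.recording
```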
Status pills (top of screen):
- Glasses — green when camera stream is active
- Server — green when WebSocket connected to backend
- Mic — "Auto" when idle, "Recording" when actively capturing
| Method | Endpoint | Description |
|---|---|---|
| GET | `/health` | Health check |
| GET | `/people` | List all enrolled people |
| POST | `/people` | Add a person: `{"full_name": "Alice"}` |
| GET | `/reminders` | List active reminders (optional `?owner_id=1`) |
| POST | `/reminders` | Manually create a reminder |
| DELETE | `/reminders/{id}` | Soft-delete a reminder |
| POST | `/face-db/enroll` | Re-run enrollment from `known_faces/` and reload |
| POST | `/face-db/reload` | Hot-reload `face_db.npz` without re-enrolling |
| WS | `/ws/video` | JPEG frame stream → face + reminder results |
| WS | `/ws/audio` | PCM audio stream → transcript + reminders |
| WS | `/ws/dashboard` | Broadcast feed of all real-time events |
Backend won't start — InsightFace install fails

```
xcode-select --install
pip install insightface
```

App can't connect to backend (Server pill stays gray)

- Check that `ipconfig getifaddr en0` matches the IP in `WebSocketManager.swift`
- Make sure both devices are on the same Wi-Fi network
- Check the backend is running: `curl http://YOUR_IP:8000/health`
Face not recognized

- Add more photos to `known_faces/<Name>/` (5–10 photos from different angles helps)
- Re-run `POST /face-db/enroll`
- Lower `SIMILARITY_THRESHOLD` in `.env` (try `0.25`)
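To see why lowering the threshold admits more matches: assuming the service compares embeddings by cosine similarity, which is the usual setup for InsightFace templates (the actual `face_service.py` may differ), a match is declared when the score reaches `SIMILARITY_THRESHOLD`:

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings (1.0 = identical
    direction, 0.0 = unrelated)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def is_match(score: float, threshold: float = 0.3) -> bool:
    # Lowering the threshold (e.g. to 0.25) accepts looser matches;
    # raising it rejects more borderline faces.
    return score >= threshold
```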
Reminder not triggering when person is seen

- Check the person exists in the DB: `GET /people`
- The `full_name` in the DB must exactly match the face DB folder name
- Add them: `POST /people` with `{"full_name": "Alice"}`
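The exact-match requirement exists because the name recognized from `face_db.npz` is used verbatim as the lookup key into the `people` table. A minimal sketch of that lookup (the table layout here is illustrative, not necessarily the real `models.py` schema):

```python
import sqlite3

# In-memory stand-in for copration.db with one enrolled person.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, full_name TEXT UNIQUE)")
conn.execute("INSERT INTO people (full_name) VALUES ('Alice')")


def person_for_face(recognized_name: str):
    """Return the person's row id, or None if no exact match exists."""
    row = conn.execute(
        "SELECT id FROM people WHERE full_name = ?", (recognized_name,)
    ).fetchone()
    return row[0] if row else None


print(person_for_face("Alice"))  # → 1
print(person_for_face("alice"))  # → None: comparison is case-sensitive,
                                 #   so no reminders would fire
```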
Audio not playing through glasses
- Check your iPhone's audio output in Control Center — select the glasses
- Glasses must be connected as the active Bluetooth audio device
"No reminders extracted" even though conversation was relevant
- Check
ANTHROPIC_API_KEYis set in.env - Try being more explicit: "Remind me to send Alice the report"
- The trigger person's name must be mentioned in the conversation
| Service | Usage | Estimated Cost |
|---|---|---|
| OpenAI Whisper | ~60 min audio | ~$0.36 |
| Anthropic Claude | ~100 conversations | ~$2–5 |
| **Total** | | ~$5–10/day |
Recommended: load $10 OpenAI credits + $20 Anthropic credits.
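As a sanity check on the Whisper row, the table implies $0.36 / 60 min = $0.006 per minute, which matches OpenAI's published whisper-1 rate at the time of writing:

```python
# Back-of-envelope check of the cost table above.
whisper_per_min = 0.006          # USD per transcribed minute (whisper-1)
daily_audio_min = 60

print(f"${daily_audio_min * whisper_per_min:.2f}")  # → $0.36
```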