paris-phan/hoohacks-2026

Copration

AI-powered smart glasses assistant. Streams video from Meta Ray-Ban glasses for real-time face recognition, records conversations via iPhone mic, and surfaces contextual reminders when you see people again.

Pipeline:

Ray-Ban Glasses → iOS App → FastAPI Backend → InsightFace (faces) + Whisper (audio) + Claude (reminders)

Repo Structure

copration/
├── backend/                  Python FastAPI server
│   ├── app/
│   │   ├── main.py           All endpoints (WebSocket + REST)
│   │   ├── config.py         Settings loaded from .env
│   │   ├── database.py       SQLite setup
│   │   ├── models.py         SQLAlchemy ORM models
│   │   └── services/
│   │       ├── face_service.py        InsightFace recognition
│   │       ├── enrollment_service.py  Enroll new faces into face_db.npz
│   │       ├── claude_service.py      Claude reminder extraction
│   │       └── reminder_service.py    DB CRUD for people + reminders
│   ├── known_faces/          Photos for enrollment (one subdir per person)
│   ├── face_db.npz           Compiled face embeddings (auto-generated)
│   ├── copration.db          SQLite database (auto-generated)
│   ├── .env                  Secrets — copy from .env.example
│   └── requirements.txt
└── ios/                      Swift iOS app (Xcode project)
    └── MetaGlassesRelay/
        ├── AppState.swift        Central state + auto-record logic
        ├── GlassesManager.swift  Meta DAT SDK wrapper
        ├── AudioManager.swift    Mic recording + TTS playback
        ├── WebSocketManager.swift WebSocket client (video + audio)
        └── ContentView.swift     SwiftUI UI

Prerequisites

  • Mac (required for iOS development)
  • Python 3.11 — install via pyenv or python.org
  • Xcode 15+ — install from the Mac App Store
  • iPhone running iOS 26+
  • Meta Ray-Ban glasses + the Meta AI app installed on the iPhone
  • iPhone and Mac on the same Wi-Fi network

Backend Setup

1. Create virtual environment

cd backend
python3.11 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

InsightFace requires a C compiler. If pip install fails on insightface, run: xcode-select --install then retry.

2. Configure environment variables

cp .env.example .env

Edit backend/.env and fill in:

ANTHROPIC_API_KEY=sk-ant-...       # Claude API key — get from console.anthropic.com
OPENAI_API_KEY=sk-...              # OpenAI API key — get from platform.openai.com

The other values can stay as defaults:

FACE_DB_PATH=./face_db.npz
KNOWN_FACES_PATH=./known_faces
SQLITE_URL=sqlite:///./copration.db
SIMILARITY_THRESHOLD=0.3
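The backend reads these values in app/config.py at startup. A minimal stdlib sketch of the equivalent loading, with the same defaults as above (illustrative only; the real file may use a settings library):

```python
import os

# Defaults mirror the .env values above; unset variables fall back to these.
FACE_DB_PATH = os.getenv("FACE_DB_PATH", "./face_db.npz")
KNOWN_FACES_PATH = os.getenv("KNOWN_FACES_PATH", "./known_faces")
SQLITE_URL = os.getenv("SQLITE_URL", "sqlite:///./copration.db")
SIMILARITY_THRESHOLD = float(os.getenv("SIMILARITY_THRESHOLD", "0.3"))
```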

3. Find your Mac's local IP address

ipconfig getifaddr en0

You'll need this IP in the next step. Example: 192.168.1.114

4. Update the iOS app with your IP

Open ios/MetaGlassesRelay/WebSocketManager.swift and update line 14:

static let serverHost = "YOUR_MAC_IP:8000"

5. Enroll faces

Add photos of people you want to recognize:

backend/known_faces/
    Alice/
        photo1.jpg
        photo2.jpg
    Bob/
        photo1.png

Then build the face database. With the backend already running (step 7):

curl -X POST http://localhost:8000/face-db/enroll

Or run the enrollment directly before starting the server:

cd backend
source .venv/bin/activate
python -c "from app.services.enrollment_service import build_face_database; build_face_database()"
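Conceptually, enrollment turns each known_faces/ folder into one averaged embedding and packs everything into face_db.npz. A hedged sketch of that storage step (the real enrollment_service.py runs InsightFace over the photos first and may store things differently):

```python
import numpy as np

def save_face_db(embeddings_by_name, path="face_db.npz"):
    """embeddings_by_name: {person: list of embedding vectors}. Stores one
    mean, L2-normalized embedding per person, so a cosine-similarity lookup
    at query time reduces to a dot product."""
    names, vecs = [], []
    for name, embs in sorted(embeddings_by_name.items()):
        mean = np.mean(np.asarray(embs, dtype=np.float32), axis=0)
        names.append(name)
        vecs.append(mean / np.linalg.norm(mean))
    np.savez(path, names=np.array(names), embeddings=np.stack(vecs))
```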

6. Add people to the database

Each enrolled person also needs a row in the SQLite people table for reminders to work:

curl -X POST http://localhost:8000/people \
  -H "Content-Type: application/json" \
  -d '{"full_name": "Alice"}'

The full_name must exactly match the folder name in known_faces/ (case-sensitive).
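Adding many people by hand gets tedious; a sketch that derives the /people rows from the known_faces/ folder names, which guarantees the exact-match requirement above (the backend must be running):

```python
import json
import pathlib
import urllib.request

def people_from_folders(known_faces_dir):
    """Folder names under known_faces/ become full_name values (case-sensitive)."""
    root = pathlib.Path(known_faces_dir)
    return sorted(p.name for p in root.iterdir() if p.is_dir())

def enroll_people(known_faces_dir, base_url="http://localhost:8000"):
    """POST one /people row per person folder to the running backend."""
    for name in people_from_folders(known_faces_dir):
        req = urllib.request.Request(
            f"{base_url}/people",
            data=json.dumps({"full_name": name}).encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            print(name, "->", resp.status)
```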

7. Run the backend

cd backend
PYTHONUNBUFFERED=1 .venv/bin/uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload

You should see:

INFO:     Application startup complete.

The server prints face detections, transcripts, and reminders in real time; keep the uvicorn terminal visible to watch them.

iOS App Setup

1. Open in Xcode

File → Open → copration/ios/MetaGlassesRelay.xcodeproj

2. Set your development team

  • Select the MetaGlassesRelay target in the project navigator
  • Go to Signing & Capabilities
  • Set Team to your Apple Developer account

3. Run on device

  • Plug in your iPhone
  • Select it as the run target in the top bar
  • Hit ⌘R

On first launch, iOS will ask for:

  • Microphone permission — tap Allow (required for conversation recording)
  • Bluetooth permission — tap Allow (required for glasses connection)

4. Connect the glasses

  1. Make sure the Meta AI app is installed and you're logged in
  2. Tap "Start Glasses Stream" in the app
  3. The app will open the Meta AI app for pairing — approve it
  4. Once approved, video streaming starts automatically

Using the App

Normal flow:

  1. Wear the glasses and open the app on your iPhone
  2. Tap "Start Glasses Stream"
  3. Point the glasses at a recognized person → mic icon appears, recording starts automatically
  4. Have a conversation (e.g. "Remind me to send you the slides")
  5. Look away for 5 seconds → recording stops, Whisper transcribes, Claude extracts reminders
  6. Next time you see that person → reminder plays through glasses speakers (via Bluetooth)
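The start/stop behavior above (record while a recognized face is in view, stop 5 seconds after it leaves) can be sketched as a small state machine. This is hypothetical Python; the real logic lives in AppState.swift:

```python
import time

class AutoRecorder:
    """Start recording when a known face appears; stop after `grace` seconds
    with no face in frame (mirrors the 5-second look-away rule above)."""
    def __init__(self, grace=5.0):
        self.grace = grace
        self.recording = False
        self.last_seen = None

    def on_frame(self, face_recognized, now=None):
        now = time.monotonic() if now is None else now
        if face_recognized:
            self.last_seen = now
            self.recording = True
        elif self.recording and now - self.last_seen >= self.grace:
            self.recording = False  # would trigger transcription + reminder extraction
        return self.recording
```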

Status pills (top of screen):

  • Glasses — green when camera stream is active
  • Server — green when WebSocket connected to backend
  • Mic — "Auto" when idle, "Recording" when actively capturing

API Reference

Method  Endpoint            Description
GET     /health             Health check
GET     /people             List all enrolled people
POST    /people             Add a person: {"full_name": "Alice"}
GET     /reminders          List active reminders (optional ?owner_id=1)
POST    /reminders          Manually create a reminder
DELETE  /reminders/{id}     Soft-delete a reminder
POST    /face-db/enroll     Re-run enrollment from known_faces/ and reload
POST    /face-db/reload     Hot-reload face_db.npz without re-enrolling
WS      /ws/video           JPEG frame stream → face + reminder results
WS      /ws/audio           PCM audio stream → transcript + reminders
WS      /ws/dashboard       Broadcast feed of all real-time events

Troubleshooting

Backend won't start — InsightFace install fails

xcode-select --install
pip install insightface

App can't connect to backend (Server pill stays gray)

  • Check ipconfig getifaddr en0 matches the IP in WebSocketManager.swift
  • Make sure both devices are on the same Wi-Fi network
  • Check the backend is running: curl http://YOUR_IP:8000/health
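A quick way to test reachability from any machine on the Wi-Fi network (substitute the IP from step 3 for the placeholder host):

```python
import urllib.request

def backend_reachable(host, port=8000, timeout=3):
    """Return True if GET /health answers 200; False on refusal or timeout."""
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/health",
                                    timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers connection refused, DNS failure, and timeouts
        return False
```

For example, backend_reachable("192.168.1.114") should return True once the backend is up.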

Face not recognized

  • Add more photos to known_faces/<Name>/ (5–10 photos from different angles helps)
  • Re-run POST /face-db/enroll
  • Lower SIMILARITY_THRESHOLD in .env (try 0.25)
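SIMILARITY_THRESHOLD is a cosine-similarity cutoff; a sketch of the match decision it presumably gates (assumed behavior; the exact logic lives in face_service.py):

```python
import numpy as np

def best_match(query, db_names, db_embeddings, threshold=0.3):
    """Compare one query embedding against the enrolled set (rows assumed
    L2-normalized). Returns (name, score); name is None below the cutoff."""
    q = np.asarray(query, dtype=np.float32)
    q = q / np.linalg.norm(q)
    scores = db_embeddings @ q           # cosine similarity per enrolled person
    i = int(np.argmax(scores))
    name = db_names[i] if scores[i] >= threshold else None
    return name, float(scores[i])
```

Lowering the threshold (e.g. 0.25) admits looser matches but raises the false-positive risk.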

Reminder not triggering when person is seen

  • Check the person exists in the DB: GET /people
  • The full_name in the DB must exactly match the face DB folder name
  • Add them: POST /people with {"full_name": "Alice"}

Audio not playing through glasses

  • Check your iPhone's audio output in Control Center — select the glasses
  • Glasses must be connected as the active Bluetooth audio device

"No reminders extracted" even though conversation was relevant

  • Check ANTHROPIC_API_KEY is set in .env
  • Try being more explicit: "Remind me to send Alice the report"
  • The trigger person's name must be mentioned in the conversation

Cost Estimate (per hackathon/day of testing)

Service           Usage               Estimated Cost
OpenAI Whisper    ~60 min audio       ~$0.36
Anthropic Claude  ~100 conversations  ~$2–5
Total                                 ~$5–10/day

Recommended: load $10 OpenAI credits + $20 Anthropic credits.
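The Whisper line assumes OpenAI's per-minute transcription rate (about $0.006/min at the time of writing):

```python
# ~$0.006 per transcribed minute (assumed rate) times an hour of test audio.
rate_per_min = 0.006
print(f"~${rate_per_min * 60:.2f}")  # ~$0.36
```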

Neo4j Viewer

Browser: https://browser.neo4j.io/
Bolt URL: localhost:7687
User: neo4j
Password: password
