Real-time assistive vision for colorblind & low-vision users
Hackathon Winner • View on Devpost
TrueLight turns your phone into an intelligent camera that detects objects, analyzes their colors, and alerts you when something in your environment may be hard to distinguish given your specific type of color vision deficiency.
- 🎯 Real-time object detection — YOLOv3-tiny identifies objects in your camera feed
- 🎨 Personalized color analysis — Flags colors you specifically struggle with (protanopia, deuteranopia, tritanopia)
- 🔊 Audio alerts — Natural ElevenLabs voice (or built-in TTS) announces hazards hands-free
- 🗣️ Voice commands — Say "Hey TrueLight, what do you see?" for instant scene descriptions
- 🧪 Built-in Ishihara test — Determines your color vision type during onboarding
- ♿ Low Vision mode — Prioritizes alerts by proximity instead of color
- 🤖 AI Assistant — Gemini-powered conversational Q&A about your surroundings
- 🎨 Adaptive UI — Interface colors automatically adjust so they're always visible to you
Three decoupled services, each with a single responsibility:
```
┌─────────────────┐    REST     ┌──────────────┐    REST     ┌─────────────────────┐
│   Mobile App    │ ──────────▶ │ Next.js API  │ ──────────▶ │  Python Detection   │
│  React Native   │ ◀────────── │   Gateway    │ ◀────────── │   FastAPI + YOLO    │
│      Expo       │             │  Auth / DB   │             │     OpenCV HSV      │
└─────────────────┘             └──────────────┘             └─────────────────────┘
```
| Layer | Tech | Role |
|---|---|---|
| Mobile | React Native, Expo, TypeScript, Zustand | Camera capture, bounding box rendering, TTS, voice commands |
| Backend | Next.js 15, SQLite, JWT, bcrypt | Auth, API proxy, Roboflow fallback, telemetry |
| Detection | Python, FastAPI, OpenCV, YOLOv3-tiny | Object detection, HSV color analysis, priority heuristics |
The Python service runs YOLOv3-tiny inference in under 80ms on CPU. Detected regions are passed through an HSV color analyzer that maps pixel distributions against your CVD profile. Red detection uses dual-range masking (0-10° and 170-180° on the hue wheel) to handle the HSV wraparound.
- ElevenLabs — Natural-sounding voice for a premium experience
- Expo Speech — Offline fallback that works without internet
- Priority-based debouncing ensures critical alerts are spoken first
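Priority-based debouncing can be sketched like this (a minimal sketch — the class name, priority scheme, and cooldown length are assumptions, not the app's actual values):

```python
import time
from typing import Optional

class AlertDebouncer:
    """Suppress repeated low-priority alerts so critical ones are spoken first.

    Higher priority number = more urgent. An alert is spoken if it
    outranks the last spoken alert, or if the cooldown has elapsed.
    """

    def __init__(self, cooldown_s: float = 3.0):
        self.cooldown_s = cooldown_s
        self.last_priority = -1
        self.last_time = 0.0

    def should_speak(self, priority: int, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        # A more urgent alert interrupts immediately; equal or lower
        # priority must wait out the cooldown window.
        if priority > self.last_priority or now - self.last_time >= self.cooldown_s:
            self.last_priority = priority
            self.last_time = now
            return True
        return False
```

The effect: a "red traffic light" alert can cut in over a repeating "cup on table" announcement, while the same low-priority alert won't be re-spoken every frame.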
Hands-free interaction powered by wake word detection:
| Command | Action |
|---|---|
| "What do you see?" | Describe current scene |
| "What color is the light?" | Check traffic signal state |
| "Can I cross?" | Traffic safety assessment |
| "Help me navigate" | Navigation assistance |
| "Stop" / "Quiet" | Silence current speech |
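One way to route transcripts to the actions above is plain keyword matching once the wake word has fired — a sketch only; the phrase table and action names are assumptions, not the app's implementation:

```python
from typing import Optional

# Each entry maps trigger phrases to an action keyword.
COMMANDS = [
    (("what do you see",), "describe_scene"),
    (("what color is the light",), "check_traffic_light"),
    (("can i cross",), "assess_crossing"),
    (("help me navigate",), "navigate"),
    (("stop", "quiet"), "silence"),
]

def match_command(transcript: str) -> Optional[str]:
    """Return the action keyword for a spoken transcript, or None."""
    text = transcript.lower()
    for phrases, action in COMMANDS:
        if any(phrase in text for phrase in phrases):
            return action
    return None
```

Substring matching keeps the dispatcher forgiving of filler words ("um, can I cross now?") without needing an NLU model on-device.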
Built-in Ishihara plate test with enhanced plates to determine your specific CVD type during onboarding — no external diagnosis needed.
| Mode | Behavior |
|---|---|
| Walking | Low detection thresholds — catches small objects early |
| Driving | High thresholds — only road-relevant objects, minimal false positives |
| Low Vision | Ignores color entirely — alerts based on object proximity (bounding box area / frame area) |
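The Low Vision heuristic amounts to ranking detections by the fraction of the frame their bounding box covers (bigger box ≈ closer object). A sketch, where the 5% announcement floor and the function names are assumed, not taken from the project:

```python
def proximity_score(box_w: float, box_h: float,
                    frame_w: int, frame_h: int) -> float:
    """Apparent nearness: bounding box area as a fraction of frame area."""
    return (box_w * box_h) / (frame_w * frame_h)

def top_alert(detections, frame_w=640, frame_h=480, min_ratio=0.05):
    """Pick the nearest detection worth announcing.

    `detections` is a list of (label, box_w, box_h) tuples. Anything
    covering less than `min_ratio` of the frame is ignored as too far
    away to matter.
    """
    scored = [(proximity_score(w, h, frame_w, frame_h), label)
              for label, w, h in detections]
    scored = [s for s in scored if s[0] >= min_ratio]
    return max(scored)[1] if scored else None
```

Because this ignores color entirely, it works identically for users whose low vision isn't a color vision deficiency at all.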
Prerequisites: Node.js 18+, Python 3.8+, Expo Go on your phone. Computer and phone must be on the same Wi-Fi network.
```bash
# 1. Python detection service
cd python-detection
pip install -r requirements.txt
python main.py
# Runs on localhost:8000
```

```bash
# 2. Backend API gateway
cd backend
cp .env.example .env   # Fill in API keys
npm install
npm run dev
# Runs on localhost:3000
```

```bash
# 3. Mobile app
cd mobile
cp .env.example .env   # Fill in API keys and backend URL
npm install
npx expo start
# Scan QR code with Expo Go
```

mobile/.env
| Variable | Description |
|---|---|
| `EXPO_PUBLIC_ROBOFLOW_API_KEY` | Roboflow API key (10k free calls/month) |
| `EXPO_PUBLIC_GEMINI_API_KEY` | Google Gemini API key for AI assistant |
| `EXPO_PUBLIC_ELEVENLABS_API_KEY` | ElevenLabs API key for natural TTS |
| `EXPO_PUBLIC_BACKEND_URL` | Backend URL (`http://YOUR_IP:3000`) |
backend/.env
| Variable | Description |
|---|---|
| `ROBOFLOW_API_KEY` | Roboflow API key |
| `GEMINI_API_KEY` | Gemini API key (optional backup) |
| `ELEVENLABS_API_KEY` | ElevenLabs API key for TTS endpoint |
```
TrueLight/
├── mobile/                # React Native (Expo) app
│   ├── app/               # Screens (home, camera, settings, login, onboarding)
│   ├── components/        # CameraView, BoundingBoxOverlay, ColorTestPlate
│   ├── constants/         # Accessibility colors, color profiles, Ishihara plates
│   ├── contexts/          # Auth context provider
│   ├── services/          # API, speech, voice commands, audio alerts,
│   │                      # motion tracking, speed detection, screen recording
│   └── store/             # Zustand state management
├── backend/               # Next.js API gateway
│   ├── app/api/           # REST endpoints (detect, auth, health, TTS)
│   └── lib/               # Auth (JWT), DB (SQLite), Roboflow detection
└── python-detection/      # FastAPI computer vision service
    ├── main.py            # Server + dynamic sensitivity configs
    ├── detector.py        # YOLOv3-tiny + NMS + fallback color regions
    ├── color_analyzer.py  # HSV color mapping + CVD problematic detection
    └── models/            # YOLO weights and config files
```
MIT
