Inspiration
In Greek mythology, Tiresias was a blind prophet who could see what others couldn't. This inspired us to build technology that gives visually impaired individuals a new way to "see" their surroundings.
Over 2.2 billion people globally have vision impairment. While guide dogs and white canes remain essential tools, they have limitations — they can't warn you about overhead obstacles, identify objects, or describe your environment. We asked: What if your iPhone could become an intelligent guide that perceives the world for you?
The convergence of powerful on-device AI, LiDAR sensors in modern iPhones, and real-time edge computing made this vision possible.
What it does
Tiresias transforms an iPhone into an intelligent navigation assistant:
- Real-time obstacle detection — Uses the camera and LiDAR to identify objects, people, and hazards in your path
- Depth sensing — LiDAR measures precise distances to obstacles (walls, furniture, stairs, curbs)
- Voice guidance — Natural language descriptions: "Person approaching from the left, 3 meters" or "Door on your right"
- Haptic feedback — Distinct vibration patterns warn of obstacles:
  - Intensity increases as objects get closer
  - Different patterns for different danger levels
  - Directional cues (left/right vibrations)
- Edge processing — Streams video to a local edge server for AI inference, minimizing latency
- Privacy-first — All processing happens locally, no cloud uploads
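As a rough sketch of how distance might drive the haptic feedback described above, here is the kind of mapping involved — the function names, thresholds, and 5-meter range are illustrative assumptions, not the app's actual logic:

```python
def haptic_intensity(distance_m: float, max_range_m: float = 5.0) -> float:
    """Map obstacle distance to a 0-1 haptic intensity: closer means stronger."""
    if distance_m <= 0:
        return 1.0
    # Linear falloff out to max_range_m; clamp to [0, 1].
    return max(0.0, min(1.0, 1.0 - distance_m / max_range_m))


def danger_level(distance_m: float) -> str:
    """Bucket distance into coarse danger levels for distinct vibration patterns."""
    if distance_m < 1.0:
        return "critical"
    if distance_m < 2.5:
        return "warning"
    return "notice"
```

On device, a value like this would be fed into a Core Haptics pattern's intensity parameter, with the danger level selecting which pattern plays.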
How we built it
iOS App (Swift/SwiftUI):
- Camera and LiDAR capture pipeline using AVFoundation and ARKit
- WebSocket streaming for real-time video transmission
- Haptic engine integration using Core Haptics for nuanced feedback
- VoiceOver-compatible UI with accessibility-first design
- Network resilience with automatic fallback between Wi-Fi and USB connections
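To give a flavor of the streaming side, here is a hypothetical shape for the per-frame WebSocket message the app could send (a JPEG-compressed frame plus a downsampled LiDAR depth map); the envelope and field names are our illustration, not the project's actual wire format:

```python
import base64
import json


def encode_frame_message(jpeg_bytes: bytes, depth_values: list,
                         timestamp_ms: int) -> str:
    """Package one camera frame plus LiDAR depth samples as a JSON text message."""
    return json.dumps({
        "type": "frame",
        "timestamp_ms": timestamp_ms,
        "jpeg_b64": base64.b64encode(jpeg_bytes).decode("ascii"),
        "depth": depth_values,  # downsampled depth map, row-major, meters
    })


def decode_frame_message(text: str):
    """Server-side inverse: recover the frame bytes, depth samples, and timestamp."""
    msg = json.loads(text)
    return base64.b64decode(msg["jpeg_b64"]), msg["depth"], msg["timestamp_ms"]
```

Keeping depth as a small downsampled array alongside the compressed frame keeps each message well under typical WebSocket frame limits while preserving enough spatial detail for obstacle ranging.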
Edge Server (Python/FastAPI):
- Real-time object detection using computer vision models
- Depth map processing from LiDAR data
- Spatial reasoning to determine obstacle positions and trajectories
- Natural language generation for voice guidance
- WebSocket server for low-latency bidirectional communication
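The spatial-reasoning and language-generation steps can be sketched as a small pure function: given a detection's class label, normalized horizontal position, and LiDAR distance, produce a phrase like the examples above. The thresholds and helper names here are our assumptions:

```python
def direction_from_x(x_norm: float) -> str:
    """Classify a detection's horizontal center (0 = left edge, 1 = right edge)."""
    if x_norm < 0.33:
        return "on your left"
    if x_norm > 0.66:
        return "on your right"
    return "ahead"


def guidance_phrase(label: str, x_norm: float, distance_m: float) -> str:
    """Compose a short spoken cue, e.g. 'Person ahead, 3 meters'."""
    meters = round(distance_m)
    unit = "meter" if meters == 1 else "meters"
    return f"{label.capitalize()} {direction_from_x(x_norm)}, {meters} {unit}"
```

The resulting string would be sent back over the WebSocket and spoken on-device via speech synthesis.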
AI/ML Pipeline:
- Object detection for identifying obstacles, people, vehicles
- Depth estimation fusion (LiDAR + monocular depth)
- Path analysis to determine safest walking route
- Priority system to warn about most critical obstacles first
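One way to realize the priority system is to score each detection by class danger and proximity, then announce the highest score first. This is a minimal sketch with illustrative class weights, not the project's tuned model:

```python
# Higher weight = more dangerous object class (illustrative values).
CLASS_WEIGHTS = {"vehicle": 3.0, "stairs": 2.5, "person": 2.0, "furniture": 1.0}


def criticality(label: str, distance_m: float) -> float:
    """Score a detection: dangerous classes and nearby objects rank higher."""
    weight = CLASS_WEIGHTS.get(label, 1.0)
    # Inverse distance, clamped to avoid division by zero at contact range.
    return weight / max(distance_m, 0.1)


def prioritize(detections):
    """Order (label, distance_m) detections so the most critical comes first."""
    return sorted(detections, key=lambda d: criticality(*d), reverse=True)
```

Scoring by weight over distance means a distant vehicle can still outrank nearby furniture when the class weight is high enough, which matches the goal of warning about the most critical obstacle first.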
Track Submission:
- Social Good
- Machine Learning / AI
Built With
- ai
- fastapi
- machine-learning
- python
- react
- swift
- websockets
- yolo