IRIS - Turn Darkness into Direction
Intelligent Radar for Independent Sightless
💡 Inspiration
My uncle has been blind since birth. Growing up, I watched him navigate by touching every wall, counting steps, memorizing exact distances. In new places, he'd grip my shoulder while I'd describe everything. But words aren't enough. You can't describe every chair leg, every corner, every low-hanging sign.
The worst part? Watching him in crowded places. People don't see the cane in time. They bump into him, he apologizes even though it's not his fault. He has to touch everything - public handrails, dirty walls, random surfaces - just to know where he is.
One day he asked me, "You're studying computer science, right? Why can't my phone tell me where walls are?" That hit different. He was right. We have self-driving cars, yet blind people still navigate with essentially the same cane they had a century ago.
🎯 What It Does
Our app uses the iPhone's LiDAR scanner to create a real-time 3D map of your surroundings. But blind people don't need a map - they need directions. So we convert depth data into simple haptic feedback:
- 4 quick taps (dots) = turn left
- 1 long vibration (dash) = turn right
- 2 gentle taps = go straight, path is clear
- Continuous buzz = STOP, something's too close
It scans 60 times per second and detects obstacles up to 5 meters away. The genius part? It remembers places you visit often, building a spatial memory of your home, office, and favorite spots, which cuts battery use by roughly 70% in familiar locations.
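A minimal sketch of how this vocabulary could be encoded in Swift. The `HapticEvent` and `Direction` names and the exact timings are our illustration, not the actual IRIS source:

```swift
// One haptic event: how long the motor runs and how hard (0...1).
struct HapticEvent {
    let duration: Double
    let intensity: Float
}

// The four signals from the vocabulary above.
enum Direction {
    case left, right, straight, stop
}

// Map each direction to its Morse-like pattern.
func pattern(for direction: Direction) -> [HapticEvent] {
    switch direction {
    case .left:     // 4 quick taps (dots)
        return Array(repeating: HapticEvent(duration: 0.05, intensity: 1.0), count: 4)
    case .right:    // 1 long vibration (dash)
        return [HapticEvent(duration: 0.6, intensity: 1.0)]
    case .straight: // 2 gentle taps
        return Array(repeating: HapticEvent(duration: 0.05, intensity: 0.4), count: 2)
    case .stop:     // continuous buzz until the obstacle clears
        return [HapticEvent(duration: .infinity, intensity: 1.0)]
    }
}
```

Each pattern would then be rendered by a haptic engine; keeping the vocabulary as plain data makes it trivial to tune timings without touching playback code.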
🚧 How We Built It
We used ARKit's Scene Depth API to access the raw LiDAR point cloud. Mesh reconstruction runs in real time at 60Hz. For haptics, we implemented Core Haptics with custom Morse-like patterns.
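As a sketch of the ARKit side, enabling scene depth and receiving per-frame depth maps looks roughly like this (the delegate method and frame semantics are standard ARKit; the `DepthReader` class is our illustration):

```swift
import ARKit

final class DepthReader: NSObject, ARSessionDelegate {
    // Enable LiDAR scene depth on devices that support it.
    static func makeConfiguration() -> ARWorldTrackingConfiguration {
        let config = ARWorldTrackingConfiguration()
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
            config.frameSemantics.insert(.sceneDepth)
        }
        return config
    }

    // ARKit calls this up to 60 times per second.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let depthMap = frame.sceneDepth?.depthMap else { return }
        // depthMap is a CVPixelBuffer of Float32 distances in meters,
        // one value per pixel; hand it to the obstacle detector here.
        _ = depthMap
    }
}
```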
The breakthrough? Eye-level scanning only - we scan the middle third of the screen to avoid detecting the ground as an obstacle. This took three days to figure out. The ground kept triggering false "STOP" warnings.
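The middle-third filter boils down to a pass over the depth buffer that skips the top and bottom rows. This is a deliberately simplified sketch (no noise filtering or clustering, and the function name is ours):

```swift
import CoreVideo

// Find the closest obstacle, scanning only the middle third of the
// depth map rows so the floor never reads as a wall.
func nearestObstacle(in depthMap: CVPixelBuffer) -> Float? {
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return nil }

    var nearest = Float.greatestFiniteMagnitude
    // Only rows in [height/3, 2*height/3): roughly eye level.
    for row in (height / 3)..<(2 * height / 3) {
        let rowPtr = (base + row * rowBytes).assumingMemoryBound(to: Float32.self)
        for col in 0..<width {
            let d = rowPtr[col]
            if d > 0, d < nearest { nearest = d }
        }
    }
    return nearest < Float.greatestFiniteMagnitude ? nearest : nil
}
```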
We built an Einstein-level spatial memory system using:
- WiFi fingerprinting to identify rooms (BSSID scanning)
- Temporal learning that remembers time-based patterns (door closed at night)
- Obstacle permanence scoring (0.0 for chairs, 1.0 for walls)
- Adaptive scanning modes that switch from 60Hz aggressive to 10Hz predictive
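The permanence-scoring and adaptive-rate ideas above can be sketched with a simple exponential moving average. `ObstacleMemory`, the 0.2 learning rate, and the 0.8 threshold are illustrative choices, not IRIS's actual parameters:

```swift
// Exponential-moving-average permanence score per obstacle:
// 1.0 = always there (a wall), 0.0 = transient (a chair someone moved).
struct ObstacleMemory {
    private(set) var permanence: [String: Double] = [:]
    let alpha = 0.2  // learning rate

    mutating func observe(_ id: String, present: Bool) {
        let old = permanence[id] ?? 0.5  // unknown obstacles start neutral
        permanence[id] = old + alpha * ((present ? 1.0 : 0.0) - old)
    }

    // Once a room is mostly "permanent", drop from 60Hz aggressive
    // scanning to 10Hz predictive scanning to save battery.
    func scanRateHz() -> Int {
        let scores = permanence.values
        guard !scores.isEmpty else { return 60 }
        let mean = scores.reduce(0, +) / Double(scores.count)
        return mean > 0.8 ? 10 : 60
    }
}
```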
💪 Challenges We Ran Into
Ground detection hell: LiDAR sees EVERYTHING. For three days, our app thought the floor was a wall. We tried filters, height maps, plane detection - nothing worked. Finally realized we could just restrict scanning to the middle third of the frame and ignore the ground entirely.
Haptic overload: Early versions buzzed constantly for every object. Overwhelming. We added 2.5-second minimum delays and prioritized only critical directions. Less is more.
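The 2.5-second rate limit with STOP priority reduces to a small gate like this (a sketch; `HapticGate` is an illustrative name):

```swift
import Foundation

// Rate-limit haptic cues: at most one cue per 2.5 s, unless the new
// cue is critical (STOP), which always fires immediately.
struct HapticGate {
    var lastFired: Date? = nil
    let minimumGap: TimeInterval = 2.5

    mutating func shouldFire(isCritical: Bool, now: Date = Date()) -> Bool {
        if isCritical { lastFired = now; return true }  // STOP overrides
        if let last = lastFired, now.timeIntervalSince(last) < minimumGap {
            return false  // too soon after the last cue; stay quiet
        }
        lastFired = now
        return true
    }
}
```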
Apple's walled garden: You need a paid developer account to properly test. We kept hitting provisioning errors. The TestFlight submission process is a nightmare without an account.
Battery drain: First version drained 40% per hour. We optimized with the spatial memory system - once it knows a space, it stops aggressively scanning and only checks for changes. Got it down to under 5% per hour.
🏆 Accomplishments That We're Proud Of
We tested it blindfolded at this hackathon venue. Set up an obstacle course with chairs, tables, and bags scattered around. Our team member walked through the entire thing without hitting anything. Other teams stopped to watch - they couldn't believe someone could navigate blindfolded using just phone vibrations.
The battery optimization is insane - we got it from 40% per hour down to under 5%, enough for a full day of use on one charge. One iPhone ran IRIS throughout our 36-hour coding session.
Simplicity: Other projects have complex audio cues, voice commands, special gestures. Ours? Dots and dashes. That's it. We watched a volunteer learn it in literally 2 minutes and navigate successfully.
VoiceOver compatibility: Most apps break Apple's accessibility features. Ours enhances them. Blind users don't have to choose between their screen reader and our navigation.
📚 What We Learned
LiDAR is noisy: Raw depth data needs massive filtering. We learned point cloud processing, RANSAC for plane detection, spatial hashing for efficient lookups.
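Spatial hashing, for example, amounts to bucketing 3D points into voxels so "what's near this point?" doesn't scan the whole cloud. A simplified sketch (real code would also check the 26 adjacent cells):

```swift
// Spatial hash: bucket 3D points into a grid of `cell`-sized voxels
// so neighborhood lookups are O(1) instead of O(points).
struct SpatialHash {
    let cell: Float
    private var buckets: [SIMD3<Int32>: [SIMD3<Float>]] = [:]

    init(cell: Float) { self.cell = cell }

    private func key(_ p: SIMD3<Float>) -> SIMD3<Int32> {
        let k = (p / cell).rounded(.down)
        return SIMD3<Int32>(Int32(k.x), Int32(k.y), Int32(k.z))
    }

    mutating func insert(_ p: SIMD3<Float>) {
        buckets[key(p), default: []].append(p)
    }

    // Points sharing a voxel with `p`.
    func sameCell(as p: SIMD3<Float>) -> [SIMD3<Float>] {
        buckets[key(p)] ?? []
    }
}
```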
Accessibility = simplicity: Every "cool" feature we added made it harder for actual blind users. Simple is better.
Test blind: We spent hours blindfolded, walking into walls. You can't design accessibility from the outside.
ARM optimization matters: NEON instructions for vector math made distance calculations 3x faster. Neural Engine processes depth maps in 8ms vs 25ms on CPU.
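As a toy illustration of the vectorized math (not IRIS's actual kernels): Swift's SIMD types lower to NEON vector instructions on ARM, so per-point distance checks stay cheap even at 60Hz:

```swift
// Batched distance check: subtraction, multiply, and horizontal sum
// on SIMD3<Float> all compile to vector instructions on ARM.
func closestDistance(from camera: SIMD3<Float>, to points: [SIMD3<Float>]) -> Float? {
    points.map { p -> Float in
        let d = p - camera
        return (d * d).sum().squareRoot()
    }.min()
}
```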
🚀 What's Next for IRIS
The $50 wristband dream: We've found suppliers who can do mini depth sensors for $30 in bulk. Add a haptic motor, Bluetooth chip, and battery - we could make something truly affordable.
Crowd-sourced maps: Imagine if every IRIS user contributed to a shared database. Coffee shops, malls, airports - all pre-mapped by the community.
Voice annotations: Context that haptics can't provide. "Careful, wet floor sign" or "construction ahead."
Smart city integration: Some cities have Bluetooth beacons for navigation. We could combine that with LiDAR for perfect indoor accuracy.
Guide Dog partnerships: This isn't meant to replace guide dogs but supplement them. For people on waiting lists or in areas where guide dogs aren't available.
But honestly? Right now we just want blind people to use it. To walk without fear. To stop apologizing for existing in public spaces. To have independence.
That's all we want.
🛠 Built With
- Swift
- SwiftUI
- ARKit
- LiDAR
- Core Haptics
- SQLite
- ARM NEON SIMD
- Metal Shaders
- Core Location
🤝 Team
Built with ❤️ at the hackathon by Team I.R.I.S
Try it: GitHub Repository

