R-EYE-DER is a hands-free, real-time coaching and safety tool for drivers.
Inspiration
A total of 40,901 people died in motor vehicle crashes in 2023.
More than one in three of the roughly 6,000 motorcyclist fatalities were single-vehicle crashes, i.e., collisions with stationary objects.
Driver fatigue was a factor in 13 percent of all commercial motor vehicle crashes, and 5,837 large trucks were involved in fatal crashes in 2022, up 49% over the last decade.
There are two attention problems this product addresses:
In extreme sports or prolonged vehicle operation, the stress of constantly scanning ahead for oncoming hazards commonly leads to tunnel vision and reduced reaction speed, a neurophysiological phenomenon called Directed Attention Fatigue (DAF).
When motorists are in a state of panic or fatigue, they stare at the hazard (potholes, other vehicles, pedestrians, barriers, etc.), and their vehicle follows.
Because a vehicle naturally follows a driver's line of sight, this tendency, known as target fixation, can backfire and cause drivers to unintentionally steer directly into danger.
The core problem is gaze fixed on the wrong thing at the wrong time.
The Two Primary Use Cases of R-EYE-DER
A hands-free, lightweight visual attention coaching app that builds safer visual habits that persist for newer riders, and acts as a safety net for career motorists.
We apply motor-learning principles to teach people when to intentionally scan and when to commit their gaze.
For Motorcyclists/Bikers/Skaters: Training riders to look where they want to go.
Scenario: Panic → Staring at obstacle → Steer into it
- Manage Attention: When the rider stares at an obstacle, the system detects the prolonged gaze on the hazard and prompts them to consciously divert their gaze to a safe exit.
- Look where you want to go: Gives instant visual and auditory feedback through peripheral cues, guide lines, and subtle alerts as real-time vision coaching.
- Cornering Technique: Trains “look through the turn” behavior.
(Personal/Everyday Use/Novice Drivers)
For Car/Semi Truck Drivers: Breaking tunnel vision before fatigue turns into an accident.
Scenario: Fatigue → tunnel vision
- Scan Ahead: Detects fatigue and tunnel vision through prolonged low-variance gaze.
- Prompts micro gaze shifts and scanning to restore situational awareness
(Safety/Commercial Impact/Scalable/Experienced Drivers)
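The tunnel-vision trigger described above (prolonged low-variance gaze) can be sketched as a small detector over a rolling window of gaze samples. This is a minimal illustration of the idea, not the app's actual implementation; the window length and variance threshold are made-up values.

```python
from collections import deque

class TunnelVisionDetector:
    """Flags possible tunnel vision when gaze variance stays low.

    Window length and threshold are illustrative, not the app's real values.
    """

    def __init__(self, window=90, threshold=0.0004):
        self.points = deque(maxlen=window)  # recent (x, y) gaze samples
        self.threshold = threshold          # variance floor in normalized units

    def update(self, x, y):
        """Add one gaze sample; return True when gaze looks 'stuck'."""
        self.points.append((x, y))
        if len(self.points) < self.points.maxlen:
            return False                    # not enough history yet
        xs = [p[0] for p in self.points]
        ys = [p[1] for p in self.points]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        var = sum((px - mx) ** 2 + (py - my) ** 2
                  for px, py in self.points) / len(self.points)
        return var < self.threshold         # True → prompt a micro gaze shift
```

When the detector fires, the coaching layer would prompt the scanning behavior described above.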
How we built it
R-EYE-DER is built on the Raven Framework, a Python-based SDK for developing gaze-driven applications on Raven Glass AR smart glasses. Our goal was to prove that real-time visual attention coaching is technically feasible, responsive, and comfortable enough to use while moving.
We used cups detected by YOLO as stand-ins for tangent points along a curve. This may look simple, but it solves several hard problems at once:
- It proves real-time object detection and gaze alignment are fast enough to run continuously
- It validates the coaching loop: detect → evaluate gaze → give feedback → adapt
- It avoids spending limited time training models instead of validating system behavior

In a real deployment, object detection and scene understanding can run on cloud infrastructure, and models can be trained to directly detect road curvature, tangents, exits, and hazards. The same attention logic we built remains unchanged.
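The coaching loop (detect → evaluate gaze → give feedback → adapt) can be sketched as a single step function. Here `detect_targets`, `read_gaze`, and `emit_cue` are hypothetical stand-ins for the YOLO detector, the eye tracker, and the AR/audio cue system; the tolerance value is illustrative.

```python
def coaching_step(detect_targets, read_gaze, emit_cue, tolerance=0.08):
    """One pass of the coaching loop; returns True/False for on/off target,
    or None when nothing was detected."""
    targets = detect_targets()          # 1. detect: e.g. cup/tangent positions
    if not targets:
        return None
    gaze = read_gaze()                  # 2. evaluate: current gaze point

    def dist(t):                        # distance from gaze to a target
        return ((t[0] - gaze[0]) ** 2 + (t[1] - gaze[1]) ** 2) ** 0.5

    nearest = min(targets, key=dist)
    on_target = dist(nearest) <= tolerance
    if not on_target:
        emit_cue(nearest)               # 3. feedback: cue gaze toward target
    return on_target                    # 4. adapt: caller tunes tolerance/cadence
```

The caller would run this at the frame-processing cadence and adjust tolerance or feedback intensity over time.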
Core Technologies
Raven Framework: Provided the foundation for gaze tracking, AR rendering, and sensor access.
YOLOv7: Used for real-time object detection to identify the target location for gaze fixation. In our demo, YOLO detects cups placed along the tangents of a road curve.
OpenCV: Computer vision pipeline for camera capture, image preprocessing, and frame processing
Audio System: Custom audio panning system that provides feedback based on target location relative to gaze
PySide6/Qt: UI framework powering the Raven Framework's widget system
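For the audio feedback, one plausible mapping from "target location relative to gaze" to a stereo pan value looks like the sketch below. This is an assumption about how such a panning function could work, not the app's actual code; the function name and scaling are invented for illustration.

```python
def pan_for_target(target_x, gaze_x, width=1.0):
    """Map the target's horizontal offset from the gaze point to a stereo
    pan in [-1.0, 1.0] (-1 = full left, +1 = full right).

    Hypothetical sketch; the real app's scaling may differ.
    """
    offset = (target_x - gaze_x) / width          # normalized horizontal offset
    return max(-1.0, min(1.0, 2.0 * offset))      # clamp to the valid pan range
```

A target far to the right of the gaze point pans the cue hard right, nudging the driver's eyes in that direction.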
Challenges we ran into
Running YOLOv7 inference while keeping the UI smooth was our biggest technical challenge. We addressed this by running vision inference in a separate worker process and adding idle vs. active state management to avoid unnecessary computation. We processed frames at controlled intervals to balance responsiveness and stability, so the system felt responsive without overheating the device or stalling the UI.
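The pattern of off-UI inference at a controlled cadence can be sketched as a small loop. `detect` stands in for the YOLOv7 call, and a thread is used here only to keep the sketch self-contained (the app uses a separate worker process); the interval value is illustrative.

```python
import queue
import threading
import time

def inference_loop(frame_q, result_q, detect, interval_s=0.2):
    """Pull frames, run detection off the UI thread, and pace the work.

    `detect` is a stand-in for YOLOv7 inference; the sleep bounds how often
    frames are processed so the device stays cool and the UI stays smooth.
    """
    while True:
        frame = frame_q.get()
        if frame is None:               # sentinel → clean shutdown
            break
        result_q.put(detect(frame))     # heavy inference stays off the UI path
        time.sleep(interval_s)          # controlled processing cadence
```

The UI side only drains `result_q`, so a slow detection never blocks rendering.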
Another major difficulty was deciding how to find the focal point for a turning motion. Initially we considered training an ML model on existing videos that show the focal point while driving, but the available dataset was too small and too difficult to work with. Our mentor suggested using physical markers to find the focal point; for us, those were cups representing the white stripes on roads. We chose cups because they were free.
An unexpected challenge while creating our application was audio. Because the Raven Glass hardware does not currently have a stereo solution, we couldn't provide directional sound to help drivers look in a specific direction. We also found that switching scenes after playing a sound would sometimes crash the application; fortunately, we managed to fix this issue.
Demo Mode Constraints
Because judges could not use live camera hardware, we extended the vision worker protocol to support pre-recorded video input. This allowed us to demonstrate the full coaching experience—gaze tracking, detection, feedback, and state transitions—without compromising realism.
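The frame-source swap behind demo mode can be illustrated by having the pipeline consume any object that yields frames, so a pre-recorded clip slots in where the live camera would be. The class and function names here are invented for illustration; in the real app the recorded source would wrap something like OpenCV's `cv2.VideoCapture` on a file path.

```python
class ListSource:
    """Frame source backed by pre-loaded frames (demo mode stand-in).

    A real recorded source would read frames from a video file instead.
    """
    def __init__(self, frames):
        self._frames = frames

    def frames(self):
        yield from self._frames

def run_pipeline(source, process):
    """Feed every frame from any source through the same processing step,
    so live and recorded input share one code path."""
    return [process(frame) for frame in source.frames()]
```

Because the coaching pipeline never touches the camera directly, swapping sources requires no changes to detection, feedback, or state logic.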
Accomplishments that we're proud of
Real-Time Object Detection: We successfully integrated YOLOv7 for on-device inference and used it to compute meaningful attention targets in real time.
What we learned
Real-Time ML Inference: Running computer vision models on-device requires careful optimization. Separating inference into worker processes, managing model warmup, and balancing accuracy vs. speed are all critical considerations.
State Management at Scale: We managed a complex flow—welcome → calibration → use mode → route → coaching—by clearly separating UI rendering from business logic and using enums for state transitions. This kept the system understandable as it grew.
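The enum-driven state flow described above can be sketched as an explicit transition table. The state names follow the flow in the text, but the exact edges (e.g., looping from coaching back to route) are assumptions for illustration.

```python
from enum import Enum, auto

class AppState(Enum):
    WELCOME = auto()
    CALIBRATION = auto()
    USE_MODE = auto()
    ROUTE = auto()
    COACHING = auto()

# Allowed edges for welcome → calibration → use mode → route → coaching;
# the COACHING → ROUTE loop is an assumed example, not confirmed app behavior.
TRANSITIONS = {
    AppState.WELCOME: {AppState.CALIBRATION},
    AppState.CALIBRATION: {AppState.USE_MODE},
    AppState.USE_MODE: {AppState.ROUTE},
    AppState.ROUTE: {AppState.COACHING},
    AppState.COACHING: {AppState.ROUTE},
}

def advance(current, nxt):
    """Return the new state, rejecting transitions the flow does not allow."""
    if nxt not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current.name} -> {nxt.name}")
    return nxt
```

Centralizing the edges in one table is what keeps UI rendering decoupled from the business logic as new screens are added.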
Performance Profiling: Identifying bottlenecks (widget recreation, image I/O, YOLO inference) taught us the importance of profiling before optimizing. Not all "slow" code is where you think it is.
What's next for R-EYE-DER
On-Device Deployment: Deploy to Raven Glass hardware for real-world validation, while offloading heavy perception tasks to cloud compute for scalability and model complexity.
Expanded Object Detection: Train custom models to detect:
- Road curvature and tangents
- Vehicles, pedestrians, debris, potholes
- Multi-class hazards with priority-based attention targeting
Advanced Coaching Algorithms:
- Predict attention fatigue before it becomes dangerous
- Personalize coaching based on learned scanning patterns
- Adapt difficulty dynamically as skill improves
G-Map data integration:
- Remind drivers of important signs (e.g., speed limit, stop sign)
- Pull map data such as road curvature to improve the focal-point algorithm
Data Collection & Analytics: Collect anonymized gaze and coaching data to measure effectiveness, improve algorithms, and demonstrate real-world safety impact.