Inspiration

  • Driving is an integral part of our daily lives, but it's disheartening to learn that tens of thousands of lives are lost each year in car accidents. According to the Annual United States Road Crash Statistics, over 46,000 people lose their lives in car crashes annually. Many of us have experienced the tragic loss of someone we know due to a car accident. It's this sobering reality that inspired us to find a solution to this problem and save countless lives. As we delved into the issue, we discovered that drowsy driving is a significant contributor to accidents, with drivers falling asleep at the wheel. Therefore, we embarked on a mission to reduce drowsy driving and enhance road safety for people worldwide.

What it Does

  • Our solution employs facial recognition technology to locate a driver's face and, more specifically, their eyes. By monitoring the eyes, the system determines whether they are open or closed. If the driver's eyes close, the program automatically places a call to the driver's phone; the ringing or vibration serves as a wake-up call, preventing the driver from drifting into sleep. Our goal is to keep drivers alert and safe on the road.
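The alert decision above can be sketched as a simple consecutive-frame check. This is an illustrative sketch only: the frame threshold and function names are assumptions, not the project's actual values.

```python
# Illustrative sketch of the alert logic described above: place the
# phone call only after the eyes have been closed for several
# consecutive frames, so normal blinks do not trigger false alarms.
# The threshold below is an assumed value, not the project's setting.
CLOSED_FRAMES_BEFORE_ALERT = 20

def should_place_call(eye_closed_history):
    """eye_closed_history: per-frame booleans, True = eyes closed."""
    recent = eye_closed_history[-CLOSED_FRAMES_BEFORE_ALERT:]
    return len(recent) == CLOSED_FRAMES_BEFORE_ALERT and all(recent)
```

A blink lasting a few frames stays below the threshold, while a sustained closure triggers the call.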

How We Built It

  • Our project, "DrowsyDriver," was created to address the critical issue of drowsy driving, which contributes to thousands of road accidents and fatalities each year. We built the system using a combination of Python, OpenCV, dlib, and Twilio to monitor and alert drivers who exhibit signs of drowsiness.
  • Facial landmark detection: We harnessed dlib, a popular library for face detection and landmark prediction. Its "shape_predictor" model identifies facial landmarks and, most importantly, the coordinates of the driver's eyes.
  • Eye Aspect Ratio (EAR): To detect drowsiness, we calculated the EAR, a metric that quantifies how open the driver's eyes are. This required computing Euclidean distances between vertical and horizontal eye landmarks.
  • Video stream processing: We used the imutils library to handle video streaming from the camera, continuously capturing frames and processing them in real time.
  • Drowsiness detection: By monitoring the EAR over consecutive frames, we determined whether the driver's eyes were open or closed. If the EAR fell below a predefined threshold, indicating drowsiness, the system triggered an alert.
  • Twilio integration: We integrated Twilio's services to place a phone call when drowsiness was detected. The ringing phone acts as a wake-up call, mitigating the risk of falling asleep at the wheel.
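The EAR computation described above can be sketched with the standard library alone. Landmark ordering follows dlib's 68-point convention; the threshold constant and the sample coordinates are illustrative assumptions, not the team's exact settings.

```python
# Sketch of the Eye Aspect Ratio (EAR) described above. Each eye is six
# (x, y) landmarks in dlib's order: p0/p3 are the horizontal corners,
# while p1/p2 and p4/p5 are the upper and lower lid points.
from math import dist

def eye_aspect_ratio(eye):
    """eye: list of six (x, y) landmark points for one eye."""
    a = dist(eye[1], eye[5])  # first vertical distance
    b = dist(eye[2], eye[4])  # second vertical distance
    c = dist(eye[0], eye[3])  # horizontal distance
    return (a + b) / (2.0 * c)

# Illustrative value only -- the project's actual setting may differ.
EAR_THRESHOLD = 0.25  # below this, the eyes are treated as closed

# A wide-open eye yields a high ratio; a nearly shut eye a low one.
open_eye = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]
shut_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]
```

In the real system, the six points per eye would come from dlib's "shape_predictor" output rather than hard-coded coordinates.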

Challenges We Ran Into

  • Our journey to create this life-saving system was marked by various challenges that we overcame to develop a fully functional solution. The core of our system relies on programming, and we encountered obstacles in areas like object recognition within video frames. We adapted our approach to improve the identification of a driver's eyes, a critical element of our facial recognition code. Initially, we considered using an alarm for alerting drowsy drivers, but we faced issues with integrating it with our program's audio element. As a result, we transitioned to using phone calls, a change that proved more effective in consistently alerting drivers.

Accomplishments We're Proud Of

  • This hackathon was a debut experience for most of our team, and as second-year students, our technical expertise was relatively limited compared to other participants. Despite these challenges, we are immensely proud of our achievement in creating a functional and comprehensive program with the potential to save lives. Our learning curve was steep, especially in terms of working with Python, OpenCV, NumPy, and other essential resources. We feel well-prepared for future projects and more significant challenges in software development. Our greatest accomplishment as newcomers to the field is the creation of a project that can make a tangible impact.

What We Learned

  • Our journey in this hackathon has been a valuable learning experience, expanding our technical skill set beyond our expectations. We delved into the intricacies of object recognition in video, enabling us to locate a driver's eyes within a face using facial recognition. Additionally, we successfully implemented phone call functionality, deepening our understanding of Python, OpenCV, NumPy, Twilio, and more. These seemingly small tasks have significantly improved our programming skills and knowledge, and we are now better equipped to tackle future projects involving video tracking, facial recognition, and the other techniques we explored during this project.

What's Next for DrowsyDriver

  • We're enthusiastic about continuing the development of DrowsyDriver and introducing new, impactful features. Our vision includes integrating our program directly into cars or cameras, increasing accessibility, and gathering user feedback. We also aim to incorporate personalized messages for users upon awakening, further enhancing the user experience. Improving various elements of the program can enhance its overall efficiency. Additionally, we see the potential to expand the system's recognition capabilities to detect other body parts, such as the driver's arms, for a more comprehensive understanding of the driver's alertness. In essence, our project is a versatile platform with room for expansion, and we're excited about the opportunity to bring it closer to real-world implementation.

Built With

  • Python
  • OpenCV
  • dlib
  • NumPy
  • imutils
  • Twilio
