Hey there! Thanks for coming; see our presentation for more: https://docs.google.com/presentation/d/18BXyRwnAm_WxYAeCLkGuQTC35s1-44rtN6GIFZa9xmk/edit?usp=sharing.
Inspiration
Our goal was to expand the reach of digital education by making it more inclusive. We focused on integrating AI to support students with motor function disabilities, recognizing that traditional input methods often create barriers for these learners.
What It Does
BlinkBot enables users to perform mouse operations, such as moving the cursor and clicking, through simple eye blinks. It also supports voice commands using advanced speech recognition algorithms, offering a fully hands-free way to interact with a computer.
How We Built It
We implemented real-time facial landmark tracking using OpenCV and MediaPipe, allowing the webcam to monitor eye movement and blinking for cursor control. Opening the mouth initiates a voice input session, processed via speech recognition. The user interface, built with Tkinter, includes sensitivity controls and real-time voice feedback, tying all components into a seamless and user-friendly experience.
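The core of the blink-to-click idea can be sketched in a few lines: MediaPipe reports facial landmarks as (x, y) points, and the eye aspect ratio (EAR) drops sharply when the eye closes. The landmark ordering, 0.21 threshold, and frame count below are illustrative assumptions for the sketch, not BlinkBot's exact values.

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered
    [left corner, top-1, top-2, right corner, bottom-2, bottom-1].
    Returns the ratio of vertical eye opening to eye width."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

class BlinkDetector:
    """Registers a blink only after the EAR stays below the threshold
    for `min_frames` consecutive frames, filtering out frame noise."""
    def __init__(self, threshold=0.21, min_frames=3):
        self.threshold = threshold
        self.min_frames = min_frames
        self.closed_frames = 0

    def update(self, ear):
        """Call once per video frame; returns True when a blink completes."""
        if ear < self.threshold:
            self.closed_frames += 1
            return False
        blinked = self.closed_frames >= self.min_frames
        self.closed_frames = 0
        return blinked
```

In a real loop, each frame's landmarks would come from MediaPipe's face mesh, and a True result from `update` would fire a click via a library such as PyAutoGUI.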
Challenges We Faced
One major challenge was fine-tuning blink detection to prevent unintentional clicks, particularly under varying lighting conditions. Accurately detecting mouth movements without misclassifying other facial expressions also proved complex. Additionally, keeping voice input responsive and lightweight, without relying on large models or external triggers, was a key technical hurdle.
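One common way to reduce false triggers like these is hysteresis: require the mouth aspect ratio (MAR) to rise well past an upper threshold before starting a voice session, and to fall below a lower threshold before it can trigger again. This is a minimal sketch of that idea, with illustrative threshold values rather than BlinkBot's tuned numbers.

```python
class MouthTrigger:
    """Hysteresis gate: a brief smile or speech-shaped mouth movement
    that hovers between the two thresholds never fires a session."""
    def __init__(self, open_thresh=0.6, close_thresh=0.4):
        self.open_thresh = open_thresh
        self.close_thresh = close_thresh
        self.listening = False

    def update(self, mar):
        """mar: mouth aspect ratio for the current frame.
        Returns True on the frame a voice session should start."""
        if not self.listening and mar > self.open_thresh:
            self.listening = True
            return True  # mouth opened decisively: start listening
        if self.listening and mar < self.close_thresh:
            self.listening = False  # mouth closed: session may re-arm
        return False
```

The gap between the two thresholds is what absorbs jitter: a single threshold would flicker on and off as the measured ratio hovers around it.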
Accomplishments We're Proud Of
We successfully developed a functional, hands-free control system using just a webcam and microphone. BlinkBot combines eye tracking, blink-based actions, and voice input into an intuitive interface. It runs in real time, is resource-efficient, and holds potential to greatly enhance digital accessibility for individuals with physical impairments.
What We Learned
We learned how to integrate computer vision, voice recognition, and AI into a cohesive application. The project deepened our understanding of accessibility needs and highlighted the importance of user-centered design when building assistive technologies.
What’s Next for BlinkBot
Our future plans include adding gesture-based scrolling, improved support for multi-monitor setups, and more customizable voice commands. We're also exploring integration with platforms like Google Docs and Google Classroom. Long-term, we aim to expand compatibility to mobile and tablet devices to make BlinkBot even more accessible.
