Inspiration

The idea for VISION came from the goal of creating an inclusive platform that empowers individuals, particularly those with limited mobility, to express their artistic creativity. By combining advanced technologies such as eye tracking and voice recognition, we aim to remove barriers in digital art creation and make it accessible to all.

What it does

VISION is an innovative software application that enables users to draw on a virtual canvas using only their eyes and voice commands. Eye tracking lets users control the mouse pointer, while speech recognition lets them change brush colors and sizes seamlessly. This hands-free approach not only improves accessibility but also allows for dynamic, interactive art creation.
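The voice side of this interaction boils down to mapping recognized phrases to drawing actions. A minimal sketch of such a command parser (the phrase vocabulary and color set here are illustrative, not our exact grammar):

```python
# Minimal voice-command parser: maps recognized phrases to drawing actions.
# The vocabulary below is illustrative; the real app may use different phrases.

COLORS = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}

def parse_command(phrase):
    """Return ('color', rgb) or ('size', n), or None if unrecognized."""
    words = phrase.lower().split()
    if len(words) == 2 and words[0] == "color" and words[1] in COLORS:
        return ("color", COLORS[words[1]])
    if len(words) == 2 and words[0] == "size" and words[1].isdigit():
        return ("size", int(words[1]))
    return None
```

Keeping the grammar this small is deliberate: short, fixed phrases are much easier for a recognizer to get right than free-form speech.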

How we built it

We developed VISION using the following technologies:

  • Python: the core programming language for the application.
  • OpenCV: image processing and eye tracking.
  • Pygame: the interactive drawing canvas and user input handling.
  • SpeechRecognition: voice commands for changing colors and brush sizes.
  • Visual Studio Code: the IDE used for coding and debugging.
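The glue between the OpenCV side and the Pygame side is turning a gaze estimate into a pointer position on the canvas. A sketch of that mapping, with exponential smoothing to tame jitter (the canvas size and smoothing factor are illustrative, and the normalized gaze input stands in for whatever the eye tracker produces):

```python
# Maps a normalized gaze position (0..1 in each axis, e.g. a pupil position
# estimated with OpenCV) onto canvas pixel coordinates, with exponential
# smoothing to reduce jitter before the point is drawn in Pygame.

class GazePointer:
    def __init__(self, width, height, alpha=0.3):
        self.width, self.height = width, height
        self.alpha = alpha          # smoothing factor: lower = smoother but laggier
        self.x = self.y = None      # last smoothed position

    def update(self, gx, gy):
        """gx, gy: normalized gaze in [0, 1]. Returns smoothed pixel coords."""
        px, py = gx * self.width, gy * self.height
        if self.x is None:          # first sample: no history to smooth against
            self.x, self.y = px, py
        else:                       # exponential moving average toward new sample
            self.x += self.alpha * (px - self.x)
            self.y += self.alpha * (py - self.y)
        return int(self.x), int(self.y)
```

Each frame, the smoothed coordinates can be fed straight to a Pygame draw call (e.g. `pygame.draw.circle`) at the returned position.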

Challenges we ran into

Throughout the development process, we encountered multiple challenges:

  1. Eye Tracking Accuracy: We had to conduct thorough calibration and testing to ensure precise tracking of eye movements.
  2. Voice Command Recognition: Achieving high accuracy in recognizing a wide range of voice commands, especially in noisy environments, was a significant challenge.
  3. User Interface Design: Designing an intuitive interface that caters to users with different levels of technical proficiency and accessibility requirements was a complex task.
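On the calibration point above: the core idea is to have the user look at a few known screen positions, record the raw tracker readings, and fit a mapping between the two. A minimal sketch using an independent least-squares linear fit per axis (the real calibration routine may use a richer model):

```python
# One-dimensional least-squares calibration: the user looks at known screen
# positions, we record raw tracker readings, and fit screen = a*raw + b for
# each axis independently. Pure-Python sketch of the idea.

def fit_axis(raw, screen):
    """Least-squares fit of screen = a*raw + b for one axis."""
    n = len(raw)
    mr = sum(raw) / n
    ms = sum(screen) / n
    num = sum((r - mr) * (s - ms) for r, s in zip(raw, screen))
    den = sum((r - mr) ** 2 for r in raw)
    a = num / den
    return a, ms - a * mr

def calibrate(raw_points, screen_points):
    """Fit x and y axes independently; returns ((ax, bx), (ay, by))."""
    xs = fit_axis([p[0] for p in raw_points], [p[0] for p in screen_points])
    ys = fit_axis([p[1] for p in raw_points], [p[1] for p in screen_points])
    return xs, ys
```

Using more calibration targets than unknowns (here, four corners for two parameters per axis) lets the least-squares fit average out per-sample tracking noise.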

Accomplishments that we're proud of

We are proud to have developed a fully functional application that seamlessly integrates eye tracking and voice recognition in an accessible, user-friendly way, letting users create art effortlessly.

What we learned

Throughout the project, we learned:

  1. The importance of user feedback in refining functionality and usability.
  2. Techniques for optimizing eye tracking algorithms for better accuracy.
  3. Strategies for implementing reliable speech recognition in real-time applications.
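One such strategy is keeping recognition off the main thread: a background listener pushes recognized phrases onto a queue, and the drawing loop drains it once per frame without blocking. A sketch of that pattern (the `recognize` callable is a stand-in for the actual SpeechRecognition microphone call):

```python
# Real-time pattern: run speech recognition in a background thread that pushes
# results onto a queue, so the Pygame drawing loop never blocks on the mic.
# recognize() is a stand-in for the actual speech_recognition call.
import queue
import threading

def listener(recognize, commands, stop):
    """Background thread body: push each recognized phrase onto the queue."""
    while not stop.is_set():
        phrase = recognize()        # blocking call (microphone in the real app)
        if phrase:
            commands.put(phrase)

def poll_commands(commands):
    """Called once per frame: drain pending commands without blocking."""
    pending = []
    while True:
        try:
            pending.append(commands.get_nowait())
        except queue.Empty:
            return pending
```

In the main loop, `poll_commands` returns instantly whether or not anything was said, so frame rate is unaffected by slow or silent recognition.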

What's next for Vision

In the future, we intend to:

  • Expand the variety of customizable brushes and tools for users.
  • Improve voice recognition capabilities to include additional commands and languages.
  • Hold user testing sessions to collect feedback and enhance the application.
  • Investigate integration with online platforms to facilitate easy sharing of artwork by users.

Built With

  • Python
  • OpenCV
  • Pygame
  • SpeechRecognition
