About the Project

🌱 Inspiration

The spark for Blink Speech (MindLink) came from a simple yet powerful belief: Everyone deserves a voice.
We were inspired by people who, due to conditions like ALS, paralysis, or temporary speech loss, struggle to communicate even basic needs.
Many current assistive devices require expensive hardware or physical buttons, so we wanted to create something that needs only a webcam and a browser, making it accessible to anyone, anywhere.


💡 Vision

Blink Speech turns intentional blink patterns and eye gaze gestures into spoken words in real time.
It’s designed for:

  • Critical care scenarios – ICU patients, post-surgery recovery, or locked-in syndrome cases.
  • Accessibility at home – People with ALS, muscular dystrophy, or other motor impairments.
  • Temporary speech loss – Oral surgery recovery, laryngitis, or intubation cases.
  • Low-resource settings – Situations where no special hardware is available.

Our most important goal: give a voice to those who cannot speak, especially in emergencies.


🛠 How We Built It

We started by experimenting with eye-tracking and blink detection in the browser.

  • Gaze tracking: WebGazer.js estimates where the user is looking on screen, with MediaPipe FaceLandmarker supplying high-fidelity facial landmarks.
  • Blink detection: The Eye Aspect Ratio (EAR) method detects single, double, triple, and long blinks (see the sketch after this list).
  • Mapping blinks to speech: Blink patterns are linked to phrases stored in JSON, customizable for each user.
  • Speech output: The Web Speech API converts detected patterns into spoken words instantly (a mapping-and-speech sketch follows below).
  • Data handling: Local calibration and preferences are stored via localForage, with optional cloud sync using Supabase (a storage sketch follows below).
  • UI & logic: Built on Next.js, React, Tailwind CSS, and Zustand for state management.

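To make the blink detection concrete, here is a minimal TypeScript sketch of the EAR check and pattern grouping. The landmark indices are the six left-eye points commonly used with MediaPipe's face mesh, and every threshold is an illustrative calibration value rather than our exact production number:

```ts
// Minimal EAR blink detector over MediaPipe FaceLandmarker output.
type Point = { x: number; y: number };

// Six left-eye points commonly used with MediaPipe's 468-landmark mesh
// (p1/p4 = eye corners, p2/p3 = upper lid, p5/p6 = lower lid).
// Assumed indices -- verify against your model version.
const LEFT_EYE = [33, 160, 158, 133, 153, 144];

const dist = (a: Point, b: Point) => Math.hypot(a.x - b.x, a.y - b.y);

// EAR = (|p2-p6| + |p3-p5|) / (2|p1-p4|); ~0.3 when open, near 0 when closed.
function eyeAspectRatio(lm: Point[]): number {
  const [p1, p2, p3, p4, p5, p6] = LEFT_EYE.map((i) => lm[i]);
  return (dist(p2, p6) + dist(p3, p5)) / (2 * dist(p1, p4));
}

const EAR_THRESHOLD = 0.21;    // per-user value found during calibration
const MIN_CLOSED_FRAMES = 2;   // debounce against single-frame jitter
const LONG_FRAMES = 15;        // ~0.5 s at 30 fps reads as a "long" blink
const PATTERN_WINDOW_MS = 600; // max gap between blinks in one pattern

let closedFrames = 0;
let blinkCount = 0;
let patternTimer: ReturnType<typeof setTimeout> | undefined;

// Call once per video frame; emits "single" | "double" | "triple" | "long".
export function onFrame(lm: Point[], emit: (pattern: string) => void): void {
  if (eyeAspectRatio(lm) < EAR_THRESHOLD) {
    closedFrames++;
    return;
  }
  if (closedFrames >= MIN_CLOSED_FRAMES) {
    if (closedFrames >= LONG_FRAMES) {
      emit("long"); // a held closure is its own pattern
    } else {
      blinkCount++;
      clearTimeout(patternTimer);
      patternTimer = setTimeout(() => {
        emit(["single", "double", "triple"][Math.min(blinkCount, 3) - 1]);
        blinkCount = 0;
      }, PATTERN_WINDOW_MS);
    }
  }
  closedFrames = 0;
}
```

Debouncing on consecutive closed frames, plus a short inter-blink window, is what separates natural blinks from intentional patterns.
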
Everything runs client-side for privacy — nothing is sent to external servers unless the user enables cloud sync.
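
Once a pattern fires, speaking it is a small step. Here is a sketch of the JSON-backed mapping and the Web Speech API call; the pattern names and phrases are placeholders for the per-user configuration:

```ts
// Hypothetical per-user mapping; in the app this is loaded from JSON.
const phraseMap: Record<string, string> = {
  single: "Yes",
  double: "No",
  triple: "I need help",
  long: "Call the nurse",
};

export function speakPattern(pattern: string): void {
  const phrase = phraseMap[pattern];
  if (!phrase || !("speechSynthesis" in window)) return;
  window.speechSynthesis.cancel(); // drop queued speech so output stays instant
  const utterance = new SpeechSynthesisUtterance(phrase);
  utterance.rate = 0.9; // slightly slower for clarity
  window.speechSynthesis.speak(utterance);
}
```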

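For the local-first storage, a rough localForage sketch; the key name and preference shape here are illustrative, not the app's actual schema:

```ts
import localforage from "localforage";

// Illustrative schema -- the real app defines its own.
interface Prefs {
  earThreshold: number;              // per-user blink threshold from calibration
  phraseMap: Record<string, string>; // blink pattern -> phrase
}

const PREFS_KEY = "blink-speech:prefs"; // hypothetical key

export async function savePrefs(prefs: Prefs): Promise<void> {
  await localforage.setItem(PREFS_KEY, prefs); // IndexedDB when available
}

export function loadPrefs(): Promise<Prefs | null> {
  return localforage.getItem<Prefs>(PREFS_KEY);
}
```
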

📚 What We Learned

  • How to fine-tune gaze tracking models for real-world use in browsers.
  • How to integrate multiple computer vision techniques so they work together smoothly.
  • Designing interfaces that are not only functional but also comfortable for people with limited movement.
  • The importance of customizable interactions — no two users blink or gaze exactly the same way.


⚠️ Challenges We Faced

  • Calibration accuracy: Eye tracking varies based on lighting, head movement, and camera quality.
  • Blink differentiation: Ensuring the system doesn’t confuse natural blinks with intentional ones.
  • Performance: Balancing detection accuracy with smooth real-time response in the browser.
  • Accessibility testing: Simulating real-world medical use cases without access to actual patients in the early stages.

Blink Speech is still evolving, but we’re optimistic. This is more than a hackathon project — it’s a step toward breaking communication barriers for good.

Built With

  • Next.js
  • React
  • Tailwind CSS
  • Zustand
  • WebGazer.js
  • MediaPipe FaceLandmarker
  • Web Speech API
  • localForage
  • Supabase
