um-e-kalsoum/technova2025


Signify

A real-time sign language recognition system using MediaPipe, OpenCV, and scikit-learn. The system provides a web-based user interface developed using React.js, Vite, and Tailwind CSS.

Features

  • Real-time hand gesture recognition
  • 10 sign language classes: "I Love You", "Hello", "Thank you", "Me/I", "Want", "Question", "Help", "Water", "A", "Yes"
  • 100% accuracy on training data
  • Webcam-based inference
  • Light/dark mode toggle
  • Resizable interpreted text
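
The class list above implies a mapping from classifier output indices to display strings somewhere in the inference code. A minimal sketch of such a mapping — the index order and the `label_for` helper are assumptions for illustration, not taken from the repository:

```python
# Hypothetical mapping from classifier output index to display label.
# The actual indices and ordering used by the repo's scripts may differ.
LABELS = {
    0: "I Love You",
    1: "Hello",
    2: "Thank you",
    3: "Me/I",
    4: "Want",
    5: "Question",
    6: "Help",
    7: "Water",
    8: "A",
    9: "Yes",
}

def label_for(prediction: int) -> str:
    """Translate a predicted class index into the text shown in the UI."""
    return LABELS.get(prediction, "?")
```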

Setup

  1. Install dependencies:
     pip install -r requirements.txt
  2. Collect training data:
     python collect_images.py
  3. Process the data:
     python create_dataset.py
  4. Train the model:
     python train_classifier.py
  5. Run inference:
     python inference_classifier.py

Files

  • collect_images.py - Collects training images from webcam
  • create_dataset.py - Processes images and extracts hand landmarks
  • train_classifier.py - Trains the RandomForest classifier
  • inference_classifier.py - Runs real-time sign language recognition using the trained model
  • data/ - Training images organized by class
  • data.pickle - Processed hand landmark data
  • model.p - Trained classifier model
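
Both `data.pickle` and `model.p` are plain Python pickle files. A sketch of how such an artifact might be written and read back — the dictionary key names (`"data"`, `"labels"`) are an assumption; `create_dataset.py` may use different ones:

```python
import io
import pickle

# Hypothetical contents of data.pickle: flattened landmark vectors plus labels.
# Key names are an assumption; the repo's create_dataset.py may differ.
dataset = {
    "data": [[0.1, 0.2] * 21],  # one sample: 21 landmarks x (x, y) = 42 values
    "labels": [0],
}

buffer = io.BytesIO()  # in-memory stand-in for data.pickle on disk
pickle.dump(dataset, buffer)

buffer.seek(0)
loaded = pickle.load(buffer)
```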

Usage

The system uses MediaPipe for hand detection and extracts 21 hand landmarks per hand. The RandomForest classifier is trained on normalized landmark coordinates to recognize different sign language gestures.
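
The pipeline above can be sketched end to end: 21 (x, y) landmarks per hand are normalized (here by subtracting the per-hand minimum coordinate, one common convention — the repo's exact normalization may differ) and the flattened 42-value vectors are fed to a RandomForest. This self-contained illustration uses synthetic landmarks in place of MediaPipe output:

```python
import random
from sklearn.ensemble import RandomForestClassifier

def normalize_landmarks(landmarks):
    """Shift 21 (x, y) landmarks so the minimum x and y are 0, making the
    features translation-invariant, then flatten to a 42-value vector."""
    xs = [x for x, _ in landmarks]
    ys = [y for _, y in landmarks]
    features = []
    for x, y in landmarks:
        features.append(x - min(xs))
        features.append(y - min(ys))
    return features

# Synthetic stand-in for two gesture classes that differ in hand-shape spread
# (real feature vectors would come from MediaPipe hand detection).
random.seed(0)
def fake_hand(spread):
    return [(random.random() * spread, random.random() * spread) for _ in range(21)]

X = [normalize_landmarks(fake_hand(0.1)) for _ in range(20)] + \
    [normalize_landmarks(fake_hand(0.4)) for _ in range(20)]
y = [0] * 20 + [1] * 20

model = RandomForestClassifier(random_state=0)
model.fit(X, y)
```

At inference time, each webcam frame would yield one landmark list, which is normalized the same way and passed to `model.predict`.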
