A real-time sign language recognition system using MediaPipe, OpenCV, and scikit-learn. The system provides a web-based user interface developed using React.js, Vite, and Tailwind CSS.
- Real-time hand gesture recognition
- 10 sign language classes: "I Love You", "Hello", "Thank you", "Me/I", "Want", "Question", "Help", "Water", "A", "Yes"
- 100% accuracy on training data
- Webcam-based inference
- Light/dark mode toggle
- Resizable interpreted text
- Install dependencies: `pip install -r requirements.txt`
- Collect training data: `python collect_images.py`
- Process the data: `python create_dataset.py`
- Train the model: `python train_classifier.py`
- Run inference: `python inference_classifier.py`

Project files:

- `collect_images.py` - Collects training images from the webcam
- `create_dataset.py` - Processes the images and extracts hand landmarks
- `train_classifier.py` - Trains the RandomForest classifier
- `inference_classifier.py` - Real-time sign language recognition using the trained model
- `data/` - Training images organized by class
- `data.pickle` - Processed hand landmark data
- `model.p` - Trained classifier model
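The feature-extraction step performed by `create_dataset.py` can be sketched as follows. The exact normalization is an assumption (shifting each coordinate by the per-hand minimum is a common way to make landmark features translation-invariant), and the function name is hypothetical:

```python
import numpy as np

def landmarks_to_features(landmarks):
    """Turn 21 (x, y) hand landmarks into a 42-value feature vector.

    Coordinates are shifted by the per-hand minimum so the features do not
    depend on where the hand appears in the frame. This is an assumed
    scheme; the repo's create_dataset.py may normalize differently.
    """
    xs = np.array([x for x, _ in landmarks])
    ys = np.array([y for _, y in landmarks])
    feats = np.empty(2 * len(landmarks))
    feats[0::2] = xs - xs.min()  # interleave x then y per landmark
    feats[1::2] = ys - ys.min()
    return feats

# Example: 21 synthetic landmarks in MediaPipe's normalized [0, 1] range
pts = [(0.4 + 0.01 * i, 0.5 + 0.005 * i) for i in range(21)]
vec = landmarks_to_features(pts)
print(vec.shape)  # (42,)
```

One vector like this per image, collected across all classes, is the kind of array that would be serialized into `data.pickle`.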
The system uses MediaPipe for hand detection and extracts 21 hand landmarks per hand. The RandomForest classifier is trained on normalized landmark coordinates to recognize different sign language gestures.
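A minimal end-to-end sketch of the classifier stage described above, using synthetic 42-dimensional landmark vectors (21 landmarks × 2 coordinates) in place of real MediaPipe output; the class names and cluster data here are illustrative, not the repo's actual dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fake dataset: 3 gesture classes, each a tight cluster of 42-dim
# landmark vectors, mimicking the layout stored in data.pickle.
X = np.vstack([rng.normal(loc=c, scale=0.02, size=(50, 42))
               for c in (0.2, 0.5, 0.8)])
y = np.repeat(["Hello", "Thank you", "Yes"], 50)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Same model family the project uses for gesture classification
model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # well-separated clusters score highly
```

At inference time, `inference_classifier.py` would compute the same normalized feature vector from each webcam frame and call `model.predict` on it to get the gesture label.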