CogniSign is a project that aims to bridge communication gaps and expand educational opportunities for the deaf community. By combining neural network architectures with computer vision techniques, CogniSign recognizes American Sign Language (ASL) gestures in real time and transforms them into letters and words.
- Real-time hand gesture detection and recognition.
- Integration of OpenCV for hand region segmentation and noise reduction.
- Custom-built neural network architecture designed for ASL recognition.
- Seamless transformation of ASL gestures into letters and words.
CogniSign's technical architecture combines computer vision with machine learning. The core components include:
- OpenCV Integration: Captures and processes real-time video feed, performs hand detection, and isolates the hand region.
- Neural Network Architecture: A custom neural network with convolutional and recurrent layers, trained on an ASL dataset for accurate recognition.
- Real-time Processing: Combines OpenCV and the neural network to instantly decipher ASL gestures, presenting them as letters and words.
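Turning per-frame predictions into words requires a post-processing step: since the network classifies every frame, one common approach is to commit a letter only after it has been predicted for several consecutive frames (debouncing), which filters out transient misclassifications. The class below is a hypothetical sketch of such a decoder; its name, the `"space"` label, and the frame threshold are illustrative assumptions, not part of this repository.

```python
class GestureDecoder:
    """Assemble a stream of per-frame letter predictions into words.

    A letter is committed only after appearing for `hold_frames`
    consecutive frames. The label "space" ends the current word.
    """

    def __init__(self, hold_frames=3):
        self.hold_frames = hold_frames
        self._last = None       # label seen on the previous frame
        self._count = 0         # how many consecutive frames it was seen
        self._committed = None  # label already emitted for this run
        self._current = []      # letters of the word being built
        self.words = []

    def feed(self, label):
        """Process one frame's predicted label."""
        if label == self._last:
            self._count += 1
        else:
            self._last, self._count = label, 1
            self._committed = None
        # Commit each run of identical predictions exactly once.
        if self._count >= self.hold_frames and self._committed != label:
            self._committed = label
            if label == "space":
                if self._current:
                    self.words.append("".join(self._current))
                    self._current = []
            else:
                self._current.append(label)

    def flush(self):
        """Finish the in-progress word and return all decoded words."""
        if self._current:
            self.words.append("".join(self._current))
            self._current = []
        return self.words
```

For example, a run of frames predicting `H, H, H, I, I, I, X, space, space, space` decodes to the word `"HI"`: the single-frame `X` never reaches the threshold and is discarded as noise.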
- Clone this repository:

```shell
git clone https://github.com/Meko6701/HT6.git
```
