Inspiration

I have seen that after an accident causing temporary or even permanent deafness, patients need time to recuperate before a permanent solution is found. During recovery they only need to understand a few basic phrases, and that should be made as easy as possible.

What it does

It recognizes 5 basic phrases:
- Hi!
- Hungry?
- Hurt?
- Ok?
- Water? (asking whether the person is thirsty)
It also factors in background noise. When someone says one of the words, the person with a hearing disability sees that word highlighted on the screen. The percentage values indicate how closely what was heard matches each word. The site is made as accessible and inclusive as possible: a study shows that a dyslexic person reads better with a blue-white color combination and a sans-serif font, so the font size is fairly large and sufficient gap is left between lines. A color-blind person can also use the app easily, as I avoided red/green and other distracting colors.
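The accessibility choices above could be expressed as a short stylesheet. This is only an illustrative sketch; the class name `.word.highlighted` and the exact values are my assumptions, not the project's actual CSS:

```css
/* Blue-on-white, sans-serif, large type, and generous line spacing:
   the choices the text above cites as easier for dyslexic readers. */
body {
  background: #ffffff;
  color: #1a3e8c;                 /* dark blue; avoids red/green entirely */
  font-family: Verdana, Arial, sans-serif;
  font-size: 1.25rem;
  line-height: 1.8;
}

/* A recognized word gets a light-blue highlight rather than a
   red/green cue, so it stays visible to color-blind users. */
.word.highlighted {
  background: #d6e6ff;
  font-weight: bold;
}
```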

How we built it

- Used my own voice to record the words as my dataset, trying to add different expressions for variety.
- Used Teachable Machine to train on the dataset and uploaded the model to **Google Cloud**.
- After getting the model code, I tweaked it to perform the way I wanted it to.
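As a sketch of that last step: Teachable Machine audio exports are typically loaded with the `@tensorflow-models/speech-commands` library. The helper below shows how the per-word scores could be turned into the highlighted word and its percentage; the label order, threshold, and function names are my assumptions, not the project's actual code:

```javascript
// Phrases in the order the model's metadata lists them (assumed order).
const LABELS = ["Background Noise", "Hi!", "Hungry?", "Hurt?", "Ok?", "Water?"];

// Pick the best-scoring phrase; ignore background noise and low-confidence
// matches. Returns e.g. { label: "Hi!", percent: 90 } or null.
function bestMatch(scores, threshold = 0.75) {
  let best = 0;
  for (let i = 1; i < scores.length; i++) {
    if (scores[i] > scores[best]) best = i;
  }
  if (LABELS[best] === "Background Noise" || scores[best] < threshold) {
    return null;
  }
  return { label: LABELS[best], percent: Math.round(scores[best] * 100) };
}

// Browser-only wiring (not exercised here): load the exported model
// and feed each recognition result through bestMatch.
async function startListening(modelURL) {
  const recognizer = speechCommands.create(
    "BROWSER_FFT", undefined,
    modelURL + "model.json", modelURL + "metadata.json");
  await recognizer.ensureModelLoaded();
  recognizer.listen(result => {
    const match = bestMatch(Array.from(result.scores));
    if (match) {
      // In the real app this would highlight the word on screen.
      console.log(`${match.label} (${match.percent}%)`);
    }
  }, { probabilityThreshold: 0.75, overlapFactor: 0.5 });
}
```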

Challenges we ran into

- Training the model initially on longer sentences led to overlap, so I shifted to shorter, concise words.
- Ran into trouble displaying the text properly and highlighting it when it was recognized.
- Learned a lot of JavaScript and TensorFlow along the way.
- Adding the extra touch to the Start button was a bit of a challenge. I had seen it on a website a while back and wanted to recreate it.

Accomplishments that we're proud of

- The "Start" button was a big achievement for me. I had to read up on a lot of CSS properties for it.
       - The app works exactly the way I want it to.

What we learned

- Learned how to use the `svg` element, figured out how its properties are displayed, and highlighted them accordingly. Also learned about the various ways to make an app accessible and inclusive.

What's next for Word - Recognizer

      - Adding new words and factoring in different accents.
      - Making a mobile app for it.
      - Making it more sophisticated.

Built With

JavaScript, TensorFlow, Teachable Machine, Google Cloud, CSS
