Inspiration
I was inspired by the movie A Quiet Place. Even though it is a fictional movie, the way the characters struggled made me feel for them, and I then started thinking about deaf and mute people. As an Egyptian girl, I did some research and found that deaf and mute people suffer from many problems, such as difficulty communicating with others, lack of understanding, and bullying. It is difficult for them to understand the people around them because not everyone understands sign language, so communicating with others is hard. Another problem is bullying: some people think there is something wrong with deaf and mute people, which affects them negatively. Bullying makes them want to stay at home, and they are not confident enough because they are scared of being bullied.
What it does
I decided to make their lives easier. My idea is to create a mobile application that translates sign language to speech and speech to sign language. What makes MySign different from past technologies is that I chose to make it a mobile application, because it is easy to take everywhere you go, such as stores, clubs, and public transportation.
My project will make a big difference in their lives. It will help people understand them and their language, so they can go anywhere and talk to anyone easily, without needing someone to help them. They will not be bullied or humiliated by anyone. They will be happy, and they will not be afraid to face the world.
The pains to be relieved are:
- Communicating with others will be easy
- Lack of understanding will decrease
- Bullying will be reduced
- The pain parents and close people feel at not being able to support their kids will decrease
How we built it
Because I am only 13 years old, I decided to design a prototype with the little knowledge I have. First, I did some research. Then, I made sure I learnt everything I could about sign language. For the AI, I learnt a few basics and then started training a model using Teachable Machine by Google. Then, I used PictoBlox to access the camera. When it sees a sign, the sign is converted to sound and the user can hear it.
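For anyone curious how a similar pipeline could look outside PictoBlox, here is a minimal Python sketch, assuming the model is exported from Teachable Machine in its Keras format (the default export names keras_model.h5 and labels.txt) and OpenCV is used to grab a camera frame. The file names and libraries here are assumptions for illustration, not what the prototype actually runs.

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Load the model exported from Teachable Machine (default export file names assumed)
model = load_model("keras_model.h5", compile=False)
# labels.txt normally holds one class per line, prefixed with an index, e.g. "0 أين"
class_names = [line.strip().split(" ", 1)[-1] for line in open("labels.txt", encoding="utf-8")]

# Grab a single frame from the webcam
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

# Teachable Machine image models expect 224x224 RGB input scaled to [-1, 1]
img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (224, 224)).astype(np.float32)
img = img / 127.5 - 1.0
img = np.expand_dims(img, axis=0)

# Predict which sign is shown and print its label
pred = model.predict(img)
label = class_names[int(np.argmax(pred))]
print("Recognized sign:", label)
```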
Challenges we ran into
The main problem I had is that there are not enough datasets. I am still learning and do not have enough knowledge of AI and mobile app development. Also, some signs are movements, not just static pictures. To implement the project in real life, I will have to deal with its complexity and the need for a deep understanding of AI and the newest models.
Accomplishments that we're proud of
First of all, I designed a block diagram and a flowchart for the prototype.

Then, I tested the prototype. I trained it on only two words: "أين" ("where" in Arabic) and "لو سمحت" ("please" in Arabic). After testing, the prototype could distinguish between the two words.
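To give an idea of the sign-to-sound step for these two words, here is a minimal sketch using the gTTS library for Arabic text-to-speech. The library choice and the output file name are assumptions for illustration; the prototype itself does this conversion inside PictoBlox.

```python
from gtts import gTTS

# The label recognized by the classifier, e.g. one of the two trained words
recognized = "لو سمحت"  # "please"

# Convert the Arabic text to speech and save it as an audio file (hypothetical file name);
# playing this file lets the listener hear the sign out loud
tts = gTTS(text=recognized, lang="ar")
tts.save("sign_speech.mp3")
```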

I am proud of myself for getting this far, even though I barely know anything about this topic.
What we learned
I learnt more about AI: basic knowledge about datasets, training, and testing. I also learnt how to construct a block diagram and a flowchart. Moreover, I learnt how to research any topic, find the useful sources, study the pros and cons, and implement what I find.
What's next for MySign
Our project is aimed at deaf and mute people, and we want to learn more so that we can help them live better. We hope that in the future our app will:
- Be more accurate and use larger datasets
- Support more than Arabic sign language
- Add a button to the keyboard for easy translation in all social apps
Built With
- pictoblox
- teachablemachines