Inspiration
Learning a new language can be challenging, especially when it comes to connecting words with real-world objects. We wanted to create a fun and intuitive way to bridge that gap — using something we all carry every day: a camera.
What it does
Eyelingo lets users point their camera at objects in the real world to instantly get translations, pronunciations, and example sentences in their target language. It turns everyday surroundings into a dynamic language-learning classroom.
How we built it
We developed Eyelingo using React Native to create a smooth and responsive mobile app for both iOS and Android. We integrated an AI model capable of real-time object detection through the camera. Once an object is detected, the app retrieves the corresponding word in the target language and plays the pronunciation through a voice API. We focused on keeping the interface simple, so users can learn new words naturally while exploring the world around them.
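To give a sense of that flow, here is a simplified TypeScript sketch of the detect → translate → speak loop. The translation endpoint, the response shape, and the react-native-tts voice layer are illustrative placeholders rather than our exact production stack:

```typescript
import Tts from 'react-native-tts';

// Hypothetical shape of a translation response.
interface Translation {
  word: string;     // the object's name in the target language
  example: string;  // an example sentence using that word
}

// Placeholder endpoint; not the actual service Eyelingo calls.
const TRANSLATE_URL = 'https://example.com/translate';

// Look up the target-language word for a detected object label.
async function translateLabel(label: string, lang: string): Promise<Translation> {
  const res = await fetch(
    `${TRANSLATE_URL}?q=${encodeURIComponent(label)}&target=${lang}`,
  );
  if (!res.ok) throw new Error(`Translation failed: ${res.status}`);
  return (await res.json()) as Translation;
}

// Called each time the detection model reports an object in frame.
export async function onObjectDetected(label: string, lang: string): Promise<void> {
  const { word, example } = await translateLabel(label, lang);
  await Tts.setDefaultLanguage(lang); // e.g. 'es-MX'
  Tts.speak(word);                    // auditory feedback for the learner
  console.log(`${label} -> ${word} ("${example}")`);
}
```

Keeping detection, translation, and speech behind small async functions like these made it easier to swap out individual services while we iterated.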
Challenges we ran into
- Handling API keys across multiple services (see the sketch after this list)
- Managing our time efficiently
- Debugging our first-ever mobile app
- Working with multiple layers of the stack (camera, detection, translation, and voice) and keeping them connected
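On the API-key point, a minimal sketch of one approach, assuming the react-native-config library and a local .env file (variable names here are illustrative):

```typescript
// Sketch: loading API keys from a .env file via react-native-config,
// so the keys stay out of source control. Names are illustrative.
import Config from 'react-native-config';

// .env (not committed):
//   TRANSLATE_API_KEY=...
//   TTS_API_KEY=...
export const TRANSLATE_API_KEY = Config.TRANSLATE_API_KEY ?? '';
export const TTS_API_KEY = Config.TTS_API_KEY ?? '';

// Fail loudly in development if a key is missing.
if (__DEV__ && !TRANSLATE_API_KEY) {
  console.warn('TRANSLATE_API_KEY is missing; check your .env file');
}
```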
Accomplishments that we're proud of
We are proud that we were able to build a functional prototype that combines real-time object detection and voice feedback. We successfully created a smooth camera interface and achieved reliable object recognition. Integrating the voice API added an auditory learning layer, which made the app more interactive and engaging. Building all of this within a limited time frame is something we consider a big achievement.
What's next for Eyelingo
In the future, we plan to expand Eyelingo to support more languages and pronunciation options to reach a wider audience. We also aim to integrate the app with smart glasses, allowing users to receive real-time object recognition and language feedback directly through their lenses. This would make language learning even more seamless and hands-free.
