Inspiration

There was a story about a grandpa who learned to speak English by placing Post-it notes on everything he could get his hands on, just so he could converse with his grandchildren who lived far away. Constantly seeing the words helped him actively remember them. With the invention of Spectacles, we figured we could bring this method to everyone, so that anyone can keep learning while going about their daily life.

What it does

Using the power of Spectacles, we created an application that uses an ML model to detect common objects and label each one with its name and a translation in the user's chosen language. Each label works like a flashcard, so users can gamify their learning experience. Users can also screenshot an object, which saves the word and its translation to their phone so they can review them anywhere.
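To make the flashcard flow concrete, here is a minimal TypeScript sketch of the idea: a label reported by the detector is paired with a translation in the chosen language. The dictionary, types, and function names here are hypothetical stand-ins, not our actual code.

```typescript
// Sketch of the flashcard flow: detector label -> translated flashcard.
// The SPANISH dictionary and makeFlashcard are illustrative assumptions.

type Flashcard = {
  english: string;      // label reported by the object detector
  translation: string;  // word in the learner's target language
};

// Tiny stand-in for a real translation source (e.g. a bundled dictionary).
const SPANISH: Record<string, string> = {
  cup: "taza",
  chair: "silla",
  laptop: "portátil",
};

// Turn a detector label into the flashcard shown on top of the object.
function makeFlashcard(label: string): Flashcard | null {
  const translation = SPANISH[label.toLowerCase()];
  return translation ? { english: label, translation } : null;
}

const card = makeFlashcard("cup");
if (card) {
  console.log(`${card.english} → ${card.translation}`); // "cup → taza"
}
```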

How we built it

For Spectacles, we used Snapchat's proprietary Lens Studio to create the interface for our application. By integrating our app with SnapML and the object detection models in its asset library, we were able to build a model that detects objects and anchors a flashcard on top of each one. We also used PocketBase and Swift for the companion mobile app, which integrated quite well with Spectacles.
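Our companion app itself is written in Swift, but as a rough sketch of the PocketBase side, here is how saving a captured flashcard could look using the PocketBase JS SDK in TypeScript. The server URL, the flashcards collection, and its fields are assumptions for illustration only.

```typescript
import PocketBase from "pocketbase";

// Sketch of persisting a captured flashcard to PocketBase.
// The URL, "flashcards" collection name, and fields are illustrative.
const pb = new PocketBase("http://127.0.0.1:8090");

async function saveFlashcard(
  english: string,
  translation: string,
  imageUrl: string
) {
  // pb.collection(...).create(...) is the standard PocketBase JS SDK call
  // for inserting a record into a collection.
  return pb.collection("flashcards").create({
    english,
    translation,
    image: imageUrl,
    capturedAt: new Date().toISOString(),
  });
}

saveFlashcard("cup", "taza", "https://example.com/cup.jpg")
  .then((record) => console.log("saved record", record.id))
  .catch((err) => console.error("save failed", err));
```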

Challenges we ran into

Lens Studio recently received a major update that makes developing Spectacles apps easier, but because the update is so new, documentation on what we could achieve was limited. Since this was also our first time working with anything like Lens Studio, much of our time went into learning how it worked.

Accomplishments that we're proud of

  1. We created an object detection model that successfully tracks the objects around us.
  2. We built a cohesive Spectacles application that integrates with mobile devices.

What we learned

This was the first time any of us had developed for a device like Spectacles. It gave us a new angle for approaching future problems and first-hand experience of the workflow of developing for AR and ML. We also learned a lot from conversing with the Snapchat team about their struggles and successes with Lens Studio.

What's next for LingoSnap

While the app is cohesive, it still needs more features. We plan to add voice recognition so that users can practice their pronunciation as well. We would also like to gamify the app further, so users can keep learning while having fun. Our final goal is to run the application fully on Spectacles, including features such as viewing past images and conversations.

Built With

lens-studio · pocketbase · snapml · swift
