Inspiration

Learning about the challenges visually impaired people face in their daily lives inspired us to create an app that uses AI to describe the products they are shopping for.

What it does

The app shows a full-screen live preview from the phone's camera. The user taps anywhere on the screen, and the phone speaks a brief description of whatever the camera is pointed at.
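A minimal Kotlin sketch of that interaction is below. The view id comes from Google's CameraX template, and captureAndDescribe() is a placeholder for the capture-and-speak pipeline described under "How we built it", not the project's actual code.

```kotlin
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.camera.view.PreviewView

class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        // The whole screen is the camera preview; tapping anywhere
        // triggers a spoken description of the current view.
        findViewById<PreviewView>(R.id.viewFinder).setOnClickListener {
            captureAndDescribe()
        }
    }

    private fun captureAndDescribe() {
        // Capture a frame, send it to the recognition API, speak the result.
    }
}
```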

How we built it

Starting from Google's template for a basic CameraX application, we extended the camera functionality to capture a picture, upload it to Chooch to analyze the scene and return a description, and then added Text-to-Speech functionality to read the result aloud.
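A rough Kotlin sketch of that pipeline is below. The Chooch endpoint URL, form fields, API-key constant, and response handling are placeholders rather than the exact API contract, and `imageCapture` and `tts` are assumed to be activity members initialized by the camera and TTS setup code.

```kotlin
import android.speech.tts.TextToSpeech
import androidx.camera.core.ImageCapture
import androidx.camera.core.ImageCaptureException
import androidx.core.content.ContextCompat
import androidx.lifecycle.lifecycleScope
import io.ktor.client.HttpClient
import io.ktor.client.engine.cio.CIO
import io.ktor.client.request.forms.formData
import io.ktor.client.request.forms.submitFormWithBinaryData
import io.ktor.client.statement.bodyAsText
import io.ktor.http.Headers
import io.ktor.http.HttpHeaders
import kotlinx.coroutines.launch
import java.io.File

// Take a photo with CameraX, upload it for scene analysis, and read the
// resulting description aloud.
private fun captureAndDescribe() {
    val photoFile = File(cacheDir, "capture.jpg")
    val output = ImageCapture.OutputFileOptions.Builder(photoFile).build()

    imageCapture.takePicture(output, ContextCompat.getMainExecutor(this),
        object : ImageCapture.OnImageSavedCallback {
            override fun onImageSaved(results: ImageCapture.OutputFileResults) {
                lifecycleScope.launch {
                    val description = requestDescription(photoFile.readBytes())
                    tts.speak(description, TextToSpeech.QUEUE_FLUSH, null, "describe")
                }
            }

            override fun onError(exception: ImageCaptureException) {
                tts.speak("Could not take a picture", TextToSpeech.QUEUE_FLUSH, null, "error")
            }
        })
}

// POST the JPEG to the image-recognition service and return its description.
// The URL and form fields here are illustrative; Chooch's actual API may differ.
private suspend fun requestDescription(imageBytes: ByteArray): String {
    val client = HttpClient(CIO)
    try {
        val response = client.submitFormWithBinaryData(
            url = "https://api.chooch.ai/predict/image",   // placeholder endpoint
            formData = formData {
                append("apikey", CHOOCH_API_KEY)           // placeholder key constant
                append("file", imageBytes, Headers.build {
                    append(HttpHeaders.ContentType, "image/jpeg")
                    append(HttpHeaders.ContentDisposition, "filename=capture.jpg")
                })
            }
        )
        // Extracting the caption field from the JSON response is elided here.
        return response.bodyAsText()
    } finally {
        client.close()
    }
}
```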

Challenges we ran into

We found that calling a web API from Java/Kotlin was surprisingly complicated compared to JavaScript. We were also new to Android development and had to learn the tools of the Android app ecosystem as we wrote the app, such as the CameraX API, the Ktor client for HTTP requests, and Android's built-in Text-to-Speech engine.
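For reference, a small sketch of the Text-to-Speech setup we had to learn, assuming a standard AppCompatActivity; names and structure are illustrative rather than the project's exact code.

```kotlin
import android.os.Bundle
import android.speech.tts.TextToSpeech
import androidx.appcompat.app.AppCompatActivity
import java.util.Locale

// Initialize Android's built-in TTS engine. speak() should only be called
// after onInit() reports SUCCESS.
class MainActivity : AppCompatActivity(), TextToSpeech.OnInitListener {

    private lateinit var tts: TextToSpeech

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        tts = TextToSpeech(this, this)
    }

    override fun onInit(status: Int) {
        if (status == TextToSpeech.SUCCESS) {
            tts.setLanguage(Locale.US)
        }
    }

    override fun onDestroy() {
        // Release the engine when the activity goes away.
        tts.stop()
        tts.shutdown()
        super.onDestroy()
    }
}
```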

Accomplishments that we're proud of

After struggling for several hours with error message after error message, we were finally able to build and debug the program enough to make it functional.

What we learned

We learned how to program basic Android apps.

What's next for GroceRead

Once GPT-4's image-input capability becomes generally available, the app's image recognition could become far more accurate and its descriptions much richer.

Built With

android, camerax, chooch, java, kotlin, ktor, text-to-speech