Inspiration: Visually impaired people often want to keep track of their daily food intake but lack accessible tools for doing so, so we wanted to build something that offers them an easy alternative in daily life.

What it does: It recognizes the food in an image (along with any visible text), identifies the food by name, and reports its calorie and nutrient information. It also reads the description aloud for users who are visually impaired.

How we built it: We used Flutter by Google, which runs on both iOS and Android, together with Google Cloud Platform services such as Firebase and ML Kit. ML Kit's machine-learning image analysis identifies the food the user asks about, additional information is fetched from our Firebase Realtime Database, and nutrition facts come from the official API of the United States Department of Agriculture (USDA).
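To illustrate the nutrition-lookup step, here is a minimal sketch of reducing a USDA FoodData Central-style search result to the name and nutrient amounts the app would read aloud. The field names (`description`, `foodNutrients`, `nutrientName`, `value`, `unitName`) follow the public FDC API, but the exact response shape and this helper function are assumptions for illustration, not the team's actual code.

```python
def nutrition_summary(food: dict) -> dict:
    """Reduce one FoodData Central search hit to a food name plus
    a mapping of nutrient name -> (amount, unit)."""
    nutrients = {
        n["nutrientName"]: (n["value"], n["unitName"])
        for n in food.get("foodNutrients", [])
    }
    return {"name": food.get("description", "unknown"), "nutrients": nutrients}


# Example payload shaped like an FDC search hit (values are illustrative).
sample = {
    "description": "Banana, raw",
    "foodNutrients": [
        {"nutrientName": "Energy", "value": 89.0, "unitName": "KCAL"},
        {"nutrientName": "Protein", "value": 1.1, "unitName": "G"},
    ],
}

summary = nutrition_summary(sample)
print(summary["name"])                 # Banana, raw
print(summary["nutrients"]["Energy"])  # (89.0, 'KCAL')
```

In the app, the dictionary returned here would be formatted into a sentence and handed to the text-to-speech engine.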

Challenges we ran into: The detected object labels were too generic, and we had trouble fetching data from the server.

Accomplishments that we're proud of: We integrated Flutter with Machine Learning to detect objects in real time, and we successfully implemented text-to-speech for the food descriptions.

What we learned: Integrating Machine Learning into Flutter and collaborating as a team.

What's next for Nutritionacts: We are looking to finish the application, make it recognize a wider range of products more precisely, and hopefully publish it to the App Store and Google Play Store.
