Inspiration
According to a 2020 survey by the Yale Program on Climate Change Communication, only 39% of Americans are aware that food choices can impact the environment. This matched what we had noticed ourselves: many people simply don't know the environmental cost of what they eat. With growing interest in sustainability, we wanted to create a tool that empowers consumers to make more informed, environmentally friendly decisions.
What it does
EcoEats is an app that lets users scan images of food items and uses AI APIs on the backend to estimate the CO2 emissions associated with the food. The app gives users detailed information on the carbon footprint of their food choices, empowering them to make more sustainable decisions.
How we built it
The front end is built with React Native: the user takes a photo of their food, which is sent to the backend and passed to the Azure Vision AI API for image description analysis. The resulting description is processed with the Google Gemini API to identify the ingredients and their quantities. These ingredients are then cross-referenced with a dataset of CO2 emissions for various foods to calculate the cumulative CO2 emissions of the dish.
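The backend pipeline can be sketched roughly as follows. This is a minimal illustration, not our production code: the Azure Vision and Gemini calls are stubbed out rather than reproduced, and the emission factors shown are illustrative placeholder values, not the actual dataset we used.

```python
# Hypothetical sketch of the scan -> describe -> parse -> sum pipeline.
# Emission factors here are placeholders (kg CO2e per kg of food), not real data.
EMISSION_FACTORS_KG_PER_KG = {
    "beef": 27.0,
    "rice": 2.7,
    "tomato": 1.4,
}

def describe_image(image_bytes: bytes) -> str:
    """Placeholder for the Azure Vision AI call that captions the photo."""
    raise NotImplementedError("call Azure Vision AI here")

def extract_ingredients(description: str) -> list[tuple[str, float]]:
    """Placeholder for the Gemini call that turns a caption into
    (ingredient name, quantity in kg) pairs."""
    raise NotImplementedError("call Google Gemini here")

def total_emissions_kg(ingredients: list[tuple[str, float]]) -> float:
    """Cross-reference each ingredient with the emissions dataset and
    sum the dish's total footprint, skipping unknown ingredients."""
    total = 0.0
    for name, quantity_kg in ingredients:
        factor = EMISSION_FACTORS_KG_PER_KG.get(name.lower())
        if factor is not None:
            total += factor * quantity_kg
    return total
```

For example, a dish parsed as 200 g of beef and 150 g of rice would total `total_emissions_kg([("beef", 0.2), ("rice", 0.15)])` kg CO2e under these placeholder factors; in the real app the dataset lookup is the last step after the two API calls.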
Challenges we ran into
- Learning Curve: As some team members were using React Native for the first time, we had to invest significant effort in learning how the technology works.
- Integration Challenges: We encountered difficulties connecting the Expo App frontend with the backend, requiring additional troubleshooting and development.
- API Selection: We needed to find a suitable API that could effectively handle both image processing and logic for determining food ingredients and their CO2 emissions.
Accomplishments that we're proud of
- Successful Integration: We effectively integrated React Native with Azure Vision AI and Google Gemini APIs, enabling the app to accurately analyze food images and determine ingredients.
- Functional Prototype: Developed a working prototype that successfully scans food images, identifies ingredients, and calculates the CO2 emissions based on the provided dataset.
- User-Friendly Design: Created an intuitive and engaging user interface that simplifies complex sustainability data for users.
- Cross-Platform Capability: Built an app that operates seamlessly across both iOS and Android platforms, thanks to React Native.
- Error Handling: Implemented robust error handling for scenarios where images cannot be detected or processed, providing clear feedback to users.
What we learned
- React Native Proficiency: Gained valuable experience with React Native, including best practices for building cross-platform mobile applications.
- API Integration: Learned how to effectively integrate and utilize multiple APIs to handle complex tasks like image recognition and data analysis.
- Backend Development: Developed skills in connecting frontend and backend systems, overcoming challenges related to data flow and API communication.
- User Experience Design: Understood the importance of designing an intuitive interface to make complex data accessible and actionable for users.
- Error Management: Learned how to handle cases where image detection fails, ensuring the app remains user-friendly and functional even under challenging conditions.
What's next for EcoEats
- A better vision model that recognizes food items more reliably
- Putting the app into production
- Connecting to a database to store user information, expanding the app's functionality
Built With
- azure-vision-ai-api
- flask
- google-gemini-api
- react-native
