Inspiration
We all have allergies, and it can be hard to tell whether a given food is safe to eat. We thought a cool solution would be to take a picture of the food and let AI analyze whether that dish tends to include our allergens in its ingredients list.
What it does
It scans multiple recipe websites across the internet and determines whether the dish tends to contain the specified allergen.
How we built it
We used the Gemini API for the backend and Next.js for the frontend. The overall flow: a user-uploaded image is sent to a Next.js API route, which passes it to the Gemini model for analysis and then returns the results to be displayed to the user.
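The flow above can be sketched with two small helpers: one that builds the allergen-check prompt sent alongside the image, and one that parses the model's JSON reply. This is a minimal sketch, not the actual BiteRight code; `buildAllergenPrompt` and `parseVerdict` are hypothetical names, and the surrounding API route (omitted) would call the Gemini SDK with the image plus this prompt.

```typescript
// Build the text prompt that accompanies the uploaded food photo.
// The allergen list comes from the user's profile or form input.
function buildAllergenPrompt(allergens: string[]): string {
  return [
    "Identify the dish in this photo.",
    "Consider typical recipes for it and decide whether any of these",
    `allergens commonly appear in the ingredients: ${allergens.join(", ")}.`,
    'Reply with JSON only: {"dish": string, "flagged": string[]}',
  ].join(" ");
}

// Parse the model's JSON reply, tolerating markdown code fences
// that models sometimes wrap JSON in.
function parseVerdict(reply: string): { dish: string; flagged: string[] } {
  const cleaned = reply.replace(/```(json)?/g, "").trim();
  return JSON.parse(cleaned);
}
```

In the API route, these would wrap a Gemini call that sends the image (as base64 `inlineData`) together with the prompt, then feed the model's text reply into `parseVerdict` before returning it to the frontend.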
Challenges we ran into
Our greatest setback was integrating the information the user inputs and translating it into prompts that Gemini could process.
Accomplishments that we're proud of
We were able to design a user-friendly interface and integrate the Gemini API so that user data is processed end to end. Building an end-to-end multimodal solution was also an amazing feat, along with how fast we were able to develop this app.
What we learned
We learned advanced prompt engineering for multimodal models, including how to guide the AI to perform complex, multi-step reasoning in a single API call.
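As an illustration of the single-call, multi-step style described above, a prompt can chain the identification, recipe-lookup, and allergen-check steps as an explicit numbered sequence. This is a hypothetical example of the pattern, not BiteRight's actual prompt; the `{{allergens}}` placeholder and `fillAllergens` helper are illustrative names.

```typescript
// One prompt that asks the model to do all reasoning steps in a single call,
// instead of making a separate API call per step.
const MULTI_STEP_PROMPT = `
1. Identify the dish shown in the image.
2. List the ingredients that typical recipes for this dish contain.
3. Cross-check that list against the user's allergens: {{allergens}}.
4. Answer "safe" or "unsafe" with a one-sentence reason.
`.trim();

// Substitute the user's allergens into the template before sending it.
function fillAllergens(template: string, allergens: string[]): string {
  return template.replace("{{allergens}}", allergens.join(", "));
}
```

Spelling the steps out explicitly tends to make the model's answer more consistent than a single open-ended question, since each step's output feeds the next.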
What's next for BiteRight
In the future, BiteRight may expand to support multiple image uploads, along with features where Gemini includes more information about the recipes.
Built With
- css
- geminiapi
- html
- javascript
- typescript