Inspiration
Cooking Assistant was inspired by the everyday struggle of not knowing what to eat, whether it's breakfast, lunch, or dinner. Existing products work by having users type in their ingredients manually, but we found that process tedious and unnecessary. From that realization, we decided to augment the process with a multi-label image model.
What it does
Cooking Assistant helps you find recipes you'll like using ingredients you already have. Inputting your ingredients is as easy as taking a picture.
How we built it
Cooking Assistant leverages a Vision Transformer that has been repurposed for multi-label classification and fine-tuned on food images. The food items the Vision Transformer detects are then fed into a second model that has been prompt-tuned to generate recipes from the specified ingredients.
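The core of that pipeline can be sketched in plain Python. This is a minimal illustration, not our actual implementation: the label names, logits, and prompt template are all hypothetical stand-ins for the fine-tuned Vision Transformer's output head and the recipe model's prompt.

```python
import math

# Hypothetical ingredient labels standing in for the fine-tuned
# Vision Transformer's output classes (illustrative only).
LABELS = ["egg", "tomato", "onion", "cheese", "basil"]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def detect_ingredients(logits, threshold=0.5):
    """Multi-label classification: an independent sigmoid per label,
    keeping every label whose probability clears the threshold.
    Unlike softmax, several labels can fire for one image."""
    return [label for label, logit in zip(LABELS, logits)
            if sigmoid(logit) >= threshold]

def build_recipe_prompt(ingredients):
    """Turn the detected ingredients into the text prompt handed
    to the recipe-generation model (template is an assumption)."""
    return ("Suggest a recipe using only these ingredients: "
            + ", ".join(ingredients) + ".")

# Example logits such a model head might emit for an omelette photo
logits = [2.3, 1.1, -0.4, 0.9, -1.7]
found = detect_ingredients(logits)
print(found)                      # ['egg', 'tomato', 'cheese']
print(build_recipe_prompt(found))
```

The per-label sigmoid is what makes this multi-label rather than multi-class: a photo of a fridge shelf can legitimately contain eggs, tomatoes, and cheese all at once.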
Challenges we ran into
As newcomers to deep learning and neural networks, we couldn't figure out how to fine-tune the model properly. Insufficient data was also a challenge, as there are few multi-label food-ingredient datasets on the web. We tried working with food-segmentation and grocery-store datasets, but that added complexity to the project.
In the end, the project was left incomplete due to our lack of experience and knowledge in the AI field.
Accomplishments that we're proud of
What we learned
What's next for Cooking Assistant
Along with getting Cooking Assistant to actually work, we plan to move from image inputs to real-time video. Beyond that, mobile optimization and general UI improvements are on the roadmap.