Inspiration
We frequently have leftover ingredients in our fridge, so we decided to fine-tune GPT-3 to recommend recipes that use those ingredients as efficiently as possible.
What it does
Our model accepts a list of ingredients as input and returns a set of instructions for cooking a dish based on them.
How we built it
We first found a public dataset containing a large number of recipes (ingredients and instructions). We then cleaned and processed this data into suitable training examples, and used them to fine-tune the GPT-3 model.
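The data-processing step might look something like the sketch below, which converts (ingredients, instructions) rows into the JSONL prompt/completion format that OpenAI fine-tuning used. The column layout, separator text, and file names are illustrative assumptions, not our exact pipeline.

```python
import json

def recipes_to_jsonl(rows, out_path):
    """Convert (ingredients, instructions) rows into JSONL
    prompt/completion pairs suitable for fine-tuning."""
    with open(out_path, "w") as f:
        for ingredients, instructions in rows:
            example = {
                # Prompt ends with a separator the model learns to complete after
                "prompt": f"Ingredients: {ingredients}\nRecipe:",
                # Completion starts with a space and ends with a stop marker,
                # per the fine-tuning data conventions
                "completion": f" {instructions.strip()} END",
            }
            f.write(json.dumps(example) + "\n")

# Hypothetical example row
rows = [("eggs, flour, milk", "Whisk together and fry as pancakes.")]
recipes_to_jsonl(rows, "train.jsonl")
```

Each line of the resulting file is one training example; the `END` marker lets the fine-tuned model signal where a recipe finishes.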
Challenges we ran into
With no experience in machine learning, it took us time to get out of the starting blocks and understand the fundamentals of NLP. We also accidentally processed our output data in a way we hadn't intended, which degraded results; we fixed this later to get a better set of outputs.
Accomplishments that we're proud of
We got the model to produce plausible recipes that, with a little human inference and deduction, could be followed.
What we learned
We learned the fundamentals of NLP and how to use publicly available datasets to fine-tune pre-existing models like GPT-3.
What's next for Recipe Recommendation
Adjust it for a more niche dataset (e.g. specific cuisines). We could also parse our input better to produce more specific outputs, such as vegan, vegetarian, or non-vegetarian recipes.
Built With
- colab
- gpt
- huggingface
- kaggle
- python