INSPIRATION:
Trying to eat healthy is always a struggle, especially for college students. With most nutrition apps you have to manually enter each food you ate, the serving size, and other details. For someone short on time, like a student or working professional, this is simply too tedious. So we decided to build an application that uses machine learning and artificial intelligence to determine the type of food from an uploaded or captured image.
WHAT HAVE WE CREATED:
Our app lets users take or upload a photo of a food dish. The application then determines the name of the dish using the ML model that we created.
In the long run, our app aims to tackle the problem of not knowing what you’re being served. For example, a user served an unfamiliar dish in a foreign country can use the application to find the dish’s name and then look up whether it contains any possible allergens.
HOW IT WORKS:
We created a model by training it on labeled food photos so that it learns to recognize patterns via machine learning; the app then uses this trained model to identify specific food items. Image recognition involves a neural network that processes the individual pixels of an image to recognize, for example, a dog. In short, the more pictures we train on, and the more augmentations we apply (rotation, blur, color shifts, etc.), the better our model performs.
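Once the model is trained, the in-app inference step can be sketched roughly as follows. This is a minimal illustration, not our exact code: the model name `FoodClassifier` is a placeholder for whatever class Xcode generates from the .mlmodel file.

```swift
import Vision
import CoreML
import UIKit

// Classify a food photo with a Core ML image-classification model.
// "FoodClassifier" is a placeholder name for the generated model class.
func classifyFood(in image: UIImage, completion: @escaping (String?) -> Void) {
    guard let cgImage = image.cgImage,
          let model = try? VNCoreMLModel(
              for: FoodClassifier(configuration: MLModelConfiguration()).model)
    else {
        completion(nil)
        return
    }

    // Vision wraps the Core ML model and handles scaling/cropping the input image.
    let request = VNCoreMLRequest(model: model) { request, _ in
        let best = (request.results as? [VNClassificationObservation])?.first
        completion(best?.identifier)  // dish name; best?.confidence is also available
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```

Vision handles resizing the photo to the model’s expected input, so the app code only deals in `UIImage`s and label strings.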
CHALLENGES:
We faced several challenges:
- Finding a dataset consisting of many different foods took a long time; we eventually found one on Kaggle.
- Properly integrating the ML model into the app using OOP.
- Apple has a very strict privacy policy around camera/photo library access, so we had to figure out a privacy policy and developer signing for camera access.
- Getting the model to use different augmentations so it could reach an accuracy rate above 70%.
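On the privacy point: iOS refuses camera and photo-library access outright unless the app’s Info.plist declares usage-description strings that are shown in the permission prompt. A minimal fragment (the wording of the strings is just an example) looks like:

```xml
<!-- Info.plist: required before iOS will show the permission prompts -->
<key>NSCameraUsageDescription</key>
<string>We use the camera to photograph dishes for identification.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>We read photos you choose so the model can identify the dish.</string>
```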
ACCOMPLISHMENTS:
- A model that identifies 101 food items
- Understanding the convoluted math behind the ML and how it actually processes each image
- Implementing the ML model into our iOS application
WHAT WE LEARNED:
We learned how to use Apple’s Vision framework, Create ML via Apple Developer Tools, and Core ML via Xcode. We also learned app development, as well as how to train a model on photos with many augmentations (blur, upside-down flips, etc.).
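The Create ML training step with augmentations can be sketched like this. The paths are placeholders, and this assumes the Food-101-style layout of one labeled subfolder per dish; treat it as an outline of the workflow rather than our exact script.

```swift
import CreateML
import Foundation

// Train an image classifier from a folder of labeled food photos
// (one subfolder per dish name). Paths below are placeholders.
let trainingDir = URL(fileURLWithPath: "/path/to/food-dataset/train")

// Augmentations stretch the effective dataset: each photo is also seen
// rotated, blurred, flipped, etc., which helps accuracy.
var parameters = MLImageClassifier.ModelParameters()
parameters.augmentationOptions = [.rotation, .blur, .flip, .exposure, .noise]

let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir),
    parameters: parameters
)

// Export the trained model so Xcode can bundle it via Core ML.
try classifier.write(to: URL(fileURLWithPath: "/path/to/FoodClassifier.mlmodel"))
```

The exported .mlmodel is what the app loads through Vision at runtime.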