Inspiration

Our inspiration for ForkCast came from the struggle of tracking nutrition. As teens trying to work out, we saw how hard it was to stay consistent with nutrition tracking. On top of that, two of our members had previously built a different food-related project (link) for a hackathon, so the idea was fresh in our minds. Most apps require manual input, which is tedious and sometimes inaccurate. We thought: what if you could just take a picture and instantly know what you're eating and what's in it? With AI image recognition and real-time nutritional analysis, ForkCast makes that possible.

What it does

ForkCast lets you take a picture of your food and instantly identifies it using a trained AI model. It then uses the Spoonacular API to fetch detailed nutritional data (such as calories, protein, fat, and carbs). By combining the AI model and the API, ForkCast gives you fast, reliable food insights right from your camera.
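The nutrition-lookup half of that flow can be sketched roughly as follows. This is a minimal illustration, assuming Spoonacular's "Guess Nutrition by Dish Name" endpoint; the API key is a placeholder, and the exact response shape should be checked against Spoonacular's docs.

```python
import requests

# Spoonacular's dish-name nutrition endpoint (an assumption; verify against their docs).
SPOONACULAR_URL = "https://api.spoonacular.com/recipes/guessNutrition"

def parse_nutrition(payload):
    """Pull out the four macros ForkCast displays, as human-readable strings."""
    return {
        name: f'{payload[name]["value"]} {payload[name]["unit"]}'
        for name in ("calories", "protein", "fat", "carbs")
    }

def fetch_nutrition(food_name, api_key):
    """Look up estimated nutrition for a dish name (e.g. the AI's top label)."""
    resp = requests.get(
        SPOONACULAR_URL,
        params={"title": food_name, "apiKey": api_key},
        timeout=10,
    )
    resp.raise_for_status()
    return parse_nutrition(resp.json())
```

Keeping the parsing separate from the HTTP call makes the formatting logic easy to test without hitting the API.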

How we built it

We built ForkCast using a Flask backend and a pre-trained ResNet-50 model we found on HuggingFace. It achieves 95% accuracy and is fine-tuned on the Food-101 dataset for image classification, meaning it can identify 101 different foods from images alone. The model's top prediction is then sent to the Spoonacular API, which returns nutritional information and facts. For the frontend, we used HTML, CSS, and JavaScript for a smooth user experience: once a user captures a food image, it is sent to the backend, which returns the relevant information to the frontend. The full app is packaged into a responsive web interface designed for both desktop and mobile use.
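The core backend flow described above can be sketched as a single Flask route. This is a simplified sketch, not the exact implementation: the HuggingFace model id below is a placeholder for the ResNet-50 Food-101 checkpoint, and the model is loaded lazily so the server starts quickly.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
_classifier = None  # loaded on first request, not at startup

def get_classifier():
    """Lazily load the image-classification pipeline (model id is a placeholder)."""
    global _classifier
    if _classifier is None:
        from transformers import pipeline
        _classifier = pipeline("image-classification",
                               model="your-org/resnet-50-food101")
    return _classifier

def format_result(predictions):
    """Keep only the top prediction, in the shape the frontend expects."""
    top = max(predictions, key=lambda p: p["score"])
    return {"food": top["label"], "confidence": round(top["score"], 3)}

@app.route("/classify", methods=["POST"])
def classify():
    # The frontend posts the captured frame as multipart form data.
    from PIL import Image
    image = Image.open(request.files["image"].stream)
    predictions = get_classifier()(image)
    return jsonify(format_result(predictions))
```

Splitting `format_result` out of the route keeps the response shape stable and testable independently of the model.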

Challenges we ran into

To start, we tried training our own AI model using TensorFlow and the Food-101 dataset, but this proved extremely difficult under our time constraints. Building a custom model would have consumed all of day one, and since nothing else could progress without it, we decided it wasn't worth it and switched to a pre-trained model instead. Next, we had trouble linking real-time camera input to accurate food classifications from the AI, but routing the captured images through Flask routes worked flawlessly. Finally, we had some difficulty keeping the UI responsive while managing async API calls and error handling, but we soon tackled that as well.

Accomplishments that we're proud of

The fact that we made a fully functioning web app in approximately a day and a half was a huge accomplishment for us.

What we learned

We learned how to integrate AI with real-time user input in a full-stack application. We explored working with external APIs like Spoonacular, handling image data in JavaScript, and working with deep learning models. We also gained experience designing a clean user interface and debugging async workflows. Finally, we learned how to collaborate on a complex project under time pressure and turn a theme into a working product.

What's next for ForkCast

We plan to expand ForkCast by adding meal-portion detection using size estimation, integrating health tracking (like calories over time), adding voice commands for better accessibility, and supporting more food varieties. We also aim to train our own custom model in the future to improve prediction accuracy and reduce our reliance on external AI models and APIs.
