Inspiration

Food is what keeps us alive, and in this uncertain age we need to know what we're eating. Read a nutrition label and, alongside the common ingredients you hear every day, you'll find long, scientific names the average consumer cannot decipher. Some items that seem innocent on the outside, like corn syrup, can carry severe health repercussions for certain demographics, not to mention the ever-rising presence of food allergens. We needed an all-in-one technological solution to these alarming problems. *We need to know what we eat.*

What it does

The app welcomes you with a login screen and lets you create an account, where you enter your dietary needs (allergies, diets, religious restrictions), your age, and an email address for your account. After creating your account, you are taken to the main scanning screen. From there, you take a picture of an ingredients list and our algorithm gets to work. It first produces a health score from 1 to 100, weighing various factors of the ingredients to determine how healthy the product is, and shows at a glance which ingredients are healthy and which are unhealthy, along with AI-powered tips for substituting the item. You are then led to the main attraction: a full dashboard. There you can view uncommon ingredients, ones that may not have a common name, and learn exactly what each unknown chemical's purpose is. The dashboard then lists every item that conflicts with the dietary needs you entered earlier, as well as items that carry a high risk of potential ailments, and what those ailments are. All of your scans are saved to a central profile, where you can track the macros you have consumed and see, in an at-a-glance view, how healthily you have been eating.
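The data a single scan produces can be sketched as a TypeScript shape. These field names are illustrative, not our actual schema:

```typescript
// Hypothetical sketch of one scan's analysis result; field names
// are illustrative, not the app's real schema.
interface ScanResult {
  healthScore: number;            // 1-100, higher is healthier
  healthyIngredients: string[];   // at-a-glance "good" list
  unhealthyIngredients: string[]; // at-a-glance "bad" list
  substitutionTips: string[];     // AI-generated alternatives
  uncommonIngredients: { name: string; purpose: string }[]; // chemicals explained
  dietaryConflicts: string[];     // clashes with the user's stated needs
  riskWarnings: { ingredient: string; ailment: string }[];  // high-risk items
}

// A user's saved scans feed the central profile view.
interface UserProfile {
  email: string;
  age: number;
  dietaryNeeds: string[];         // allergies, diets, religious restrictions
  scans: ScanResult[];
}

// Example aggregate the profile view might compute (illustrative helper).
function averageHealthScore(profile: UserProfile): number {
  if (profile.scans.length === 0) return 0;
  const total = profile.scans.reduce((sum, s) => sum + s.healthScore, 0);
  return Math.round(total / profile.scans.length);
}
```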

How we built it

The application was built with the SvelteKit framework, which we've used in the past for its simplicity and component-oriented architecture. We used TypeScript throughout the codebase to ensure type safety and reduce runtime errors, enabling faster development and easier refactoring. Combined with SvelteKit's routing capabilities, this stack let us deliver a lightweight yet robust, mobile-friendly experience for real-time OCR processing and ingredient analysis. The OCR itself is handled by the Tesseract engine, and the app is built with Vite and deployed on Vercel.

Challenges we ran into

A main challenge was the unpredictable nature of machine learning. It took many different prompt styles, much engineering, and persistence to get the artificial intelligence to do exactly what we wanted it to do. We had to shift from speaking as if to a human to speaking as if to a small toddler, spelling out exactly what we wanted. It was a challenging experience, and it took many tries to get the model to produce our desired outputs. We had to scrap some features entirely, because the extensive training the AI would have needed was simply not possible within the given time frame, but we learned valuable lessons about how to deal with this emerging technology from now on.

Accomplishments that we're proud of

One of the things we're proud of is getting the app set up as a PWA. The UX is clean on mobile, and it works as a smooth, responsive web application that surfaces all the data about the ingredients. We're also proud of the extensive user-facing features that let users track everything they consume in line with their own lifestyle, including the statistics page and the customizable tracking of various dietary options.
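The core of a PWA setup is a web app manifest plus a service worker. A minimal manifest for an app like this might look as follows; the icon paths and colors here are placeholders, not our actual assets:

```json
{
  "name": "NutriLens",
  "short_name": "NutriLens",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#ffffff",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

With `"display": "standalone"`, the installed app opens without browser chrome, which is what makes it feel like a native mobile app.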

What we learned

Throughout the hackathon, we learned how to efficiently integrate machine learning models for optimal performance, tuning them so they would provide the most accurate responses possible. We also learned how to develop responsive web applications suited to mobile devices, as we had previously worked primarily in desktop environments.

What's next for NutriLens

We hope to further develop our own algorithms, trained on curated data about common additives and allergens present in food. This would substantially boost the accuracy of NutriLens, providing responses tailored to everyday products. Using image recognition, we could also add a feature that analyzes ingredients from simply pointing the camera at any food. Finally, we hope to let users report on products themselves, sharing some of their meals in order to either warn others away from a product or endorse it.

Built With

SvelteKit, TypeScript, Tesseract, Vite, Vercel
