Inspiration

In recent months, a surge of toxic reactions among Australian consumers led to a national recall of spinach products. The incident prompted a wave of over-sensationalised media coverage, but most food recalls never enter the public eye. In fact, only about half of affected products are ever returned to retailers, meaning that one in four Australian households is exposed to potential health and safety hazards.

Once products have been sold to the public, the main challenge is notifying consumers of recalls efficiently so that they can respond quickly and effectively. Though detailed information about recalls and health and safety alerts is widely available through Food Standards Australia, consumers find it difficult to sift through every alert to find the ones that concern products they have actually purchased.

Recallify aims to safeguard both the food supply and consumer health through a streamlined, user-friendly interface that notifies users whenever a health and safety alert is issued for a food product they have purchased. Previously, consumers had to search for relevant recalls manually; with Recallify, users simply take a photo of their receipt after purchase and upload it to our app, and the information comes to them.

We target SDG 3 (Good Health and Well-being) with our data-driven solution, aiming to improve local health systems by making it easier for consumers to check relevant product recalls.

We hope to target other countries in the future and to include other categories of product recalls. Our PWA is an MVP, and we hope to turn it into a native application in future iterations.

It is compatible with both desktop and mobile devices.

What it does

Tailored Recall Notifications

When users upload a photo of a receipt following a purchase, our custom-built OCR uses computer vision to convert the image into text. Receipts are stored for the user's convenience, and we then use natural language processing to determine whether their purchases include items that have been recalled or carry health and safety alerts. The app maintains a database of recalled products, scraped every 5 minutes from multiple sources, so that users are always provided with the most up-to-date information. If a purchased product is identified on the recall list, the user is instantly notified.
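As an illustrative sketch of that matching step (using Python's built-in difflib string similarity in place of our full NLP pipeline; the product names and threshold here are made up), comparing receipt line items against the scraped recall list might look like:

```python
from difflib import SequenceMatcher

# Hypothetical recall list; in the app this is refreshed by the scraper.
RECALLED_PRODUCTS = [
    "baby spinach 120g",
    "frozen berries 1kg",
]

def match_recalls(purchased_items, recalled=RECALLED_PRODUCTS, threshold=0.8):
    """Return (item, recalled_product) pairs whose names are similar enough."""
    hits = []
    for item in purchased_items:
        for product in recalled:
            score = SequenceMatcher(None, item.lower(), product.lower()).ratio()
            if score >= threshold:
                hits.append((item, product))
    return hits
```

Any hit above the threshold would then trigger a push notification to the user.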

The app additionally employs deep learning to analyse a user's purchase history, identify commonly purchased items, and alert them if those products are actively being recalled, even if they forget to record a purchase by scanning the receipt. Furthermore, an integrated recommendation system suggests viable substitutes when a product they're watching has been linked to a recall - we've got you covered on all your regular grocery trips. All of this is hosted on custom, performant cloud infrastructure.

How we built it

After researching the functionality of existing notification platforms, we experimented with the design of our app by creating iterative mockups in Figma. We built an integrated web scraper that collates up-to-date product recall information from the Australian Food Standards website at https://www.foodstandards.gov.au/industry/foodrecalls/recalls/Pages/default.aspx.
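A minimal, standard-library sketch of such a scraper is below. The heuristic that recall pages are linked with "recall" in the href is an assumption made for illustration; the real scraper targets the site's actual listing markup.

```python
from html.parser import HTMLParser
from urllib.request import urlopen

RECALLS_URL = ("https://www.foodstandards.gov.au/industry/foodrecalls/"
               "recalls/Pages/default.aspx")

class RecallLinkParser(HTMLParser):
    """Collects the link text of anchors whose href mentions a recall."""

    def __init__(self):
        super().__init__()
        self._in_recall_link = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        href = dict(attrs).get("href") or ""
        if tag == "a" and "recall" in href.lower():
            self._in_recall_link = True
            self.titles.append("")

    def handle_endtag(self, tag):
        if tag == "a":
            self._in_recall_link = False

    def handle_data(self, data):
        if self._in_recall_link:
            self.titles[-1] += data.strip()

def fetch_recall_titles(url=RECALLS_URL):
    """Download the recall listing page and return the recall titles."""
    with urlopen(url, timeout=30) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = RecallLinkParser()
    parser.feed(html)
    return parser.titles
```

Running `fetch_recall_titles()` on a schedule (every 5 minutes in our case) and diffing the results against the stored database yields the newly issued recalls.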

The frontend Progressive Web App was created using React and TypeScript to provide the user interface, and we used tRPC with Express to create a backend service that processes and stores information.

Two deep learning models were utilised to process the user's input.

Optical character recognition was used to extract text from the uploaded image, using a custom solution we built ourselves. We identified the text regions in the image with bounding boxes and refined these with a four-point transform that corrects any perspective distortion or skew, producing a more accurate representation of each text region. Before running the algorithm, the uploaded image was preprocessed with grayscale conversion and Gaussian blurring to reduce noise and further enhance the text regions.
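A condensed sketch of the deskew step is below; the corner coordinates would come from the bounding-box detection, and the OpenCV calls stand in for our full pipeline (which also includes the grayscale and Gaussian-blur preprocessing). The key idea is ordering the four detected corners consistently before warping:

```python
import numpy as np

def order_points(pts):
    """Order four corners as top-left, top-right, bottom-right, bottom-left,
    so the perspective warp has a consistent destination rectangle."""
    pts = np.asarray(pts, dtype="float32")
    s = pts.sum(axis=1)       # x + y: smallest at top-left, largest at bottom-right
    d = np.diff(pts, axis=1)  # y - x: smallest at top-right, largest at bottom-left
    return np.array([pts[np.argmin(s)], pts[np.argmin(d)],
                     pts[np.argmax(s)], pts[np.argmax(d)]], dtype="float32")

def four_point_transform(image, pts):
    """Warp the quadrilateral `pts` in `image` to a straight-on rectangle."""
    import cv2  # local import so order_points stays usable without OpenCV
    rect = order_points(pts)
    (tl, tr, br, bl) = rect
    width = int(max(np.linalg.norm(br - bl), np.linalg.norm(tr - tl)))
    height = int(max(np.linalg.norm(tr - br), np.linalg.norm(tl - bl)))
    dst = np.array([[0, 0], [width - 1, 0],
                    [width - 1, height - 1], [0, height - 1]], dtype="float32")
    M = cv2.getPerspectiveTransform(rect, dst)
    return cv2.warpPerspective(image, M, (width, height))
```

With the receipt flattened this way, the text regions line up horizontally, which makes the downstream character recognition considerably more reliable.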

An n-gram model was used to vectorise the words in a dictionary and find the closest dictionary word for a given input. Specifically, we trained the model on our own dataset so that it works with the OCR output and identifies the closest match for each item.
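The matching idea can be sketched as a toy character-bigram version, where the padding markers and cosine scoring are illustrative choices rather than our exact trained vectorisation:

```python
from collections import Counter
from math import sqrt

def ngrams(word, n=2):
    """Character n-gram counts, with '#' padding so word edges count too."""
    padded = f"#{word.lower()}#"
    return Counter(padded[i:i + n] for i in range(len(padded) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse n-gram count vectors."""
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def closest_word(noisy, dictionary, n=2):
    """Return the dictionary entry most similar to an OCR-garbled token."""
    target = ngrams(noisy, n)
    return max(dictionary, key=lambda w: cosine(target, ngrams(w, n)))
```

Because the similarity is computed over character fragments rather than whole words, a garbled token such as "spin4ch" still lands on "spinach" even though an exact dictionary lookup would fail.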

Challenges we ran into

A number of challenges arose while extracting data from receipts with OCR: we didn't have enough training data, and the bounding box system we used to identify text regions was not always accurate, so the framing of receipts left room for improvement. To solve this in the future, we plan to improve the bounding box system so that it can extract pricing data from receipts by identifying horizontal contours in the image; with this information, we will be able to map items to their respective prices. More broadly, a more robust bounding box system and mapping algorithm will give us better accuracy and more comprehensive data extraction from uploaded images.

In working with the n-gram model, one significant challenge was that we did not have access to an exhaustive dataset, which limited the accuracy of our predictions. Additionally, the OCR system can introduce noise, which impacts the model's performance. Training deep learning models also proved very time-consuming: although we reduced runtime from 14 minutes to 8 minutes by connecting the GPU to the system, this still represented a significant investment of time and resources. Furthermore, every time new training data was added, it was necessary to retrain the entire model, and though we tried saving and restarting the model, this caused additional issues. We also lacked a validation procedure to measure accuracy for each epoch, so faulty data could slip into our training batches and impact the overall performance of the model.

Accomplishments that we're proud of

Overall, we are proud of our ability to create a high-quality, user-friendly, and effective application that can help people stay informed and safe.

We created a high-fidelity Figma prototype to demonstrate the design and vision of the application. We used React and TypeScript to build a robust, scalable frontend for a seamless user experience, with Chart.js providing visually appealing and informative graphs. Our focus on an intuitive user interface reflects our commitment to delivering a high-quality product, and we used Tailwind and twin.macro for styling to ensure consistency and professionalism. On the server side, tRPC and Prisma let us build a fast, efficient backend that communicates seamlessly with the frontend.

Despite facing challenges with limited training data and noisy OCR inputs, we worked hard to develop and implement an OCR system and n-gram model that perform well.

One of the areas where we made significant progress is optimising the training process for the n-gram model. By connecting our GPU to the system, we sped up training from 14 minutes to 8 minutes per run. While we still encounter issues with retraining the model each time new data is added, we are exploring ways to overcome this challenge and improve the model's overall performance.

Additionally, we are proud of the efforts that we have made to refine our approach and improve the accuracy of our models. For example, we have worked on developing custom bounding boxes to help frame the dockets and identify areas of interest, and we have used computer vision algorithms to locate contours and extract key data.

Overall, while we know that there is still more work to be done, we are proud of the progress that we have made so far and are committed to continuing to refine and improve our models as we move forward.

What we learned

We learned the importance of staying informed about product recalls and how they can impact consumers. By researching and monitoring recall data, we gained insights into the types of products that are most commonly recalled and the reasons why they are recalled. This helped us to identify key areas of focus for our app and to develop a robust and effective notification system.

Additionally, we learned about the challenges involved in creating a notification system that is both accurate and timely. We had to develop a system that could quickly identify and verify product recalls, and then deliver notifications to users in a way that was convenient and easy to understand. We also had to consider the potential for false alarms or outdated information, and develop protocols to ensure that our notifications were as reliable as possible.

Finally, we learned about the value of providing consumers with information and resources that can help them make informed purchasing decisions. By creating an app that makes it easy for consumers to stay informed about product recalls, we are empowering them to take control of their own safety and well-being. This can help to build trust and loyalty among consumers, and also promote greater transparency and accountability among manufacturers and retailers.

We were also completely new to video editing, so learning Adobe Premiere Pro was an additional accomplishment.

What's next for Recallify

Going forward, we will continue to build upon our existing work by refining and improving our models, particularly the OCR and n-gram models we have developed. We could collect additional training data, refine our algorithms, and test our models under different conditions to improve their accuracy and effectiveness.

Another possibility is to expand the scope of our project by incorporating new features or functionalities into our app. For example, we could work on developing a more comprehensive database of product information, including details on product safety, ingredients, and manufacturing practices. We could also explore ways to integrate our app with other platforms or services to create a more seamless user experience.

Finally, we could work on developing partnerships and collaborations with other organizations or stakeholders who share our goals and objectives. By working together with other groups, we could leverage their expertise, resources, and networks to achieve our goals more efficiently and effectively.

We would love to expand beyond helping local systems and provide recall information for other countries, as well as for other types of product recalls beyond food. Our MVP shows just how feasible and scalable that goal is. We'd also love to build a native mobile app: we currently offer PWAs, which users can install on both their desktops and mobiles, or simply view on the web.
