Inspiration
UC Irvine students lead busy lives and accumulate great memories, along with a lot of stuff! As we outgrow old clothes or simply want to share with the community, it can be time-consuming to check whether items are in good enough condition to donate, and even more tiring to contact donation centers within a reasonable distance. In the end, these clothes, canned foods, and random objects pile up or, worse, go to waste. To simplify the process of checking whether an item can be donated (and hopefully encourage sharing in the community!), this platform provides a convenient way to snap an image, confirm the condition of an item, and then locate appropriate donation centers. By giving students an accessible resource, we hope to encourage donating gently used items and non-perishable foods, giving back to the community, helping each other, and even decluttering!
What it does
Snap2Give is your all-in-one platform for giving your belongings a sustainable second life. Anyone with old clothes, non-perishable foods, or gently used appliances can upload an existing image or take a live picture of their object on our website. Our computer-vision AI model classifies the image and returns a corresponding list of charities. Each charity's name, website link, and distance from UC Irvine is displayed in our running list of charities. Additionally, an unfiltered charity tab supports general donations and miscellaneous items.
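The classification step can be sketched roughly as follows. This is a minimal illustration of how the model's free-text label could be bucketed into the four donation categories the site supports; the keyword lists and function names are our own assumptions, not the production code.

```python
# Sketch: bucket a free-text label from the vision model into one of the
# four donation categories Snap2Give supports. The keyword sets below are
# illustrative assumptions, not the exact ones used in production.
CATEGORY_KEYWORDS = {
    "clothing": {"shirt", "pants", "jacket", "dress", "shoes", "sweater"},
    "food": {"can", "canned", "pasta", "rice", "cereal", "soup"},
    "appliance": {"toaster", "microwave", "blender", "kettle", "lamp"},
}

def bucket_label(label: str) -> str:
    """Map a model label like 'denim jacket' to a donation category."""
    words = set(label.lower().split())
    for category, keywords in CATEGORY_KEYWORDS.items():
        if words & keywords:
            return category
    return "miscellaneous"  # falls through to the unfiltered charity tab
```

Anything the keyword sets do not recognize lands in "miscellaneous," which maps onto the unfiltered charity tab described above.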
How we built it
Our backend developers first explored the Google Cloud Vision API, as the initial idea focused purely on image classification. However, we decided to challenge ourselves by identifying multiple categories associated with each image, so we went further and ultimately used the Gemini LLM to classify items as clothing, food, appliances, or miscellaneous. We also used the Yelp API's Business Search to access a large database of donation centers in Irvine. This API provides a wealth of data, including approximate distance to each charity, hours of operation, reviews, and links to its Yelp page. We then filtered this data by approximate distance and by category names like "food bank," "clothing," and "thrift" to recommend charities that are relevant to the user, depending on the donation object the AI identifies. For the frontend, our developers had to learn HTML and CSS as well as React and JavaScript in just a few hours. On top of these, we implemented Chakra, a user-interface component system for building high-quality web apps and design systems. Lastly, everyone learned to connect frontend and backend through deployment with Vercel.
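The distance-and-category filtering step described above can be sketched like this. The input dicts mimic the shape of Yelp Business Search results (where `distance` is reported in meters and each business carries a list of category aliases); the alias sets and the 10 km cutoff are illustrative assumptions, not our exact configuration.

```python
# Sketch of the charity-filtering step. Inputs mimic Yelp Business Search
# results: `distance` is in meters, `categories` is a list of
# {"alias", "title"} entries. Alias sets and cutoff are assumptions.
DONATION_ALIASES = {
    "clothing": {"thriftstores", "donationcenter", "charity"},
    "food": {"foodbanks", "donationcenter", "charity"},
    "appliance": {"thriftstores", "donationcenter", "charity"},
    "miscellaneous": {"thriftstores", "foodbanks", "donationcenter", "charity"},
}

def filter_charities(businesses, category, max_distance_m=10_000):
    """Keep nearby businesses whose Yelp categories match the item type."""
    wanted = DONATION_ALIASES[category]
    matches = [
        b for b in businesses
        if b["distance"] <= max_distance_m
        and wanted & {c["alias"] for c in b["categories"]}
    ]
    # Closest charities first, so the most convenient drop-off leads the list.
    return sorted(matches, key=lambda b: b["distance"])
```

Sorting by distance keeps the recommendation list ordered the same way it is displayed on the site, nearest charity first.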
Challenges we ran into
Our first challenge was picking a project that was impactful yet feasible within the given time constraint of roughly 12 hours. Next, we had trouble delegating tasks, since everyone wanted to do backend development. We split the team into backend and frontend to tackle obstacles: Jisha and Aayushi worked on backend while Ira and Lily worked on frontend. Our backend developers first ran into trouble with AI, initially attempting to incorporate OpenAI's platform but hitting improper classifications and inaccurate "True"/"False" answers when deciding whether an item could be donated. We then turned to Gemini, which provided more accurate classifications and was more efficient to incorporate. We were also new to connecting frontend to backend, particularly feeding the large amount of gathered data into the web page's UI, but we figured out how to pass the data along as dictionary entries. On the frontend side, we had trouble displaying the items: sometimes we could not change backgrounds and colors, and we could not get elements to lay out horizontally even after trying everything and repeatedly asking our mentor for help. Importing the camera component from a separate file was also challenging, since we were unfamiliar with its source.
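The "dictionary entries" handoff between backend and frontend can be sketched as a function that packages the classification result and charity matches into one JSON-ready dict for the React UI to iterate over. The key names and the meters-to-miles conversion here are our own assumptions about the payload shape, not the exact production format.

```python
# Sketch: assemble the dictionary the backend returns to the React
# frontend after classifying an image and filtering charities.
# Key names are illustrative assumptions about the payload shape.
def build_response(category, charities):
    """Serialize results as the JSON-ready dict the UI iterates over."""
    return {
        "category": category,
        "charities": [
            {
                "name": c["name"],
                "url": c["url"],
                # Yelp reports distance in meters; the UI shows miles.
                "distance_miles": round(c["distance"] / 1609.34, 1),
            }
            for c in charities
        ],
    }
```

Keeping the payload to a single flat dictionary let the frontend render each charity card with a simple loop over `charities`.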
Accomplishments that we're proud of
We’re so proud to have participated in our first hackathon! Though we come from different majors, backgrounds, and experiences, it was amazing to collaborate with such creative and intelligent people on a project that incorporates computer vision, AI, web development, and API data integration.
Aayushi: I’m proud of challenging myself to participate in a hackathon that allowed me to see the real-world applications of programming and learn while building a thoughtful project. I started my CS journey 7 months ago when I took an online Intro to Python class (ICS 31 equivalent) at a community college and had to essentially teach myself. The thought process and logic were starkly different from what I am used to as a Biological Sciences major, but over time I have come to enjoy the problem-solving, interactive work, and satisfying product creation of computer science. Prior to this, I had only coded small projects for classes, so I am so happy to have programmed the backend of this project while learning about AI integration, APIs, and UI alongside three brilliant teammates and an incredible mentor who took the time to explain each step while pushing us to meet our ambitious goals (shout-out to Kevin!)
Jisha: I am happy to have seen a different perspective on coding by actually examining the practical applications of my learning. Before, LLMs and computer vision were foreign concepts; through this project, I gained some exposure and will definitely delve deeper into this space to create more applications by taking advantage of open-source, cutting-edge technology. Specifically, this project sparked my research into Google's Gemini generative models and the Cloud Vision API. Additionally, in an adaptive team setting, I am proud of the constructive and effective collaboration that all of us engaged in, culminating in an overall enriching and productive experience!
Lily: I am so proud to have coded for 10 hours straight, in addition to 2 hours invested in ideation! It felt a lot longer and was sometimes tiring, especially when integrating the webcam into the platform. It highlighted both the frustrating and rewarding sides of finishing a coding project in 12 hours. I did some design today, which I am really proud of because it is not one of my strengths. I was able to create a file-upload component, implement a React webcam on our website, and build a dropdown menu.
Ira: I am astonished at how fast I managed to learn CSS, HTML, and JavaScript in just twelve hours. This hackathon was a steep learning curve, and I am so glad to have programmed a fully functional frontend website. I can now say I am competent at frontend coding!
What we learned
As a group of four, we learned to persevere beyond the imaginable and think beyond the unthinkable. We challenged ourselves to expand our project's impact in every way possible, whether through additional classification types, an AI implementation, or a user interface built on good graphic-design practices. We also learned that starting at level zero is not a setback, but rather an opportunity to grow! In the end, Snap2Give was the product of twelve hours of collaboration, a lot of fruit snacks, some crazy learning curves, teamwork, and a great deal of fun.
What's next for Snap2Give
Another element we want to include is a Depop-style marketplace, where people can post an image of an item to donate (likely clothing, home goods, etc.) and others can browse these listings and purchase items directly. With this addition, users would have the option to either drop off their items at donation centers or post them online in hopes of selling them. With more time, we would also improve our UI graphics and design a landing page that encapsulates our goal for the platform and how we hope it can help students and the larger Irvine community alike.