Inspiration
On community apps like Nextdoor, there is only a slight chance your lost item ever turns up in your feed. We wanted to make searching for a lost item easy.
What it does
Flick n Find is a community-wide lost-and-found platform powered by AI, making it easy to report, search, and reconnect lost items with their owners. Say goodbye to cluttered feeds and hello to effortless recovery!
How we built it
We used Streamlit to build the web front end entirely in Python. We used various CV models to generate text descriptions of uploaded images, then applied NLP techniques, sentence encodings and cosine similarity, to semantically match search queries against those CV-generated descriptions. For the "database," we used Pandas DataFrames, converting them to and from CSV files for persistence.
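The matching step above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the vectors are toy values standing in for real sentence encodings, and the function and column names (`cosine_similarity`, `search`, `description`, `embedding`) are hypothetical.

```python
import math

import pandas as pd

# "Database" of found-item posts as a DataFrame (persisted to CSV in the
# project). The embedding column holds toy vectors here; in practice they
# would come from a sentence-encoding model run on the CV descriptions.
items = pd.DataFrame({
    "description": ["black leather wallet", "blue water bottle"],
    "embedding": [[0.9, 0.1, 0.8], [0.0, 1.0, 0.0]],
})

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, df):
    """Rank items by semantic similarity to the query, best match first."""
    scores = df["embedding"].apply(lambda v: cosine_similarity(query_vec, v))
    return df.assign(score=scores).sort_values("score", ascending=False)

# A query encoding close to the wallet's embedding ranks the wallet first.
best = search([1.0, 0.0, 1.0], items).iloc[0]
print(best["description"])  # → black leather wallet
```

The same idea scales to a real encoder: embed every stored description once, embed each incoming query, and sort by cosine score.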
Challenges we ran into
We faced challenges with library incompatibilities and our unfamiliarity with the data structures and CV/NLP models involved, compounded by a lack of planning and organization. We also struggled to decide which features to implement and to find a suitable dataset for logo detection.
Accomplishments that we're proud of
We are really proud of using CV to describe images and NLP to build a more advanced search algorithm.
What we learned
We learned the importance of using more accurate models and improving location tracking for better results. We also realized the need for clear user features, such as allowing item postings and notifying searchers about sightings.
What's next for Flick n Find
We aim to use better models, track the location of photos, and allow users to post lost items. Additionally, we plan to notify searchers when similar objects are sighted and introduce an incentive structure for uploads.