Inspiration
Our inspiration was simple. We wanted to leverage the newest technologies to make life easier for those who, arguably, have it the hardest.
What it does
SpotiFind uses object recognition and depth mapping to guide users through everyday life with audio cues.
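The write-up doesn't show how detections become audio cues, but the core idea can be sketched as follows. This is a minimal illustration, not the project's actual code: the function names, frame width, and direction thresholds are our own assumptions.

```python
def direction_from_bbox(x_center: float, frame_width: int) -> str:
    """Map a detection's horizontal center to a coarse direction.

    Splits the frame into thirds: left, ahead, right.
    (Thresholds are illustrative, not SpotiFind's real values.)
    """
    ratio = x_center / frame_width
    if ratio < 1 / 3:
        return "left"
    if ratio > 2 / 3:
        return "right"
    return "ahead"


def audio_cue(label: str, x_center: float, depth_m: float,
              frame_width: int = 640) -> str:
    """Combine an object label, its position, and its depth into a
    short phrase suitable for text-to-speech."""
    direction = direction_from_bbox(x_center, frame_width)
    return f"{label} {depth_m:.1f} meters {direction}"
```

For example, a chair detected near the left edge of a 640-pixel frame at 1.8 m of depth would yield the cue "chair 1.8 meters left", which a text-to-speech engine could then speak aloud.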
Challenges we ran into
By far the biggest challenge was long testing cycles: multiple model pipelines had to be loaded onto the machine for every run. This forced us to use smaller models, at some cost in accuracy.
Accomplishments that we're proud of
We managed to implement a lot of features in a short amount of time:
- Obstacle avoidance
- Support for over 100 languages
- Local, on-device AI
- GenAI for natural language input
- Solana integration to boost community engagement
What we learned
We learned that this technology is not far from running on mobile devices: we ran it on an Intel i5 CPU!
We also dove head-first into the world of blockchain with Solana.
What's next for SpotiFind
We solved a lot of problems, but there are still areas for growth.
Adding a dedicated GPU would allow for better speech recognition and truly real-time directions.