Inspiration

Music is multidimensional, yet every streaming platform flattens it into a one-dimensional feed. We asked whether you could see your own taste ... not as a playlist, not as a genre label, but as a living constellation of sound - navigable, explorable, and most importantly yours. Soundgaze is our answer to the black box of algorithmic recommendation: a spatial music intelligence platform that transforms how humans relate to their listening identity.

What it does

Soundgaze maps your Spotify listening history into a fully interactive 3D universe of song embeddings, where sonic similarity becomes physical proximity. Users can fly through their personal music cosmos, land on any node to preview tracks and surface AI-powered cultural context, or drift through sonic neighborhoods in real time while music plays in the background. Every point is a song. Every cluster is a mood. Every journey is uniquely yours.

How we built it

We engineered a full ML pipeline that encodes 28,000+ songs as 8-dimensional audio feature vectors stored in an Actian VectorAI database - enabling sub-millisecond cosine similarity search at scale. We then applied UMAP dimensionality reduction with quantile normalization to project that high-dimensional sonic space into a clean, navigable 3D point cloud. The backend runs on FastAPI with live Spotify OAuth, ReccoBeats for audio feature extraction, and Gemini AI for real-time cultural song descriptions. The frontend is a custom Three.js renderer with fluid, real-time 3D interaction.
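The core of that similarity search can be sketched in plain NumPy. The real pipeline queries Actian VectorAI for sub-millisecond lookups; this minimal sketch only illustrates the same cosine-similarity math, with random vectors standing in for the actual 8-dimensional audio features (the feature count and library are from the writeup, everything else here is hypothetical):

```python
import numpy as np

# Hypothetical stand-ins for the 28,000+ real 8-d audio feature vectors;
# in production these live in an Actian VectorAI database.
features = np.random.default_rng(1).random((28_000, 8)).astype(np.float32)

# Pre-normalize to unit length so cosine similarity reduces to a dot product.
unit = features / np.linalg.norm(features, axis=1, keepdims=True)

def nearest(query_idx: int, k: int = 5) -> np.ndarray:
    """Return indices of the k songs most cosine-similar to the query song."""
    sims = unit @ unit[query_idx]           # cosine similarity vs. every song
    order = np.argsort(-sims)               # most similar first
    return order[order != query_idx][:k]    # drop the query itself
```

A vector database performs the same ranking with approximate-nearest-neighbor indexing rather than this brute-force scan, which is what makes it fast at scale.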

Challenges we ran into

Spotify deprecated core audio feature endpoints mid-build, forcing us to swap in alternative APIs and debug live integration issues under time pressure. Achieving perceptually meaningful clustering in 3D was another headache: raw UMAP projections collapsed into dense, unreadable regions. We solved this with quantile normalization, which redistributes point density evenly across space without sacrificing topological integrity.

Accomplishments that we're proud of

We built something that has never existed before: a spatial, embodied interface for musical self-discovery. The 3D point cloud isn't just beautiful, it's actually semantically meaningful. Clusters emerge organically from audio DNA, not editorial curation. Pairing that with live previews, AI cultural context, and autonomous sonic walks transforms passive listening into active, spatial exploration. This feels like a new paradigm for how humans interact with music.

What we learned

We mastered the full ML stack — from raw feature engineering with Pandas and Scikit-learn, through UMAP topology preservation, to real-time vector search at scale. We learned the deep tradeoffs between topological integrity and visual clarity when compressing high-dimensional music data into 3D space, and discovered that the most powerful visualizations aren't the most mathematically precise - they're the ones that feel most humanly insightful.

What's next for Soundgaze

Soundgaze is a launchpad, not a finished product. We plan to track how a user's musical identity shifts over time, to enable social exploration so you can drift through a friend's musical universe, and to scale the infrastructure to support millions of songs and users. The music cosmos is just getting started.
