Inspiration
Musicians love to talk about "tone color" – but the rest of us can't see it like they do. So, we wanted to visualize the world for someone who can experience music in multiple senses. This is the future of music – multisensory audio perception that will allow anyone to relish a beautiful piece.
What it does
The project uses Blender's Eevee render engine and a Python script that takes a .wav file as input. The script processes the amplitude and frequency of the sound wave encoded in the file, and that data drives an animation whose colors change as the music develops: the points change color in response to changes in frequency and grow or shrink with the volume.
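As a minimal sketch of that mapping, suppose frequency drives hue and amplitude drives scale (the exact mapping in our scripts differs; the ranges and formula below are illustrative assumptions):

```python
import colorsys

def frame_appearance(freq_hz, amplitude, freq_range=(20.0, 2000.0)):
    """Map a frame's dominant frequency to an RGB color and its amplitude to a scale.

    Hypothetical mapping for illustration: low pitches come out red, high
    pitches approach violet; louder frames yield larger points.
    """
    lo, hi = freq_range
    # Clamp and normalize frequency into [0, 1], then use it as a hue
    # (capped at 0.8 so the hue wheel doesn't wrap back to red).
    hue = max(0.0, min(1.0, (freq_hz - lo) / (hi - lo))) * 0.8
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    # Scale grows linearly with amplitude (assumed normalized to [0, 1]).
    scale = 0.5 + 1.5 * amplitude
    return (r, g, b), scale
```

A silent low note, `frame_appearance(20.0, 0.0)`, gives pure red at the minimum scale, while a loud high note grows the point and shifts it toward violet.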
How we built it
We started with the back end, using the Librosa library to load audio. We then built a custom amplitude-analysis system and applied a Fast Fourier Transform, along with several other algorithms, to analyze pitch. Finally, we synced the data with our animation system, Blender, and rendered everything through Eevee.
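The core of the pitch-analysis step can be illustrated with a plain NumPy FFT. Here a synthetic 440 Hz sine stands in for a frame of the loaded .wav (the project itself loads real audio with Librosa), and RMS level stands in for the amplitude analysis:

```python
import numpy as np

# One second of a 440 Hz sine as a stand-in for a frame of .wav audio.
sr = 22050                              # sample rate in Hz
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)

# Amplitude analysis: root-mean-square level of the frame.
rms = np.sqrt(np.mean(tone ** 2))

# Pitch analysis: FFT magnitude spectrum, then pick the peak bin.
spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(len(tone), d=1.0 / sr)
dominant_hz = freqs[np.argmax(spectrum)]
print(round(dominant_hz))               # → 440
```

With a one-second window the FFT bins land exactly on whole hertz, so the peak bin recovers the 440 Hz tone; shorter analysis frames trade frequency resolution for time resolution.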
Challenges we ran into
This project wasn't all smooth sailing. First, we had to implement the file-loading system, for which we initially used a different library; we found Librosa when we were on the verge of giving up and managed to make it work. After that, the FFT calculations took several rounds of corrections, and some errors caused crashes. Blender didn't autosave, so we had to rewrite the code three times! Eventually, though, we wrangled all of the systems into compliance.
Accomplishments that we're proud of
Our tenacity was unknown to us until we needed it. The culmination of this project seemed to be far away, but it was just shrouded in some sort of Cimmerian fog; we kept pushing through and managed to finish. Our technical skills were stretched to the maximum!
What we learned
We learned how to load and process audio with algorithmic methods in Python, as well as how to integrate with Blender and pass data into it. Finding classical and modern pieces to run tests on was also a blast!
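One simple way to hand analysis results to a Blender-side script, and the kind of bridge we mean by passing data, is a per-frame JSON dump that Blender reads back. The file name and record layout here are hypothetical, not our exact format:

```python
import json

# Hypothetical per-frame analysis results: (dominant frequency in Hz,
# normalized amplitude) for three consecutive animation frames.
frames = [(261.6, 0.40), (329.6, 0.75), (392.0, 0.90)]

# Serialize for the Blender-side script, which can `json.load` this file
# and keyframe object color and scale from it.
payload = [{"frame": i, "freq_hz": f, "amp": a} for i, (f, a) in enumerate(frames)]
with open("analysis.json", "w") as fh:
    json.dump(payload, fh)

# The Blender script would read it back like so:
with open("analysis.json") as fh:
    restored = json.load(fh)
print(restored[1]["freq_hz"])           # → 329.6
```

Going through a plain file keeps the analysis scripts independent of Blender's bundled Python, which is useful when external libraries can't be installed inside Blender.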
What's next for Synesthesia
We have many things we would like to do next with Synesthesia. The first is optimization. Currently, we are limited to about forty objects and music files under roughly six minutes; any more, and the time to generate the animation becomes enormous. In the future, it would be cool to generate the animation in real time.
Another goal would be packaging. Currently, our project exists as three Python scripts, because we were unable to install external libraries inside Blender. Blender offers a fairly straightforward way to convert scripts into add-ons, which would make them more user-friendly and easier to install, and that is something we would like to do.
Integrating Synesthesia into AR may be far-fetched, but it is our primary long-term goal for this project.