Overview
Introduction
Audio Visualizer is a lighting system consisting of several modular panels. It takes an aux input from a laptop or phone and generates lighting animations based on the music. It is meant for anyone who wants to perceive music from another perspective, or as home decoration that creates a matching ambiance while music plays.
The system works as follows: when the aux input is plugged in, the main board processes the signal in real time, analyzes the musical pattern of the input frequencies, and outputs an animation that is displayed across all the panels.
Baseline Goals
- Seven panels that communicate with the mbed and vary their output based on a hardcoded/pre-programmed configuration.
- Panel lighting determined by light-sensor feedback plus melodic analysis
- Synchronized lighting and musical output (within human perception)
Reach Goals
- Animation based on 3D configuration of the panels
- More complex melodic analysis (will lead to more complex animation)
- Dynamic lighting animation based on the actual panel layout rather than hardcoded configurations
Alpha Prototype
For the first version of the panel, we used laser-cut smoked acrylic for the top cover and MDF for the bottom parts. We separated the panel into two layers: the top layer holds the LED strip, and the bottom layer contains the ATTINY45 with input and output connectors on a breadboard. On the software side, we designed our own communication protocol between the mbed and the ATTINY45s.
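The writeup does not document the actual frame format of our mbed-to-ATTINY45 link, so the sketch below is only a hypothetical illustration of how such a per-panel protocol could be framed: a start byte, a panel ID, three color bytes, and an XOR checksum. The `0xAA` marker and the six-byte layout are assumptions, not the project's real wire format.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical frame layout for the mbed -> ATTINY45 link:
 * [0xAA start][panel id][R][G][B][XOR checksum]. */
#define FRAME_LEN 6

static void encode_frame(uint8_t panel_id, uint8_t r, uint8_t g, uint8_t b,
                         uint8_t out[FRAME_LEN]) {
    out[0] = 0xAA;                               /* start-of-frame marker */
    out[1] = panel_id;
    out[2] = r;
    out[3] = g;
    out[4] = b;
    out[5] = out[1] ^ out[2] ^ out[3] ^ out[4];  /* simple XOR checksum */
}

/* Receiver-side check: returns 1 if the frame is well formed. */
static int frame_valid(const uint8_t f[FRAME_LEN]) {
    return f[0] == 0xAA && f[5] == (uint8_t)(f[1] ^ f[2] ^ f[3] ^ f[4]);
}
```

With a scheme like this, each ATTINY45 can ignore frames addressed to other panels and drop any frame whose checksum fails, which keeps a shared bus simple.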
After assembling it, we realized that we could still see through the smoked layer, and the light was not properly diffused across the panel. It was also hard to secure the 4-pin connectors on the breadboard. We therefore decided to change the layout.
Baseline Demo
For the baseline demo, we changed the top cover of each panel to raster-etched clear acrylic and duplicated our panel module six more times. We managed to get the lighting system working in sync with the song 'Sail'. After getting feedback from Professor Rahul and the TAs, we revised our reach goals:
Revised Goals
Instead of focusing on more modules, we were advised to focus on the quality of the animation: our music processing needs to make people actually feel the music.
Beta Prototype
This is the second version of our system, where we mainly focused on music processing. We built a pre-processing stage that separates the signal into four frequency channels by passing it through four band-pass filters. We also realized that the mbed cannot process fast enough to analyze the entire range of frequencies perceivable by humans, so we needed to sample the signal in hardware. We therefore added an op-amp integrator after each channel and reset it every 20 ms to control the output. On the software side, we rewrote the melodic analysis algorithm to take all four analog inputs into consideration for the LED outputs.
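The per-window mapping from the four integrator readings to a color can be sketched as a pure function. This is a minimal illustration, not our exact mapping: it assumes the four channel energies have already been read from the ADC and normalized to 0..1 at the end of each 20 ms window, and it assigns bass to red, the two mid bands to green, and treble to blue.

```c
#include <assert.h>
#include <stdint.h>

typedef struct { uint8_t r, g, b; } Rgb;

/* Clamp a normalized 0..1 energy reading to a 0..255 channel value. */
static uint8_t to_byte(float x) {
    if (x < 0.0f) x = 0.0f;
    if (x > 1.0f) x = 1.0f;
    return (uint8_t)(x * 255.0f + 0.5f);
}

/* Map the four band energies of one 20 ms window to one color.
 * bass -> red, the two mid bands -> green, treble -> blue.
 * The project's actual mapping may differ. */
static Rgb color_from_bands(float bass, float low_mid,
                            float high_mid, float treble) {
    Rgb c;
    c.r = to_byte(bass);
    c.g = to_byte(0.5f * (low_mid + high_mid));
    c.b = to_byte(treble);
    return c;
}
```

Because the integrators are reset every 20 ms, each reading is the accumulated energy of its band over exactly one window, so a function like this runs once per window per panel.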
Reach Demo
During the reach demo, two of our modules were broken, so we only had five panels for the presentation. Still, we managed to have the five panels display the song 'Hand Clap' and precisely catch the claps within the song.
Conclusion
Accomplishments that we're proud of
We developed our own communication protocol between two different microcontrollers. We also modularized the system so that only two connectors need to be hooked up to add a panel to the whole configuration.
What we learned
Hardware:
- Design op-amp integrator to sample analog signal in hardware
- Configure LED with LPD8806
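The LPD8806 strip expects 7-bit color data with the top bit set, sent in G,R,B order, followed by zero bytes (one per 32 LEDs) to latch the frame. A minimal frame encoder for that format looks like the sketch below; the helper names are ours, not from any driver library.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Scale an 8-bit brightness to the LPD8806's 7-bit format with the
 * marker bit set. */
static uint8_t lpd8806_byte(uint8_t v) {
    return 0x80 | (v >> 1);
}

/* Fill `out` with the full frame for n LEDs; returns bytes written.
 * `rgb` holds 3 bytes per LED in R,G,B order; `out` must hold
 * 3*n + (n+31)/32 bytes. */
static size_t lpd8806_frame(const uint8_t *rgb, size_t n, uint8_t *out) {
    size_t i, k = 0;
    for (i = 0; i < n; i++) {
        out[k++] = lpd8806_byte(rgb[3*i + 1]);   /* G first on the wire */
        out[k++] = lpd8806_byte(rgb[3*i + 0]);   /* then R */
        out[k++] = lpd8806_byte(rgb[3*i + 2]);   /* then B */
    }
    for (i = 0; i < (n + 31) / 32; i++)
        out[k++] = 0x00;                         /* latch bytes */
    return k;
}
```

Since the LPD8806 is clocked rather than timing-critical, an ATTINY45 can shift these bytes out with ordinary bit-banged SPI.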
Software:
- Design communication protocol between mbed and ATTINY45
- Configure LPD8806 with ATTINYs
- Melodic Analysis with DFT
What's next for Audio Visualizer
- Improve the mechanical structure so that light diffuses properly within the top layer, creating a better visual effect and hiding the individual LEDs from view.
- Make the animation more robust and able to work with multiple songs. After talking to Professor Rahul, we realized it is more important to animate an entire collection of songs well than to animate just one song really well.