Inspiration
Markov chains are fascinating because their transition matrix can either be specified by hand or learned from data. In a first-order Markov chain, the next state depends only on the current state, whereas in a second-order Markov chain the next state depends on both the current state and the one before it.
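As a minimal sketch of the first-order case, here is a hand-specified transition matrix over a few illustrative note states (the states and probabilities are made up for the example, not taken from the app):

```javascript
// Each row of the transition matrix gives P(next state | current state).
const transitions = {
  C: { C: 0.2, E: 0.5, G: 0.3 },
  E: { C: 0.4, E: 0.2, G: 0.4 },
  G: { C: 0.6, E: 0.3, G: 0.1 },
};

// Sample the next state given only the current one (first-order property).
function nextState(current, rand = Math.random()) {
  let cumulative = 0;
  let last;
  for (const [state, p] of Object.entries(transitions[current])) {
    cumulative += p;
    last = state;
    if (rand < cumulative) return state;
  }
  return last; // guard against floating-point drift in the row sum
}
```

A second-order chain would instead key the rows on a pair of states, e.g. `"C,E"`, so the lookup carries one extra step of history.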
What it does
I construct Markov chains from training data. The challenging part was learning an n-th order Markov chain from the input. Using the TWINKLE_TWINKLE input data from the Magenta example, I build an n-th order chain by constructing an identity matrix over the chain's states and multiplying the transition matrix of the first-order chain with the chain created from that identity matrix.
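The first-order learning step can be sketched as counting transitions in the training melody and normalizing each row; the function name and the short melody below are illustrative stand-ins for the actual TWINKLE_TWINKLE data:

```javascript
// Learn a first-order transition matrix from a note sequence by
// counting each (from, to) transition and normalizing every row.
function learnTransitions(notes) {
  const counts = {};
  for (let i = 0; i < notes.length - 1; i++) {
    const from = notes[i];
    const to = notes[i + 1];
    counts[from] = counts[from] || {};
    counts[from][to] = (counts[from][to] || 0) + 1;
  }
  // Normalize so each row's probabilities sum to 1.
  const matrix = {};
  for (const [from, row] of Object.entries(counts)) {
    const total = Object.values(row).reduce((a, b) => a + b, 0);
    matrix[from] = {};
    for (const [to, c] of Object.entries(row)) matrix[from][to] = c / total;
  }
  return matrix;
}

// Example: MIDI pitches for the opening of "Twinkle, Twinkle".
const matrix = learnTransitions([60, 60, 67, 67, 69, 69, 67]);
```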
How I built it
For the first state, there is no previous state to condition on, so a first-order Markov chain has nothing but the current information to work with; the probability here is simply the probability that a given note will be played. Resources on the internet helped me understand how to construct the chains from the Twinkle Twinkle training data.
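That initial note probability can be sketched as a plain frequency distribution over the training notes (the function name is my own, not from the project):

```javascript
// With no previous state to condition on, start the chain from the
// unconditional frequency of each note in the training data.
function initialDistribution(notes) {
  const counts = {};
  for (const n of notes) counts[n] = (counts[n] || 0) + 1;
  const dist = {};
  for (const [n, c] of Object.entries(counts)) dist[n] = c / notes.length;
  return dist;
}

const dist = initialDistribution([60, 60, 67, 67]);
```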
Challenges I ran into
Making sure all the chain orders were working smoothly!
Accomplishments that I'm proud of
That I let the user change the number of partials used for additive synthesis! This makes the interface more interactive and lets them make fun music.
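Additive synthesis just sums sine partials at integer multiples of a fundamental frequency. Here is a minimal sketch that computes one raw sample; the 1/n amplitude rolloff is an assumption for illustration, and the actual app drives Web Audio oscillator nodes rather than computing samples by hand:

```javascript
// Sum `numPartials` harmonic sine waves at time t (seconds).
// Partial n runs at n times the fundamental with amplitude 1/n.
function additiveSample(t, fundamental, numPartials) {
  let sample = 0;
  for (let n = 1; n <= numPartials; n++) {
    sample += Math.sin(2 * Math.PI * fundamental * n * t) / n;
  }
  return sample;
}
```

Exposing `numPartials` as a user control is what makes the timbre interactive: more partials give a brighter, buzzier tone.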
What I used
Web Audio API, JavaScript, HTML, CSS
Built With
- html
- javascript
- webaudio
- webaudioapi