What it does

BeatFinder uses AWS Lambda and open-source libraries such as ffmpeg to extract the audio from a user-selected portion of a YouTube video and identify its background music via audio fingerprinting.

The project consists of a front-end, which parses the URL input and lets users select the start and end times for their clip, and a back-end, which receives that data and processes the audio. A mix of home-grown scripts and publicly available libraries carries the analysis from start to finish.
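The download-and-trim step described above can be sketched roughly as follows, using pytube and pydub (both listed under "What we learned"). The function names and the mm:ss timestamp format are illustrative assumptions, not the project's actual code; pydub relies on ffmpeg for decoding.

```python
def to_ms(timestamp: str) -> int:
    """Convert an "mm:ss" timestamp into milliseconds."""
    minutes, seconds = timestamp.split(":")
    return (int(minutes) * 60 + int(seconds)) * 1000

def extract_clip(url: str, start: str, end: str, out_path: str = "clip.mp3") -> str:
    """Download a video's audio stream and trim it to [start, end].

    Illustrative sketch only; imports are kept local so the helper
    above stays dependency-free.
    """
    from pytube import YouTube
    from pydub import AudioSegment

    stream = YouTube(url).streams.filter(only_audio=True).first()
    audio_file = stream.download(filename="full_audio")
    audio = AudioSegment.from_file(audio_file)  # ffmpeg does the decoding
    audio[to_ms(start):to_ms(end)].export(out_path, format="mp3")
    return out_path
```

The trimmed clip is then handed to the fingerprinting stage on the back-end.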

How we built it

Each core function of the program and website was developed in parallel by the team members:

Grant Gonzalez — General research on web development.

Dineshchandar Ravichandran — Separation of foreground speech from background music using Python.

Owen Sullivan — Front-end in HTML, CSS, and JavaScript, as well as domain administration.

Nikhil Suresh — Separation of audio from YouTube video using Python, as well as Lambda function setup.

Zachary Ikpefua — Audio fingerprinting and identification of background music using Python.

Challenges we ran into

-- Could not extract human speech from the audio within the given time.
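One route to the speech/music split is spleeter, which is listed under "What we learned". The sketch below shows how its 2stems model separates a clip into vocals and accompaniment; the helper names and output layout assumptions are illustrative, not the team's actual code.

```python
from pathlib import Path

def stem_paths(audio_path: str, out_dir: str = "stems") -> dict:
    """Spleeter's 2stems model writes vocals.wav and accompaniment.wav
    into a folder named after the input file (assumed layout)."""
    name = Path(audio_path).stem
    return {stem: str(Path(out_dir) / name / f"{stem}.wav")
            for stem in ("vocals", "accompaniment")}

def split_vocals(audio_path: str, out_dir: str = "stems") -> dict:
    """Separate a clip into vocal and accompaniment stems.

    The spleeter import is local: it pulls in TensorFlow and downloads
    pretrained model weights on first use.
    """
    from spleeter.separator import Separator

    Separator("spleeter:2stems").separate_to_file(audio_path, out_dir)
    return stem_paths(audio_path, out_dir)
```

The accompaniment stem could then be fingerprinted on its own, reducing interference from speech.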

Accomplishments that we're proud of

--Created a Python script that calls the AcoustID and MusicBrainz APIs to both fingerprint and identify songs.
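A minimal sketch of that fingerprint-and-identify step, assuming the pyacoustid library (which wraps Chromaprint fingerprinting and the AcoustID web-service lookup, returning MusicBrainz metadata). The function names and the placeholder API key are illustrative; the team's actual script may differ.

```python
def best_match(results):
    """Pick the highest-score result from (score, title, artist) tuples."""
    return max(results, key=lambda r: r[0], default=None)

def identify(path: str, api_key: str = "YOUR_ACOUSTID_KEY"):
    """Fingerprint an audio file and look it up against AcoustID.

    Requires a (free) AcoustID application key; the import is local so
    best_match() above stays dependency-free.
    """
    import acoustid

    # acoustid.match yields (score, recording_id, title, artist) tuples
    results = [(score, title, artist)
               for score, rid, title, artist in acoustid.match(api_key, path)]
    return best_match(results)
```

The best match's title and artist are what the front-end ultimately displays.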

What we learned

-APIs
-Python packages: pytube, pydub, and spleeter
-AWS Lambda, API Gateway, and Step Functions
-AJAX
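The Lambda/API Gateway pieces above come together in a handler like the following minimal sketch: API Gateway delivers the front-end's request as an event with a JSON body, and the handler validates the inputs before running the pipeline. The field names and response shape are assumptions for illustration, not the project's actual contract.

```python
import json

def handler(event, context):
    """Minimal shape of a back-end Lambda entry point behind API Gateway.

    Illustrative only: the real handler would also run the download,
    trim, and fingerprint steps before responding.
    """
    body = json.loads(event.get("body") or "{}")
    url, start, end = body.get("url"), body.get("start"), body.get("end")
    if not all((url, start, end)):
        return {"statusCode": 400,
                "body": json.dumps({"error": "url, start, and end are required"})}
    # ... download clip, fingerprint, look up song metadata ...
    return {"statusCode": 200,
            "body": json.dumps({"received": {"url": url, "start": start, "end": end}})}
```

Longer pipelines can be broken into several such functions chained with Step Functions, which the list above also mentions.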

What's next for BeatFinder

-Add a vocal-splitter capability
-Support sources beyond YouTube (lectures, media suggestions, etc.)
