Inspiration
Manually scrubbing through a long video lecture to find a specific piece of information can be tedious and frustrating.
What it does
Basically, you can "google" anything in the video and get the relevant timeframes for it. For example, you can look up a phrase you remember the professor saying and get a link to exactly that moment.
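Jumping to an exact moment relies on YouTube's standard `t=` URL parameter. A minimal sketch of building such a link (the helper name and example inputs are illustrative, not from our codebase):

```python
def timestamped_link(video_id: str, seconds: float) -> str:
    """Build a YouTube link that opens the video at the given time.

    YouTube accepts an integer t= parameter measured in seconds.
    """
    return f"https://www.youtube.com/watch?v={video_id}&t={int(seconds)}s"

# Example: a match found 83.4 seconds into a lecture.
print(timestamped_link("dQw4w9WgXcQ", 83.4))
# → https://www.youtube.com/watch?v=dQw4w9WgXcQ&t=83s
```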
How we built it
We used TensorFlow libraries for sentence embedding and speech-to-text models. After fetching the transcript of a YouTube video, we create embeddings for both the transcript segments and the user's query. We then use cosine similarity to find the transcript phrases most similar to the query and display those matches.
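The matching step above can be sketched in a few lines. Here toy 3-dimensional vectors stand in for the real sentence-encoder output, and the segment timestamps are made up for illustration; only the cosine-similarity ranking is the actual technique described:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def best_matches(query_vec, segments, top_k=2):
    """Rank transcript segments by similarity to the query embedding.

    segments: list of (start_time_seconds, embedding_vector) pairs.
    Returns the start times of the top_k most similar segments.
    """
    scored = [(cosine_similarity(query_vec, vec), t) for t, vec in segments]
    scored.sort(reverse=True)
    return [t for _, t in scored[:top_k]]

# Toy embeddings standing in for real sentence-encoder output.
segments = [
    (0.0,  [1.0, 0.0, 0.0]),
    (42.0, [0.9, 0.1, 0.0]),
    (90.0, [0.0, 1.0, 0.0]),
]
print(best_matches([1.0, 0.05, 0.0], segments))  # → [0.0, 42.0]
```

With a real encoder, the same ranking runs over every transcript segment, and the returned timestamps become the video links shown to the user.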
Challenges we ran into
Our biggest challenge was getting a front-end up and running quickly for the demo; Streamlit came in very handy for that.
Accomplishments that we're proud of
We are proud of the idea, since it is unique and genuinely useful in real-life scenarios, and of having it fully working for the demo.
What we learned
Start earlier.
What's next for Video Lecture Assistant
Possible extensions include a question-answering feature that answers a user's questions from the transcript, and a question-generation feature that creates questions from the transcript to help users prepare for exams.