Inspiration
The transition from traditional in-person school to online school as a result of the COVID-19 pandemic has been turbulent for many students. Students had to adapt quickly to the new learning environment while navigating pre-existing problems within the classroom, such as multitasking between listening and note-taking, knowledge gaps, and relevance. noteAI tackles these pre-existing and exacerbated challenges by enhancing the online learning environment to help students achieve their personal best.
What it does
noteAI provides the following services:
Live audio and live image transcription. noteAI will transcribe slide information and lecture audio simultaneously so students can focus on listening to the lecture rather than reading slides to take notes. This encourages active listening among students as opposed to being distracted by the slides.
Automated Wikipedia Linking. Oftentimes, professors skim over pieces of knowledge that are essential to the cumulative learning process. noteAI automatically links relevant Wikipedia articles to the information being taught during lectures, following links up to three topics deep, to fill any knowledge gaps not addressed in the lecture itself.
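The "three topics deep" lookup can be pictured as a depth-limited breadth-first crawl over article links. The sketch below is illustrative only: `fetch_links` stands in for a real call to the Wikipedia API, and the toy topic table replaces actual article data.

```python
from collections import deque

def linked_topics(fetch_links, seed, max_depth=3):
    """Collect (topic, linked_topic) edges by following article links
    up to `max_depth` hops from a lecture keyword.
    `fetch_links(title)` should return the titles an article links to
    (in noteAI this information comes from the Wikipedia APIs)."""
    seen = {seed}
    frontier = deque([(seed, 0)])
    edges = []
    while frontier:
        topic, depth = frontier.popleft()
        if depth == max_depth:
            continue  # stop expanding past the depth limit
        for linked in fetch_links(topic):
            edges.append((topic, linked))
            if linked not in seen:
                seen.add(linked)
                frontier.append((linked, depth + 1))
    return edges

# Toy stand-in for real Wikipedia link data (hypothetical topics).
toy_links = {"Derivative": ["Limit", "Slope"], "Limit": ["Function"]}
edges = linked_topics(lambda t: toy_links.get(t, []), "Derivative", max_depth=2)
```

Tracking visited titles in `seen` keeps the crawl from looping on articles that link back to each other, which real Wikipedia pages frequently do.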
Automated Knowledge Maps. Students often ask themselves, "how does this relate to anything we're learning?" Through mind maps, new topics taught in class are visually related to old knowledge, creating a network of potentially related topics. This makes it easier for students to examine topics they aren't familiar with, which helps the overall cumulative learning process.
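One way to relate a new topic back to old knowledge, sketched in plain Python (the function and topic names here are hypothetical, not noteAI's actual code): treat the linked-topic pairs as an undirected graph and pick out the neighbors of the new topic that the student has already covered.

```python
def bridge_topics(known, edges, new_topic):
    """Return previously covered topics that connect directly to the
    new topic in the network of related topics."""
    neighbors = set()
    for a, b in edges:
        if a == new_topic:
            neighbors.add(b)
        elif b == new_topic:
            neighbors.add(a)
    return sorted(neighbors & set(known))

# Hypothetical example: "Integral" is new; "Derivative" and "Limit" are old.
edges = [("Integral", "Derivative"), ("Integral", "Area"), ("Limit", "Integral")]
print(bridge_topics({"Derivative", "Limit"}, edges, "Integral"))
# -> ['Derivative', 'Limit']
```

These bridge topics are exactly the nodes a mind map would draw edges to, anchoring the unfamiliar material to what the student already knows.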
How we built it
noteAI is built using Python, AssemblyAI, OpenCV, and HTML. Wikipedia links are gathered by querying Wikipedia's APIs. AssemblyAI's live transcription produces the lecture text, which is then parsed to extract subject-specific keywords. Computer vision and optical object detection then compare slides of the lecture material to look for important formulas and keywords. Finally, HTML combines the equations, resources, and the topic map for the lecture into a user-friendly experience for the student.
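As a rough illustration of the slide-comparison step, here is a minimal frame-differencing check written in plain NumPy. This is a stand-in, not noteAI's actual OpenCV code, and the thresholds are made-up defaults.

```python
import numpy as np

def slide_changed(prev_frame, frame, pixel_thresh=25, frac_thresh=0.02):
    """Flag a slide transition when enough pixels differ between two
    RGB frames (a simplified stand-in for the OpenCV comparison)."""
    prev_gray = prev_frame.mean(axis=2)  # crude grayscale conversion
    cur_gray = frame.mean(axis=2)
    changed_frac = np.mean(np.abs(cur_gray - prev_gray) > pixel_thresh)
    return changed_frac > frac_thresh

blank = np.zeros((720, 1280, 3))
white = np.full((720, 1280, 3), 255.0)
print(slide_changed(blank, white))  # True: every pixel changed
print(slide_changed(blank, blank))  # False: nothing changed
```

Detecting a new slide this way means the expensive OCR and keyword extraction only needs to run once per slide rather than on every video frame.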
Challenges we ran into
Our project has several major features that need to work together in a seamless application. This was difficult because we use several machine learning APIs, computer vision, and various other open-source libraries. There was also a learning curve, since many of us were working with tools like AssemblyAI and OpenCV for the first time, so gaining the necessary knowledge as we developed introduced many uncertainties and technical difficulties. Some problems also occurred during the mind map creation stage of noteAI; most notably, finding a mapping system that best suited our needs while still maintaining visual appeal was a struggle.
Accomplishments that we're proud of
We are proud of using optical object detection to distinguish the different kinds of content present on each slide. We were also able to parse what the speaker is saying and create an in-depth topic map using the Wikipedia API and Plotly, allowing the student to develop a broad understanding of the topic.
What we learned
We learned how to "domino" each step of noteAI so that each service executes sequentially. We also learned how to use the Wikipedia API, AssemblyAI, and Tesseract. Finally, we learned how to move data from an HTML file into a mind map graphic generated with Plotly and igraph.
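The "domino" sequencing amounts to chaining each stage's output into the next stage's input. A simplified sketch is below; the stage functions are placeholders standing in for noteAI's real AssemblyAI, parsing, and Wikipedia-linking steps.

```python
def transcribe(audio):
    # Placeholder for the AssemblyAI live-transcription stage.
    return "the derivative measures instantaneous rate of change"

def extract_keywords(transcript):
    # Placeholder for keyword parsing; the real parsing is subject-aware.
    return [w for w in transcript.split() if len(w) > 7]

def link_topics(keywords):
    # Placeholder for the Wikipedia-linking stage.
    return {k: f"https://en.wikipedia.org/wiki/{k.title()}" for k in keywords}

def run_pipeline(audio):
    """Each stage 'dominoes' into the next:
    transcript -> keywords -> Wikipedia links."""
    transcript = transcribe(audio)
    keywords = extract_keywords(transcript)
    links = link_topics(keywords)
    return {"transcript": transcript, "keywords": keywords, "links": links}

result = run_pipeline(audio=None)
```

Because every stage consumes only the previous stage's output, each one can be developed and tested in isolation before the whole chain runs end to end.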
What's next for noteAI
In the future, we hope to implement noteAI in multiple languages to help students worldwide. We would also like to implement adjustable mind map depth; students would benefit from being able to adjust the depth depending on how well they feel they understand the lecture material. Finally, noteAI would be most impactful if it were compatible across varying learning environments, whether in person (using hardware) or virtual, so students can benefit from its services regardless of where they learn.

