Inspiration

The idea for LockedIn emerged from the challenge of measuring engagement in real time across settings such as online learning, virtual meetings, and in-person presentations. With the increasing shift to digital interaction, assessing and enhancing engagement through computer vision and machine learning felt like an impactful area to explore. Our aim was to build a tool that helps educators, team leaders, and individuals better understand and improve participation and focus.
What it does

LockedIn uses computer vision and machine learning to detect and classify a person's engagement level in real time. It categorizes individuals as engaged, partially engaged, or disengaged by analyzing visual cues such as facial expressions, head position, and gaze direction. The tool supports multiple use cases, including monitoring student attentiveness in virtual classrooms, evaluating engagement in meetings, and enhancing user experiences in interactive applications.
How we built it

We combined OpenCV for real-time video frame processing with a pre-trained neural network model to analyze visual data. The system captures frames via webcam (from either the frontend or the backend) and processes them through an engagement analysis API. The user interface is built with React and TailwindCSS, while the backend uses Python with Flask or FastAPI for API management. We fine-tuned the ML model on engagement-labeled datasets to improve accuracy across the different engagement levels.
Challenges we ran into

- Dataset limitations: labeled datasets tailored specifically for engagement detection were hard to find, so we had to preprocess and augment the datasets that were available.
- Real-time performance: optimizing the system for real-time inference without compromising accuracy was a technical hurdle, especially when managing webcam frame rates and backend processing times.
- Defining engagement metrics: settling on objective criteria for the "engaged," "partially engaged," and "disengaged" categories took iterative testing and feedback.
- Cross-platform compatibility: ensuring the system works smoothly across devices and environments required significant testing and debugging.

Accomplishments that we're proud of

- Integrating real-time engagement detection with minimal latency.
- Achieving a high degree of accuracy in classifying engagement levels with our machine learning model.
- Building a seamless, user-friendly interface that makes the system accessible to non-technical users.
- Creating a flexible architecture that supports use cases beyond our initial scope.

What we learned

- The importance of real-time optimization in computer vision and machine learning applications.
- Effective ways to preprocess data and overcome dataset limitations by creating synthetic variations and combining multiple sources.
- How to balance UI/UX design with technical complexity to deliver a smooth user experience.
- The value of collaboration and iterative feedback in refining both the model and the user interface.

What's next for LockedIn

- Enhanced feature set: adding emotion recognition and sentiment analysis to complement engagement detection.
- Scalability: training the model on larger, more diverse datasets to improve robustness across demographics and contexts.
- Platform integrations: embedding LockedIn into tools like Zoom, Microsoft Teams, and LMS platforms for seamless use in virtual environments.
- Edge deployment: exploring deployment on edge devices to reduce latency and preserve privacy by processing data locally.
- Real-world testing: collaborating with educational institutions, businesses, and developers to refine the system through real-world use and feedback.
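One common way to handle the real-time performance challenge noted under Challenges is frame skipping: run the model on every Nth frame only and reuse the most recent label in between. The helper below is a hypothetical sketch of that pattern (the class name and the stride value are assumptions for illustration, not LockedIn's actual implementation).

```python
# Frame-skipping sketch for real-time inference: run the expensive model
# only every `stride` frames and reuse the last label in between. The
# class name and stride value are illustrative, not LockedIn's code.
class ThrottledClassifier:
    def __init__(self, classify, stride=5):
        self.classify = classify   # expensive callable: frame -> label
        self.stride = stride       # run inference once every `stride` frames
        self._count = 0
        self._last = "disengaged"  # default before the first inference

    def update(self, frame):
        """Return a label for every frame while paying for inference rarely."""
        if self._count % self.stride == 0:
            self._last = self.classify(frame)
        self._count += 1
        return self._last
```

At 30 fps with a stride of 5, the model runs only six times per second while the UI still receives a label for every frame, which trades a small amount of label freshness for a large reduction in backend load.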