Inspiration

We noticed that many methods of assessing students' skills and understanding still rely on take-home assignments and projects. With the rise of AI-assisted cheating, those methods have become unreliable. In-person and oral testing remain better options, but they create another problem: educators don't have the time or resources to administer an individual oral test to each and every student. This is where EchoLabs comes in.

What it does

EchoLabs is an AI-powered oral assessment platform. It lets educators conduct oral assessments at scale by automating tasks like evaluation, scoring, and feedback for spoken responses.

How we built it

EchoLabs is built primarily in TypeScript, using Vite and Node. We wanted a modern, scalable codebase that would let us leverage frameworks and libraries suited to fast-paced web development and AI integration.

Challenges we ran into

  • Real-time audio and latency: Managing browser audio input, sending it to the server/AI, and returning responses quickly enough to feel like a natural conversation was tricky. We had to tune buffering and decide when to start and stop recording.
  • Prompt design for AI grading: Getting consistent and fair feedback from the AI meant iterating on prompts and carefully structuring the data we send.
  • Getting Google Authentication to work correctly: Setting up secure login with Google OAuth was more complicated than expected. We had to deal with API credentials, redirect URIs, token verification, and handling errors when authentication failed.
  • Auto-filling the student’s name securely: We wanted the student’s name to come directly from their Google account so nobody could log in as someone else. Ensuring that the name was correctly pulled from the authenticated profile and that it couldn’t be modified took careful validation and testing.
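The buffering trade-off in the first bullet can be sketched as a small piece of pure logic: queue audio chunks and flush them to the server either when enough audio has accumulated or when a gap between chunks suggests the speaker paused. The class name and thresholds here are illustrative assumptions, not our exact implementation.

```typescript
type Chunk = { bytes: number; timestampMs: number };

class ChunkBuffer {
  private chunks: Chunk[] = [];
  private bufferedBytes = 0;

  constructor(
    private readonly maxBytes = 32_000,  // flush once roughly 2s of 16 kbps audio is queued
    private readonly silenceGapMs = 700, // flush early when a pause is detected
  ) {}

  /** Add a chunk; returns the chunks to send if a flush is due, else null. */
  push(chunk: Chunk): Chunk[] | null {
    const last = this.chunks[this.chunks.length - 1];
    const pauseDetected =
      last !== undefined && chunk.timestampMs - last.timestampMs >= this.silenceGapMs;

    this.chunks.push(chunk);
    this.bufferedBytes += chunk.bytes;

    if (this.bufferedBytes >= this.maxBytes || pauseDetected) {
      const out = this.chunks;
      this.chunks = [];
      this.bufferedBytes = 0;
      return out;
    }
    return null;
  }
}
```

Lower thresholds make the conversation feel snappier but send more, smaller requests; higher ones batch better but add perceived latency.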
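The "can't be modified" guarantee in the last bullet comes down to only ever reading the name from the server-side-verified ID token, never from the client. A hedged sketch of that check, assuming the token has already been verified (e.g. with google-auth-library's `OAuth2Client.verifyIdToken`): the claim names follow Google's ID-token payload, but the helper name is hypothetical.

```typescript
interface IdTokenPayload {
  aud: string;             // audience: must match our OAuth client ID
  exp: number;             // expiry, seconds since epoch
  email_verified?: boolean;
  name?: string;
}

function studentNameFromPayload(
  payload: IdTokenPayload,
  expectedClientId: string,
  nowSeconds: number,
): string | null {
  if (payload.aud !== expectedClientId) return null; // token minted for another app
  if (payload.exp <= nowSeconds) return null;        // expired token
  if (!payload.email_verified) return null;          // unverified Google account
  const name = payload.name?.trim();
  return name ? name : null;                         // never accept a blank name
}
```

Returning `null` rather than a fallback forces the join flow to fail closed instead of letting a student type an arbitrary name.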

Accomplishments that we're proud of

  • Built an end-to-end flow where a teacher can:
    1. Enter a module/topic
    2. Generate an access code
    3. Have students join and complete an oral test
    4. View the conversation history and metrics afterward
  • Successfully tracked and displayed useful analytics like total speaking time, number of pauses, and session duration.
  • Created a clean, simple UI that works for both teachers and students.
  • Integrated AI so it not only chats with the student but also summarizes performance and highlights areas for improvement.
  • Laid the foundation for an assessment tool that can scale to many classes and subjects.
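Step 2 of the flow above (generating an access code) can be sketched in a few lines. This is an illustrative scheme, not EchoLabs' exact one: we assume a short code drawn from an unambiguous alphabet (no 0/O or 1/I), using Node's crypto for randomness so codes can't be predicted.

```typescript
import { randomInt } from "node:crypto";

const ALPHABET = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"; // 32 chars, no lookalikes

function generateAccessCode(length = 6): string {
  let code = "";
  for (let i = 0; i < length; i++) {
    code += ALPHABET[randomInt(ALPHABET.length)];
  }
  return code;
}
```

Six characters over a 32-symbol alphabet gives about a billion possible codes, which is plenty when codes are also short-lived and tied to one session.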
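The analytics bullet (speaking time, pauses, session duration) can all be derived from timestamped speech segments. A minimal sketch, where the types and the 500 ms pause threshold are assumptions rather than our shipped values:

```typescript
interface Segment { startMs: number; endMs: number }

interface SessionMetrics {
  speakingMs: number; // total time spent speaking
  pauseCount: number; // gaps between segments longer than the threshold
  durationMs: number; // first speech to last speech
}

function computeMetrics(segments: Segment[], pauseThresholdMs = 500): SessionMetrics {
  if (segments.length === 0) return { speakingMs: 0, pauseCount: 0, durationMs: 0 };

  let speakingMs = 0;
  let pauseCount = 0;
  for (let i = 0; i < segments.length; i++) {
    speakingMs += segments[i].endMs - segments[i].startMs;
    if (i > 0 && segments[i].startMs - segments[i - 1].endMs >= pauseThresholdMs) {
      pauseCount++;
    }
  }
  return {
    speakingMs,
    pauseCount,
    durationMs: segments[segments.length - 1].endMs - segments[0].startMs,
  };
}
```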

What we learned

  • How to work with audio in the browser, including permissions, streaming, and dealing with noisy input.
  • The importance of prompt engineering and data formatting when using AI for grading and feedback.
  • How to design around academic integrity: using codes, limiting access, and thinking about how students might try to game the system.
  • Better team skills: dividing tasks between front-end, back-end, and AI integration, and coordinating under time pressure.
  • That even small pieces of analytics (like pauses and timing) can give teachers a lot of insight into a student’s confidence and fluency.

What's next for EchoLabs

Some next steps for EchoLabs:

  • [ ] Expanding AI evaluation capabilities to improve grading accuracy
  • [ ] Enhancing the user interface and experience, especially reducing latency
  • [ ] Supporting more languages and assessment formats
  • [ ] Integrating with learning management systems
  • [ ] Gathering user feedback to iterate further
  • [ ] Adding a central database for user sessions and data
  • [ ] Adding camera recording for extra assurance that there's no cheating