🚀 Inspiration

  • Online learning often feels passive and one-sided.
  • We wanted something more like a real conversation with a tutor.
  • Goal: use AI + voice to make learning personal, dynamic, and engaging.

🎙️ What it does

  • AI-powered LMS where you learn through voice conversations.
  • Build your own AI Companions (mentors) for any subject.
  • Customize:

    • Name 🏷️
    • Subject / Topic 📚
    • Teaching style (formal / casual) 🎩
    • Voice 🎤
    • Launch sessions, ask questions, and get spoken answers in real time.

  • Feels like one-on-one tutoring without the awkwardness.


🛠️ How we built it

  • Frontend: Next.js 15 (App Router) + TypeScript + Tailwind CSS + Radix UI.
  • Backend: Supabase (PostgreSQL) + Next.js Server Actions.
  • Auth: Clerk (secure email/Google login).
  • AI & Voice:

    • Google Gemini 2.5 Flash-Lite (conversational AI).
    • Web Speech API (speech recognition + synthesis).
  • All integrated into a single, seamless full-stack app.
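The core loop is simple: take the learner's transcribed question, wrap it in a prompt built from the Companion's settings, and send it to Gemini. A minimal sketch of that flow is below; `GeminiClient` is a hypothetical interface standing in for the real `@google/generative-ai` SDK call, and the field names are illustrative:

```typescript
// Hypothetical stand-in for the Gemini SDK: one prompt in, one reply out.
interface GeminiClient {
  generateContent(prompt: string): Promise<string>;
}

// Shape of a user-built Companion (illustrative field names).
interface Companion {
  name: string;
  subject: string;
  style: "formal" | "casual";
}

// Build a persona prompt from the Companion's settings, then ask the model.
// Replies are kept concise so they read well when spoken aloud.
async function askCompanion(
  client: GeminiClient,
  companion: Companion,
  question: string
): Promise<string> {
  const prompt =
    `You are ${companion.name}, a ${companion.style} tutor for ${companion.subject}. ` +
    `Answer concisely so the reply works well when read aloud.\n\n` +
    `Student: ${question}`;
  return client.generateContent(prompt);
}
```

In the app this logic lives behind a Next.js Server Action, so the API key never reaches the browser.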


⚡ Challenges

  • Syncing speech recognition + synthesis (don’t talk over the user!).
  • Inconsistent browser support for the Web Speech API.
  • Free-tier Supabase projects pause after inactivity → solved with a scheduled GitHub Actions ping.
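The keep-alive fix is a small scheduled workflow that hits the database before the free tier's inactivity window elapses. A sketch of what it looks like (the `companions` table name and the exact cron cadence are illustrative; `SUPABASE_URL` and `SUPABASE_ANON_KEY` are repository secrets):

```yaml
# Hypothetical keep-alive workflow: issue a trivial read every few days so the
# free-tier Supabase project is never marked inactive and paused.
name: supabase-keep-alive
on:
  schedule:
    - cron: "0 6 */3 * *"   # every 3 days at 06:00 UTC
  workflow_dispatch:          # allow manual runs too
jobs:
  ping:
    runs-on: ubuntu-latest
    steps:
      - name: Ping Supabase REST endpoint
        run: |
          curl --fail --silent \
            -H "apikey: ${{ secrets.SUPABASE_ANON_KEY }}" \
            "${{ secrets.SUPABASE_URL }}/rest/v1/companions?select=id&limit=1"
```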

🏆 Accomplishments

  • Built a complete end-to-end voice-first platform.
  • Smooth, real-time conversational flow 👏.
  • Companion Builder: highly customizable AI tutors.
  • Clever automation fix for database inactivity.

📚 What we learned

  • Deep dive into Next.js App Router + Server Actions.
  • Hands-on with Supabase, Clerk, Gemini integration.
  • Design principles for voice-first UIs: timing, state, feedback.
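The timing/state lesson boils down to one rule: the microphone must never be live while the tutor is speaking. A tiny turn-taking state machine captures it; the state and event names here are illustrative, not our exact implementation:

```typescript
// Who holds the conversational turn at any moment.
type TurnState = "idle" | "listening" | "speaking";

// Events fired by the session UI and the Web Speech API callbacks.
type TurnEvent = "session_start" | "user_done" | "tts_start" | "tts_end";

// Pure transition function: given the current state and an event,
// decide who holds the turn next.
function nextState(state: TurnState, event: TurnEvent): TurnState {
  switch (event) {
    case "session_start":
      return "listening"; // open the mic when the session begins
    case "user_done":
      return "idle";      // stop listening while the AI composes a reply
    case "tts_start":
      return "speaking";  // never listen while synthesis is playing
    case "tts_end":
      return "listening"; // hand the turn back to the learner
  }
}

// In the browser, "listening" maps to SpeechRecognition.start(),
// everything else to SpeechRecognition.stop().
const micShouldBeOn = (s: TurnState): boolean => s === "listening";
```

Keeping the transition logic pure made the don't-talk-over-the-user behavior easy to reason about and test, independent of the browser's flaky speech events.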

🔮 What’s next

  • 🌍 Multi-language support.
  • 📊 Smarter feedback on learning progress.
  • 👥 Multiplayer / group study sessions.
  • 🧠 Improved conversational memory for long sessions.
