About Our Project: The Emotionally Intelligent AI Study Buddy

What Inspired Us

We were inspired by a problem we saw in many modern study tools: they focus entirely on academic content but ignore the student's emotional state. Students frequently experience burnout, anxiety, and frustration while studying, which leads to disengagement.

Our inspiration was to build a different kind of tool—a learning companion that introduces emotional intelligence into the study loop. We wanted to create an AI study buddy that could detect a student's real-time mood and adapt its support accordingly. The goal was to mitigate academic stress, boost focus, and make learning a less isolating and more positive experience.

How We Built It

We built this project as a full-stack web application, splitting the architecture into a Python backend and a React frontend.

  • Backend: We chose FastAPI as our Python framework. Its high-performance, asynchronous capabilities were perfect for handling concurrent API calls, managing chat I/O, and calling external AI models without blocking. This backend serves all our API endpoints, including /api/chat, /api/notes/upload, and /api/notes/summary.
  • Frontend: The client is a React application built with Tailwind CSS for rapid and responsive UI development.
  • AI & Emotion Detection: This is the core of our project. We used the Google Gemini API for generating intelligent, context-aware chat responses and summaries. For emotion detection, we integrated a pre-trained DistilBERT model from Hugging Face, which analyzes the user's text input in real time to classify their mood.
  • Database & Auth: We used SQLModel with SQLite to define our database schemas and manage user data, sessions, and message history with strong data validation. Authentication is handled securely through Google OAuth.

Our development was structured in three phases:

  1. Phase 1: Foundation: Building secure Google OAuth sign-in and the API token system.
  2. Phase 2: Core AI Loop: Implementing the chat UI, session management, and the mood detection API.
  3. Phase 3: Polish & Innovation: Adding high-impact features like mood visualization, persona selection, and empathetic response logic.

What We Learned

This hackathon was a massive learning experience. Our biggest takeaway was learning how to build a complete, end-to-end AI application from scratch.

A major part of this was learning FastAPI. We had to quickly get up to speed with its asynchronous programming model, dependency injection system, and how to use it to build a robust API that could serve a React frontend and communicate with multiple AI services at once.

We also learned how to strategically integrate different AI models. Instead of just using a single generative AI, we learned to chain models together: using a lightweight Hugging Face model for a specific task (emotion classification) and then feeding that emotional context to a more powerful model (Gemini) to shape its response.
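
The chaining step can be sketched in plain Python. The classifier below is a stub standing in for the Hugging Face model, and the prompt wording is our own illustration rather than the exact prompt the app sends to Gemini:

```python
def classify_emotion(text: str) -> str:
    """Stub for the DistilBERT emotion classifier; the real version
    runs a Hugging Face text-classification model on the message."""
    frustration_words = {"stuck", "frustrated", "tired", "confused"}
    words = text.lower().split()
    return "frustration" if any(w.strip(".,!?") in frustration_words for w in words) else "neutral"

def build_gemini_prompt(user_message: str) -> str:
    """Embed the detected mood so the generative model can adapt its tone."""
    mood = classify_emotion(user_message)
    return (
        f"The student currently seems to feel: {mood}.\n"
        f"Respond empathetically and helpfully to: {user_message}"
    )
```

The lightweight model runs first and cheaply; only its one-word label travels into the expensive generative call, which keeps the pipeline fast.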

Challenges We Faced

We ran into several challenges that tested our problem-solving skills:

  • Learning Curve & Environment Setup: As mentioned, learning FastAPI on the fly was a significant challenge, compounded by configuration issues. We spent a lot of time getting the project running locally on all our machines, which meant writing a detailed SETUP.md and troubleshooting common issues like dependency mismatches and environment variable (.env) loading.
  • Port Problems: A frequent and frustrating issue was getting the frontend (running on http://localhost:5173) to talk to the backend (running on http://localhost:8001) without port conflicts or CORS errors.
  • Making it Inclusive: A core goal was to make the app inclusive, which presented its own technical challenges. We implemented multi-language support (now available in 8 languages) which required a system for managing translation strings. We also aimed to add audio (text-to-speech) for accessibility, which was a challenge to integrate cleanly with the real-time chat interface.
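
The CORS side of the port problem ultimately came down to a few lines of FastAPI middleware configuration. The settings below mirror the ports mentioned above but are a sketch, not necessarily our exact config:

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Allow the Vite dev server (port 5173) to call the backend on port 8001.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:5173"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```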
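For the translation strings, one common scheme (a sketch of the approach, not necessarily our exact implementation) is a per-language dictionary with an English fallback:

```python
# Hypothetical string table for illustration; the real app covers 8 languages.
STRINGS = {
    "en": {"greeting": "Hi! Ready to study?"},
    "es": {"greeting": "¡Hola! ¿Listo para estudiar?"},
    "fr": {"greeting": "Salut ! Prêt à étudier ?"},
}

def t(key: str, lang: str) -> str:
    """Look up a UI string, falling back to English if the language
    or key is missing, so the UI never shows a blank label."""
    return STRINGS.get(lang, {}).get(key) or STRINGS["en"][key]
```

The fallback keeps an incompletely translated language usable while its string table is being filled in.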

Future Enhancements

If time permits, we plan to enhance the AI study buddy with several key features to create a more natural, engaging, and comprehensive learning experience:

  • Audio Interaction: Allow users to speak directly with the AI, enabling more natural and immersive learning conversations.
  • Gamification Features: Introduce elements like points, badges, and progress tracking to motivate and engage users in their studies.
  • Collaborative Study: Add functionality for collaborative study sessions, allowing users to invite friends to learn together.
  • Calendar Integration: Integrate with calendar apps to help users plan, schedule, and track their study sessions effectively.

Built With

FastAPI · React · Tailwind CSS · Google Gemini API · Hugging Face (DistilBERT) · SQLModel · SQLite · Google OAuth
