💡 Inspiration
We were inspired by the real communication challenges faced by neurodiverse individuals—especially those with autism, ADHD, or social anxiety—who often struggle with interpreting emotions and responding appropriately during conversations. Our goal was to create a supportive tool that helps them feel more confident and included in both virtual and in-person communication.
⚙️ What it does
ReVerbal is a real-time, AI-powered conversation assistant that:
- Detects emotions from user input using tone and text analysis
- Provides contextual response suggestions
- Translates complex expressions into simple cues
- Acts as a “social guide” by displaying friendly, visual prompts during conversations
It helps users navigate communication with ease—especially in classrooms, meetings, or social discussions.
🏗️ How we built it
- Frontend: Built with React, TypeScript, and Tailwind CSS for a clean, intuitive UI/UX
- Backend: Developed with FastAPI and Python for scalable API endpoints
- Emotion Detection: Simulated with basic rule-based logic for now, but designed to be extensible with real NLP and emotion-AI models
- Routing: Clean folder structure with modular API routes
- Local Testing: Used Uvicorn to serve the backend locally and verify API functionality
🚧 Challenges we ran into
- Integrating the backend with the frontend and verifying the real-time flow
- Structuring the backend correctly for scalability
- Designing a user-friendly interface that doesn't overwhelm the user
- Creating meaningful mock responses for emotion detection in the absence of a full model
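One way the mock-response challenge can be handled is a canned-suggestion table keyed by emotion, with some variety per emotion and a safe neutral fallback. A minimal sketch, assuming hypothetical names (`MOCK_RESPONSES`, `mock_suggestion`) rather than the project's real ones:

```python
import random
from typing import Optional

# Hypothetical mock-response table: canned suggestions per detected emotion,
# used while no real emotion model is wired in.
MOCK_RESPONSES = {
    "joy": [
        "They sound happy; mirroring their enthusiasm works well.",
        "A smile or a positive reply fits here.",
    ],
    "anger": [
        "They may be frustrated; a calm acknowledgement helps.",
        "Avoid escalating; ask what is bothering them.",
    ],
    "sadness": [
        "They seem down; a gentle, supportive reply is best.",
    ],
}


def mock_suggestion(emotion: str, rng: Optional[random.Random] = None) -> str:
    """Pick a varied canned suggestion, with a neutral fallback for unknown emotions."""
    rng = rng or random.Random()
    options = MOCK_RESPONSES.get(emotion)
    if not options:
        return "Keep the conversation going at a comfortable pace."
    return rng.choice(options)
```

Randomizing among several phrasings keeps the prototype from feeling repetitive in demos, while the fallback guarantees the UI always has something sensible to display.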
🏅 Accomplishments that we're proud of
- Successfully built and deployed a full working prototype
- Created a welcoming UI tailored for neurodiverse users
- Designed a backend that’s easily extendable with real AI/ML models
- Learned how to bridge communication design with tech for accessibility
📚 What we learned
- How to use FastAPI for rapid backend development
- How to structure a project for real-time interactive applications
- The importance of empathy in UI/UX design, especially for diverse user groups
- How to balance simplicity with effectiveness in emotional communication
🚀 What's next for ReVerbal
- Integrating real emotion detection via NLP and facial expression analysis (e.g., Affectiva or the Microsoft Emotion API)
- Adding real-time speech-to-text and audio analysis
- Creating a personalized learning model for user-specific response guidance
- Partnering with schools or therapy centers to test ReVerbal in real-life settings
- Exploring voice-based interfaces and wearable integrations for accessibility
Built With
- fastapi
- python
- react
- tailwind
- typescript