Talent X AI - Voice-First AI Mentor for Youth
Inspiration
Many young people, ourselves included, often feel overwhelmed, unmotivated, or unsure of what direction to take. While mentorship can provide guidance and motivation, it isn't always accessible. We wanted to build a friendly, voice-first AI companion that offers quick, personalized support anytime, helping youth grow in confidence, creativity, and productivity.
What it does
Talent X AI is a multi-agent AI mentor that interacts via voice or text, providing personalized guidance across multiple areas:
- Confidence & Wellness - motivational support and daily check-ins
- Creativity - ideas, projects, and inspiration
- Learning - skill-building tips and educational suggestions
- Productivity - personalized advice and planning
It uses a Master-Agent to orchestrate four specialized agents, combining their outputs into friendly, actionable guidance. Responses can also be spoken aloud using Amazon Polly.
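The Master-Agent's routing step can be pictured as a lightweight intent classifier. The sketch below is illustrative only: the agent names and keyword lists are assumptions for the example, while the real system uses Bedrock prompts for intent detection.

```python
# Hypothetical keyword map; in practice a Bedrock prompt does the routing.
KEYWORDS = {
    "wellness": ["stress", "confidence", "anxious", "motivation"],
    "creativity": ["idea", "project", "art", "inspiration"],
    "learning": ["learn", "study", "skill", "course"],
    "productivity": ["plan", "schedule", "focus", "deadline"],
}

def route(question: str) -> list[str]:
    """Return the specialized agent(s) that should handle the question."""
    q = question.lower()
    agents = [name for name, words in KEYWORDS.items()
              if any(w in q for w in words)]
    return agents or ["wellness"]  # default to a supportive check-in
```

A question can match several agents at once, which is why the Master-Agent then merges their outputs into one coherent reply.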
How we built it
Core technologies & workflow:
| Component | Role |
|---|---|
| Streamlit | Frontend UI for voice and text interaction |
| Amazon Bedrock | Master-Agent and sub-agent reasoning |
| Amazon Transcribe | Converts voice input to text |
| Amazon Polly | Converts AI responses to speech |
| DynamoDB | Stores chat history and session data |
| S3 | Stores generated audio and optional HTML outputs |
| Python libraries | numpy, pydub, pandas, beautifulsoup4, wave, scipy |
Workflow:
User (voice/text) → STT → Master-Agent → Specialized Agents → TTS → Streamlit UI
Step-by-step:
- User speaks or types a question.
- Master-Agent detects intent and routes input to the appropriate agent(s).
- Agents generate their outputs (tips, guidance, or project suggestions).
- Master-Agent combines outputs into a friendly, coherent response.
- Optional voice output is generated via Polly.
- Conversation is stored in DynamoDB, and any audio/portfolio assets are stored in S3.
Challenges we ran into
- Ensuring each agent responded appropriately required careful prompt design.
- LLM response latency was noticeable; we improved it with prompt tuning and caching.
- Integrating STT, AI reasoning, and TTS in a single pipeline required multiple iterations.
- Streamlit occasionally broke when updating tabs or displaying dynamic content.
- Maintaining a stable workflow under hackathon time constraints was tough.
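The caching that helped with latency can be sketched as a hash of the (agent, prompt) pair mapped to a stored reply, so repeated questions skip the LLM round-trip. The layout below is an assumption for illustration; `invoke_llm` is a hypothetical stand-in for the Bedrock call.

```python
import hashlib

_cache: dict[str, str] = {}

def cache_key(agent: str, prompt: str) -> str:
    """Stable key for one (agent, prompt) pair."""
    return hashlib.sha256(f"{agent}:{prompt}".encode()).hexdigest()

def cached_invoke(agent: str, prompt: str, invoke_llm) -> str:
    """Call the LLM only on a cache miss; identical questions reuse
    the stored reply instead of waiting on another model call."""
    key = cache_key(agent, prompt)
    if key not in _cache:
        _cache[key] = invoke_llm(agent, prompt)
    return _cache[key]
```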
Accomplishments we’re proud of
- Built a working multi-agent voice companion in under a day.
- Successfully connected STT → LLM → TTS in a smooth, real-time flow.
- Created a clean, usable frontend interface with Streamlit.
- Designed specialized agents that respond differently based on user input.
- Delivered a live demo showing personalized guidance for youth.
What we learned
- How to orchestrate multiple AI agents using a master LLM.
- How to reduce latency with prompt optimization and caching.
- How to structure a voice-first AI experience.
- How to quickly build, test, and debug under hackathon pressure.
- How to turn a rough idea into a working prototype fast.
Next steps for Talent X AI
- Add user profiles to track goals over time.
- Enhance memory using embeddings/vector search.
- Add emotional awareness to better support mental health.
- Gamify habits and daily check-ins.
- Build a mobile-friendly version for easier access.
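For the embeddings/vector-search memory we're considering, the retrieval step could look like the sketch below: cosine similarity over pre-computed embedding vectors, returning the most relevant stored snippets. This is only one possible approach; the embedding model and vector dimensions are left open.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recall(query_vec: list[float],
           memory: list[tuple[str, list[float]]],
           k: int = 3) -> list[str]:
    """Return the k stored snippets most similar to the query vector."""
    ranked = sorted(memory, key=lambda m: cosine(query_vec, m[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]
```

A dedicated vector store would replace the linear scan at scale, but the same similarity ranking applies.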
Built With
- Streamlit
- Amazon Bedrock
- Amazon Polly
- Amazon Transcribe
- DynamoDB
- S3
- Python (numpy, pandas, pydub, beautifulsoup4, wave, scipy)
- HTML/CSS for frontend visualization