👓💬 UniTalk:

Your Universal, Real-Time AR Conversation Assistant

UniTalk is a universal, AR-powered conversation assistant designed for professional networking and career events. By leveraging Snapchat Spectacles and cutting-edge AI, UniTalk helps you connect, communicate, and make lasting impressions, effortlessly and in real time.

Inspiration

We created UniTalk to overcome the universal challenges of networking: 😬 forgetting names, 🧠 missing key details, and 🌐 struggling to keep conversations flowing, especially across language and cultural barriers. We envisioned a tool that unites people and empowers users to make the most of every interaction 🤝.

⚙️ What it does

UniTalk uses AR and AI to unify and elevate your networking experience:

📝 Live Transcription & Summarization: Instantly transcribes conversations and summarizes key points, all in real time.

💡 Contextual AI Feedback: Provides tailored, actionable feedback and conversation suggestions based on what’s being said, delivered in under 300 ms using Groq-accelerated LLMs (Whisper, Llama-3).

🔍 Smart Contact Discovery: When names, companies, or job titles are mentioned, UniTalk searches Linkd to help you identify and remember who you’re talking to.

🌍 Seamless Translation: Instantly translates conversations to and from other languages, breaking down communication barriers on the spot.

🛠️ How we built it

🕶️ AR Interface: Built with Lens Studio for Snapchat Spectacles, using JavaScript and TypeScript for a seamless, hands-free user experience.

⚡ Backend: Flask server for audio processing and real-time speech-to-text, with Groq-hosted models providing ultra-low-latency inference.

🖥️ Frontend: Streamlit dashboard for reviewing transcripts, summaries, and feedback.

🔗 APIs & Integrations: Linkd API for smart contact lookup; Groq for LLM-based transcription and feedback.

📌 Note: While we explored Fetch AI agent architectures, our final implementation did not use Fetch AI.
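The transcription path can be sketched as a small Flask endpoint, assuming the official `groq` Python client and a `GROQ_API_KEY` environment variable. The `/transcribe` route name and payload shape here are illustrative, not our actual API.

```python
import os

from flask import Flask, request, jsonify

app = Flask(__name__)


@app.route("/transcribe", methods=["POST"])
def transcribe():
    # Spectacles uploads a short audio chunk as multipart form data
    # under the (illustrative) field name "audio".
    audio = request.files["audio"]

    from groq import Groq  # imported lazily; official Groq Python client

    client = Groq(api_key=os.environ["GROQ_API_KEY"])
    result = client.audio.transcriptions.create(
        file=(audio.filename, audio.read()),
        model="whisper-large-v3",  # Groq-hosted Whisper for low-latency STT
    )
    return jsonify({"text": result.text})


# app.run(port=5000) would start the Flask dev server.
```

Keeping the uploaded chunks short is what makes the sub-300 ms round trip feasible: each request carries only a few hundred milliseconds of audio.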
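The contact-discovery flow can be sketched in two steps: pull likely names and company mentions out of the live transcript, then query Linkd. The endpoint URL and payload below are hypothetical placeholders (consult Linkd's documentation for the real API), and the entity extraction is a naive capitalised-word heuristic, not what a production system would use.

```python
import re


def extract_candidates(transcript: str) -> list[str]:
    """Pull runs of capitalised words (likely names or companies) from a transcript."""
    return re.findall(r"\b(?:[A-Z][a-z]+(?:\s[A-Z][a-z]+)+)\b", transcript)


def lookup_contact(name: str, api_key: str) -> dict:
    """Search Linkd for a mentioned name. Endpoint and schema are assumptions."""
    import requests

    resp = requests.post(
        "https://api.linkd.example/v1/search",  # placeholder URL, not the real endpoint
        json={"query": name},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()
```

For example, `extract_candidates("I met Jane Doe from Acme Corp today")` yields `["Jane Doe", "Acme Corp"]`, each of which could then be passed to the lookup.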

🧩 Challenges we ran into

⏱️ Achieving sub-300ms end-to-end latency for transcription and LLM feedback.

🎧 Efficiently streaming audio from Spectacles to the backend.

🔧 Debugging AR hardware and microphone issues in Lens Studio.

📄 Integrating multiple APIs and adapting to unreleased hardware with limited documentation.
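The audio-streaming challenge above came down to bounding per-request payloads. A minimal sketch, assuming 16 kHz 16-bit mono PCM (the actual Spectacles capture format may differ):

```python
def frame_audio(pcm: bytes, frame_ms: int = 200, sample_rate: int = 16000) -> list[bytes]:
    """Split a raw PCM byte stream into frame_ms-sized chunks (2 bytes per sample)."""
    frame_bytes = sample_rate * 2 * frame_ms // 1000
    return [pcm[i:i + frame_bytes] for i in range(0, len(pcm), frame_bytes)]
```

Smaller frames mean the backend can start transcribing sooner, at the cost of more HTTP round trips; 200 ms was a workable middle ground in our testing.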

🏆 Accomplishments we’re proud of

⚡ Real-time, context-aware LLM feedback and transcription on AR hardware.

💪 Seamless integration of AR, AI, and web technologies—despite being new to all of them.

🚀 Building a working prototype as first-time hackers, learning new stacks and APIs on the fly.

📚 What we learned

🕶️ Deep dive into AR development with Lens Studio and Spectacles.

🤖 Optimizing real-time inference of transformer models (Whisper, Llama-3) on Groq.

🧪 Troubleshooting and innovating with niche, unreleased hardware.

👥 The importance of user-centric design in universal professional networking tools.

🔮 What’s next for Uni Talk

We plan to make UniTalk even smarter and more natural in conversations, with 🔗 deeper LinkedIn integration, 🗣️ expanded language support, 📶 offline functionality, and 🎯 more personalization. We’re also focused on 🌟 optimizing performance for AR hardware to make the experience as universally seamless as possible.

🤝✨ Ready to break the ice and make every conversation count? UniTalk is your universal, AR-powered networking wingman.
