Another demo :)

Inspiration

We built Relaiy to address the "fragmentation" (lol) in conversational agent design. Our goal was to streamline context management and enable dynamic, real-time interactions without sacrificing clarity.

What it does

Relaiy is a SaaS platform that automates the generation and parsing of conversation context trees. Each user response is processed instantly and the tree is updated live, letting developers focus on creating richer interactions.
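Relaiy's actual schema isn't shown here, but a conversation context tree along these lines captures the idea: each node is one turn, branches are alternative candidate responses, and a root-to-leaf walk rebuilds the linear context for an LLM call. All names below are illustrative, not Relaiy's real API.

```typescript
// Minimal sketch of a conversation context tree (illustrative, not Relaiy's schema).
// Each node holds one turn; children represent alternative continuations.
interface ContextNode {
  id: string;
  role: "user" | "assistant";
  text: string;
  children: ContextNode[];
}

// Walk from the root to a leaf, using `pick` to choose a branch at each level,
// producing the linear context you would feed to a model.
function linearize(
  root: ContextNode,
  pick: (children: ContextNode[]) => ContextNode | undefined
): ContextNode[] {
  const path: ContextNode[] = [];
  let node: ContextNode | undefined = root;
  while (node) {
    path.push(node);
    node = node.children.length ? pick(node.children) : undefined;
  }
  return path;
}
```

With `pick = (cs) => cs[0]` this just follows the first branch; a real system would pick by score.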

How we built it

  • Stack: Next.js, TypeScript, Express, Twilio, Retell, LangChain, Prisma, OpenRouter (we use ~15 different LLMs).
  • Real-time updates: WebSockets galore! As models complete their requests, all updates are streamed live to the user :) Imagine streaming over 100 requests to the user every message :>

Challenges we ran into

  • Designing a robust context tree that scales with conversation complexity.
  • Integrating disparate services (Twilio for messaging, LangChain for LLM interactions) while keeping latency low.
  • Monte Carlo tree search. Nothing more to be said.
  • Maintaining strict type safety across the codebase to reduce runtime errors.
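The MCTS bullet above is terse, so here is the core of it: the standard UCB1 selection rule, which balances exploiting high-reward branches against exploring rarely-visited ones. This is a generic textbook sketch, not Relaiy's implementation.

```typescript
// Generic UCB1 selection step of Monte Carlo tree search (illustrative sketch).
interface MctsNode {
  visits: number;       // times this node was selected
  totalReward: number;  // cumulative reward from rollouts through this node
  children: MctsNode[];
}

// Pick the child maximizing mean reward plus an exploration bonus.
function selectChild(parent: MctsNode, c = Math.SQRT2): MctsNode {
  let best = parent.children[0];
  let bestScore = -Infinity;
  for (const child of parent.children) {
    const score =
      child.visits === 0
        ? Infinity // always try unvisited children first
        : child.totalReward / child.visits +
          c * Math.sqrt(Math.log(parent.visits) / child.visits);
    if (score > bestScore) {
      bestScore = score;
      best = child;
    }
  }
  return best;
}
```

In a conversation tree, "reward" would be some score for how well a candidate reply performs, and repeated select/rollout/backpropagate passes concentrate effort on promising branches.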

-- beware chatgpt boilerplate below :) --

Accomplishments we're proud of

  • Delivering a production-ready SaaS platform in a collegiate hackathon setting.
  • Achieving a clean, maintainable, and modular codebase that aligns with industry best practices.
  • Successfully implementing real-time updates that handle complex conversation flows.

What we learned

  • The intricacies of real-time data processing and its importance in conversational AI.
  • How to integrate multiple services into a cohesive and scalable platform.
  • The value of strict type enforcement in reducing bugs and enhancing code quality.

What's next for Relaiy

  • Expanding integration with additional AI services.
  • Enhancing scalability and performance.
  • Introducing advanced analytics to further optimize conversation strategies.
