The Problem 💭
Meet Rachel, a newly hired software engineer, eager to make her mark. She joins a mature team with a large, fast-moving codebase and a mountain of tribal knowledge. Her goal is simple: start contributing valuable code and feel like a real part of the team. But instead of writing features, she’s spending her days lost in a maze of Slack threads, GitHub issues, Linear tickets, and outdated Notion docs.
Rachel has tons of "why" questions:
- Why was this API designed this way?
- Why was that PR reverted?
- Why are we prioritizing this over that?
But she’s hesitant to constantly ping senior developers—they’re busy, and she doesn’t want to seem unprepared. So she resorts to piecing things together alone, context-switching across dozens of tools, trying to reverse-engineer decisions from fragments of information. She’s not unqualified, just uninformed.

This experience is more common than teams admit. Onboarding is a silent tax on engineering productivity, not just in time lost, but in knowledge withheld. It can take weeks, sometimes months, for new hires to reach full velocity. The real bottleneck isn’t writing code—it’s understanding the why behind it.
The friction comes in two forms:
The Human Problem: New engineers feel pressure to be independent, yet they're overwhelmed by a lack of context. They don’t want to interrupt teammates, so they’re left to decipher scattered artifacts and second-guess every change.
The Technical Gap: Knowledge is siloed. Conversations live in Slack. Tickets sit in Linear. Docs float in Notion. Code evolves in GitHub. But no single place tells the full story. Even AI assistants fall short—most only analyze code, blind to the human rationale behind it.
Inspiration 🌎
The onboarding process of a new hire is a silent tax on every software team's productivity. It can take anywhere from a week to several months, often consuming valuable team bandwidth and slowing momentum. For new developers, the landscape is fragmented: Slack holds conversations, GitHub stores code, Linear tracks tasks, and Notion houses documentation, but none of these platforms tell the complete story on their own. As a result, new hires are left piecing together context, struggling to find the right documents, understand project decisions, or get quick answers to seemingly simple questions. We built Loominal to change that. Onboarding shouldn’t mean digging through dozens of Slack threads or hundreds of tickets. Loominal gives developers instant access to the knowledge behind the work, reducing ramp-up time and accelerating productivity from day one. Because great onboarding isn’t just about knowing what the code does—it’s about understanding why it exists.
What it does 🤔
Loominal is an AI-powered knowledge engine web application designed to dramatically accelerate team onboarding and knowledge transfer. It integrates with a team's full suite of development tools: not just the codebase, but also conversational, planning, and issue-tracking platforms like GitHub, Linear, Slack, Figma, and Notion, synthesizing scattered information into one place. This allows any team member (especially a new hire) to get instant, contextual answers with citations to their most complex questions, without interrupting the workflow of others or hunting through endless channels.
Whether it's a question about a PR's rationale, the evolution of a feature request, or a forgotten Slack debate, Loominal reveals the why behind every decision.
Our MVP is a focused web application that showcases the core magic of turning fragmented project knowledge into clear, actionable insight. It’s built around a seamless experience that guides users from connecting their tools to receiving their first synthesized answer, all within a clean, intuitive interface that makes asking complex, multi-layered questions feel effortless.
Key MVP features include:
- A Centralized Dashboard: A unified workspace where users can log in, view their connected tools, and manage integrations, serving as the home base for knowledge discovery.
- Seamless Source Integration: Users can easily connect their essential apps. For the MVP, we've prioritized GitHub (code + issues), Linear (planning), Slack (conversations), Figma (design specs), and Notion (documentation).
- The AI Interface: A powerful, natural-language chat. Users ask questions like “Why did we change our approach to X?” or “What were the trade-offs discussed for this feature?” — and get meaningful, contextual answers.
- Rich Answers with Citations: Each response is synthesized across sources and includes direct, clickable citations pointing back to the original GitHub PR, Slack message, Linear ticket, Figma frame, or Notion doc, allowing users to verify context and build trust.
How we built it 🛠️
We used Next.js with TypeScript to power a performant and modular frontend that manages authentication, project routing, and a real-time chat interface. The backend architecture leverages a retrieval-augmented generation (RAG) system, integrating Vellum AI for intelligent query processing and precise, citation-backed responses.
Vellum serves as the brain of our knowledge engine. After relevant context is retrieved from connected tools via semantic search, we use Vellum’s prompt orchestration and tool-chaining capabilities to construct and execute dynamic queries. This allows us to control how different sources (e.g., GitHub PRs, Linear tickets, Slack threads) are combined and presented in synthesized answers. Vellum’s flexible SDK supports structured prompt flows and lets us fine-tune how information is aggregated, ensuring consistent and trustworthy outputs with rich citations.
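To illustrate the synthesis step, here is a minimal TypeScript sketch of how retrieved chunks from different sources might be ordered and numbered into a citation-tagged context block before generation. The type names, shapes, and URLs are our illustration, not Vellum's actual SDK surface:

```typescript
// Sketch: assembling retrieved chunks from different tools into a
// citation-tagged context block for the LLM. Shapes are illustrative.

type SourceKind = "github_pr" | "linear_ticket" | "slack_thread" | "notion_doc";

interface RetrievedChunk {
  source: SourceKind;
  url: string;   // deep link back to the original artifact
  text: string;  // the retrieved passage
  score: number; // similarity score from the semantic search
}

// Order chunks by relevance and number them so the model can cite [1], [2], ...
function buildContext(chunks: RetrievedChunk[], maxChunks = 5): string {
  return chunks
    .slice()
    .sort((a, b) => b.score - a.score)
    .slice(0, maxChunks)
    .map((c, i) => `[${i + 1}] (${c.source}) ${c.text}\nSource: ${c.url}`)
    .join("\n\n");
}

const context = buildContext([
  { source: "slack_thread", url: "https://slack.example/t/1", text: "We agreed to drop REST for tRPC.", score: 0.91 },
  { source: "github_pr", url: "https://github.example/pr/42", text: "Migrates API routes to tRPC.", score: 0.88 },
]);
// The highest-scoring chunk becomes citation [1].
```

Numbering the chunks up front is what lets the final answer carry clickable, verifiable citations back to each source.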
To streamline interaction with project management data, we built custom Linear Agents that act as autonomous knowledge workers. These agents are capable of interpreting natural language questions and performing multi-step reasoning over Linear issues, cycles, and roadmaps. By combining metadata (labels, timestamps, ownership) with context from discussions and descriptions, our agents can answer queries like “What’s blocking this feature from shipping?” or “What decisions were made about auth in Cycle 2?” The agents operate within our RAG pipeline, orchestrated through Vellum, and are grounded with retrievals from our semantic index to ensure their responses are contextually accurate and traceable.
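The flavor of multi-step reasoning an agent performs for a question like “What’s blocking this feature from shipping?” can be sketched as a walk over issue relations. The data shapes below are an in-memory stand-in; the real agents query Linear's API and run inside the Vellum pipeline:

```typescript
// Sketch: walking a blocking graph to answer "what's blocking this feature?".
// In-memory stand-in for Linear issue data; shapes are illustrative.

interface Issue {
  id: string;
  title: string;
  done: boolean;
  blockedBy: string[]; // ids of issues blocking this one
}

// Collect every unfinished blocker, direct or transitive, cycle-safe.
function openBlockers(issues: Map<string, Issue>, id: string, seen = new Set<string>()): Issue[] {
  const issue = issues.get(id);
  if (!issue) return [];
  const out: Issue[] = [];
  for (const bid of issue.blockedBy) {
    if (seen.has(bid)) continue;
    seen.add(bid);
    const blocker = issues.get(bid);
    if (!blocker) continue;
    if (!blocker.done) out.push(blocker);
    out.push(...openBlockers(issues, bid, seen));
  }
  return out;
}

const issues = new Map<string, Issue>([
  ["feat-1", { id: "feat-1", title: "Ship SSO", done: false, blockedBy: ["auth-2", "infra-3"] }],
  ["auth-2", { id: "auth-2", title: "Token refresh", done: false, blockedBy: [] }],
  ["infra-3", { id: "infra-3", title: "Provision IdP", done: true, blockedBy: [] }],
]);
const blockers = openBlockers(issues, "feat-1"); // only the unfinished blocker remains
```

The agent then combines results like this with metadata and discussion context before handing them to the RAG pipeline for a grounded answer.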
Next.js API Routes handle OAuth flows and data integration with GitHub, Linear, and Slack, enabling secure syncing of repositories, issues, PRs, and conversations. Vector embeddings are generated using Xenova Transformers and stored in pgvector for efficient semantic search. To enrich unstructured team conversations (like Slack threads), we used Google Gemini to generate clean, context-aware summaries that distill long discussion chains into concise knowledge artifacts.
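pgvector performs the comparison in SQL, but the underlying ranking is just cosine similarity over embedding vectors. A plain-TypeScript sketch with toy three-dimensional vectors (real embeddings from Xenova Transformers are far larger):

```typescript
// Sketch of the cosine-similarity ranking that pgvector runs in SQL.
// Toy 3-dimensional vectors stand in for real model embeddings.

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored documents by similarity to the query embedding.
function topK(query: number[], docs: { id: string; vec: number[] }[], k: number) {
  return docs
    .map((d) => ({ id: d.id, score: cosine(query, d.vec) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

const hits = topK(
  [1, 0, 0],
  [
    { id: "pr-review", vec: [1, 0, 0] },
    { id: "standup-notes", vec: [0, 1, 0] },
    { id: "design-doc", vec: [0.9, 0.1, 0] },
  ],
  2,
);
// The exact match ranks first, the near match second.
```

In production the same ranking happens inside Postgres, so the retrieval step is a single indexed query rather than an in-process scan.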
The database layer is powered by PostgreSQL, hosted on Neon, with schemas defined using DrizzleORM for type-safe, composable operations across all integrations. It’s organized around a few key tables that keep everything simple and structured:
- Users: Stores user information like name and email.
- Organizations: Represents a team or company workspace.
- Projects: The core unit of work.
- Connections: Stores credentials for each external service.
- Organization Members: Maps users to organizations and defines their role.

This setup allows us to support collaborative knowledge discovery while maintaining strict access control and clear ownership boundaries.
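The membership table is what the authorization checks hinge on. A minimal sketch of that check, with illustrative role names standing in for our actual schema:

```typescript
// Sketch: membership-based authorization mirroring the Organization Members
// table. Role names are illustrative, not our exact schema values.

type Role = "owner" | "admin" | "member";

interface Membership {
  userId: string;
  orgId: string;
  role: Role;
}

// Only owners and admins may manage an organization's integrations.
function canManageIntegrations(memberships: Membership[], userId: string, orgId: string): boolean {
  const m = memberships.find((x) => x.userId === userId && x.orgId === orgId);
  return m !== undefined && (m.role === "owner" || m.role === "admin");
}

const memberships: Membership[] = [
  { userId: "u1", orgId: "org1", role: "owner" },
  { userId: "u2", orgId: "org1", role: "member" },
];
```

Checks like this run on both the frontend (to hide controls) and the backend (to reject unauthorized requests), so the UI and API can never drift apart on permissions.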
Custom Auth0 Extension for Enhanced User Management
We also extended Auth0 beyond basic authentication by building a fully custom session system and user interface tailored to our app. Instead of relying solely on Auth0’s built-in UI and workflows, we use the Auth0 Management API to sync user data directly into our PostgreSQL database, powering a robust system for managing users, roles, and organizations. This lets us implement role-based authorization and enforce access restrictions across routes and actions at both the frontend and backend. It also gives us finer-grained control over user management, since we own the user data ourselves rather than deferring every authentication decision to Auth0.
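The core of the sync is an upsert of the Auth0 profile into our own users store. A sketch with an in-memory Map standing in for Postgres; in production the profile comes from the Auth0 Management API and the write goes through DrizzleORM:

```typescript
// Sketch: syncing an Auth0 profile into our own users store.
// A Map stands in for the Postgres users table; field names follow
// Auth0's profile shape, but the sync logic here is illustrative.

interface Auth0Profile {
  user_id: string;
  email: string;
  name: string;
}

interface User {
  auth0Id: string;
  email: string;
  name: string;
}

// Insert the user on first login; overwrite on later syncs (last sync wins).
function upsertUser(db: Map<string, User>, profile: Auth0Profile): User {
  const user: User = { auth0Id: profile.user_id, email: profile.email, name: profile.name };
  db.set(profile.user_id, user);
  return user;
}
```

Keying on the Auth0 `user_id` means repeated logins update the same row instead of creating duplicates, which keeps our role and organization mappings stable.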
Challenges we ran into ⚠️
- Data Normalization: Integrating heterogeneous data into a unified embedding space required sophisticated data cleaning and transformation.
- GitHub API Rate Limits: Fetching file structures and histories from GitHub meant working within strict rate limits, which required low-level API interactions and robust error handling.
- LLM Context Optimization: Maximizing contextual accuracy while minimizing hallucination required careful data chunking and routing to Vellum.
- Cross-Language Communication: Coordinating communication between the TypeScript frontend and the Python backend presented challenges in data serialization and API design.
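On the chunking point, a simple fixed-size splitter with overlap conveys the idea; the sizes and strategy below are illustrative, not our exact production parameters:

```typescript
// Sketch: fixed-size text chunking with overlap, the kind of splitting we
// tuned for the RAG pipeline. Sizes are illustrative placeholders.

function chunkText(text: string, size = 200, overlap = 40): string[] {
  if (size <= overlap) throw new Error("size must exceed overlap");
  const chunks: string[] = [];
  // Advance by (size - overlap) so adjacent chunks share context at the seams.
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break;
  }
  return chunks;
}
```

The overlap matters: a decision that straddles a chunk boundary still appears whole in at least one chunk, which reduces answers built from half a sentence of context.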
Accomplishments that we're proud of 🏆
- Implemented an end-to-end knowledge synthesis system that aggregates data from five major tools (GitHub and Slack today; Linear, Figma, and Notion coming soon) into a unified chat experience.
- Engineered a retrieval-augmented generation (RAG) system that delivers rich, source-cited answers with direct links to GitHub PRs, Slack messages, and more, enhancing answer accuracy and verifiability.
- Designed a modular backend architecture that facilitates rapid integration of new data sources and AI models, enabling seamless expansion of the system's capabilities.
- Developed a clean, simple, and intuitive UI with real technical depth underneath, leveraging Next.js, TypeScript, TailwindCSS, Motion, and shadcn/ui to deliver a performant and user-friendly experience.
What we learned 🧠
- Designing for trust (via citations) is just as important as designing for intelligence.
- Onboarding is a data connectivity problem, not just a documentation problem.
- Building multi-service, multi-user systems demands careful architecture from Day 1: role-based access, scoped tokens, and project-level isolation were essential.
What's next for Loominal ➡️❓
- Real-time syncing of integrations like Slack and GitHub for continuous updates.
- Notion and Figma support, bringing in design specs, product planning, and more.
- VS Code extension for in-editor queries.
- Workspace permissions and granular access control for enterprise use cases.
- Support for custom embeddings and fine-tuned models to allow teams to run Loominal on their own infrastructure.
Extras
Vellum
- Vellum Setup + More
- Business Logic using Vellum + Gemini
- Vellum also uses Gemini in the background.
- The Gemini API and Vellum act as fallbacks for each other: if one fails for any reason, the other steps in.
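The mutual-fallback behavior reduces to a small wrapper: try the primary provider, and if it throws for any reason, run the secondary. The provider functions below are illustrative stand-ins for our Vellum and Gemini calls:

```typescript
// Sketch: mutual fallback between two answer providers. If the primary
// throws for any reason, the secondary handles the request instead.

async function withFallback<T>(
  primary: () => Promise<T>,
  secondary: () => Promise<T>,
): Promise<T> {
  try {
    return await primary();
  } catch {
    // Any failure (network error, rate limit, provider outage) triggers
    // the fallback; real code would also log the primary's error.
    return await secondary();
  }
}
```

In practice we wire it both ways, so a Vellum outage falls back to Gemini and vice versa, and the user sees an answer either way.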
Built With
- auth0
- drizzleorm
- flask
- gemini
- github-api
- linear-api
- neon
- neon-postgres
- next.js
- notion-api
- pgvector
- postgresql
- python
- react
- slack-api
- tailwindcss
- typescript
- vellum
- xenova-transformer


