Inspiration
Modern UIs are often overloaded with data but starved of context. We wanted to create an experience where users could understand why a number matters: instantly, intuitively, and without leaving the page. Inspired by tools like Notion AI and GitHub Copilot, we imagined an interface that explains itself.
What it does
Indus adds intelligent, on-hover explanations to metrics, KPIs, or any piece of data in your app. With a subtle hover gesture, users see contextual, AI-generated insights that help them understand what a value means and why it matters.
How we built it
Frontend:
- Built with React and TypeScript, using a custom Hoverable component integrated with ShadCN’s HoverCard for polished, unobtrusive interactions.
- Styled for subtlety: semantic underlines, hover cues, and minimal UI friction.
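To keep the hover gesture unobtrusive, a card should only open and trigger a fetch once the pointer has actually rested on the element. A minimal sketch of that hover-intent timing, assuming a debounce-style helper (the `onHoverIntent` name and its delay are our illustration, not the actual Indus API; in the real component this would feed into ShadCN's HoverCard open state):

```typescript
// Illustrative hover-intent helper: only fire the trigger (e.g. the
// explanation fetch) after the pointer has rested for `delayMs`, and
// cancel it if the pointer leaves first. This keeps casual mouse travel
// across a dashboard from spamming the backend.
type CancelFn = () => void;

function onHoverIntent(
  trigger: () => void,
  delayMs = 300,
): { enter: () => void; leave: CancelFn } {
  let timer: ReturnType<typeof setTimeout> | null = null;

  const leave = () => {
    // Pointer left before the delay elapsed: cancel the pending trigger.
    if (timer !== null) {
      clearTimeout(timer);
      timer = null;
    }
  };

  const enter = () => {
    leave(); // restart the timer on re-entry
    timer = setTimeout(() => {
      timer = null;
      trigger();
    }, delayMs);
  };

  return { enter, leave };
}
```

The `Hoverable` wrapper would wire `enter`/`leave` to the trigger element's pointer events, so quick passes over a metric cost nothing.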
Explanation Engine:
- A caching-first architecture avoids redundant fetches and keeps explanations snappy.
- Custom React hooks (useExplanation) manage loading, memoization, and subscription lifecycles.
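The core of that caching-first layer can be sketched as a get-or-fetch function with in-flight deduplication (illustrative code, not the production hook; the real `useExplanation` additionally wires this into React state and subscription lifecycles):

```typescript
// Caching-first explanation lookup: resolved explanations are memoized,
// and concurrent requests for the same key share one in-flight promise,
// so repeated or simultaneous hovers never cause redundant fetches.
const cache = new Map<string, string>();
const inFlight = new Map<string, Promise<string>>();

async function getExplanation(
  key: string,
  fetcher: (key: string) => Promise<string>,
): Promise<string> {
  const cached = cache.get(key);
  if (cached !== undefined) return cached; // cache hit: no network at all

  let pending = inFlight.get(key);
  if (!pending) {
    // First request for this key: start the fetch and record it so
    // concurrent callers can piggyback on the same promise.
    pending = fetcher(key).finally(() => inFlight.delete(key));
    inFlight.set(key, pending);
  }

  const result = await pending;
  cache.set(key, result);
  return result;
}
```

Cache invalidation then reduces to deleting a key, which forces the next hover to refetch.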
Backend + Realtime Layer:
- On hover, a Lambda function receives the request, batches it with other pending explanation fetches, and queues them for generation.
- WebSockets (via Supabase Realtime) are used to push explanation updates to the frontend as soon as they’re generated.
- Supabase Edge Functions orchestrate secure, role-aware access to model inference.
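On the client side, pushed updates have to reach the right hover point. A minimal sketch of that routing, assuming a broadcast payload shape of our own invention (the real app wires this into supabase-js channels, e.g. a `channel.on("broadcast", ...)` handler calling `dispatch`):

```typescript
// Routes realtime explanation payloads to whichever hover points have
// subscribed to that metric, so a card fills in the moment generation
// finishes. Payload shape and names here are illustrative assumptions.
interface ExplanationPayload {
  metricId: string;
  text: string;
}

type Listener = (text: string) => void;

class ExplanationRouter {
  private listeners = new Map<string, Set<Listener>>();

  // Returns an unsubscribe function, suitable for a React effect cleanup.
  subscribe(metricId: string, fn: Listener): () => void {
    const set = this.listeners.get(metricId) ?? new Set<Listener>();
    set.add(fn);
    this.listeners.set(metricId, set);
    return () => set.delete(fn);
  }

  // Called from the realtime channel's broadcast handler.
  dispatch(payload: ExplanationPayload): void {
    this.listeners.get(payload.metricId)?.forEach((fn) => fn(payload.text));
  }
}
```

Keeping this router separate from the socket itself also makes reconnections simpler: the channel can be torn down and re-established without disturbing the per-metric subscriptions.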
AI & Data:
- Explanations are generated by Google Gemini, using prompts dynamically assembled from the data point, its metric, and historical context.
- Responses are batched, cached, and invalidated intelligently to optimize API usage.
- Cached results are streamed to the client immediately via the WebSocket layer.
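A sketch of how such a prompt might be assembled (field names and wording are illustrative assumptions; the production prompts are richer and role-aware):

```typescript
// Builds a Gemini prompt from a hovered data point, its metric, and any
// historical context available. Hypothetical shape for illustration.
interface DataPoint {
  metric: string;
  value: number;
  unit?: string;
  history?: number[]; // oldest first
}

function buildPrompt(p: DataPoint): string {
  const lines = [
    "Explain the following dashboard metric to a business user in 2-3 sentences.",
    `Metric: ${p.metric}`,
    `Current value: ${p.value}${p.unit ?? ""}`,
  ];
  if (p.history && p.history.length > 0) {
    // Historical context lets the model talk about trend, not just level.
    lines.push(`Recent history (oldest first): ${p.history.join(", ")}`);
  }
  lines.push("Focus on why this value matters, not on restating the number.");
  return lines.join("\n");
}
```

Because the prompt is a pure function of the data point, its inputs double as the cache key, which is what makes the batching and invalidation above tractable.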
Challenges we ran into
- Balancing UX minimalism with the richness of AI-generated explanations.
- Coordinating async state updates over hover interactions without causing race conditions or stale caches.
- Managing cache invalidation and socket reconnections across multiple hover points.
- Designing a hover-based UI that doesn’t feel intrusive or gimmicky.
Accomplishments that we're proud of
- Built a self-explaining UI with near-zero cognitive overhead for users.
- Designed a reactive, real-time explanation engine that feels instant, even when powered by LLMs.
- Decoupled the entire system into reusable pieces (Hoverable, useExplanation, and the socket listeners) that are ready for scale or library distribution.
What we learned
- How to build AI-native interactions that blend invisibly into existing UIs.
- The power of WebSockets for real-time AI feedback in non-chat interfaces.
- That well-scoped abstraction (like a Hoverable wrapper) can drastically reduce frontend complexity.
What's next for Indus
- Fine-tune a BERT-based summarizer using PyTorch for faster, more domain-aware insights.
- Bundle the system as a drop-in React library or Web Component.
- Integrate historical trend data and charting inline with explanations.
- Add user-role personalization, so explanations adapt based on persona (analyst vs. exec).