Inspiration

Lumen was born from a simple frustration: we kept opening our phones to check one thing and losing hours to content we never asked for. We all want to reduce our screen time, but the reality is there's genuinely important information we need to stay on top of.

The problem is that getting to it means scrolling through five different apps full of ads, engagement bait, and posts that have nothing to do with why we picked up the phone in the first place. We realized the issue was never willpower; it's that every platform is architecturally designed to hijack your attention, not serve it. So we built Lumen - a completely customizable experience where you choose exactly what you consume, whether that's diving deeper into science topics, learning how financial markets work, or following conversations in politics, all in one place with zero noise. No more drowning in distraction. Just a personal space built entirely around your curiosity, your growth, and your time.

How we built it

For the frontend, we built with React and Vite, and our backend runs on a FastAPI server. Content gets embedded and stored in Qdrant, our vector database, which powers the semantic search behind your personalized feed. For the AI layer, we used LangGraph as our agentic framework with OpenAI's GPT-4o as the LLM. Our chatbot has two tools attached: one that queries the vector database for content we've already indexed, and one that searches the web in real time using Tavily's API. Finally, we leaned on Codex and Replit throughout the build to speed up the agentic coding.
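The two-tool pattern can be sketched roughly like this. Everything here is illustrative: the in-memory index and 2-d vectors stand in for Qdrant, the stubbed web search stands in for Tavily, and the threshold and names are our own placeholders, not the production values.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    vector: tuple  # in production: real embedding vectors stored in Qdrant

# Stand-in for the Qdrant collection of pre-embedded content.
INDEX = [
    Doc("Intro to options pricing", (0.9, 0.1)),
    Doc("CRISPR explained", (0.1, 0.9)),
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def search_index(query_vector, threshold=0.8):
    """Tool 1: semantic search over indexed content (Qdrant in production)."""
    scored = sorted(((cosine(query_vector, d.vector), d) for d in INDEX),
                    reverse=True, key=lambda s: s[0])
    best_score, best_doc = scored[0]
    return best_doc.title if best_score >= threshold else None

def search_web(query):
    """Tool 2: live web search (Tavily in production) -- stubbed here."""
    return f"web result for: {query}"

def answer(query, query_vector):
    """Agent logic: prefer already-indexed content, fall back to the web."""
    hit = search_index(query_vector)
    return hit if hit is not None else search_web(query)
```

In the real app the LLM decides which tool to call via LangGraph rather than a hard-coded threshold; the sketch just shows why both tools exist side by side.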

Challenges we ran into

Our team reached consensus on what to build very late and only started building around 7pm on Saturday. We struggled with indexing and scraping social media apps like Instagram, TikTok, Reddit, and Twitter, so we had to settle for more easily available APIs and RSS feeds. We built the entirety of the application as a web app and converted it to a PWA at the last minute to support mobile. In the end, we ran out of time to implement the speech function with the ElevenLabs API.

Accomplishments that we're proud of

We are proud that we were able to index thousands of pieces of media (videos, articles, podcasts), embed and retrieve meaningful content for our recommender system, and implement agentic tool calls for semantic search and web search.

We were especially happy with our UI: the dynamic themes (which shift based on time of day) and the screen time management view.

What we learned

We learned to use agentic frameworks such as LangGraph to build agents whose tools perform RAG with Qdrant and reach the internet through the Tavily API. As a team, we also learned a lot about using agentic coding assistants like Codex, Cursor, and Replit, which we're particularly happy about.

What's next for Lumen

We aim to index more popular social media sites like Instagram, TikTok, and Reddit. We want to add more agentic tool calls, such as for voice and text-to-speech. And we intend to make our vision of reducing time spent on social media a reality.
