## Inspiration
Ever seen that scene from It's Always Sunny in Philadelphia where Charlie stands in front of a cork board covered in papers and red string, ranting about Pepe Silvia? We thought: what if an AI agent did that — autonomously, in real time, with any two topics you give it?
The idea was simple: take two completely unrelated topics (say, "NASA" and "Pizza Hut"), let an autonomous agent loose on the internet, and watch it build an increasingly unhinged conspiracy theory connecting them — complete with a live cork board, red strings, and paranoid narration.
## What it does
You enter two topics into a web form. The agent takes over from there — no further human input needed:
- Searches the web for information on both topics using Tavily
- Extracts entities and connections using LLM analysis
- Builds a knowledge graph in Neo4j linking the two topics
- Stores findings in a Senso knowledge base and uses them to dig deeper
- Analyzes discovered images with Reka Vision for "clues"
- Narrates its findings with progressively more unhinged commentary
- Displays everything in real time on a conspiracy cork board with red strings
Each round, the agent uses its previous findings to search deeper — it literally goes down the rabbit hole, getting more paranoid with every iteration.
## How we built it
Backend: Python with FastAPI serving a WebSocket connection. The agent loop runs in a background thread, emitting structured JSON events at every step (search, extraction, graph update, vision clue, narration). The server broadcasts these events to the browser in real time.
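The broadcast side can be reduced to a small fan-out helper. This is a sketch, not the project's actual code: `EventBus` and its members are illustrative names, and clients only need an async `send_text` method, so it works with FastAPI's `WebSocket` objects or test stubs alike.

```python
import json


class EventBus:
    """Fans structured agent events out to every connected WebSocket client."""

    def __init__(self):
        self.clients = set()

    async def broadcast(self, event: dict) -> None:
        payload = json.dumps(event)
        dead = set()
        for ws in self.clients:
            try:
                await ws.send_text(payload)
            except Exception:
                dead.add(ws)  # socket closed; drop this client
        self.clients -= dead
```

In the web app, each `/ws` connection would register itself in `clients` on accept and every agent step would call `broadcast` with its JSON event.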
Frontend: A single HTML file with inline CSS and JavaScript — zero build tooling. Uses vis.js for interactive graph visualization and pure CSS/SVG for the conspiracy cork board aesthetic. Two views: a clean network graph and the full cork board with draggable pinned cards and animated red string connections.
Agent Architecture: The agent is fully autonomous after receiving two topics. It runs multiple rounds, each one: query knowledge base → search web → extract entities → update graph → analyze images → narrate findings. Each round builds on the previous one's discoveries.
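In outline, that loop looks like the sketch below. The stage functions are placeholders standing in for the real Senso/Tavily/Neo4j/Reka integrations, and all names are assumptions for illustration.

```python
def run_agent(topics, steps, emit, rounds=3):
    """Autonomous loop: each round's findings seed the next round's searches.

    `steps` is an ordered list of (name, fn) pairs standing in for the real
    stages (query knowledge base, web search, entity extraction, graph
    update, image analysis, narration). `emit` receives one structured
    event per step, which the server forwards over the WebSocket.
    """
    findings = {"topics": topics}
    for round_no in range(1, rounds + 1):
        for name, step in steps:
            findings[name] = step(findings)  # later stages see earlier output
            emit({"round": round_no, "step": name, "data": findings[name]})
    return findings
```

Because every stage reads the shared `findings` dict, round two's searches are driven by round one's extracted entities, which is what produces the rabbit-hole effect.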
All five sponsor tools are integrated:
- Tavily — Real-time web search with image discovery
- Neo4j — Graph database storing the conspiracy network
- Reka Vision — Multimodal image analysis finding "clues" in evidence photos
- Senso.ai — Knowledge base for storing and querying the growing conspiracy theory
- Render — Cloud deployment with render.yaml blueprint
## Challenges we ran into
- Real-time streaming: Getting agent events from a Python background thread into an async WebSocket required `asyncio.run_coroutine_threadsafe()` to bridge the thread boundary
- Cork board layout: Random card positioning made everything pile up — had to switch to grid-based placement with organic jitter
- Image analysis reliability: vis.js `image`-shape nodes silently fail on CORS-blocked URLs — switched to star-shaped evidence markers
- Narrator personality: Getting the LLM to actually be the conspiracy theorist rather than analyze conspiracy theories required very specific first-person prompting
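The thread-to-loop bridge from the first challenge can be shown in a minimal, self-contained form. This is a sketch under assumed names, not the project's actual code: a worker thread pushes events into an `asyncio.Queue` owned by the event loop.

```python
import asyncio
import threading


def agent_worker(loop, queue):
    """Runs in a plain thread; hands events to the asyncio loop safely."""
    for i in range(3):
        event = {"step": "search", "round": i}
        # Schedule the coroutine on the loop from this foreign thread;
        # calling queue.put(...) directly here would not be thread-safe.
        future = asyncio.run_coroutine_threadsafe(queue.put(event), loop)
        future.result()  # block until the loop has accepted the event


async def main():
    queue = asyncio.Queue()
    loop = asyncio.get_running_loop()
    worker = threading.Thread(target=agent_worker, args=(loop, queue))
    worker.start()
    events = [await queue.get() for _ in range(3)]
    worker.join()
    return events
```

In the real server the consumer side would be the WebSocket broadcast loop rather than a list comprehension.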
## What we learned
- Event-driven architecture with optional callbacks is incredibly clean — the agent works identically in CLI mode and web mode
- Graceful degradation matters: every external service (Neo4j, Reka, Senso) has an `.available` flag pattern so the demo never crashes
- A single HTML file with inline everything is the fastest path to a working demo — zero bundler config, zero deployment friction
- The "unhinged narrator" is what makes people laugh — the technical graph is cool, but the personality sells it
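The `.available` flag pattern mentioned above is roughly the following. `OptionalService` is an illustrative name, not the project's real class; the point is that a failed connection degrades to no-ops instead of crashing the demo.

```python
class OptionalService:
    """Wraps an external client so connection failures degrade gracefully."""

    def __init__(self, connect):
        self.available = False
        self.client = None
        try:
            self.client = connect()  # e.g. open a Neo4j driver session
            self.available = True
        except Exception:
            pass  # service is down or unconfigured; demo keeps running

    def call(self, fn, *args, default=None):
        """Invoke fn(client, *args) if connected, else return a default."""
        if not self.available:
            return default
        return fn(self.client, *args)
```

Every graph update, vision call, and knowledge-base query goes through a check like this, so a missing API key only disables one feature rather than the whole demo.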
## Built With
- css
- fastapi
- html
- javascript
- neo4j
- openai
- python
- reka-vision
- render
- senso.ai
- svg
- tavily
- vis.js
- websockets