Inspiration

We were frustrated watching brilliant marketing campaigns fail not because the strategy was wrong, but because predicting how real people would react felt like reading tea leaves. Focus groups lie. Surveys are biased. And by the time you know whether your messaging works, you've already spent millions.

The breakthrough came from an unlikely source: watching how video game developers simulate entire economies and social systems. What if we could build a "SimCity for marketing"? A digital twin where thousands of AI agents with real personalities, biases, and social connections interact with your campaigns before you spend a dollar on production?

We discovered the OASIS framework from CAMEL-AI, which simulates social media ecosystems with autonomous agents. That was the spark. We built Prelude on top of it—a platform where you describe your audience and campaign, and watch thousands of synthetic humans react, share, debate, and ignore your message in real-time.


What it does

Prelude is a campaign simulation platform powered by swarm intelligence.

You start by uploading context: market research, product specs, brand guidelines, or even a novel you want to test. Then you describe your target audience in plain English—"soccer-loving teenagers in urban areas" or "enterprise IT directors skeptical of AI tools."

Prelude generates hundreds of AI personas—each with unique names, ages, bios, personality traits, interests, and social connections. These aren't demographics; they're individuals. Some are influencers. Some are skeptics. Some amplify messages; others dampen them.
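A persona like this can be sketched as a small data model. The fields and thresholds below are illustrative, not Prelude's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """One synthetic individual in the simulated audience (illustrative fields)."""
    name: str
    age: int
    bio: str
    traits: list[str]       # e.g. ["skeptical", "contrarian"]
    interests: list[str]    # e.g. ["soccer", "sneaker culture"]
    follows: set[str] = field(default_factory=set)  # names of personas they follow
    influence: float = 0.5  # 0..1: how strongly their posts sway followers

    def is_influencer(self, threshold: float = 0.8) -> bool:
        # Influencers amplify messages; low-influence skeptics dampen them
        return self.influence >= threshold

# A well-connected amplifier vs. a hard-to-reach skeptic
maya = Persona("Maya", 17, "Urban midfielder, hates ads",
               ["skeptical"], ["soccer"], influence=0.9)
raj = Persona("Raj", 41, "Enterprise IT director",
              ["cautious"], ["security"], influence=0.3)
```

The point of modeling individuals rather than demographic buckets is that properties like `follows` and `influence` let message spread emerge from the graph instead of being assumed.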

You then design campaigns: different messaging strategies, channels, budgets, and timing. Hit "simulate," and Prelude runs your campaigns in a synthetic social network. You watch as personas post, reply, share, and evolve their opinions over simulated weeks compressed into minutes.
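The opinion-evolution part of a tick can be sketched as a simple social-influence update, where each persona drifts toward the mean opinion of the accounts they follow. This is a toy model under our own assumptions, not the engine's actual dynamics:

```python
def evolve_opinions(opinions: dict[str, float],
                    follows: dict[str, set[str]],
                    rate: float = 0.2) -> dict[str, float]:
    """One tick: nudge each persona's opinion (-1..1) toward their feed's mean.

    Personas who follow nobody keep their opinion unchanged.
    """
    updated = {}
    for name, opinion in opinions.items():
        feed = follows.get(name, set())
        if feed:
            feed_mean = sum(opinions[f] for f in feed) / len(feed)
            updated[name] = opinion + rate * (feed_mean - opinion)
        else:
            updated[name] = opinion
    return updated

# A skeptic (-0.5) who follows an enthusiast (+1.0) drifts positive over ticks
opinions = {"maya": -0.5, "enthusiast": 1.0}
follows = {"maya": {"enthusiast"}}
for _ in range(2):
    opinions = evolve_opinions(opinions, follows)
```

Run over simulated weeks, even a crude update rule like this produces the compressed-time dynamics described above: opinions shift tick by tick rather than jumping to a final answer.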

The result: a predictive report showing which strategy wins, which segments respond, where your message spreads organically versus dies, and—crucially—why. You can even interview individual personas to understand their reasoning.

Real use cases we've tested:

  • Predicting how a university community would react to a controversial policy change
  • Simulating the lost ending of Dream of the Red Chamber by running the characters through social dynamics
  • Testing marketing strategies for enterprise software

How we built it

Frontend: React 19 + Vite + TypeScript with D3.js for network visualization, Chart.js for metrics, and a custom design system (warm terracotta tones and cream backgrounds, built to feel calm during long analysis sessions). We use the Vercel AI SDK for real-time streaming.

Backend: Dual-server architecture—FastAPI for the LangGraph agent server (SSE streaming), Flask for the simulation APIs. Python 3.11 with uv for dependency management.

AI Orchestration: LangGraph powers the conversational agent that helps users design campaigns, generate personas, and analyze results. The agent has 25+ tools for database queries, persona management, simulation control, and report generation.

Simulation Engine: We forked and extended OASIS (Open Agent Social Interaction Simulations) from CAMEL-AI. Agents have memory, can post content, follow each other, and form organic social graphs. We added:

  • Persona generation from natural language descriptions
  • Campaign brief parsing and strategy branching
  • Real-time metric streaming to the frontend
  • "God mode" variables to inject events mid-simulation
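The "god mode" extension amounts to scheduling world events that get merged into agents' context at a given tick. A minimal sketch, with an invented `GodMode` class rather than the actual OASIS extension API:

```python
class GodMode:
    """Schedule events to inject into agents' context mid-simulation
    (illustrative; not the real extension's interface)."""

    def __init__(self) -> None:
        self.schedule: dict[int, list[str]] = {}

    def at(self, tick: int, event: str) -> None:
        """Queue an event to fire at a specific simulation tick."""
        self.schedule.setdefault(tick, []).append(event)

    def context_for(self, tick: int) -> str:
        """Text prepended to every agent's prompt at this tick."""
        events = self.schedule.get(tick, [])
        return "\n".join(f"BREAKING: {e}" for e in events)

god = GodMode()
god.at(10, "A competitor launches a rival campaign")
```

The useful property is that agents don't know an event is synthetic: it arrives through the same context channel as organic posts, so their reactions stay in character.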

Memory & Knowledge: Zep Cloud for persistent conversation memory and knowledge graph construction. This lets the system remember your campaigns across sessions and discover relationships between data.


Challenges we ran into

1. LLM Token Costs — Running 100+ agents through 50 simulation ticks burns tokens fast. We implemented aggressive output capping (50KB per tool response), lazy agent initialization, and parallel simulation runs. But the real fix was building a campaign advisor that helps users run smaller, smarter simulations instead of brute-force scale.
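The output-capping part is mechanically simple. A sketch of the idea, with the 50KB limit from above and a hypothetical `cap_output` helper:

```python
MAX_TOOL_OUTPUT = 50 * 1024  # 50KB cap per tool response

def cap_output(text: str, limit: int = MAX_TOOL_OUTPUT) -> str:
    """Truncate a tool response so it can't flood the agent's context window."""
    raw = text.encode("utf-8")
    if len(raw) <= limit:
        return text
    # Cut at the byte limit, dropping any partial trailing character
    clipped = raw[:limit].decode("utf-8", errors="ignore")
    return clipped + "\n…[output truncated at 50KB]"
```

Capping at the tool boundary (rather than inside each tool) means every future tool gets the protection for free, which matters when the agent has 25+ of them.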

2. SSE Streaming Stability — The Vercel AI SDK v6 uses a specific UI Message Stream Protocol. Getting LangGraph's async event stream to match that format required custom event translation. We wrote a ~200-line adapter that maps tool calls, text deltas, and errors into the right SSE frames.
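The heart of that adapter is a per-event translation into SSE `data:` frames. A simplified sketch; the event names and payload shapes here are illustrative stand-ins, not the actual UI Message Stream Protocol:

```python
import json

def to_sse_frame(event: dict) -> str:
    """Map a LangGraph-style stream event to an SSE data frame
    (illustrative field names; the real protocol's frames differ)."""
    kind = event.get("type")
    if kind == "text_delta":
        payload = {"type": "text-delta", "delta": event["text"]}
    elif kind == "tool_call":
        payload = {"type": "tool-call", "toolName": event["name"], "args": event["args"]}
    elif kind == "error":
        payload = {"type": "error", "errorText": event["message"]}
    else:
        # Unknown events pass through as opaque data rather than breaking the stream
        payload = {"type": "data", "data": event}
    return f"data: {json.dumps(payload)}\n\n"

frame = to_sse_frame({"type": "text_delta", "text": "Hello"})
```

Most of the ~200 lines in the real adapter deal with ordering and lifecycle (start/finish frames, interleaved tool calls), but the core is this kind of event-by-event mapping.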

3. State Management Across Services — Frontend, agent server, simulation runner, and database all needed to share state. We settled on SQLite for simplicity (with a db_conn passed through LangGraph's config), but coordinating migrations and keeping schemas in sync was a weekend-long saga.


4. Persona Realism — Early personas felt like caricatures. We iterated on the generation prompt dozens of times, adding constraints like "vary the sentence structure," "include contradictions," and "give them a secret opinion they don't share publicly." The difference between 80% realistic and 95% realistic is the difference between a toy and a tool.


Accomplishments that we're proud of

  • The "Interview Any Agent" feature — After a simulation, you can click any persona and have a conversation with them. Ask why they ignored your campaign. Ask what would have changed their mind. This is where insights surface that no aggregate report can show.

  • Branching simulations — You can fork a simulation at any tick, change a variable (different headline, higher budget, competitor response), and run parallel futures. It's like A/B testing in a time machine.

  • The design system — We built a full design language (documented in DESIGN.md) with warm, muted colors that reduce eye strain during marathon analysis sessions. The "calm focus" aesthetic has become part of the product identity.

  • Open source foundation — We contributed back to the OASIS ecosystem and documented our extensions. The persona generation pipeline is modular and could power other social simulations.


What we learned

Simulation is storytelling. The most valuable output isn't the metric chart—it's the narrative: "Your message spread through the 'early adopter' segment first, but stalled because the connectors in that group didn't find it shareable enough." We rebuilt the report agent three times before it learned to tell stories instead of dumping numbers.

Agents need constraints. The first simulations were chaos—every agent posted constantly, timelines clogged, signal drowned in noise. We learned to model attention economics: agents have limited "posting energy," follow finite feeds, and develop content fatigue.
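Those attention constraints can be sketched as a per-tick budget with topic fatigue. The class name, energy values, and fatigue curve below are our own illustrative choices:

```python
class AttentionBudget:
    """Limit how much an agent posts per tick, with fatigue on repeated topics
    (illustrative model of the constraints described above)."""

    def __init__(self, energy_per_tick: float = 2.0, post_cost: float = 1.0):
        self.energy_per_tick = energy_per_tick
        self.post_cost = post_cost
        self.energy = energy_per_tick
        self.seen: dict[str, int] = {}  # topic -> times already posted

    def new_tick(self) -> None:
        """Energy refills each tick, but topic fatigue persists."""
        self.energy = self.energy_per_tick

    def try_post(self, topic: str) -> bool:
        # Each repeat of a topic raises its effective cost by 50%
        fatigue = 1.0 + 0.5 * self.seen.get(topic, 0)
        cost = self.post_cost * fatigue
        if self.energy < cost:
            return False
        self.energy -= cost
        self.seen[topic] = self.seen.get(topic, 0) + 1
        return True

budget = AttentionBudget()
```

With a budget like this, a persona can hammer one topic only briefly before fatigue prices them out, which is what turned the clogged timelines back into a readable signal.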

The UI is the onboarding. Nobody reads docs. They click buttons. If the first thing they see is a 47-field form, they leave. We rebuilt the campaign creation flow four times until it felt like a conversation: describe what you want, the system suggests the rest.


What's next for Prelude

Backtesting mode — Upload historical campaign data and simulation parameters, and we'll show you where the prediction diverged from reality. Use that to tune the simulation engine's fidelity.

Multi-platform simulation — Currently we simulate a Twitter-like platform. We're adding Instagram (visual content, Stories), Reddit (threaded discussions), and LinkedIn (professional networks).

Collaborative workspaces — Marketing teams need to share campaigns, compare notes, and iterate together. We're building team accounts with shared company contexts.

The API — Today Prelude is a web app. Tomorrow it should be an API you call from your campaign management tool, your CRM, or your Slack bot. "Run a quick simulation on this headline before we ship."

Demo: https://youtu.be/mTwMVQfIULk
