Inspiration

We watched every "AI tool" at hackathons do the same thing — wrap an API call in a chat UI and call it innovation. Meanwhile, real founders are still juggling five tabs, copy-pasting between tools, and acting as the glue between research, content, outreach, and sales. The bottleneck isn't AI capability — it's coordination. We asked: what if you could hire an entire AI launch team with one voice command, and they actually talked to each other?

What it does

Interstice is a multi-agent orchestration system that launches products autonomously. You speak one command — "Launch macroscope.com" — and a CEO agent decomposes your intent, delegates to specialist agents (Research, Content, Outreach, Call), and they execute in parallel while sharing findings through a real-time communication bus. Research feeds Content real competitive data. Outreach writes emails citing actual market intel. Bland AI makes real phone calls. Every high-stakes action hits a human approval gate. And every interaction compounds into persistent memory — skill files grow, brand voice sharpens, market knowledge deepens. Day 30 is exponentially smarter than Day 1. Need a new capability? Add an agent on the fly — it inherits the company's institutional knowledge instantly.
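The scoped-permission plus approval-gate flow can be sketched in a few lines. This is an illustrative model, not our actual Auth0 configuration: the scope strings, `Action` shape, and `authorize` helper are made up for the example.

```typescript
// Hypothetical permission gate: an action passes only if the agent's
// token carries the matching OAuth scope, and high-stakes actions
// additionally require an explicit human approval.
type Action = { scope: string; highStakes: boolean };

function authorize(tokenScopes: string[], action: Action, humanApproved: boolean): boolean {
  if (!tokenScopes.includes(action.scope)) return false; // trust boundary: wrong agent, wrong scope
  if (action.highStakes && !humanApproved) return false; // approval gate for real-world side effects
  return true;
}
```

So a Research token scoped to reading findings can never send email, and even a correctly scoped Outreach token still waits on a human before executing.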

How we built it

Each agent runs as a Claude CLI subprocess with persistent sessions (--resume) — no API keys, no context re-stuffing. Convex powers our real-time backend: a task queue with atomic checkout, an inter-agent message bus, a shared findings channel, and live dashboard subscriptions. Auth0 Machine-to-Machine tokens enforce trust boundaries between agents — scoped OAuth so Research can read but can't send, and Outreach can draft but needs approval to execute. Airbyte (600+ source connectors) syncs lead data from Google Sheets into BigQuery. Postiz handles TikTok content scheduling. Bland AI conducts real outbound phone calls with live transcript streaming. The frontend is React with Convex subscriptions — zero polling, fully reactive.
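A per-agent subprocess wrapper looks roughly like this. The `--resume` flag is from our setup; the `-p`/`--output-format` flags follow the Claude CLI's documented print-mode options, and the `runAgentTurn` helper is an illustrative sketch rather than our exact code.

```typescript
import { spawn } from "node:child_process";

// Build the argv for one turn of a persistent agent session.
function agentArgs(sessionId: string, prompt: string): string[] {
  return ["--resume", sessionId, "-p", prompt, "--output-format", "json"];
}

// Hypothetical wrapper: one subprocess per agent turn, stdout collected
// and returned once the CLI exits.
function runAgentTurn(sessionId: string, prompt: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const child = spawn("claude", agentArgs(sessionId, prompt));
    let out = "";
    child.stdout.on("data", (chunk) => (out += chunk));
    child.on("error", reject);
    child.on("close", (code) =>
      code === 0 ? resolve(out) : reject(new Error(`agent exited with code ${code}`))
    );
  });
}
```

Because the session persists across invocations, each turn resumes with the agent's full history instead of re-sending accumulated context.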

Challenges we faced

  • Agent depth vs. demo speed. Deep agents that reason across multiple steps are slow. We had to architect the heartbeat scheduler so agents could work in the background while the demo kept moving.
  • Inter-agent timing. Content Agent needs Research findings before it can write — but we want parallel execution. We solved this with a pub/sub findings channel: agents subscribe and react when data arrives, rather than blocking.
  • Auth0 scoping. Getting M2M tokens with granular per-agent permissions required careful OAuth scope design. Token Vault for Gmail OAuth was non-trivial to wire through the approval gate system.
  • Airbyte connectivity. Our first destination (Neon Postgres) timed out repeatedly. We pivoted to BigQuery mid-hackathon — IAM propagation delays cost us an hour.
  • Memory architecture. Balancing shared company memory, per-agent skill files, contact memory, and the findings channel without agents overwriting each other required a clear read/write hierarchy.
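The pub/sub findings channel from the inter-agent timing fix can be sketched as an in-memory model. Type names and the replay behavior here are illustrative, not our Convex schema:

```typescript
// Minimal findings channel: agents subscribe by topic and react when
// data arrives instead of blocking on upstream agents. Earlier findings
// are replayed to late subscribers so parallel startup order doesn't matter.
type Finding = { topic: string; agent: string; data: string };
type Handler = (f: Finding) => void;

class FindingsChannel {
  private handlers = new Map<string, Handler[]>();
  private log: Finding[] = [];

  subscribe(topic: string, handler: Handler): void {
    this.handlers.set(topic, [...(this.handlers.get(topic) ?? []), handler]);
    this.log.filter((f) => f.topic === topic).forEach(handler); // replay history
  }

  publish(f: Finding): void {
    this.log.push(f);
    (this.handlers.get(f.topic) ?? []).forEach((h) => h(f));
  }
}
```

With this shape, Research can publish competitive data whenever it lands and Content reacts the moment it does, so neither agent ever sits in a blocking wait.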
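The read/write hierarchy that keeps agents from overwriting each other boils down to a single-writer rule per store. Store and agent names below are hypothetical examples of the idea, not our actual memory layout:

```typescript
// Every agent may read any store, but each store has exactly one writer,
// so shared company memory, per-agent skill files, and contact memory
// can never be clobbered by a peer.
type Store = "company" | "skills:research" | "skills:content" | "contacts";

const WRITERS: Record<Store, string> = {
  company: "ceo",
  "skills:research": "research",
  "skills:content": "content",
  contacts: "outreach",
};

function canWrite(agent: string, store: Store): boolean {
  return WRITERS[store] === agent;
}
```

A new agent added on the fly gets read access to everything immediately (inheriting the institutional knowledge) and a fresh store of its own to write to.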

What we learned

The hardest part of multi-agent systems isn't making agents smart — it's making them collaborate. A single brilliant agent is less valuable than five mediocre agents that share context. The compounding memory system turned out to be our strongest moat: agents that accumulate institutional knowledge create a flywheel no single-prompt tool can replicate.

Built With

  • airbyte
  • auth0
  • bigquery
  • bland-ai
  • claude-cli
  • convex
  • next.js
  • perplexity-api
  • postiz
  • react
  • tailwind-css
  • typescript