Inspiration

Every team building with AI hits the same wall: your AI tools are smart in isolation but completely amnesiac across sessions and teammates. One engineer decides to go API-first in a Cursor chat, another picks a database in Claude, a third makes a design call over Slack, and none of those decisions are accessible to anyone else's AI. The result is constant re-explaining, duplicated research, and contradictory choices that only surface during code review or a frustrated standup. We realized the bottleneck isn't intelligence; it's information. Modern AI agents lack a shared, persistent memory, and that gap gets worse the more agents and people you add. We wanted to build the connective tissue that lets an entire org's AI tools reason from the same evolving context.

What it does

Scope is a shared knowledge layer for teams using AI. You save decisions, context, and knowledge once, and any AI tool (Cursor, Claude Code, Poke, or a custom agent) can pull it back instantly with the right permissions. Memories are organized into workspaces with role-based access control, so managers control what's visible and to whom.

Under the hood, Scope models memory as a knowledge graph: each memory is a node, and relationships, relevance scores, and temporal metadata are stored as edges. When a query comes in, vector search identifies entry points, then an LLM-guided traversal branches across the graph to pull in only the context that actually matters for that task: not just keyword matches, but semantically connected decisions and history. Scope tracks what changed and who's seen it, so teammates and agents always know what's new. It exposes everything as both a REST API and an MCP server, meaning any MCP-compatible tool can store and search team memory out of the box. We also built a CLI (scope init) that auto-configures Claude Code hooks so memories sync without any manual effort: engineers just code, and Scope keeps the org in sync behind the scenes.
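To make the retrieval idea concrete, here's a toy sketch of "vector entry points, then graph traversal." The data shapes and 2-D embeddings are hypothetical, and a plain breadth-first walk stands in for the real LLM-guided traversal:

```python
from collections import deque

# Hypothetical memory graph: nodes carry text + a toy embedding,
# edges link related memories (e.g. a decision to its benchmarks).
memories = {
    "m1": {"text": "Chose Postgres for the main DB", "vec": (1.0, 0.0)},
    "m2": {"text": "Benchmarks: Postgres vs Mongo", "vec": (0.9, 0.1)},
    "m3": {"text": "Design call: API-first", "vec": (0.0, 1.0)},
}
edges = {"m1": ["m2"], "m2": [], "m3": []}  # m1 is supported by m2

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def retrieve(query_vec, k=1, hops=2):
    # 1) Vector search picks the entry points into the graph...
    entry = sorted(memories,
                   key=lambda m: cosine(query_vec, memories[m]["vec"]),
                   reverse=True)[:k]
    # 2) ...then a bounded traversal pulls in connected context.
    seen, queue = set(entry), deque((m, 0) for m in entry)
    while queue:
        node, depth = queue.popleft()
        if depth >= hops:
            continue
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return sorted(seen)

print(retrieve((1.0, 0.0)))  # ['m1', 'm2']: the decision plus its benchmarks
```

The point of the traversal step is that m2 comes back even though the query never mentioned benchmarks; it's connected context, not a keyword match.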

How we built it

The backend is FastAPI with PostgreSQL for structured data and Neo4j + Graphiti for the knowledge graph layer. Graphiti gives us hybrid retrieval (vector similarity, BM25 keyword search, and graph traversal) in one pipeline, powered by OpenAI embeddings. Auth uses bearer tokens with SHA-256 hashed API keys and full workspace-scoped ACLs. The frontend is Next.js 15 / React 19 with Shadcn UI, Zustand for state, and TanStack Query for data fetching, so you can register, manage workspaces, search memories, and invite teammates from the browser; Reagraph renders the memory graph visualization. We built two MCP servers: a general-purpose server with 10 tools for Cursor and Claude Desktop (search, store, list workspaces), and a Poke-specific server with 15 tools that add team messaging, meeting scheduling suggestions, and daily summary generation. We also wrote a Python SDK and a CLI tool that configures Claude Code hooks to auto-push local AI memories to the shared workspace on every relevant file write.
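As an illustration of the API-key scheme described above, here's a minimal stdlib-only sketch (function names are hypothetical): the server stores only a SHA-256 digest of each key, and verification uses a constant-time comparison.

```python
import hashlib
import hmac
import secrets

def issue_key() -> tuple[str, str]:
    # The raw key is shown to the user exactly once; only the digest
    # is persisted server-side (the "sk_" prefix is illustrative).
    key = "sk_" + secrets.token_urlsafe(32)
    digest = hashlib.sha256(key.encode()).hexdigest()
    return key, digest

def verify(presented_key: str, stored_digest: str) -> bool:
    candidate = hashlib.sha256(presented_key.encode()).hexdigest()
    # hmac.compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(candidate, stored_digest)

key, digest = issue_key()
assert verify(key, digest)
assert not verify("sk_wrong", digest)
```

A plain hash (rather than storing keys in the clear) means a leaked database doesn't leak usable bearer tokens.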

Challenges we ran into

The hardest problem was bridging local and shared memory. Claude Code writes memories to local markdown files; getting those to sync upstream without duplicating, losing metadata, or breaking the user's workflow required building a diffing mechanism, a sync-state tracker, and hooks that fire only on relevant file changes. Getting the granularity right (when is a memory "new" vs. "updated"?) was surprisingly tricky. Graphiti's search returns knowledge graph edges (extracted facts), not our original Memory records, so we had to work around a join gap where search results lost tags, scope, creator info, and timestamps; reconciling the graph layer with the relational layer without sacrificing query speed was a constant tension. We also wrestled with per-user visibility of changes. "What's new" is relative: what's new to Alice isn't new to Bob, so we needed per-user read tracking across shared workspaces, which added real complexity to the access log and notification system.
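The "new vs. updated" decision can be sketched with a content-hash sync state. This is a simplified illustration (the real tracker also carries metadata and file paths), but it shows the core classification:

```python
import hashlib

def classify(memory_id: str, content: str, sync_state: dict) -> str:
    # A memory is "new" if its id was never synced, "updated" if its
    # content hash changed since the last sync, else "unchanged".
    digest = hashlib.sha256(content.encode()).hexdigest()
    if memory_id not in sync_state:
        sync_state[memory_id] = digest
        return "new"
    if sync_state[memory_id] != digest:
        sync_state[memory_id] = digest
        return "updated"
    return "unchanged"

state = {}
print(classify("decisions.md#db", "Use Postgres", state))     # new
print(classify("decisions.md#db", "Use Postgres", state))     # unchanged
print(classify("decisions.md#db", "Use Postgres 16", state))  # updated
```

Hashing content instead of comparing timestamps is what keeps a no-op file write (same bytes, new mtime) from producing a duplicate upstream memory.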

Accomplishments that we're proud of

We shipped a full product loop in a hackathon: API, SDK, CLI, two MCP servers, a frontend dashboard, and a working Poke integration. A developer can run scope init in any project, and from that point on every AI tool they use shares the same memory: no config, no copy-pasting, no standups to re-sync. We're proud of the graph-based retrieval. It's not just semantic search; it's traversal. When you ask "why did we choose Postgres?", Scope doesn't just find that memory; it walks the graph to surface the performance benchmarks, the alternatives that were rejected, and the teammate who made the call. That kind of contextual depth is what separates a real memory system from a vector store with a search bar. We also got the Poke MCP integration working end-to-end: Poke users can ask conversational questions ("what did the team decide about the launch?"), get daily "spokes" summarizing what everyone accomplished, and even have Poke suggest when a meeting is actually necessary based on disagreements or major code changes it detects in the shared memory.

What we learned

We learned that memory is not storage; it's structure. A flat vector database gives you recall, but teams need hierarchy (workspaces, permissions), temporality (what changed and when), and relationships (why decisions connect). Building that graph layer was the difference between a demo and something teams would actually trust. We also learned how powerful MCP is as a distribution layer. Instead of building integrations one by one, exposing Scope as an MCP server meant that Cursor, Claude Code, Poke, and any future MCP client could plug in immediately; the protocol did the distribution work for us. Finally, we learned that the real unlock isn't giving AI more memory; it's giving teams more memory. The moment two people's AI tools share context, the compounding effect is immediate: less repeated work, faster onboarding, and decisions that actually stick.
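One concrete consequence of "temporality" being per-user: "what's new" has to be computed against each reader's own cursor, not a global flag. A minimal sketch with hypothetical data:

```python
from datetime import datetime, timezone

def ts(hour):  # helper for readable same-day timestamps
    return datetime(2025, 1, 1, hour, tzinfo=timezone.utc)

memories = [
    {"id": "m1", "created_at": ts(9)},
    {"id": "m2", "created_at": ts(17)},
]
# Per-user read cursors: alice caught up at noon; bob has never read.
last_seen = {"alice": ts(12)}

def unseen(user: str) -> list[str]:
    cursor = last_seen.get(user, datetime.min.replace(tzinfo=timezone.utc))
    return [m["id"] for m in memories if m["created_at"] > cursor]

print(unseen("alice"))  # ['m2']
print(unseen("bob"))    # ['m1', 'm2']
```

The same two memories yield different "new" sets for Alice and Bob, which is exactly why a single workspace-wide "unread" flag can't work.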

What's next for Scope

Short term: production deployment with proper auth (OAuth, not just API keys), a hosted version so teams don't need to self-host, and a polished scope watch daemon that continuously syncs in the background instead of relying on hooks. Medium term: expand integrations beyond dev tools by connecting to Slack, Notion, and Google Docs, so memories flow in from where decisions actually happen, not just where code gets written. We also want to build automatic memory agents that scan for stale or conflicting memories and suggest merges or pruning. Long term: we want Scope to be the default memory layer for every AI agent in an organization. Not just dev teams: sales, ops, product, and support. Any team running parallel AI workflows hits the same information gap. Our vision is that every AI tool an org uses reads and writes to Scope, so the entire company's AI operates with one shared, evolving understanding. The era of containerized conversations is over.

Built With

  • anthropic
  • axios
  • cursor
  • cypher
  • docker
  • fastapi
  • fastmcp
  • graphiti
  • motion
  • next.js
  • openai
  • poke
  • pydantic
  • python
  • react
  • reagraph
  • tailwind
  • typescript
  • uvicorn
  • zod
  • zustand