# RC.JS

A web-first, source-grounded thinking workspace where PDFs, notes, GitHub content, and Slack exports become a navigable knowledge graph.
RC.JS is inspired by NotebookLM-style research workflows, but rendered as a spatial graph experience: ingest sources, generate concepts/questions/tasks, ask grounded questions, and explore evidence paths.
## Contents

- Overview
- Core Capabilities
- Current Implementation Status
- Tech Stack
- Architecture
- Project Structure
- Getting Started
- Environment Variables
- Database and Migrations
- Running the App
- Background Worker Modes
- Seeding Demo Data
- Testing
- API Surface
- Data Contracts and Schemas
- Troubleshooting
- Roadmap and Known Gaps
## Overview

RC.JS turns source material into a graph-shaped workspace for thinking and decision-making:
- Start from a prompt and create a workspace.
- Ingest sources (PDFs, notes, GitHub, Slack export text).
- Build a typed graph (concepts, entities, questions, tasks, insights, clusters).
- Ask questions and get source-grounded answers with citations and graph highlights.
- Publish and browse public workspaces; fork public work into private workspaces.
The product direction is organized around two principles from specs/README.md:
- Centralization: good knowledge should be reusable through a shared public graph.
- Validation: answers and graph claims should be grounded in explicit source evidence.
## Core Capabilities

- Prompt-first landing experience and workspace flow.
- Private and public workspaces.
- Source ingestion pipeline with chunking and extraction.
- 2D/3D graph visualization for workspace exploration.
- Ask pipeline with citations and highlight IDs.
- Public gallery and fork flow.
- Background job orchestration via Postgres queues.
## Current Implementation Status

Status is based on implementation-plan.md, specs/README.md, and the domain specs.

- Implemented:
  - Workspace CRUD and graph/ask APIs.
  - Public gallery, public workspace read, and public fork API.
  - Source endpoints (upload, note, GitHub, Slack import).
  - Ingestion jobs and worker orchestration with inline/background modes.
  - AI pipeline modules for chunking, extraction, embedding retrieval, and grounded answers.
  - Unit/integration tests and Playwright E2E flow scaffolding.
- Partially implemented or in progress:
  - Some UX wiring for landing/workspace controls is still being polished (tracked in the product-experience spec checklists).
  - End-to-end reliability depends on a complete local environment setup (Supabase + Gemini keys).
## Tech Stack

- Frontend:
  - Next.js 16, React 19, TypeScript
  - Tailwind CSS 4
  - Framer Motion + GSAP
  - react-force-graph-3d + Three.js
  - Zustand
- Backend:
  - Next.js Route Handlers
  - Supabase (Auth, Postgres, Storage, RLS)
  - pgvector for embeddings
  - pg-boss for background orchestration
- AI:
  - Google Gemini API via @google/generative-ai
  - Embeddings plus structured extraction and grounded-answer flows
- Testing:
  - Vitest + Testing Library
  - Playwright
## Architecture

High-level flow:

1. A source is added to a workspace.
2. Content is normalized and chunked.
3. Chunks are embedded and persisted.
4. Extraction generates typed graph nodes/edges and a workspace summary.
5. The workspace graph is rendered in the 3D/2D UI.
6. The ask pipeline retrieves relevant chunks plus the graph neighborhood and returns a grounded answer with citations.

Processing can run inline (a synchronous dev fallback) or via a dedicated background worker.
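The normalize-and-chunk step above can be sketched as a simple sliding-window chunker. This is an illustrative sketch only; the actual implementation in src/lib may use different sizes, overlap, and token-aware boundaries:

```typescript
interface Chunk {
  index: number;
  text: string;
}

// Illustrative sliding-window chunker: normalize whitespace, then slice the
// text into fixed-size windows that overlap so context spans chunk borders.
// Sizes here are assumptions, not the repo's actual defaults.
function chunkText(raw: string, chunkSize = 1000, overlap = 200): Chunk[] {
  // Normalize: collapse whitespace so chunk boundaries are stable.
  const text = raw.replace(/\s+/g, " ").trim();
  const chunks: Chunk[] = [];
  let start = 0;
  let index = 0;
  while (start < text.length) {
    const end = Math.min(start + chunkSize, text.length);
    chunks.push({ index: index++, text: text.slice(start, end) });
    if (end === text.length) break;
    start = end - overlap; // step back to create the overlap window
  }
  return chunks;
}
```

Each chunk would then be embedded and persisted to `source_chunks` before extraction runs.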
## Project Structure

- App routes and APIs: src/app
- Reusable UI components: src/components
- Core domain and integrations: src/lib
- Client workspace state: src/stores
- Specs and implementation contracts: specs
- DB migrations: supabase/migrations
- Scripts (seed/worker): scripts
- E2E tests: e2e
## Getting Started

Install dependencies:

```bash
npm install
```

Copy .env.example to .env and fill in the required values:

```bash
cp .env.example .env
```

Run the SQL migrations in supabase/migrations against your Supabase Postgres instance. You can do this with the Supabase CLI or by executing the migration files in order in the SQL editor.

Create a Supabase Storage bucket named `sources`. Migrations do not create storage buckets, and PDF/GitHub source ingestion expects this bucket to exist.

Start the dev server and open http://localhost:3000:

```bash
npm run dev
```
## Environment Variables

Defined in .env.example:

- Required:
  - `NEXT_PUBLIC_SUPABASE_URL`
  - `NEXT_PUBLIC_SUPABASE_ANON_KEY`
  - `SUPABASE_SERVICE_ROLE_KEY`
  - `GEMINI_API_KEY` (required for AI processing/ask)
- Optional:
  - `ELEVENLABS_API_KEY` (TTS routes/features)
  - `E2E_USER_EMAIL`, `E2E_USER_PASSWORD`, `E2E_GITHUB_URL` (Playwright scenarios)

Additional optional variables (not currently listed in .env.example):

- `WORKER_MODE` (`inline` by default; any non-inline value enables background worker mode)
- `DATABASE_URL` (required for the `pg-boss` background worker)
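A startup check can fail fast when required variables are missing. The variable names below come from .env.example; the check itself is an illustrative sketch, not code from the repo:

```typescript
// Required variable names, as listed in .env.example.
const REQUIRED_ENV = [
  "NEXT_PUBLIC_SUPABASE_URL",
  "NEXT_PUBLIC_SUPABASE_ANON_KEY",
  "SUPABASE_SERVICE_ROLE_KEY",
  "GEMINI_API_KEY",
];

// Return the names of required variables that are unset or empty.
function missingEnv(env: Record<string, string | undefined>): string[] {
  return REQUIRED_ENV.filter((name) => !env[name]);
}

// Example: warn before the app boots with a partial configuration.
const missing = missingEnv(process.env);
if (missing.length > 0) {
  console.warn(`Missing required env vars: ${missing.join(", ")}`);
}
```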
## Database and Migrations

Primary schema and API contracts are documented in SCHEMAS.md.

Key schema areas:

- Core entities: `workspaces`, `sources`, `source_chunks`
- Graph entities: `graph_nodes`, `graph_edges`
- Conversation/ask: `conversations`, `messages`
- Orchestration: `ingestion_jobs`
- Enum contracts and response shapes are treated as locked/stable in SCHEMAS.md

Migration files live in supabase/migrations and cover:

- Initial enums/tables/RLS
- Chunk upsert + graph metadata
- Job progress + worker orchestration
- Conversation types
- Similarity matching function (`match_chunks`)
## Running the App

Development:

```bash
npm run dev
```

Production build locally:

```bash
npm run build
npm run start
```

Lint:

```bash
npm run lint
```

## Background Worker Modes

RC.JS supports two processing modes:

- Inline mode (default): route handlers execute processing synchronously after enqueueing.
- Background mode: a `pg-boss` queue plus a dedicated worker process handles jobs.

Run the worker:

```bash
npm run worker
```

When using background mode, ensure `WORKER_MODE` is set to a non-inline value and `DATABASE_URL` is configured.
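The inline/background split can be captured by a tiny mode-resolution helper. This is a sketch based on the WORKER_MODE convention described above, not the repo's actual dispatch code:

```typescript
type ProcessingMode = "inline" | "background";

// WORKER_MODE defaults to "inline"; any other non-empty value selects
// background mode, where jobs are handled by the pg-boss worker process.
function resolveProcessingMode(workerMode: string | undefined): ProcessingMode {
  if (!workerMode || workerMode === "inline") return "inline";
  return "background";
}

// In inline mode a route handler would process a job synchronously after
// enqueueing it; in background mode it only enqueues and returns.
```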
## Seeding Demo Data

Seed a demo workspace:

```bash
npm run seed
```

Seed the public-gallery dataset (if needed for demos):

```bash
npm run seed:public
```

Seed scripts live in scripts/seed.ts and scripts/seed-public.ts.
## Testing

Unit/integration tests:

```bash
npm test
```

Watch mode:

```bash
npm run test:watch
```

Coverage:

```bash
npm run test:coverage
```

E2E (Playwright):

```bash
npm run test:e2e
```

E2E UI mode:

```bash
npm run test:e2e:ui
```

E2E flows rely on the environment values described in e2e/workspace-flow.spec.ts.
## API Surface

Major route groups under src/app/api (non-exhaustive):

- Workspaces:
  - `POST /api/workspaces`
  - `GET /api/workspaces`
  - `GET /api/workspaces/:id`
  - `PATCH /api/workspaces/:id`
- Sources:
  - `POST /api/workspaces/:id/sources/upload`
  - `POST /api/workspaces/:id/sources/note`
  - `POST /api/workspaces/:id/sources/github`
  - `POST /api/workspaces/:id/sources/slack-import`
  - `GET /api/workspaces/:id/sources`
  - `POST /api/workspaces/:id/process`
  - `GET /api/workspaces/:id/ingestion-status`
- Graph:
  - `GET /api/workspaces/:id/graph`
  - `GET /api/workspaces/:id/graph/node/:nodeId`
  - Graph edit/merge endpoints for interactive graph tooling:
    - `POST /api/workspaces/:id/graph-edit`
    - `POST /api/workspaces/:id/graph/edge`
    - `POST /api/workspaces/:id/graph/merge-nodes`
- Ask and conversation:
  - `POST /api/workspaces/:id/ask`
  - `GET /api/workspaces/:id/conversation`
  - `POST /api/workspaces/:id/builder-chat`
- Public:
  - `GET /api/public-workspaces`
  - `GET /api/public-workspaces/:id`
  - `POST /api/public-workspaces/:id/fork`
  - `POST /api/public-workspaces/:id/ask`
- Utility:
  - `POST /api/tts`

Private endpoints require `Authorization: Bearer <supabase_access_token>`.
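A call to a private endpoint can be sketched like this. The endpoint path comes from the list above, but the request body shape (`{ question }`) is an assumption; see SCHEMAS.md for the real ask contract:

```typescript
// Build headers for a private RC.JS endpoint using a Supabase access token.
function authHeaders(accessToken: string): Record<string, string> {
  return {
    "Content-Type": "application/json",
    Authorization: `Bearer ${accessToken}`,
  };
}

// Illustrative ask request; the { question } body shape is an assumption.
async function askWorkspace(
  baseUrl: string,
  workspaceId: string,
  accessToken: string,
  question: string
) {
  const res = await fetch(`${baseUrl}/api/workspaces/${workspaceId}/ask`, {
    method: "POST",
    headers: authHeaders(accessToken),
    body: JSON.stringify({ question }),
  });
  if (!res.ok) throw new Error(`Ask failed with status ${res.status}`);
  return res.json();
}
```

In the app itself the access token would come from the logged-in Supabase session rather than being passed around by hand.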
## Data Contracts and Schemas

Use SCHEMAS.md as the living source of truth for:

- Enum and table contracts
- API response shapes (`GraphResponse`, `AskResponse`, `SourceListResponse`)
- Cross-team function contracts between the backend and AI layers
- Malleability status (locked/stable/flexible)

Additional implementation and handoff notes are captured in NOTES.md.
## Troubleshooting

- Empty or failing private API calls:
  - Verify the bearer token is passed from a logged-in session.
  - Verify the Supabase URL/keys in .env.
- Sources stuck in processing:
  - Check `WORKER_MODE` and, in background mode, whether the worker is running.
  - Inspect `ingestion_jobs` and the source statuses.
- Ask returns insufficient evidence:
  - Ensure source chunks were created and embeddings were generated.
  - Check the Gemini API key and the retrieval pipeline logs.
- Public gallery empty:
  - Seed the demo/public data and confirm workspaces have `visibility = 'public'`.
## Roadmap and Known Gaps

From the product and backend specs, remaining improvements include:
- Complete all UX wiring for create/publish/build actions in every shell path.
- Further polish demo flow and screenshot/story assets.
- Harden production deployment and observability setup.
- Expand connector depth and reliability.
For detailed acceptance checklists and ownership boundaries, see the spec documents under specs/.