# TeachMe AI (ChalkAI)

AI-powered lecture assistant that generates real-time visuals on a tldraw canvas as a teacher speaks.
## Architecture Overview
```mermaid
%%{init: {'theme': 'dark', 'themeVariables': {'primaryColor': '#2a2a2a', 'primaryTextColor': '#e0e0e0', 'primaryBorderColor': '#555', 'lineColor': '#888', 'secondaryColor': '#1a1a1a', 'tertiaryColor': '#333', 'edgeLabelBackground': '#1a1a1a', 'clusterBkg': '#1e1e1e', 'clusterBorder': '#555'}}}%%
graph TB
    subgraph FE ["Frontend (React + Vite)"]
        direction LR
        subgraph ASSISTANT ["assistant-ui"]
            A1["🎤 Mic capture"]
            A2["📝 Transcript display"]
            A3["🔵 Agent status"]
        end
        subgraph TLDRAW ["tldraw Canvas"]
            T1["✏️ editor.createShapes()"]
            T2["📸 Canvas snapshots"]
            T3["🔲 Viewport awareness"]
        end
    end

    WS["WebSocket /ws/session\n→ speech chunks + canvas state\n← agent actions + status"]

    subgraph BE ["Backend (FastAPI + Railtracks)"]
        direction TB
        subgraph ORCH_BOX [" "]
            ORCH["🟢 Orchestrator · @rt.agent_node\nWindow → Decision → Resolve\ndraw_artifact | annotate | wait | review"]
        end
        subgraph PIPELINE_BOX ["Transcript Pipeline"]
            direction LR
            INGEST["ChunkIngestor\ningest.py"]
            WINDOWER["WindowBuilder\nwindow=6, min_new=3"]
        end
        subgraph DECISION_BOX ["Decision Routing"]
            direction LR
            RESOLVER["ArtifactResolver\nquery → fixture → ops"]
            ANNOTATOR["Annotator\ncallouts + arrows"]
        end
        subgraph TOOLS_BOX ["Tools · @rt.function_node"]
            direction LR
            TOOL1["get_recent_transcript"]
            TOOL2["get_recent_decisions"]
            TOOL3["find_matching_artifact"]
        end
        subgraph LLM_BOX ["LLM Layer"]
            LLM["OpenAI gpt-5.4-mini\n(via LiteLLM)"]
        end
        PUB["EventPublisher\npub/sub broadcast"]
    end

    subgraph BB ["SessionState · Blackboard"]
        direction LR
        BB1["📜 transcript_chunks\nappend-only + cursor"]
        BB2["🖼️ drawn_artifacts\nfamilies on canvas"]
        BB3["🧠 recent_decisions\norchestrator history"]
        BB4["🪟 recent_windows\nsliding windows"]
        BB5["📦 artifact_registry\npre-built tldraw shapes"]
    end

    FE --- WS
    WS -->|speech chunks| INGEST
    PUB -->|sends actions| WS
    INGEST --> WINDOWER
    WINDOWER -->|TranscriptWindow| ORCH
    ORCH -->|calls| LLM
    LLM -->|OrchestratorDecision| ORCH
    ORCH -->|calls| TOOL1
    ORCH -->|calls| TOOL2
    ORCH -->|calls| TOOL3
    ORCH -->|"draw_artifact"| RESOLVER
    ORCH -->|"annotate"| ANNOTATOR
    TOOL1 -->|reads| BB4
    TOOL2 -->|reads| BB3
    TOOL3 -->|reads| BB5
    RESOLVER -->|reads| BB5
    INGEST -->|writes| BB1
    RESOLVER -->|writes| BB2
    ANNOTATOR -->|writes| BB2
    ORCH -->|writes| BB3
    WINDOWER -->|writes| BB4
    ORCH --> PUB
    RESOLVER --> PUB
    ANNOTATOR --> PUB
```
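The decision routing in the diagram is plain dispatch on the intent the LLM returns. The sketch below is hypothetical: the field names and handler signatures are illustrative stand-ins, not the real Pydantic schema in `domain/models.py` or the Railtracks node API.

```python
from dataclasses import dataclass

@dataclass
class OrchestratorDecision:
    """Illustrative stand-in for the structured decision the LLM returns."""
    intent: str       # "draw_artifact" | "annotate" | "wait" | "review"
    query: str = ""   # artifact lookup text (draw_artifact)
    target: str = ""  # existing artifact to label (annotate)

def route(decision, resolve, annotate):
    """Fan one decision out the way the orchestrator routes to services."""
    if decision.intent == "draw_artifact":
        return resolve(decision.query)    # ArtifactResolver path
    if decision.intent == "annotate":
        return annotate(decision.target)  # Annotator path
    return None                           # wait / review: no canvas ops
```

Keeping routing this dumb is deliberate: the model only picks an intent, and all canvas-op generation lives in deterministic services.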
## How It Works
1. **Speech → Chunks** — The teacher speaks into the microphone. Speech-to-text produces transcript chunks sent to the backend via `POST /sessions/:id/chunks`.
2. **Chunks → Windows** — `ChunkIngestor` appends chunks to `SessionState`. `WindowBuilder` groups them into sliding windows (size 6, min 3 new chunks) and fires when ready.
3. **Window → Decision** — `OrchestrationService` sends the window text + canvas state to a Railtracks `agent_node`. The LLM returns a structured `OrchestratorDecision` with one of three intents:
   - `draw_artifact` — draw a new visual from the fixture library
   - `annotate` — add a label/callout to an existing visual
   - `wait` — do nothing (content is vague or logistical)
4. **Decision → Canvas Ops** — For `draw_artifact`, `ArtifactResolver` matches the query against the `ArtifactRegistry` and instantiates a JSON template into tldraw `create_shape` ops. For `annotate`, `Annotator` generates callout + arrow ops near the target artifact.
5. **Ops → Canvas** — A `CanvasOpBatch` is published via `EventPublisher`. The frontend `EventAdapter` receives the ops and applies them to the tldraw editor.
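The batching in step 2 can be sketched as a cursor over an append-only chunk list. This is a minimal sketch under the stated parameters (window=6, min_new=3); the real `WindowBuilder` in `transcript/windowing.py` may differ in detail.

```python
from dataclasses import dataclass, field

@dataclass
class WindowBuilder:
    """Sliding-window batcher sketch: emit the most recent `window` chunks
    once at least `min_new` chunks have arrived since the last emission."""
    window: int = 6
    min_new: int = 3
    _chunks: list = field(default_factory=list)
    _cursor: int = 0  # index just past the chunks already covered

    def add(self, chunk: str):
        """Append a chunk; return a window if enough new chunks arrived, else None."""
        self._chunks.append(chunk)
        if len(self._chunks) - self._cursor >= self.min_new:
            self._cursor = len(self._chunks)
            return self._chunks[-self.window:]
        return None
```

This is why the orchestrator is not called per chunk: the LLM only sees coherent multi-chunk segments, and overlap between consecutive windows preserves context across the boundary.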
## Key Design Decisions
| Decision | Rationale |
|---|---|
| Prebuilt artifact fixtures | LLMs are unreliable at generating valid tldraw JSON. Fixtures guarantee visual quality. The model decides what to draw; the system owns how. |
| Blackboard architecture | `SessionState` is the single source of truth. Every component reads/writes to it, enabling stateful multi-step reasoning. |
| Concept hierarchy | `transformer_stack` subsumes `attention_matrix`. If the parent is already on canvas, the child is not drawn separately, preventing redundant visuals. |
| Railtracks for orchestration | Structured output schemas + tool calling without owning the app lifecycle. |
| Sliding window | Avoids per-chunk LLM calls. Batches transcript into coherent segments before reasoning. |
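The concept-hierarchy rule reduces to a small subsumption check before drawing. A minimal sketch, assuming the hierarchy is a parent-to-children map (the `transformer_stack` → `attention_matrix` edge is from the table above; `token_grid` as a second child is an illustrative assumption):

```python
# Hypothetical family hierarchy: parent concept -> child concepts it subsumes.
HIERARCHY = {"transformer_stack": {"attention_matrix", "token_grid"}}

def should_draw(family: str, on_canvas: set) -> bool:
    """Skip a draw if the family, or a parent that subsumes it, is already drawn."""
    if family in on_canvas:
        return False
    return not any(
        family in children and parent in on_canvas
        for parent, children in HIERARCHY.items()
    )
```

A check like this would run before `ArtifactResolver` emits ops, so a lecture that drills from the full stack into attention never draws the attention matrix twice.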
## Project Structure
```text
backend/
  app.py                   # FastAPI factory, wires all dependencies
  api/routes.py            # HTTP endpoints
  domain/
    models.py              # Pydantic models (TranscriptChunk, OrchestratorDecision, etc.)
    state.py               # SessionState (blackboard) + SessionStore
  orchestration/
    service.py             # OrchestrationService - runs the agent loop
    prompts.py             # System prompt + family hierarchy rules
    tools.py               # Railtracks function_node tools
  artifacts/
    registry.py            # Loads & indexes JSON fixture files
    resolver.py            # Maps decisions → CanvasOpBatch
    annotator.py           # Generates annotation callout ops
    fixtures/              # JSON templates (token_grid, attention_matrix, etc.)
  transcript/
    ingest.py              # ChunkIngestor
    windowing.py           # WindowBuilder (sliding window)
  streaming/
    publisher.py           # EventPublisher (pub/sub)
    subscribers.py         # Console + Recorder subscribers
  simulation/
    replay.py              # ReplayRunner for offline testing
  tests/
    test_integration.py    # 7 end-to-end integration tests
    test_traces/           # JSON execution traces from test runs
frontend/
  src/
    components/
      canvas/CanvasPane.tsx  # tldraw canvas wrapper
      assistant/             # Thread panel, reasoning display
      layout/AppShell.tsx    # Main layout shell
    runtime/
      api-client.ts          # Backend HTTP client
      event-adapter.ts       # Translates backend events → canvas ops
    types.ts                 # TypeScript type definitions
```
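`streaming/publisher.py` fans each event out to every subscriber (WebSocket sender, console, recorder). A minimal pub/sub sketch, assuming a synchronous callback interface; the real publisher may be async:

```python
class EventPublisher:
    """Minimal broadcast pub/sub: subscribers register a callback,
    and publish() delivers each event to all of them in order."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, event):
        for callback in self._subscribers:
            callback(event)
```

Broadcasting through one publisher is what lets the Console and Recorder subscribers observe the exact op stream the frontend receives, which is also how the test traces in `tests/test_traces/` are captured.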
## Running
```bash
# Backend
cd backend
pip install -r requirements.txt
cp .env.example .env  # add OPENAI_API_KEY
uvicorn backend.app:create_app --factory --reload

# Frontend
cd frontend
npm install
npm run dev

# Tests
python -m pytest backend/tests/test_integration.py -v --tb=short -s
```
