Journaling that remembers and reasons in context.
Synapse is a memory-first reflection agent built for the London LangChain x SurrealDB Hackathon. It turns unstructured reflections into a persistent knowledge graph, then answers questions from that evolving graph context.
Public demo: synapse-frontend-vdmo.onrender.com
- Open the public demo: synapse-frontend-vdmo.onrender.com
- Create an account (or log in).
- Go to `reflect`, write a short reflection, and press `reflect`.
- Review extracted patterns, emotions, themes, and follow-up questions.
- Switch to `talk` and ask a question like: *What pattern shows up most when I mention work feedback?*
Behind the scenes, your reflection runs through a 6-node LangGraph pipeline — extraction, graph storage, vector embedding, insight generation, and more. This can take 15-30 seconds on the first run, so take a mindful moment and let it work its magic.
Start chat: @synapse_helper_bot
- Message the Synapse Telegram bot: @synapse_helper_bot (for deployments where the bot is enabled).
- First-time setup is inline: share email + password when prompted.
- Send a text reflection and get analysis back in chat.
- Or send a voice note; Synapse transcribes it and runs the same reflection pipeline.
- If you already have a web account, use `/link` to connect Telegram to that account.
Most AI journaling tools are stateless or shallowly stateful. They can sound empathetic, but they forget pattern history and repeat generic advice.
Synapse is built to solve that:
- persistent, structured memory in SurrealDB
- agent orchestration with LangGraph + LangChain tools
- grounded responses based on stored user context, not only the latest prompt
Synapse is grounded in established reflection methodologies, not generic motivational output:
- CBT (Cognitive Behavioral Therapy):
  - Looks for thought patterns such as catastrophizing, all-or-nothing thinking, and mind-reading.
  - Stores these as recurring `pattern` nodes and links them to themes/people that co-occur.
  - Helps users spot repeat loops and test alternative interpretations.
- DBT (Dialectical Behavior Therapy):
  - Captures emotional valence/intensity plus trigger context and body-state cues.
  - Stores emotional patterns in `emotion` nodes plus `triggered_by`/`expresses` relationships.
  - Helps users identify when dysregulation starts and what tends to escalate or reduce it.
- IFS (Internal Family Systems):
  - Detects internal parts and role dynamics (`manager`, `firefighter`, `exile`).
  - Persists these in `ifs_part` nodes and activation links from reflections.
  - Helps users separate protective strategies from underlying vulnerable states.
- Schema Therapy:
  - Tracks deeper enduring schemas (for example abandonment, self-sacrifice, unrelenting standards).
  - Stores schema domain and coping style (`surrender`, `avoidance`, `overcompensation`) in `schema_pattern` nodes.
  - Helps users connect present reactions to long-running life patterns.
How this links to the system end-to-end:
- The extraction agent applies these lenses to each reflection and outputs structured JSON.
- SurrealDB stores each lens as durable graph entities and relationships.
- LangGraph queries historical graph context before synthesis, so insights reflect trajectory over time.
- The chat agent uses graph tools to answer from concrete history: patterns, people, triggers, body signals, and schema/IFS context.
This is the core product value: clinically informed reflection frameworks translated into persistent machine-readable memory, then used by agents to deliver clearer, more consistent, and more context-aware guidance over time.
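For a concrete sense of what "machine-readable memory" means here, a single reflection's extraction might look roughly like the following. This is a minimal sketch: the field names and values are illustrative assumptions mirroring the node types above, not the repo's exact schema.

```python
import json

# Hypothetical extraction payload -- keys mirror the lenses described
# above (CBT patterns, DBT emotions, IFS parts, schemas, body signals);
# the real schema is defined by the extraction agent's prompt.
extraction = {
    "patterns": [{"name": "catastrophizing", "evidence": "assumed the worst about feedback"}],
    "emotions": [{"name": "anxiety", "intensity": 7, "trigger": "work feedback"}],
    "themes": ["work"],
    "people": [{"name": "my manager", "relationship": "colleague"}],
    "ifs_parts": [{"role": "manager", "behavior": "over-prepares to avoid criticism"}],
    "schema_patterns": [{"schema": "unrelenting standards", "coping_style": "overcompensation"}],
    "body_signals": ["tight chest"],
}

print(json.dumps(extraction, indent=2))
```

Each top-level key becomes a node table in SurrealDB, and the cross-references (emotion trigger, person, coping style) become typed edges.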
Important boundary: Synapse is a reflection tool and pattern coach, not a replacement for therapy or medical care.
Synapse includes explicit safety and reliability guardrails across prompts and runtime behavior:
- Safety guardrails on every prompt: crisis-response instructions, non-diagnostic constraints, and non-pathologizing language are prepended across extraction, chat, insight, and follow-up prompts.
- Crisis-oriented instruction set: prompts include explicit crisis resource routing (UK/US/international) and direct the model to prioritize immediate support language.
- Data sensitivity rule: prompts instruct the system to summarize patterns from history rather than unexpectedly quoting sensitive past reflection text.
- Strict structured-output contracts: extraction is instructed to return JSON-only; follow-up generation is instructed to return exactly 3 questions in JSON array format.
- Fail-safe parsing behavior: if extraction or follow-up parsing fails, the pipeline degrades gracefully to safe fallback outputs instead of crashing request flow.
- Input normalization and tenancy boundaries: reflection source values are normalized to known enums, and core graph/data queries are user-scoped by `user_id`.
- Protected API surface: reflection/chat/dashboard endpoints require JWT auth.
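The fail-safe parsing behavior can be sketched like this. It is a minimal illustration: the function name, fallback wording, and validation rules are assumptions, not the repo's actual code.

```python
import json

FALLBACK_QUESTIONS = [
    "What felt most significant about this moment?",
    "Where did you notice this in your body?",
    "What would you say to a friend in the same situation?",
]

def parse_followups(raw: str) -> list[str]:
    """Degrade gracefully: if the model's output is not a valid JSON
    array of exactly 3 question strings, return safe fallbacks instead
    of crashing the request flow. (Illustrative sketch only.)"""
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        return FALLBACK_QUESTIONS
    if isinstance(parsed, list) and len(parsed) == 3 and all(isinstance(q, str) for q in parsed):
        return parsed
    return FALLBACK_QUESTIONS
```

The same shape applies to extraction parsing: a malformed model response yields an empty-but-valid extraction rather than a 500 error.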
User data is stored and accessed with layered controls in the current implementation:
- Password security: user passwords are hashed with `bcrypt` (never stored in plaintext).
- Token-based access: protected endpoints require signed JWT bearer tokens.
- User-scoped data access: reflection and graph queries are filtered by `user_id` to isolate each account's data.
- Account integrity controls: user email has a unique index, and password reset tokens are unique with a 1-hour expiry.
- Telegram linking control: Telegram accounts are linked through explicit credential verification (`/link`) or in-bot account creation.
- Secrets-based configuration: database/API credentials are loaded from environment variables rather than hardcoded.
For production deployments, set a strong `JWT_SECRET`, enforce secure environment management, and apply your platform/network security policies.
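A minimal sketch of the user-scoping pattern (illustrative function and field names; the repo's query layer will differ in detail). The point is that `user_id` is always a bound parameter supplied by the authenticated session, never interpolated from request input:

```python
def reflections_for_user(user_id: str) -> tuple[str, dict]:
    """Build a parameterised, user-scoped SurrealQL query.

    user_id comes from the verified JWT, so every row returned belongs
    to the authenticated account. (Sketch, not the repo's code.)
    """
    query = (
        "SELECT * FROM reflection "
        "WHERE user_id = $user_id "
        "ORDER BY created_at DESC;"
    )
    return query, {"user_id": user_id}
```

Binding `$user_id` server-side means a tampered request body cannot widen the query's scope to another tenant's data.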
The biggest engineering challenge was end-to-end latency on the reflection pipeline. A single reflection submission triggers a 6-node LangGraph pipeline that includes:
- 2+ LLM calls in the extraction agent (Claude Sonnet must call retrieval tools before extracting)
- 20+ OpenAI embedding API calls to vectorize each extracted entity
- 50-150+ sequential SurrealDB writes for edge creation (pattern co-occurrences, emotion-theme links, person-pattern triggers)
- 2 further LLM calls for insight and follow-up generation
The critical path is mostly sequential — only store_reflection and extract_patterns run in parallel, then everything converges.
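A rough budget makes clear why the sequential round-trips mattered. The per-call costs below are illustrative assumptions for the sketch, not measurements from the repo:

```python
# Back-of-envelope latency budget; all per-call costs are assumed
# for illustration (e.g. ~50 ms per sequential DB round-trip).
llm_seconds = 4 * 5.0        # ~4 LLM calls at ~5 s each
embed_seconds = 20 * 0.15    # ~20 embedding round-trips at ~150 ms
db_seconds = 150 * 0.05      # 150 sequential writes at ~50 ms

total = llm_seconds + embed_seconds + db_seconds
print(total)  # 30.5 -- sequential DB writes alone contribute 7.5 s
```

Even with generous LLM assumptions, the non-LLM round-trips are a material slice of wall time, which is what the batching work described in the optimisation section targets.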
We tested several models for the extraction agent:
| Model | Latency | Quality |
|---|---|---|
| Claude Sonnet 4.6 | ~35-45s | Best tool use, deepest extraction, consistent JSON |
| GPT-4.1 | ~25s | Faster but weaker at multi-tool orchestration and nuanced pattern recognition |
| GPT-5-mini | ~75-85s | Slower than Sonnet with less accurate extractions |
We chose Sonnet because extraction quality directly determines graph accuracy — every future insight and chat answer depends on how well the initial extraction captures patterns, schemas, and relationships. A faster but shallower extraction compounds into worse results over time.
- Parallelising LangGraph nodes — helped slightly for the first two nodes, but the remaining four must run sequentially (each depends on the previous output).
- Streaming an early insight before the full extraction completes — this actually increased total latency because the additional LLM call competed for resources and delayed the main pipeline.
- Progressive status updates — what we shipped. The frontend cycles through contextual messages ("Analysing patterns...", "Building your graph...", "Pulling insights...") to keep the user oriented while the backend works. Not a latency fix, but it makes the wait feel purposeful rather than broken.
After profiling in LangSmith, we implemented four targeted optimisations that cut 6–10 seconds from each reflection pipeline run:
- **Batched SurrealDB writes** — grouped `RELATE` statements into batches of 50 per query call instead of issuing them one at a time, cutting dozens of sequential round-trips
- **Batched embeddings** — replaced per-entity `embed_query()` calls with a single `embed_documents()` call for all entities in a reflection, reducing OpenAI API round-trips from 10–20+ down to 1
- **Parallel insight + follow-up generation** — `generate_insights` and `generate_followups` now fan out from `query_graph` simultaneously instead of running sequentially, saving one full LLM call's worth of wall time
- **SSE streaming with real pipeline state** — the frontend now receives Server-Sent Events from `astream_events`, so progress messages ("Extracting patterns...", "Building your graph...") reflect actual node execution rather than cycling through placeholder text
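The write-batching optimisation reduces to a simple chunking helper. This is a sketch under assumptions (helper name and exact batching strategy are illustrative); SurrealDB does accept multiple `;`-separated statements in one query call, which is what makes the grouping pay off:

```python
def batch_relates(statements: list[str], batch_size: int = 50) -> list[str]:
    """Group single RELATE statements into multi-statement payloads so
    one DB round-trip carries up to batch_size writes. (Illustrative
    helper, not the repo's code.)"""
    return [
        " ".join(statements[i : i + batch_size])
        for i in range(0, len(statements), batch_size)
    ]

edges = [f"RELATE reflection:r1->reveals->pattern:p{i};" for i in range(120)]
payloads = batch_relates(edges)
print(len(payloads))  # 3 round-trips instead of 120
```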
- Ingests reflections through a 6-node LangGraph pipeline
- Extracts patterns/emotions/themes/IFS parts/schemas/people/body signals using CBT/DBT/IFS/Schema Therapy lenses
- Persists graph entities + typed relations + vector embeddings in SurrealDB
- Generates personalized insights and follow-up questions
- Supports conversational "ask your graph" analysis with a ReAct tool-calling agent
- Ships as a full app surface:
  - React web app (`reflect` + `talk`)
  - FastAPI backend
  - Telegram bot where users can reflect by text or voice note
Synapse is intentionally multichannel. Reflection capture is not limited to the web UI.
Bot link: @synapse_helper_bot
- Text reflections in Telegram: send your reflection as a message and Synapse returns extracted patterns, emotions, insights, and follow-up prompts.
- Voice-note reflections in Telegram: send a voice note; Synapse transcribes it with `whisper-1`, then runs the same LangGraph reflection pipeline.
- Shared memory model: Telegram and web reflections land in the same user-scoped SurrealDB graph, so context compounds across channels.
We implemented a dedicated eval harness in `evals.py`, run with `uv run python evals.py`:

- Extraction quality eval (`eval_extraction`)
  - Runs multiple curated reflection cases.
  - Checks expected pattern/person/emotion/body-signal/IFS-schema coverage.
  - Produces per-case scores and misses for fast prompt iteration.
- Graph integrity eval (`eval_graph_integrity`)
  - Checks for orphaned reflections, duplicate entities, invalid IFS roles, and invalid co-occurrence edges.
  - Verifies embedding coverage across key node tables.
- Chat grounding eval (`eval_chat_grounding`)
  - Uses must-mention and must-not-mention assertions on targeted questions.
  - Designed to catch grounding regressions and hallucinated claims.
- Pipeline performance eval (`eval_performance`)
  - Measures full reflection pipeline latency.
  - Reports extracted entity counts and points to LangSmith traces for node-level timing.
All eval suites are `@traceable`, so run data is inspectable in LangSmith under `eval_*` traces for deeper debugging and regression review.
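The grounding assertions reduce to simple membership checks. A minimal sketch with an assumed helper shape (not the code in `evals.py`):

```python
def grounding_check(answer: str, must_mention: list[str], must_not_mention: list[str]) -> dict:
    """Return which required terms are missing from the answer and
    which forbidden terms leaked in. Empty lists mean the case passes.
    (Illustrative sketch only.)"""
    text = answer.lower()
    return {
        "missing": [t for t in must_mention if t.lower() not in text],
        "leaked": [t for t in must_not_mention if t.lower() in text],
    }

result = grounding_check(
    "Catastrophizing co-occurs with work feedback in 4 reflections.",
    must_mention=["catastrophizing", "work feedback"],
    must_not_mention=["diagnosis"],
)
print(result)  # {'missing': [], 'leaked': []}
```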
```mermaid
flowchart LR
    U["User writes reflection"] --> API1["POST /api/reflection"]
    API1 --> LG["LangGraph reflection pipeline"]
    LG --> N1["store_reflection"]
    LG --> N2["extract_patterns (ReAct)"]
    N2 --> T1["Tool: get_existing_patterns"]
    N2 --> T2["Tool: retrieve_similar_reflections"]
    N1 --> N3["update_graph"]
    N2 --> N3
    N3 --> N4["query_graph"]
    N4 --> N5["generate_insights"]
    N4 --> N6["generate_followups"]
    N3 --> SDB[("SurrealDB knowledge graph")]
    N1 --> VDB[("SurrealDB vector store")]
    Q["User asks a question"] --> API2["POST /api/chat or /api/chat/stream"]
    API2 --> CHAT["ReAct chat agent"]
    CHAT --> GT["14 graph/search tools"]
    GT --> SDB
    GT --> VDB
    CHAT --> A["Grounded answer"]
    LSM["LangSmith tracing"] -.-> LG
    LSM -.-> CHAT
```
The reflection workflow in `reflect/agent.py` is a typed `StateGraph` with explicit node boundaries:
- `store_reflection`
- `extract_patterns`
- `update_graph`
- `query_graph`
- `generate_insights`
- `generate_followups`
START fans out to both store_reflection and extract_patterns, then joins before graph updates. This enforces deterministic multi-step behavior while still allowing parallel start stages.
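The fan-out/join shape can be restated as a plain dependency map. Node names come from the pipeline above; the real wiring uses LangGraph `StateGraph` edges, and `generate_followups` fans out from `query_graph` per the parallelisation described in the optimisation section:

```python
# Dependencies per node (illustrative restatement of the graph above).
PIPELINE = {
    "store_reflection": [],
    "extract_patterns": [],
    "update_graph": ["store_reflection", "extract_patterns"],
    "query_graph": ["update_graph"],
    "generate_insights": ["query_graph"],
    "generate_followups": ["query_graph"],
}

def runnable_now(done: set[str]) -> list[str]:
    """Nodes whose dependencies are satisfied and that haven't run yet."""
    return sorted(
        n for n, deps in PIPELINE.items()
        if n not in done and all(d in done for d in deps)
    )

print(runnable_now(set()))  # ['extract_patterns', 'store_reflection']
```

Only the first two nodes ever run concurrently at the start; after the join at `update_graph` the critical path is sequential until the final fan-out.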
- Extraction agent (`reflect/extraction.py`) uses `create_react_agent` and must call retrieval tools before extraction.
- Chat agent (`reflect/chat_agent.py`) uses `create_react_agent` with 14 tools from `reflect/graph_store.py` (overview, deep-dive, trigger, temporal, and hybrid search tools).
- Chat streaming (`/api/chat/stream`) emits SSE tokens from `astream_events` in `reflect/service.py`.
- Prompts encode therapeutic framing and safety boundaries so outputs stay observational and non-diagnostic.
Extraction tools (used before pattern extraction):
- `retrieve_similar_reflections(query)`: semantic retrieval of relevant prior reflections for context.
- `get_existing_patterns()`: fetches current graph patterns so extraction reuses canonical labels.
Chat tools (14-tool runtime):
- `hybrid_graph_search(query)`: semantic KNN lookup across pattern/IFS/schema/person/theme nodes.
- `get_all_patterns_overview()`: frequency-ranked pattern inventory plus co-occurrence summary.
- `get_all_emotions_overview()`: emotion inventory with trigger summary.
- `get_ifs_parts_overview()`: IFS parts with role + source reflection context.
- `get_schemas_overview()`: schema patterns with domain/coping style + source reflections.
- `get_people_overview()`: people entities, relationship types, and triggered patterns.
- `get_person_deep_dive(person_name)`: full relationship impact drill-down for one person.
- `get_body_signals_overview()`: somatic markers and occurrence frequency.
- `get_deep_pattern_analysis(pattern_name)`: roots/context for a specific pattern across graph links.
- `get_graph_summary()`: top-level graph totals and key connection snapshot.
- `get_emotion_triggers(emotion_name)`: themes linked to a specific emotion.
- `get_pattern_connections(pattern_name)`: co-occurring patterns and linked reflections.
- `get_temporal_evolution(pattern_name)`: timeline view of pattern appearance over reflections.
- `semantic_search_reflections(query)`: reflection-document semantic retrieval for grounded quoting/synthesis.
- Reflection and chat graphs are compiled with `MemorySaver` for thread continuity.
- Thread IDs are normalized (`reflection-session-*`, `chat-session-*`) and passed through the API.
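Thread-ID normalization can be as small as this hypothetical helper (the prefixes come from the docs above; the function name and shape are assumptions):

```python
def normalize_thread_id(kind: str, raw: str) -> str:
    """Ensure checkpointer thread IDs carry the expected
    reflection-session-* / chat-session-* prefix. (Sketch only.)"""
    prefix = f"{kind}-session-"
    return raw if raw.startswith(prefix) else prefix + raw

print(normalize_thread_id("chat", "abc123"))  # chat-session-abc123
```

A stable, prefixed thread ID is what lets `MemorySaver` resume the right conversation state across requests.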
`@traceable` instrumentation is applied on key chain functions and graph operations, so traces capture:
- node-level latency and outputs
- tool call sequencing
- end-to-end pipeline behavior per reflection/chat run
This gives judges and builders visibility into agent reliability, not just final output text.
Synapse uses SurrealDB as both:
- graph database for typed entities and relationships
- vector backend for semantic retrieval
Node tables: `reflection`, `pattern`, `theme`, `emotion`, `ifs_part`, `schema_pattern`, `person`, `body_signal`
Relationship edges:
- `reveals` (reflection -> pattern)
- `expresses` (reflection -> emotion)
- `about` (reflection -> theme)
- `mentions` (reflection -> person)
- `triggers_pattern` (person -> pattern)
- `activates` (reflection -> ifs_part)
- `triggers_schema` (reflection -> schema_pattern)
- `feels_in_body` (reflection -> body_signal)
- plus co-occurrence and trigger edges
- The extractor can reuse existing labels instead of creating duplicates.
- The chat agent can traverse explicit relationships for grounded answers.
- Hybrid graph/vector search catches fuzzy language while preserving structure.
- User-scoped records (`user_id`) keep each user graph isolated.
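As an illustration of why explicit edges matter, a single SurrealQL graph traversal can answer "which patterns does each reflection reveal?" without joins. This is a sketch: edge/table names come from the schema above, but the `content` field and exact projection syntax are assumptions.

```python
def patterns_revealed_query() -> str:
    """Build a user-scoped SurrealQL traversal along reveals edges.
    (Illustrative query string; not executed here.)"""
    return (
        "SELECT content, ->reveals->pattern.name AS patterns "
        "FROM reflection WHERE user_id = $user_id;"
    )

print(patterns_revealed_query())
```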
- Reflection documents are embedded for semantic recall.
- Core graph node tables are embedded for semantic graph lookup.
- SurrealDB v3 vector behavior is patched in `reflect/db.py` for HNSW + cosine KNN compatibility with `langchain-surrealdb`.
- Orchestration: LangGraph + LangChain
- Backend: FastAPI (Python 3.12+)
- Frontend: React + TypeScript (Vite)
- Database: SurrealDB (graph + vector)
- Embeddings: OpenAI `text-embedding-3-small`
- Extraction/chat model: Anthropic `claude-sonnet-4-6`
- Insight/follow-up generation: OpenAI `gpt-5-mini`
- Telegram voice transcription: OpenAI `whisper-1`
- Charts: Recharts
- Tracing: LangSmith
`GET /health`
- `POST /api/auth/register`
- `POST /api/auth/login`
- `POST /api/auth/reset-request`
- `POST /api/auth/reset-confirm`
- `GET /api/daily-prompt`
- `POST /api/reflection`
- `POST /api/chat`
- `POST /api/chat/stream` (SSE)
- `GET /api/dashboard`
- `GET /api/people`
- `GET /api/reflections`
Render Blueprint: `render.yaml`
Services:
- `synapse-backend` (web)
- `synapse-telegram` (worker)
- `synapse-frontend` (static web)
- More secure authentication: add refresh-token rotation, email verification, optional MFA, and stronger session management controls.
- Production-grade privacy controls: add data export/delete workflows, retention settings, and auditable access logs.
- Reliability hardening: add retry/backoff around model and DB calls, plus clearer degraded-mode responses when providers fail.
- Testing and CI: add automated unit/integration tests for extraction parsing, graph queries, and core API routes.
- Observability upgrades: add latency/error dashboards, alerting thresholds, and trace-linked incident debugging workflows.
- Schema migrations: move from startup schema init to versioned migrations for safer deployments.
- Expanded reflection intelligence: improve longitudinal trend summaries and better weekly/monthly pattern change views.
Install `just` on macOS: `brew install just`

Create `.env` in the repo root.
Required:
- `OPENAI_API_KEY`
- `ANTHROPIC_API_KEY`
- `SURREAL_URL`
- `SURREAL_USER`
- `SURREAL_PASS`
Recommended defaults/optional:
- `SURREAL_NS=main` (or `SURREAL_NAMESPACE`)
- `SURREAL_DB=main` (or `SURREAL_DATABASE`)
- `JWT_SECRET=<random-long-secret>`
- `CORS_ORIGINS=http://localhost:5173`
- `LANGCHAIN_TRACING_V2=true`
- `LANGCHAIN_PROJECT=synapse-hackathon`
- `LANGCHAIN_API_KEY=<if using LangSmith>`
- `TELEGRAM_BOT_TOKEN=<required for Telegram bot>`
Run `just sync`. This runs backend dependency sync (`uv sync`) and frontend dependency sync when needed.
Run `just dev`.

- API: `http://localhost:8000`
- Frontend: `http://localhost:5173`
Stop services with `just stop`.

Run the Telegram bot in a separate terminal with `just telegram`, or run all services in one terminal with `just dev-all`.

- Visit `http://localhost:5173`
- Register a new account (or log in)
- Submit reflections in `reflect`
- Query your graph in `talk`
- `just backend` and `just frontend` run each service on its own.
- `seed_data.py` is useful for pipeline experiments, but it does not attach a `user_id`; dashboard/chat views in the authenticated app are user-scoped.
- If frontend dependencies drift, rerun `just dev` or `just sync`.




