Telepathy is a Next.js healthcare web app with patient and clinician portals, AI-assisted documentation, multilingual support, and 3D body-map workflows.
- Role-based auth (patient/clinician) with signup and login.
- Dashboard, appointments, current visit, visit history, profile, and report pages.
- Find-doctor flow with assignment and clinician browsing.
- 3D body map to mark pain regions and add notes.
- Upload media during current visit (images/video) for clinician review.
- Visit summary and historical records view.
- Clinician dashboard, appointments, patient list, current visit, records, and body-map pages.
- Live-style current visit workspace with transcript feed, EMR form autofill, priority markers, and diagnosis visualizations.
- Doctor and AI views over patient-submitted body regions.
- Patient detail views with records and contextual data.
- EMR extraction from visit conversation.
- Financial and clinical insights from patient profile data.
- Possible-differential generation based on patient/visit context.
- Translation support for multilingual workflows.
- ElevenLabs STT/TTS integration for voice input/output.
- Media/frame analysis endpoints for visit artifacts.
This project uses Featherless as an OpenAI-compatible LLM backend.
- Wrapper: `lib/featherless.ts`
- AI orchestration: `lib/aiService.ts`
- Base URL: `https://api.featherless.ai/v1`
- Required key: `FEATHERLESS_API_KEY`
- Primary models used:
  - `Qwen/Qwen3-32B` for general analysis/translation workflows.
  - `nicoboss/Qwen-3-32B-Medical-Reasoning` for structured medical note generation.
- Request behavior:
  - retries on transient errors (e.g., 429/503),
  - disables "thinking" output where supported,
  - strips `<think>...</think>` blocks before parsing.
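The request behavior above can be sketched with a plain `fetch` against the OpenAI-compatible endpoint. This is a minimal, hypothetical reconstruction — `chatWithRetry` and `stripThink` are illustrative names, not the actual exports of `lib/featherless.ts`:

```typescript
// Sketch of a Featherless chat call with retry-on-transient-error and
// <think> stripping. Function names are illustrative, not the repo's API.
const BASE_URL = "https://api.featherless.ai/v1";

// Remove any <think>...</think> reasoning blocks before parsing the reply.
function stripThink(text: string): string {
  return text.replace(/<think>[\s\S]*?<\/think>/g, "").trim();
}

async function chatWithRetry(
  apiKey: string, // value of FEATHERLESS_API_KEY
  messages: { role: string; content: string }[],
  model = "Qwen/Qwen3-32B",
  maxRetries = 3
): Promise<string> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(`${BASE_URL}/chat/completions`, {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ model, messages }),
    });
    // Retry only transient statuses (rate limit / unavailable).
    if ((res.status === 429 || res.status === 503) && attempt < maxRetries) {
      await new Promise((r) => setTimeout(r, 2 ** attempt * 1000)); // backoff
      continue;
    }
    if (!res.ok) throw new Error(`Featherless error ${res.status}`);
    const data = await res.json();
    return stripThink(data.choices[0].message.content);
  }
}
```

The same client works against the medical model by passing `nicoboss/Qwen-3-32B-Medical-Reasoning` as `model`.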
Gemini can be used as a supplemental provider for EMR and differential generation when `GEMINI_API_KEY` is configured (`lib/gemini.ts`).
Transcription uses ElevenLabs Speech-to-Text (Scribe v2).
Flow:
- Patient records a voice note in current visit and chooses language (optional).
- On submit, the app uploads media to `POST /api/visits/[id]/media`, then calls `POST /api/visits/[id]/submit` with `{ speechLanguage }`.
- Submit marks the visit as submitted, then triggers voice processing via `POST /api/visits/[id]/process-voice`.
- Voice processing reads `patientMedia.voiceRecordingUrl`, sends the audio to ElevenLabs STT, optionally translates to English, and saves the transcript into `visit.conversation`.
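The client-side portion of this flow can be sketched as below. Only the endpoint paths and the `{ speechLanguage }` field come from this README; the other payload field names are assumptions:

```typescript
// Hypothetical sketch of the patient submit flow. Endpoint paths match the
// README; payload shapes beyond { speechLanguage } are assumed.
function visitEndpoint(visitId: string, action: string): string {
  return `/api/visits/${visitId}/${action}`;
}

async function submitVisit(
  visitId: string,
  voiceDataUrl: string, // audio data URL, e.g. audio/webm
  speechLanguage?: string
): Promise<void> {
  // 1. Upload the recorded voice note before submitting.
  await fetch(visitEndpoint(visitId, "media"), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ voiceRecordingUrl: voiceDataUrl }), // assumed field name
  });

  // 2. Mark the visit as submitted, passing the optional language hint.
  await fetch(visitEndpoint(visitId, "submit"), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ speechLanguage }),
  });

  // 3. Trigger server-side STT, optional translation, and transcript save.
  await fetch(visitEndpoint(visitId, "process-voice"), { method: "POST" });
}
```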
Requirements:
- `ELEVENLABS_API_KEY` in `.env.local`.
- Voice must be uploaded before submit.
- Supported input: audio data URL (e.g. `audio/webm`), up to 5MB per recording.
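A client-side check matching the 5MB data-URL limit could look like the following sketch (the actual validation location and helper names are assumptions):

```typescript
// Approximate decoded size of a base64 data URL such as
// "data:audio/webm;base64,....": every 4 base64 chars encode 3 bytes,
// minus any '=' padding.
function dataUrlBytes(dataUrl: string): number {
  const base64 = dataUrl.slice(dataUrl.indexOf(",") + 1);
  const padding = base64.endsWith("==") ? 2 : base64.endsWith("=") ? 1 : 0;
  return (base64.length * 3) / 4 - padding;
}

const MAX_RECORDING_BYTES = 5 * 1024 * 1024; // 5MB per recording

function isRecordingTooLarge(dataUrl: string): boolean {
  return dataUrlBytes(dataUrl) > MAX_RECORDING_BYTES;
}
```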
If transcript is missing:
- Verify `ELEVENLABS_API_KEY` is valid.
- Check server logs for `ElevenLabs STT API error` or `Process-voice STT failed`.
- Framework: Next.js 14 (App Router), React 18, TypeScript
- Styling/UI: Tailwind CSS, Framer Motion, Lucide React
- Data layer: MongoDB + Mongoose
- 3D body-map: Three.js + `@react-three/fiber` + `@react-three/drei`
- AI/LLM:
  - Featherless (OpenAI SDK compatibility)
  - Google Gemini (`@google/genai`)
- Voice:
  - ElevenLabs client + SDK (`@elevenlabs/client`, `@elevenlabs/elevenlabs-js`)
- Charts/visuals: Recharts
- Auth/security primitives: `bcryptjs` for password hashing
- `app/` - App Router pages and API routes
- `components/` - shared UI and feature components
- `components/body-views/` - 3D body model and body-map views
- `lib/` - DB, AI, model schemas, utility modules
- `public/` - static assets (including body-model files and logo assets)
Main route groups under `app/api`:
- Auth: `/api/auth`
- Seed data: `/api/seed`
- Patients: `/api/patients`, `/api/patients/[id]`, `/api/patients/[id]/visits`, `/api/patients/[id]/records`
- Clinicians: `/api/clinicians`, `/api/clinicians/[id]`, `/api/clinicians/[id]/patients`
- Visits: `/api/visits`, `/api/visits/[id]`, `/api/visits/invite`
- Visit AI/media/body-map: `/api/visits/[id]/submit`, `/api/visits/[id]/process-voice`, `/api/visits/[id]/possible-diseases`, `/api/visits/[id]/body-map`, `/api/visits/[id]/media`, `/api/visits/[id]/analyze-media`
- AI/utilities: `/api/pipeline`, `/api/translate`, `/api/transcribe`, `/api/tts`
- ElevenLabs-specific: `/api/elevenlabs`, `/api/elevenlabs/stt`
Create `.env.local` in the repo root:

```
MONGODB_URI=your_mongodb_connection_string
FEATHERLESS_API_KEY=your_featherless_api_key
GEMINI_API_KEY=your_gemini_api_key_optional
ELEVENLABS_API_KEY=your_elevenlabs_api_key_optional
```

Notes:
- `MONGODB_URI` is required for app startup.
- Featherless/Gemini/ElevenLabs keys enable AI and voice features.
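Since the optional keys gate AI and voice features, the availability check can be sketched as follows. `enabledFeatures` is a hypothetical helper, not an actual module in the repo:

```typescript
// Hypothetical feature-gating helper: each optional key enables a feature
// group, matching the note that AI/voice features degrade when keys are
// missing. The real gating logic lives in the app's lib/ modules.
function enabledFeatures(env: Record<string, string | undefined>) {
  return {
    llm: Boolean(env.FEATHERLESS_API_KEY),      // Featherless analysis/notes
    geminiFallback: Boolean(env.GEMINI_API_KEY), // supplemental EMR/differentials
    voice: Boolean(env.ELEVENLABS_API_KEY),      // STT/TTS workflows
  };
}
```

For example, with only `FEATHERLESS_API_KEY` set, LLM features run while voice routes would report themselves unavailable.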
Install dependencies: `npm install`

Run the dev server (the default script binds to `127.0.0.1:3000`): `npm run dev`

If port 3000 is busy, override: `npm run dev -- --port 3020 --hostname 127.0.0.1`

Populate demo clinicians, patients, visits, and records: `curl -X POST http://127.0.0.1:3000/api/seed`

Use the test accounts listed in `test-accounts.md`.
- `npm run dev` - start local dev server
- `npm run build` - production build
- `npm run start` - run production server
- `npm run lint` - run Next lint
- Next.js version in this repo is `14.2.25` (npm may warn about security patches; upgrade when feasible).
- Some AI/voice features gracefully degrade when optional API keys are missing.