DeekshaBandi/AxxcessHack

Telepathy (AxxcessHack)

Telepathy is a Next.js healthcare web app with patient and clinician portals, AI-assisted documentation, multilingual support, and 3D body-map workflows.

Core Features

Patient experience

  • Role-based auth (patient/clinician) with signup and login.
  • Dashboard, appointments, current visit, visit history, profile, and report pages.
  • Find-doctor flow with assignment and clinician browsing.
  • 3D body map to mark pain regions and add notes.
  • Upload media during current visit (images/video) for clinician review.
  • Visit summary and historical records view.

Clinician experience

  • Clinician dashboard, appointments, patient list, current visit, records, and body-map pages.
  • Live-style current visit workspace with transcript feed, EMR form autofill, priority markers, and diagnosis visualizations.
  • Doctor and AI views over patient-submitted body regions.
  • Patient detail views with records and contextual data.

AI + language workflows

  • EMR extraction from visit conversation.
  • Financial and clinical insights from patient profile data.
  • Possible-differential generation based on patient/visit context.
  • Translation support for multilingual workflows.
  • ElevenLabs STT/TTS integration for voice input/output.
  • Media/frame analysis endpoints for visit artifacts.

Featherless AI (explicit)

This project uses Featherless as an OpenAI-compatible LLM backend.

  • Wrapper: lib/featherless.ts
  • AI orchestration: lib/aiService.ts
  • Base URL: https://api.featherless.ai/v1
  • Required key: FEATHERLESS_API_KEY
  • Primary models used:
      • Qwen/Qwen3-32B for general analysis and translation workflows.
      • nicoboss/Qwen-3-32B-Medical-Reasoning for structured medical note generation.
  • Request behavior:
      • retries on transient errors (e.g. HTTP 429/503),
      • disables "thinking" output where the model supports it,
      • strips <think>...</think> blocks before parsing.
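The request behavior above can be sketched roughly as follows. `stripThink` and `withRetries` are illustrative names for this sketch, not the actual exports of lib/featherless.ts, and the backoff schedule is an assumption:

```typescript
// Remove <think>...</think> blocks emitted by reasoning models
// before the response text is parsed or displayed.
function stripThink(text: string): string {
  return text.replace(/<think>[\s\S]*?<\/think>/g, "").trim();
}

// Retry an async request on transient errors (e.g. HTTP 429/503),
// with a short exponential backoff between attempts.
async function withRetries<T>(
  fn: () => Promise<T>,
  isTransient: (err: unknown) => boolean,
  maxAttempts = 3,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (!isTransient(err) || attempt === maxAttempts - 1) throw err;
      await new Promise((resolve) => setTimeout(resolve, 500 * 2 ** attempt));
    }
  }
  throw lastError;
}
```

A caller would wrap each Featherless chat-completion call in `withRetries` and pass the response content through `stripThink` before any JSON parsing.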

Gemini can be used as a supplemental provider for EMR and differential generation when GEMINI_API_KEY is configured (lib/gemini.ts).

Voice Transcription (ElevenLabs)

Transcription uses ElevenLabs Speech-to-Text (Scribe v2).

Flow:

  1. The patient records a voice note in the current visit and optionally chooses a language.
  2. On submit, the app uploads the media to POST /api/visits/[id]/media, then calls POST /api/visits/[id]/submit with { speechLanguage }.
  3. Submit marks the visit as submitted, then triggers voice processing via POST /api/visits/[id]/process-voice.
  4. Voice processing reads patientMedia.voiceRecordingUrl, sends the audio to ElevenLabs STT, optionally translates the transcript to English, and saves it into visit.conversation.
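Steps 2-3 of the flow above can be sketched from the client side. `submitVoiceNote` is a hypothetical helper, and the exact request bodies are assumptions; the fetch implementation is injectable so the sequence can be exercised without a running server:

```typescript
// Minimal fetch-like signature so the flow is testable without a server.
type FetchLike = (
  url: string,
  init?: { method?: string; headers?: Record<string, string>; body?: unknown },
) => Promise<{ ok: boolean; status: number }>;

// Upload the recorded audio, then submit the visit with the chosen
// speech language. Mirrors the order in the flow: media before submit.
async function submitVoiceNote(
  visitId: string,
  audioForm: unknown, // FormData containing the recorded audio
  speechLanguage: string | undefined,
  fetchImpl: FetchLike, // pass the global fetch in the app
): Promise<void> {
  // 1. Upload the media first; the voice must exist before submit.
  const upload = await fetchImpl(`/api/visits/${visitId}/media`, {
    method: "POST",
    body: audioForm,
  });
  if (!upload.ok) throw new Error(`media upload failed: ${upload.status}`);

  // 2. Submit the visit, passing the optional speech language.
  const submit = await fetchImpl(`/api/visits/${visitId}/submit`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ speechLanguage }),
  });
  if (!submit.ok) throw new Error(`submit failed: ${submit.status}`);
}
```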

Requirements:

  • ELEVENLABS_API_KEY set in .env.local.
  • The voice recording must be uploaded before the visit is submitted.
  • Supported input: an audio data URL (e.g. audio/webm), up to 5MB per recording.

If the transcript is missing:

  • Verify that ELEVENLABS_API_KEY is valid.
  • Check the server logs for "ElevenLabs STT API error" or "Process-voice STT failed".

Tech Stack

  • Framework: Next.js 14 (App Router), React 18, TypeScript
  • Styling/UI: Tailwind CSS, Framer Motion, Lucide React
  • Data layer: MongoDB + Mongoose
  • 3D body-map: Three.js + @react-three/fiber + @react-three/drei
  • AI/LLM:
      • Featherless (via OpenAI SDK compatibility)
      • Google Gemini (@google/genai)
  • Voice:
      • ElevenLabs client + SDK (@elevenlabs/client, @elevenlabs/elevenlabs-js)
  • Charts/visuals: Recharts
  • Auth/security primitives: bcryptjs for password hashing

Project Structure

  • app/ - App Router pages and API routes
  • components/ - shared UI and feature components
  • components/body-views/ - 3D body model and body-map views
  • lib/ - DB, AI, model schemas, utility modules
  • public/ - static assets (including body-model files and logo assets)

API Surface (high level)

Main route groups under app/api:

  • Auth: /api/auth
  • Seed data: /api/seed
  • Patients: /api/patients, /api/patients/[id], /api/patients/[id]/visits, /api/patients/[id]/records
  • Clinicians: /api/clinicians, /api/clinicians/[id], /api/clinicians/[id]/patients
  • Visits: /api/visits, /api/visits/[id], /api/visits/invite
  • Visit AI/media/body-map:
      • /api/visits/[id]/submit
      • /api/visits/[id]/process-voice
      • /api/visits/[id]/possible-diseases
      • /api/visits/[id]/body-map
      • /api/visits/[id]/media
      • /api/visits/[id]/analyze-media
  • AI/utilities:
      • /api/pipeline, /api/translate, /api/transcribe, /api/tts
  • ElevenLabs-specific:
      • /api/elevenlabs, /api/elevenlabs/stt
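The per-visit routes above follow a consistent /api/visits/[id]/<action> shape, which a small helper can make explicit. `visitRoute` is purely illustrative and not part of the codebase:

```typescript
// The per-visit actions exposed under /api/visits/[id]/.
type VisitAction =
  | "submit"
  | "process-voice"
  | "possible-diseases"
  | "body-map"
  | "media"
  | "analyze-media";

// Build a URL for the base visit route or one of its sub-routes.
function visitRoute(id: string, action?: VisitAction): string {
  const base = `/api/visits/${encodeURIComponent(id)}`;
  return action ? `${base}/${action}` : base;
}
```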

Environment Variables

Create .env.local in repo root:

MONGODB_URI=your_mongodb_connection_string
FEATHERLESS_API_KEY=your_featherless_api_key
GEMINI_API_KEY=your_gemini_api_key_optional
ELEVENLABS_API_KEY=your_elevenlabs_api_key_optional

Notes:

  • MONGODB_URI is required for app startup.
  • Featherless/Gemini/ElevenLabs keys enable AI and voice features.
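The notes above can be sketched as a simple startup check: fail fast without MONGODB_URI, and derive feature availability from the optional keys. `featureFlags` is a hypothetical helper; the app's actual gating logic may differ:

```typescript
// Which optional features are available, based on the environment.
interface FeatureFlags {
  llm: boolean;    // Featherless-backed analysis
  gemini: boolean; // supplemental EMR/differential generation
  voice: boolean;  // ElevenLabs STT/TTS
}

// MONGODB_URI is required for startup; each optional key only
// gates its own feature, so missing keys degrade gracefully.
function featureFlags(env: Record<string, string | undefined>): FeatureFlags {
  if (!env.MONGODB_URI) {
    throw new Error("MONGODB_URI is required for app startup");
  }
  return {
    llm: Boolean(env.FEATHERLESS_API_KEY),
    gemini: Boolean(env.GEMINI_API_KEY),
    voice: Boolean(env.ELEVENLABS_API_KEY),
  };
}
```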

Local Development

Install dependencies:

npm install

Run dev server (default script binds to 127.0.0.1:3000):

npm run dev

If port 3000 is busy, override:

npm run dev -- --port 3020 --hostname 127.0.0.1

Seed Test Data

Populate demo clinicians, patients, visits, and records:

curl -X POST http://127.0.0.1:3000/api/seed

Use the test accounts listed in:

  • test-accounts.md

Scripts

  • npm run dev - start local dev server
  • npm run build - production build
  • npm run start - run production server
  • npm run lint - run Next lint

Current Notes

  • Next.js version in this repo is 14.2.25 (npm may warn about security patches; upgrade when feasible).
  • Some AI/voice features gracefully degrade when optional API keys are missing.
