This template provides a minimal setup to get React working in Vite with HMR and some ESLint rules.
Currently, two official plugins are available:
- @vitejs/plugin-react uses Babel for Fast Refresh
- @vitejs/plugin-react-swc uses SWC for Fast Refresh
A privacy-first web application that transcribes and summarizes doctor-patient conversations into structured clinical notes, featuring distinct doctor and patient views.
- Local Audio Recording: Browser-based audio capture with real-time duration tracking
- Privacy-First: All AI processing (Whisper + Ollama) runs locally
- Doctor View: Structured SOAP-format clinical notes with review/edit capabilities
- Patient View: Simplified, jargon-free summaries with consent-based sharing
- Consent Management: Digital consent workflow for recording and summary sharing
- Mobile-Responsive: Clean, modern UI that works on desktop and mobile devices
- Frontend: React 18 + TypeScript + Vite
- Styling: Tailwind CSS
- Backend: Flask (Python)
- AI Models: Whisper (transcription) + Ollama (summarization)
- Storage: Local storage (MVP) / JSON files
- Icons: Lucide React
- Node.js (v18+)
- Python (v3.8+)
- Ollama - Install from ollama.ai
- FFmpeg (for Whisper audio processing)
```bash
# Install dependencies
npm install

# Start development server
npm run dev
```

The frontend will be available at http://localhost:5173/
```bash
# Navigate to backend directory
cd backend

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install Python dependencies
pip install -r requirements.txt

# Start Flask server
python app.py
```

The API will be available at http://localhost:5000/
- Install Ollama: Download from ollama.ai

- Pull a language model:

  ```bash
  ollama pull llama3.2  # or another model of your choice
  ```

- Install Whisper:

  ```bash
  pip install openai-whisper
  ```
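With both tools installed, the summarization step of the pipeline might look like the following. This is a minimal sketch using Ollama's local HTTP API (`/api/generate`); the function names and prompt wording are illustrative, not taken from the repo's `backend/app.py`.

```python
import json
import urllib.request

def build_soap_prompt(transcript: str) -> str:
    """Wrap a raw transcript in a prompt asking for SOAP-format notes."""
    return (
        "Summarize the following doctor-patient conversation as clinical "
        "notes in SOAP format (Subjective, Objective, Assessment, Plan):\n\n"
        + transcript
    )

def summarize(transcript: str, model: str = "llama3.2",
              host: str = "http://localhost:11434") -> str:
    """Send the prompt to the local Ollama server and return its response."""
    payload = json.dumps({
        "model": model,
        "prompt": build_soap_prompt(transcript),
        "stream": False,  # return one JSON object instead of a token stream
    }).encode()
    req = urllib.request.Request(
        f"{host}/api/generate", data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because everything goes to `localhost:11434`, no transcript data ever leaves the machine.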
- Login → Doctor dashboard
- New Session → Patient consent form
- Recording → Real-time audio capture
- Processing → Whisper transcription + Ollama summarization
- Review → Edit SOAP notes and patient summary
- Share → Generate secure patient link
- Access Link → Patient summary view
- Review → Easy-to-read consultation summary
- Feedback → Optional satisfaction feedback
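The "Share → Generate secure patient link" step could be implemented with an unguessable random token. The sketch below uses only the Python standard library; the token length and URL shape are assumptions, not the project's actual scheme.

```python
import secrets

def make_patient_link(session_id: str,
                      base_url: str = "http://localhost:5173") -> tuple[str, str]:
    """Return (token, shareable URL) for a consultation session.

    token_urlsafe(32) draws 32 random bytes (~43 URL-safe characters);
    the backend would store the token -> session_id mapping before
    handing the URL to the patient.
    """
    token = secrets.token_urlsafe(32)
    return token, f"{base_url}/patient/{token}"
```

Using `secrets` rather than `random` matters here: the link is the only access control on the patient summary, so it must be cryptographically unpredictable.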
```
terrahacks-2025/
├── src/
│   ├── components/
│   │   ├── DoctorDashboard.tsx   # Doctor's main interface
│   │   ├── ConsentForm.tsx       # Patient consent workflow
│   │   ├── RecordingSession.tsx  # Audio recording interface
│   │   └── PatientView.tsx       # Patient summary display
│   ├── hooks/
│   │   └── useAudioRecorder.ts   # Audio recording custom hook
│   ├── services/
│   │   └── api.ts                # Backend API integration
│   ├── types/
│   │   └── index.ts              # TypeScript interfaces
│   └── App.tsx                   # Main app component
├── backend/
│   ├── app.py                    # Flask API server
│   └── requirements.txt          # Python dependencies
└── README.md
```
- Role Selection: Choose "Doctor Login"
- Consent Form: Fill patient information and consent checkboxes
- Recording: Start/stop audio recording (mock or real)
- AI Processing: Watch Whisper + Ollama generate notes
- Review: Show structured doctor notes
- Patient View: Switch to patient-friendly summary
- Privacy: Highlight local processing and consent features
- No persistent database (uses localStorage)
- Mock authentication system
- No real-time speaker diarization
- No EMR integration
- Not HIPAA-compliant (demo purposes)
- `npm run dev` - Start development server
- `npm run build` - Build for production
- `npm run preview` - Preview production build
- `npm run lint` - Run ESLint
Create a .env file in the root directory:

```bash
VITE_API_URL=http://localhost:5000/api
OLLAMA_HOST=http://localhost:11434
```

- Primary Colors: Medical blue (#0891b2) and standard blue (#3b82f6)
- Typography: System fonts with clear hierarchy
- Components: Glass-morphism effects and smooth transitions
- Mobile-first: Responsive design for all screen sizes
This project is for demonstration purposes only. Not intended for production medical use.
This is a hackathon project. Feel free to fork and improve!
Built for TerraHacks 2025
The RecordingSession component now makes real API calls to the Flask backend:
- Automatic Backend Detection: Checks whether the Flask server is running
- Real AI Processing: Uses Whisper + Ollama when available
- Graceful Fallback: Falls back to mock data when the backend is offline
- Status Indicators: Visual feedback for the backend connection state
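On the backend side, the health check that drives this detection could be a single dedicated Flask route. This is a sketch assuming an endpoint named `/api/health`; the actual route in `backend/app.py` may differ.

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/health")
def health():
    # The frontend pings this endpoint on startup: any 200 response means
    # "real AI processing available"; a network error or non-200 status
    # switches the UI into demo mode with mock data.
    return jsonify({"status": "ok"})
```

Keeping the health check separate from the processing endpoint means the frontend can detect the backend quickly, without waiting on a Whisper or Ollama call.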
```bash
# Terminal 1: Frontend
npm run dev

# Terminal 2: Backend (Windows)
start-backend.bat
# Or Linux/Mac:
./start-backend.sh
```

The app automatically switches from demo mode to real AI processing!
You can also install eslint-plugin-react-x and eslint-plugin-react-dom for React-specific lint rules:
```js
// eslint.config.js
import reactX from 'eslint-plugin-react-x'
import reactDom from 'eslint-plugin-react-dom'

export default tseslint.config([
  globalIgnores(['dist']),
  {
    files: ['**/*.{ts,tsx}'],
    extends: [
      // Other configs...
      // Enable lint rules for React
      reactX.configs['recommended-typescript'],
      // Enable lint rules for React DOM
      reactDom.configs.recommended,
    ],
    languageOptions: {
      parserOptions: {
        project: ['./tsconfig.node.json', './tsconfig.app.json'],
        tsconfigRootDir: import.meta.dirname,
      },
      // other options...
    },
  },
])
```