Clinical Engine

🔗 Live Website | 🎥 Video Demo | 📚 Medical Education Platform

Clinical Engine is an advanced medical simulation platform for medical education. It lets medical students interact with a virtual patient, perform physical examinations, order lab tests and imaging studies, and receive comprehensive feedback on their clinical decision-making.

Features

  • Virtual Patient Interaction: Lifelike patient interaction with advanced language model responses
  • Physical Examination: Simulated physical examinations with detailed findings
  • Diagnostic Testing: Order and review lab tests and imaging studies
  • Clinical Evaluation: Comprehensive assessment of diagnostic accuracy, test selection, and clinical reasoning
  • Natural Voice Interaction: Support for voice-based patient interaction with speech-to-text conversion
  • AI-Generated Medical Imaging: Dynamic medical image generation via Prompt2MedImage, with an interactive image viewer
  • Performance Analytics: Timing and efficiency metrics for student performance tracking
  • Evaluation History: Track progress and review past case evaluations

Tech Stack

Backend

  • Flask: Python web framework with RESTful API design
  • Perplexity Sonar API: Large language model with search integration for patient simulation (case generation, lab results, and physical exam findings)
  • MongoDB: NoSQL database for user data and session storage
  • HuggingFace: AI model inference endpoint for medical image generation
  • Cloudinary: Cloud-based image storage and optimization

Frontend

  • React 19: Modern React with latest features and performance improvements
  • React Router: Client-side routing and navigation
  • TailwindCSS 4: Utility-first CSS framework with latest optimizations
  • Lucide React: Modern icon library
  • Konva/React-Konva: 2D canvas library for interactive image viewing

Deployment

  • Vercel: Serverless deployment platform
  • Vercel Functions: Serverless backend API endpoints

Prerequisites

  • Python 3.10 or higher
  • Node.js 18 or higher
  • MongoDB 5.0 or higher
  • Google Cloud Platform account (for OAuth)
  • Perplexity API key
  • Cloudinary API key
  • OpenAI API key
  • ElevenLabs API key

Installation

Backend Setup

  1. Clone the repository:

    git clone https://github.com/veerdosi/clinical-engine.git
    cd clinical-engine
  2. Create and activate a virtual environment:

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
  3. Install Python dependencies:

    pip install -r requirements.txt
  4. Set up your environment variables (see Environment Variables section below).

  5. Start the backend server:

    python -m backend.main

Frontend Setup

  1. Navigate to the frontend directory:

    cd frontend
  2. Install Node.js dependencies:

    npm install
  3. Start the development server:

    npm start

Deployment

The project is configured for deployment on Vercel:

  1. Connect your repository to Vercel
  2. Configure environment variables in the Vercel dashboard
  3. Deploy automatically on push to main branch

The vercel.json configuration handles both frontend and backend deployment with serverless functions.

Authentication and Database Setup

MongoDB Setup

  1. Install MongoDB on your system if you haven't already. You can download it from MongoDB's official website.
  2. Start the MongoDB server with:
    mongod --dbpath /path/to/your/data/directory
    
  3. The application will automatically connect to MongoDB using the connection string in your environment variables.
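
As a minimal illustration of step 3, the connection string can be resolved with an environment-variable lookup and a local fallback. The helper below is a hypothetical sketch (the actual wiring lives in backend/db.py):

```python
import os

# Illustrative sketch only; the real connection logic lives in backend/db.py.
def get_mongodb_uri(env=None):
    """Return MONGODB_URI from the environment, falling back to the local default."""
    env = env if env is not None else os.environ
    return env.get("MONGODB_URI", "mongodb://localhost:27017/clinical_engine")
```

Passing the result to pymongo's `MongoClient` then connects lazily on first use.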

Google OAuth Setup

  1. Create a Google Cloud Platform project in the Google Cloud Console.
  2. Create OAuth credentials:
    • In the Google Cloud Console, go to APIs & Services > Credentials
    • Create an OAuth client ID (Web application type)
    • Add authorized JavaScript origins (e.g., http://localhost:3000 for development)
    • Add authorized redirect URIs (e.g., http://localhost:3000 for development)
  3. Copy the Client ID to your environment variables.
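
To see what arrives after sign-in, the sketch below decodes the payload segment of a Google ID token (a JWT). The function name is illustrative, and it deliberately skips signature verification; a real backend (presumably backend/auth.py here) must verify the token against the Client ID, e.g. with the google-auth library, before trusting any claim:

```python
import base64
import json

def decode_jwt_payload(token):
    """Decode the payload segment of a JWT *without* verifying its signature.

    For inspection/debugging only -- a real backend must verify the token
    (e.g. google-auth's id_token.verify_oauth2_token) before trusting it.
    """
    payload_b64 = token.split(".")[1]
    padding = "=" * (-len(payload_b64) % 4)  # JWT segments drop base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64 + padding))
```

The decoded claims include fields such as `sub` (the stable Google user ID) and `email`, which a backend can map to its own user records.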

Environment Variables

  1. Copy the .env.sample file to a new file named .env in the project root directory.

  2. Update the values in the .env file with your actual credentials:

    # OpenAI API Key
    OPENAI_API_KEY=sk-proj-your-openai-api-key
    
    # ElevenLabs API Key
    ELEVENLABS_API_KEY=sk_your-elevenlabs-api-key
    
    # Cloudinary Configuration
    CLOUDINARY_API_KEY=your-cloudinary-api-key
    CLOUDINARY_CLOUD_NAME=your-cloudinary-cloud-name
    CLOUDINARY_UPLOAD_PRESET=your-upload-preset
    CLOUDINARY_API_SECRET=your-cloudinary-api-secret
    
    # Perplexity API Key
    PERPLEXITY_API_KEY=pplx-your-perplexity-api-key
    
    # MongoDB Connection String
    MONGODB_URI=mongodb://localhost:27017/clinical_engine
    # MONGODB_URI=mongodb+srv://username:password@cluster0.xxxxx.mongodb.net/?retryWrites=true&w=majority&appName=Cluster0
    
    # Google OAuth Client ID
    GOOGLE_CLIENT_ID=your-google-client-id.apps.googleusercontent.com
    
    # Secret key for JWT tokens
    SECRET_KEY=your-secret-key-for-jwt-tokens
    
    # Flask Environment
    FLASK_ENV=development
  3. For the frontend, create a .env file in the frontend directory with:

    REACT_APP_GOOGLE_CLIENT_ID=your-google-client-id.apps.googleusercontent.com
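
A small startup check can catch missing variables early. The helper below is illustrative (not part of the codebase); the variable names match the backend `.env` template above:

```python
import os

# Required backend variables from the .env template above (illustrative check).
REQUIRED_ENV_VARS = [
    "OPENAI_API_KEY",
    "ELEVENLABS_API_KEY",
    "CLOUDINARY_API_KEY",
    "CLOUDINARY_CLOUD_NAME",
    "CLOUDINARY_UPLOAD_PRESET",
    "CLOUDINARY_API_SECRET",
    "PERPLEXITY_API_KEY",
    "MONGODB_URI",
    "GOOGLE_CLIENT_ID",
    "SECRET_KEY",
]

def missing_env_vars(env=None):
    """Return the names of required variables that are unset or empty."""
    env = env if env is not None else os.environ
    return [name for name in REQUIRED_ENV_VARS if not env.get(name)]
```

Calling `missing_env_vars()` at startup and failing fast with the returned list is usually friendlier than a cryptic runtime error later.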

Usage

  1. Navigate to the application in your web browser (default: http://localhost:3000)
  2. Sign in with your Google account
  3. Select a case to begin a clinical simulation
  4. Interact with the virtual patient through text or voice
  5. Order tests, perform examinations, and take notes as needed
  6. Submit a diagnosis when ready to receive an evaluation
  7. Review your performance and clinical reasoning assessment

API Endpoints

The backend exposes the following key API endpoints:

Authentication

  • POST /api/auth/google - Authenticate with Google OAuth
  • GET /api/auth/validate - Validate authentication token

User Management

  • GET /api/user/profile - Get user profile information

Case Management

  • GET /api/current-case - Get the current active case
  • POST /api/new-case - Generate a new patient case
  • GET /api/case-history - Get user's case history

Patient Interaction

  • POST /api/chat - Send text message to virtual patient
  • POST /api/voice-chat - Process voice input for patient interaction

Clinical Tools

  • POST /api/order-lab - Order laboratory tests
  • POST /api/order-imaging - Order imaging studies
  • POST /api/physical-exam - Perform physical examination
  • GET /api/physical-exam/results - Get physical examination results

Medical Imaging

  • POST /api/generate-image - Generate AI medical images
  • GET /api/image-status/:job_id - Check image generation status
  • GET /api/images/:case_id - Get all images for a case
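
Because image generation is asynchronous, clients typically poll GET /api/image-status/:job_id until the job resolves. A minimal polling loop might look like the sketch below; the response shape (a `state` field) and the injected `fetch_status` callable are assumptions for illustration:

```python
import time

def wait_for_image(job_id, fetch_status, poll_interval=2.0, timeout=60.0):
    """Poll an image-generation job until it finishes or the timeout expires.

    fetch_status(job_id) stands in for a GET to /api/image-status/<job_id>;
    the "state" field in its response is an assumed shape.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status(job_id)
        if status.get("state") in ("done", "failed"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError(f"image job {job_id} did not finish in {timeout}s")
```

Injecting the fetch function keeps the loop testable without a running server.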

Evaluation System

  • POST /api/submit-diagnosis - Submit diagnosis for evaluation
  • GET /api/evaluations/history - View evaluation history
  • GET /api/evaluations/:id - View specific evaluation details
  • POST /api/evaluate-case - Comprehensive case evaluation

Session Management

  • GET /api/sessions/history - View session history
  • GET /api/session-summary - Get current session summary
  • POST /api/save-session - Save current session state

Background Jobs

  • GET /api/job-status/:job_id - Check background job status
  • POST /api/background-tasks - Queue background processing tasks
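
For scripting against these endpoints, a thin client that builds URLs and headers is enough. Everything below (the class name, the Bearer auth scheme) is an assumption for illustration; check backend/auth.py for the header format the server actually expects:

```python
class ClinicalEngineClient:
    """Illustrative URL/header builder for the endpoints above.

    The Bearer scheme is an assumption; backend/auth.py defines what the
    server actually expects.
    """

    def __init__(self, base_url="http://localhost:3000", token=None):
        self.base_url = base_url.rstrip("/")
        self.token = token

    def url(self, path):
        """Join the base URL with an /api/... endpoint path."""
        return f"{self.base_url}/api/{path.lstrip('/')}"

    def headers(self):
        """Build JSON headers, adding Authorization only when a token is set."""
        headers = {"Content-Type": "application/json"}
        if self.token:
            headers["Authorization"] = f"Bearer {self.token}"
        return headers
```

With the `requests` library installed, `requests.post(client.url("chat"), json={...}, headers=client.headers())` would then hit the patient-chat endpoint.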

Project Structure

clinical-engine/
├── backend/                     # Flask backend application
│   ├── main.py                 # Application entry point
│   ├── api_routes.py           # API endpoint definitions
│   ├── auth.py                 # Authentication handlers
│   ├── case_generator.py       # Case generation logic
│   ├── case_manager.py         # Case management
│   ├── chat_handler.py         # Patient chat functionality
│   ├── evaluation.py           # Case evaluation system
│   ├── imaging.py              # Medical imaging integration
│   ├── lab_system.py           # Laboratory test system
│   ├── physical_exam.py        # Physical examination simulation
│   ├── session_manager.py      # Session state management
│   ├── speech_to_text.py       # Voice interaction
│   ├── virtual_patient.py      # Patient simulation logic
│   ├── background_jobs.py      # Async task processing
│   ├── db.py                   # Database connections
│   ├── config.py               # Configuration management
│   └── user.py                 # User management
├── frontend/                    # React frontend application
│   ├── src/
│   │   ├── App.js              # Main application component
│   │   ├── components/         # Reusable UI components
│   │   ├── pages/              # Page-level components
│   │   ├── auth.js             # Authentication utilities
│   │   ├── api.js              # API communication layer
│   │   └── utils/              # Utility functions
│   ├── public/                 # Static assets
│   └── package.json            # Frontend dependencies
├── Prompt2MedImage/             # AI medical image generation
│   ├── main.py                 # Image generation entry point
│   ├── feature_extractor/      # Image feature extraction
│   ├── safety_checker/         # Content safety validation
│   ├── scheduler/              # Generation scheduling
│   ├── text_encoder/           # Text to image encoding
│   ├── tokenizer/              # Text tokenization
│   ├── unet/                   # U-Net model configuration
│   └── vae/                    # Variational autoencoder
├── api/                        # Vercel API functions
│   └── index.py                # Serverless API handler
├── requirements.txt            # Python dependencies
├── vercel.json                 # Deployment configuration
├── codemcp.toml                # Development tooling config
└── README.md                   # Project documentation

Development

Development Commands

The project includes development tooling configured in codemcp.toml:

# Format code (if formatter script exists)
./run_format.sh

# Run tests (if test script exists)
./run_test.sh

Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.
