Project Description

EvalMate is an intelligent automated evaluation system that uses AI to grade student answer papers with complete observability and traceability. It leverages OpenAI's GPT models and OCR technology to extract, analyze, and evaluate handwritten or typed answers against model answer keys, producing detailed scores, feedback, and lists of matched and missing key points.

What It Does

- Extracts handwritten text from PDF answer papers using the Marker OCR API
- Evaluates student answers using AI-powered keyword matching against model answers
- Generates detailed scores and feedback, and identifies matched/missing concepts
- Provides complete observability through Galileo tracing
- Stores results in multiple formats (local files, SQLite database, cloud storage)
- Exposes a RESTful API for easy integration with frontends and other systems

Tech Stack

Backend:
- FastAPI (Python web framework)
- OpenAI Agents SDK with GPT-5.1
- Uvicorn (ASGI server)

AI & Processing:
- OpenAI GPT for intelligent evaluation
- Marker API for OCR/handwriting extraction
- Deterministic keyword matching algorithm
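The deterministic keyword matching mentioned above could be sketched as follows. This is a minimal illustration, not the project's actual implementation: the function names, the case-insensitive substring check, and the proportional scoring rule are all assumptions.

```python
# Hedged sketch of a deterministic keyword matcher. Assumes the answer key
# supplies expected key points as plain strings; names are illustrative.

def match_key_points(student_answer: str, key_points: list[str]) -> tuple[list[str], list[str]]:
    """Return (matched, missing) key points via case-insensitive substring checks."""
    text = student_answer.lower()
    matched = [kp for kp in key_points if kp.lower() in text]
    missing = [kp for kp in key_points if kp.lower() not in text]
    return matched, missing

def score_answer(matched: list[str], key_points: list[str], max_marks: float = 10.0) -> float:
    """Award marks proportional to the fraction of key points covered."""
    if not key_points:
        return 0.0
    return round(max_marks * len(matched) / len(key_points), 2)
```

Because the matching is rule-based rather than model-based, the same answer always receives the same score, which keeps the GPT layer responsible only for feedback generation rather than for the marks themselves.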

Observability & Storage:
- Galileo for AI tracing and monitoring
- SQLite for the local database
- Tigris (S3-compatible) for cloud storage
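For the SQLite part of the storage layer, persisting an evaluation result might look like the sketch below. The table name and columns are assumptions for illustration, not the project's actual schema.

```python
import sqlite3

# Illustrative sketch of saving one evaluation result to SQLite.
# The `evaluations` table and its columns are hypothetical.

def save_result(db_path: str, student_id: str, question: str,
                score: float, feedback: str) -> None:
    """Insert one graded answer into a local SQLite database."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            """CREATE TABLE IF NOT EXISTS evaluations (
                   id INTEGER PRIMARY KEY AUTOINCREMENT,
                   student_id TEXT,
                   question TEXT,
                   score REAL,
                   feedback TEXT)"""
        )
        conn.execute(
            "INSERT INTO evaluations (student_id, question, score, feedback) "
            "VALUES (?, ?, ?, ?)",
            (student_id, question, score, feedback),
        )
```

The same result dictionary could then be mirrored to local JSON files and to the Tigris bucket, giving the three storage targets listed above.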

Deployment Options:
- Traditional Python (local development)
- Docker Compose (containerized)
- Daytona Sandboxes (cloud execution)

Key Features

- Handwriting Recognition - extracts text from handwritten answer papers
- AI-Powered Evaluation - uses GPT models with custom agent tools
- Keyword-Based Scoring - deterministic matching against model answers
- Detailed Feedback - generates teacher-style comments and suggestions
- Complete Traceability - Galileo traces every operation
- Multiple Storage Targets - saves to files, database, and cloud
- RESTful API - easy integration with interactive docs
- Flexible Deployment - run locally, in Docker, or in cloud sandboxes

https://drive.google.com/file/d/1g4DjGJcfHK7DaoWQWFZXv7BmDZmdzM8Y/view?usp=sharing
