FAIR (or The Fair Platform) is an open-source platform that makes it easy to experiment with automatic grading systems using AI. It provides a flexible and extensible environment for building, testing, and comparing grading approaches, from interpreters and rubrics to agent-based systems and research datasets.
The goal is to support researchers, educators, and students who want to explore how AI can improve assessment, reduce manual grading workload, and enable reproducible experiments in educational technology.
- Flexible Architecture – Define courses, assignments, and grading modules with full customization.
- Interpreters – Parse and standardize student submissions (PDFs, images, code, etc.) into structured artifacts.
- Graders – Apply configurable rubrics, AI models, or hybrid approaches to evaluate submissions.
- Artifacts – A universal data type for storing submissions, results, and metadata.
- Experimentation First – Swap modules, run A/B tests, and measure performance across approaches.
- Research-Friendly – Designed for reproducibility, with plans for standardized datasets and benchmarks.
- Extensible – Build plugins for compilers, proof validators, RAG systems, or agentic graders.
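To make the Interpreter → Artifact → Grader pipeline concrete, here is a minimal sketch of how those pieces could fit together. All names and signatures below (`Artifact`, `Interpreter`, `RubricGrader`) are illustrative assumptions, not the actual FAIR API.

```python
# Hypothetical sketch of FAIR's core concepts; class and method names
# are assumptions for illustration, not the real fair-platform API.
from dataclasses import dataclass, field

@dataclass
class Artifact:
    """Universal container for submissions, results, and metadata."""
    content: str
    metadata: dict = field(default_factory=dict)

class Interpreter:
    """Parses a raw submission into a structured Artifact."""
    def interpret(self, raw: bytes) -> Artifact:
        return Artifact(content=raw.decode("utf-8"), metadata={"source": "text"})

class RubricGrader:
    """Applies a simple keyword rubric to an Artifact, producing a score Artifact."""
    def __init__(self, rubric: dict):
        self.rubric = rubric  # maps expected keyword -> points awarded

    def grade(self, artifact: Artifact) -> Artifact:
        score = sum(pts for kw, pts in self.rubric.items() if kw in artifact.content)
        return Artifact(content=str(score),
                        metadata={"max": sum(self.rubric.values())})

# Usage: interpret a toy submission, then grade it against a rubric
submission = b"The derivative of x^2 is 2x."
artifact = Interpreter().interpret(submission)
result = RubricGrader({"derivative": 2, "2x": 3}).grade(artifact)
print(result.content)  # "5"
```

Because every stage consumes and produces Artifacts, swapping the keyword rubric for an AI model or a hybrid grader would only mean replacing `RubricGrader` with another module exposing the same interface.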
```shell
pip install fair-platform
fair serve
```

For detailed installation instructions, troubleshooting, and more, visit the documentation (available in English and Spanish).
Once you have uv and Bun installed, you can build the platform and start using it:
```shell
uv run
./build.sh
fair serve
```

Some planned directions for FAIR include:
- Standardized datasets for AI grading research
- Dataset generation tools (e.g., synthetic student responses with realistic errors)
- Plugins for popular LMS platforms
- More visualization and reporting tools
FAIR is open for contributions! Whether you want to submit issues, propose new grading modules, or share experimental datasets, we'd love your help.
📖 New contributors: Please read our CONTRIBUTING.md for detailed guidelines on how to get started, our workflow, and what to expect.
Quick start:
- Submit issues and feature requests
- Propose or implement new grading modules
- Share experimental datasets and benchmarks
If you're interested in collaborating, open an issue or start a discussion.
This project is licensed under the MIT License. See LICENSE for the full text and details.
You CAN:
- Use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the software.
- Use the software in commercial, educational, or research contexts.
- License your derivative works under any terms you choose.
You MUST:
- Include the copyright notice and permission notice in all copies or substantial portions of the software.
Disclaimer:
- The software is provided "as is", without warranty of any kind.
Questions about licensing? Please open an issue or contact allan.zapata@up.ac.pa.