LLPhant – A comprehensive open source PHP Generative AI Framework – contributing a complete evaluation module
Posted on 30 August 2025 in projects
Implemented a collection of tools representing different strategies for evaluating LLM responses in LLPhant, the most popular PHP AI/LLM framework.
The module supports ten strategies for evaluating LLM responses (an illustrative sketch follows the list):
Score evaluators:
- Criteria evaluator
- Embedding distance evaluator
- String comparison evaluator
- Trajectory evaluator
Output validators:
- JSON format validator
- XML format validator
- Fallback messages validator
- Regex pattern validator
- Token limit validator
- Word limit validator
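To give a feel for the validator style, here is a minimal, self-contained sketch of a regex pattern validator. The class and method names are illustrative for this post, not the module's exact API; see the Evaluation module link below for the real classes.

```php
<?php

// Illustrative sketch of an output validator in the spirit of the module.
// The name RegexPatternValidator is invented for this example.
final class RegexPatternValidator
{
    public function __construct(private readonly string $pattern) {}

    /** Returns true when the LLM response matches the expected pattern. */
    public function evaluate(string $response): bool
    {
        return preg_match($this->pattern, $response) === 1;
    }
}

$validator = new RegexPatternValidator('/^ORDER-\d{6}$/');
var_dump($validator->evaluate('ORDER-123456')); // bool(true)
var_dump($validator->evaluate('sorry, no id')); // bool(false)
```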
Introduced A/B testing for comparing responses from different LLMs.
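The A/B idea boils down to scoring the same prompt against two models and picking a winner. Below is a minimal sketch with stub model callables and a toy scorer standing in for the real score evaluators; all names here are illustrative, not the module's API.

```php
<?php

// Hypothetical A/B comparison harness: run one prompt through two models
// and let a scoring callback decide which response is better.
function abTest(callable $modelA, callable $modelB, string $prompt, callable $score): string
{
    $responseA = $modelA($prompt);
    $responseB = $modelB($prompt);

    return $score($responseA) >= $score($responseB) ? 'A' : 'B';
}

// Toy scorer: prefer shorter answers that still mention the keyword.
$score = fn (string $r): float => (str_contains($r, 'PHP') ? 1.0 : 0.0) / max(1, strlen($r));

$winner = abTest(
    fn (string $p): string => 'PHP is a scripting language.',   // stub for model A
    fn (string $p): string => 'PHP is a popular general-purpose scripting language suited to web development.', // stub for model B
    'What is PHP?',
    $score,
);
echo "Winner: model $winner\n"; // Winner: model A
```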
Also added guardrails: lightweight, programmable checkpoints that sit between the application and the LLM. After each model response they run an evaluator of your choice (e.g. a JSON syntax checker or a “no fallback” detector). Based on the result, they either pass the answer through, retry the call, block it, or route it to a custom callback.
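Conceptually, a guardrail is a loop like the sketch below, covering all four outcomes (pass, retry, block, custom callback). The code is illustrative, not the module's exact API; it assumes a callable LLM client, an evaluator returning true on pass, and PHP 8.3 for json_validate().

```php
<?php

// Minimal guardrail loop: call the model, check the response with an
// evaluator, and pass / retry / block / delegate accordingly.
function guardedCall(
    callable $llm,
    string $prompt,
    callable $evaluator,
    int $maxRetries = 2,
    ?callable $onBlocked = null,
): string {
    $response = '';
    for ($attempt = 0; $attempt <= $maxRetries; $attempt++) {
        $response = $llm($prompt);       // call the model (loop = retry on failure)
        if ($evaluator($response)) {
            return $response;            // pass the answer through
        }
    }

    if ($onBlocked !== null) {
        return $onBlocked($response);    // route to a custom callback
    }

    throw new RuntimeException('Guardrail blocked the response after retries.'); // block
}

// Example: require syntactically valid JSON from the model (PHP 8.3+).
$isValidJson = fn (string $r): bool => json_validate($r);

$answer = guardedCall(
    fn (string $p): string => '{"status": "ok"}',  // stub LLM client
    'Return a JSON status object.',
    $isValidJson,
);
echo $answer, "\n"; // {"status": "ok"}
```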
Framework repository: https://github.com/LLPhant/LLPhant
Evaluation module: https://github.com/LLPhant/LLPhant/tree/main/src/Evaluation


