📖 Paper 🤖 Project Page
Current large language models (LLMs), despite their power, can introduce safety risks in clinical settings due to limitations such as poor error detection and single points of failure. To address this, we propose Tiered Agentic Oversight (TAO), a hierarchical multi-agent framework that enhances AI safety through layered, automated supervision. Inspired by clinical hierarchies (e.g., nurse, physician, specialist), TAO routes tasks to agents based on task complexity and agent roles. Leveraging automated inter- and intra-tier collaboration and role-playing, TAO creates a robust safety framework. Ablation studies reveal that TAO's superior performance is driven by three factors: its adaptive tiered architecture, which improves safety by over 3.2% compared to static single-tier configurations; the critical role of its lower tiers, particularly tier 1, whose removal most significantly degrades safety; and the strategic assignment of more advanced LLMs to these initial tiers, which boosts performance by over 2% compared to less optimal allocations while achieving near-peak safety efficiently. These mechanisms enable TAO to outperform single-agent and multi-agent frameworks on 4 out of 5 healthcare safety benchmarks, showing up to an 8.2% improvement over the next-best methods. Finally, we validate TAO via an auxiliary clinician-in-the-loop study in which integrating expert feedback improved TAO's accuracy on medical triage from 40% to 60%.
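As a purely illustrative sketch of the tiered-routing idea described above, the toy function below maps a task-complexity score to a starting tier. The tier names and thresholds here are invented for the example and are not taken from the actual codebase:

```python
# Illustrative only: a toy mapping from task complexity to a starting tier,
# loosely in the spirit of TAO's tiered routing. Tier names and the
# threshold values are assumptions made for this example.
TIERS = ["tier1_nurse", "tier2_physician", "tier3_specialist"]

def route(task_complexity, thresholds=(0.3, 0.7)):
    """Map a complexity score in [0, 1] to the lowest tier that should start the task."""
    for tier, cutoff in zip(TIERS, thresholds):
        if task_complexity < cutoff:
            return tier
    return TIERS[-1]
```

In the real framework, routing additionally accounts for agent roles and supports inter- and intra-tier escalation; this sketch only shows the complexity-based entry point.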
Create a new virtual environment, e.g. with conda:

~$ conda create -n tao "python>=3.9"

Activate the environment:

~$ conda activate tao

Install the required packages:

~$ pip install -r requirements.txt

Set up API keys:

~$ export GOOGLE_API_KEY="your_google_api_key_here"
~$ export OPENAI_API_KEY="your_openai_api_key_here"

Replace the placeholder values with your actual keys. Prepare the data directory:

~$ mkdir -p ./data

Place your JSON data files in the ./data directory, named after the dataset they represent, e.g. safetybench.json.
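Since both API keys must be set before experiments can run, it can help to fail fast when one is missing. A minimal sketch (the `missing_keys` helper is ours, not part of the repo):

```python
import os

def missing_keys(names):
    """Return the environment variable names from `names` that are unset or empty."""
    return [n for n in names if not os.environ.get(n)]

# Example: call this before launching experiments and abort early if anything is absent.
absent = missing_keys(["GOOGLE_API_KEY", "OPENAI_API_KEY"])
if absent:
    print("Missing API keys:", ", ".join(absent))
```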
Your directory structure should look like this:
tao/
├── data/
│ ├── safetybench.json
│ └── ... (other dataset files)
├── run.sh
├── main.py
├── utils.py
├── requirements.txt
└── README.md
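To confirm your checkout matches the tree above before running anything, a small check like this can help (the `check_layout` helper and the file list are taken from the tree above; the helper itself is ours, not part of the repo):

```python
from pathlib import Path

# File names follow the directory tree shown above; other dataset files are optional.
EXPECTED = ["data/safetybench.json", "run.sh", "main.py", "utils.py", "requirements.txt"]

def check_layout(root="."):
    """Return the expected paths (relative to `root`) that are missing."""
    base = Path(root)
    return [p for p in EXPECTED if not (base / p).exists()]
```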
- SafetyBench: https://github.com/thu-coai/SafetyBench
- MedSafetyBench: https://github.com/AI4LIFE-GROUP/med-safety-bench
- LLM Red-teaming: https://daneshjoulab.github.io/Red-Teaming-Dataset/
- Medical Triage: https://github.com/ITM-Kitware/llm-alignable-dm
- MM-SafetyBench: https://github.com/isXinLiu/MM-SafetyBench
Run the experiments:

~$ bash run.sh

TODO:
- Add initial experimental scripts
- Add other sampled datasets
- Add ablation scripts for better replication
- Update token usage calculator
- Add experiment config.yaml for better visibility
- Add eval.py for better replication