Hi there, I'm Yue Zhao (赵越) 👋
😄 I am an Assistant Professor at USC Computer Science. More information can be found on my homepage.
External Affiliation Disclosure:
As of 02/01/2026, Dr. Zhao holds no industry employment, consulting, or advisory appointments.
My research focuses on auditing, securing, and deploying reliable AI systems, with an emphasis on foundation models and agentic systems operating in real-world environments.
My work centers on three closely connected directions.
I develop methods, benchmarks, and open-source systems to audit and monitor complex AI systems, including foundation models and agentic pipelines.
Representative systems include:
- TrustLLM – auditing trustworthiness of large language models
- agent-audit – security analysis for agentic AI pipelines
- PyOD ecosystem – scalable anomaly detection tools (35M+ downloads)
Keywords:
AI Auditing · AI Assurance · Trustworthy AI · Agent Systems · AI Monitoring · Risk Analysis
I study failure modes and security risks in modern AI systems, particularly LLMs and agents.
Representative topics include:
- hallucination mitigation
- jailbreak detection
- prompt attacks
- privacy leakage
- robustness and anomaly detection
Keywords:
LLM Safety · AI Safety · Robustness · Anomaly Detection · Failure Analysis
I apply reliable and auditable AI systems to high-impact domains where failures carry significant consequences.
Example areas include:
- climate and weather forecasting
- healthcare and biomedicine
- computational social systems
Keywords:
AI for Science · Climate AI · Healthcare AI · Social Systems
- 📫 Email: yue.z [AT] usc.edu
- 🌐 Homepage
- 📚 Google Scholar
- 🧠 GitHub
💡 I am the creator and core developer of several widely used ML systems, including PyOD, PyGOD, ADBench, and TrustLLM, which together have 35M+ downloads and 22K+ GitHub stars.