James Oldfield1, Philip Torr2, Ioannis Patras1, Adel Bibi2, Fazl Barez2,3,4
1Queen Mary University of London, 2University of Oxford, 3WhiteBox, 4Martian
Monitoring the activations of large language models (LLMs) is an effective way to detect harmful requests before they lead to unsafe outputs. However, traditional safety monitors often require the same amount of compute for every query. This creates a trade-off: expensive monitors waste resources on easy inputs, while cheap ones risk missing subtle cases. We argue that safety monitors should be flexible: costs should rise only when inputs are difficult to assess, or when more compute is available. To achieve this, we introduce Truncated Polynomial Classifiers (TPCs), a natural extension of linear probes for dynamic activation monitoring. Our key insight is that polynomials can be trained and evaluated progressively, term-by-term. At test time, one can early-stop for lightweight monitoring, or use more terms for stronger guardrails when needed. TPCs provide two modes of use. First, as a safety dial: by evaluating more terms, developers and regulators can "buy" stronger guardrails from the same model. Second, as an adaptive cascade: clear cases exit early after low-order checks, and higher-order guardrails are evaluated only for ambiguous inputs, reducing overall monitoring costs. On two large-scale safety datasets (WildGuardMix and BeaverTails) and four models with up to 30B parameters, we show that TPCs compete with or outperform MLP-based probe baselines of the same size, while remaining more interpretable than their black-box counterparts.
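To make the term-by-term structure concrete, below is a minimal PyTorch sketch of a truncated polynomial probe. This is illustrative only: the class, parameter names, and rank-factorized parameterization are our assumptions for the sketch; the paper's actual model lives in `model.py`. Because the degree-k terms are additive, the logit can be accumulated one degree at a time and truncated early:

```python
import torch
import torch.nn as nn

class TruncatedPolyProbe(nn.Module):
    """Illustrative degree-K polynomial probe over frozen LLM activations.

    The logit is  b + sum_{k=1..K} sum_{r=1..R} c[k, r] * (U[k, r] . x)^k,
    i.e. each degree-k term is a rank-R symmetric form. Since the terms
    are additive, evaluation can stop after any prefix of degrees.
    (Sketch only -- see model.py in the official repo for the real model.)
    """

    def __init__(self, dim: int, degree: int = 3, rank: int = 8):
        super().__init__()
        self.degree = degree
        self.U = nn.Parameter(torch.randn(degree, rank, dim) / dim**0.5)
        self.c = nn.Parameter(torch.zeros(degree, rank))
        self.bias = nn.Parameter(torch.zeros(()))

    def forward(self, x: torch.Tensor, max_degree: int | None = None) -> torch.Tensor:
        """x: (batch, dim) activations -> (batch,) logits, using only the
        first `max_degree` polynomial terms when given."""
        K = self.degree if max_degree is None else min(max_degree, self.degree)
        logit = self.bias + torch.zeros(x.shape[0], device=x.device)
        for k in range(1, K + 1):
            proj = x @ self.U[k - 1].T            # (batch, rank) inner products
            logit = logit + (proj ** k) @ self.c[k - 1]
        return logit
```

Evaluating with `max_degree=1` reduces to an ordinary linear probe; larger values act as the "safety dial" described above, buying stronger guardrails from the same trained monitor.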
The codebase contains the following key files:
- `model.py` contains the model definitions (for the TPC and baselines)
- `train.py` contains the training scripts
- `test_poly_forward.py` contains unit tests ensuring that the symmetric forward pass matches the result of materializing the full tensors
- `utils.py` contains helper utilities
- `extract/*` contains files to save intermediate activations to disk
- `sweep_monitors.py` is the main script to reproduce the results
- `sweep.sh` is the main example script to train all models and reproduce the results
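As a rough illustration of the adaptive-cascade mode from the abstract, one might wrap the probe sketched above so that confident low-order decisions exit early. The function name and margin schedule below are our illustrative choices, not values from the paper or from `sweep_monitors.py`:

```python
import torch

@torch.no_grad()
def cascade_decision(probe, x, margins=(4.0, 2.0, 0.0)):
    """Evaluate the probe with progressively more polynomial terms,
    exiting as soon as the running logit is at least `margins[k-1]`
    away from the decision boundary. The margin schedule is an
    illustrative hyperparameter; a tuned implementation would also
    cache partial sums instead of recomputing degrees 1..k each round.
    Expects a single input x of shape (1, dim); returns (is_harmful, terms_used).
    """
    for k, margin in enumerate(margins, start=1):
        logit = probe(x, max_degree=k)
        if logit.abs().item() >= margin:   # confident: stop adding terms
            break
    return bool(logit.item() > 0.0), k
```

With the final margin set to 0, the cascade always terminates at the highest degree; easy inputs exit after the low-order checks, so the average monitoring cost tracks input difficulty.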
If you find our work useful, please consider citing our paper:
```bibtex
@misc{oldfield2025tpc,
  title={Beyond Linear Probes: Dynamic Safety Monitoring for Language Models},
  author={James Oldfield and Philip Torr and Ioannis Patras and Adel Bibi and Fazl Barez},
  year={2025},
  eprint={2509.26238},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```

Please feel free to get in touch at: jamesalexanderoldfield@gmail.com
