llmguard: replace raw eval() with a safe AST-based evaluator #2180
crivetimihai merged 1 commit into IBM:main
Force-pushed from 4561499 to c80ed46.
Signed-off-by: RinCodeForge927 <dangnhatrin90@gmail.com>
Signed-off-by: RinZ27 <222222878+RinZ27@users.noreply.github.com>
Signed-off-by: Mihai Criveti <crivetimihai@gmail.com>
Force-pushed from c80ed46 to 077aad4.
**Review and Updates**

- Rebased and applied the requested fixes.
- Tests added.
- Behavioral note: the new evaluator accepts boolean constants (`True`/`False`).
Hardened the policy evaluation by ditching raw `eval()`. Instead of relying on a whitelist and then compiling, I wrote a dedicated AST walker that only knows how to handle the specific logic we actually use. It keeps things strictly contained while preserving all the boolean and comparison features we need.

- Removed `eval(compile(tree, ...))` entirely.
- Added a `_safe_eval` method to compute results based on a strict set of supported operations.
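For illustration, a minimal sketch of what such an AST walker can look like. This is a hypothetical standalone version, not the PR's actual `_safe_eval` method: the node types, operator table, and function name here are assumptions; the real implementation may support a different set of operations.

```python
import ast
import operator

# Comparison operators the walker is allowed to evaluate (an assumed
# whitelist for this sketch; the PR's actual set may differ).
_COMPARE_OPS = {
    ast.Eq: operator.eq,
    ast.NotEq: operator.ne,
    ast.Lt: operator.lt,
    ast.LtE: operator.le,
    ast.Gt: operator.gt,
    ast.GtE: operator.ge,
}

def safe_eval(expression: str, names: dict) -> bool:
    """Evaluate a restricted boolean expression without calling eval()."""
    tree = ast.parse(expression, mode="eval")

    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant):
            # Literals, including the boolean constants True/False.
            return node.value
        if isinstance(node, ast.Name):
            # Only names supplied by the caller are resolvable.
            if node.id in names:
                return names[node.id]
            raise ValueError(f"unknown name: {node.id}")
        if isinstance(node, ast.BoolOp):
            # 'and' / 'or' over any number of operands.
            results = [walk(v) for v in node.values]
            return all(results) if isinstance(node.op, ast.And) else any(results)
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.Not):
            return not walk(node.operand)
        if isinstance(node, ast.Compare):
            # Handles chained comparisons like 0 < x < 10.
            left = walk(node.left)
            for op, comparator in zip(node.ops, node.comparators):
                fn = _COMPARE_OPS.get(type(op))
                if fn is None:
                    raise ValueError(f"unsupported operator: {type(op).__name__}")
                right = walk(comparator)
                if not fn(left, right):
                    return False
                left = right
            return True
        # Calls, attribute access, subscripts, etc. are all rejected,
        # which is what makes this safe compared to raw eval().
        raise ValueError(f"unsupported node: {type(node).__name__}")

    return walk(tree)
```

Because every node type must be explicitly handled, anything outside the supported grammar (function calls, attribute access, imports) raises `ValueError` instead of executing, which is the key difference from `eval(compile(tree, ...))`.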