[SECURITY]: Moving away from serialization and restricting eval scope in LLM Guard #2156
Description
I've been spending some time exploring the LLM Guard plugin implementation lately, and the modular setup is really well put together. While digging through the logic, I spotted a few implementation details that might be worth revisiting to improve overall robustness.
The cache logic in llmguardplugin/cache.py currently relies on pickle. Since pickle.loads can execute arbitrary code if the data is ever tampered with, I think switching to something like json or msgpack would be a safer bet for serializing LLM responses and validation results.
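As a rough sketch of what that swap could look like (the function names here are hypothetical, not the plugin's actual API), a JSON round-trip gives the same serialize/deserialize shape without the code-execution risk: a tampered cache entry can at worst fail to parse, never execute.

```python
import json

def serialize_result(result: dict) -> bytes:
    """Serialize a validation result to bytes for the cache.

    Unlike pickle, json emits pure data, so a corrupted or
    attacker-modified cache entry cannot run arbitrary code on load.
    """
    return json.dumps(result).encode("utf-8")

def deserialize_result(raw: bytes) -> dict:
    """Inverse of serialize_result; raises ValueError on bad data."""
    return json.loads(raw.decode("utf-8"))
```

The main trade-off is that JSON only covers basic types (dicts, lists, strings, numbers, bools), so any non-JSON-native objects in the cached results would need an explicit conversion step; msgpack has the same property with a more compact encoding.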
I also noticed eval() being used in llmguardplugin/policy.py for policy evaluation. Even though it's likely intended for dynamic rules, raw eval feels a bit heavy-handed for this. If policies are plain literal values, ast.literal_eval would be a drop-in fix; if they're genuine boolean expressions, a restricted AST-based evaluator would keep the logic contained and prevent unintended side effects.
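To make the second option concrete, here's a minimal sketch of a restricted evaluator (the name safe_eval_policy and the node whitelist are my assumptions, not anything from the plugin): it parses the rule with the ast module, rejects any syntax outside a whitelist of boolean and comparison nodes, and only then evaluates with builtins stripped.

```python
import ast

# Whitelist of AST node types a policy expression may contain:
# boolean logic, comparisons, bare names, and constants -- no calls,
# no attribute access, no subscripts, no imports.
_ALLOWED_NODES = (
    ast.Expression, ast.BoolOp, ast.And, ast.Or, ast.UnaryOp, ast.Not,
    ast.Compare, ast.Eq, ast.NotEq, ast.Gt, ast.GtE, ast.Lt, ast.LtE,
    ast.Name, ast.Load, ast.Constant,
)

def safe_eval_policy(expr: str, variables: dict) -> bool:
    """Evaluate a policy expression over a fixed set of variables.

    Raises ValueError if the expression uses any construct outside
    the whitelist (function calls, attribute access, etc.).
    """
    tree = ast.parse(expr, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, _ALLOWED_NODES):
            raise ValueError(f"disallowed syntax: {type(node).__name__}")
    # Evaluate with builtins emptied; only whitelisted variable names
    # passed in by the caller are visible to the expression.
    return bool(eval(compile(tree, "<policy>", "eval"),
                     {"__builtins__": {}}, dict(variables)))
```

With this shape, a rule like `"score > 0.5 and not flagged"` still works as a dynamic policy, while something like `"__import__('os').system('rm -rf /')"` is rejected at parse-walk time because ast.Call isn't whitelisted.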
I'm more than happy to help with a PR if you're open to these changes.