Injection is inevitable.
Disaster is optional.
We're building infrastructure to defend AI systems at the execution layer—where it actually matters
Prompt injection is social engineering for software. We can't prevent LLMs being tricked, but we can prevent the consequences. So let's team up and do that.
disreGUARD is a security research lab
from cofounders of npm audit and Code4rena.
Writing
Latest from the blog
February 26, 2026
AI and security: the other bitter lesson
Why we need new primitives to defend against prompt injection
February 7, 2026The auditor in the airlock: a security pattern for AI agent decisions
When an agent needs to make a security-sensitive decision about tainted data, you need an information bottleneck between the taint and the judgment. Here's how to build one.
February 7, 2026Hardening OpenClaw: a practical prompt injection defense
This week researchers demonstrated persistent backdoors in OpenClaw via prompt injection. We're helping harden it with `sig` and patterns apply to any agent framework.