GooseGuard is an AI security patrol platform where attacker and guardian agents simulate real exploit/mitigation loops against your codebase in real time — visualized as a goose standoff.
- A rogue goose probes for weaknesses in your code.
- A guardian goose responds with mitigations and patch guidance.
- A goose patrol arena visualizes every exchange and updates live security health.
- A modern landing page introduces the workflow before users start a scan.
- Open the landing page and start a scan with a GitHub repository URL.
- Watch live attacker-vs-guardian patrol rounds stream into the arena and telemetry panels.
- Review prioritized vulnerabilities and remediation guidance in the report view.
- Export fixes or publish a PR from completed patrol output.
Most security scanners produce static reports that are hard to interpret quickly. GooseGuard turns findings into a patrol simulation that is easier to understand and act on:
- See risk movement live instead of reading a long PDF.
- Track impact with health bars so non-security stakeholders can follow quickly.
- Capture practical fixes in plain language after each probe/defense exchange.
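The health-bar mechanic above can be sketched as a severity-weighted score update. The function name, severity weights, and mitigation discount below are illustrative assumptions, not the project's actual scoring:

```python
def update_health(health: int, severity: str, mitigated: bool) -> int:
    """Apply one probe/defense exchange to a 0-100 security health score.

    A successful probe drains health by a severity-weighted amount;
    a guardian mitigation halves the damage of that probe.
    """
    damage = {"low": 5, "medium": 10, "high": 20, "critical": 35}[severity]
    if mitigated:
        damage = damage // 2  # guardian softened the hit
    return max(0, min(100, health - damage))

# Three patrol rounds: one unmitigated high, then two mitigated findings.
health = 100
for severity, fixed in [("high", False), ("medium", True), ("critical", True)]:
    health = update_health(health, severity, fixed)
```

Clamping to the 0-100 range keeps the bar renderable no matter how many rounds run.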
- Frontend: React 19 + Vite 8 + Phaser 3
- Backend: FastAPI + WebSocket
- Patrol model: round-based attacker/defender simulation with event streaming
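On the wire, round-based event streaming could look roughly like this; the `PatrolEvent` fields below are assumed for illustration and may not match the backend's real schema:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class PatrolEvent:
    # Illustrative event shape; the real backend schema may differ.
    round: int
    role: str      # "attacker" or "defender"
    message: str
    health: int    # live security health after this exchange

def to_frame(event: PatrolEvent) -> str:
    """Serialize one round event to the JSON text frame a WebSocket would carry."""
    return json.dumps(asdict(event))

frame = to_frame(PatrolEvent(round=1, role="attacker", message="Probing auth flow", health=85))
```

Sending one self-describing frame per exchange is what lets the arena and telemetry panels update independently as rounds stream in.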
```
backend/
  main.py            # FastAPI app, routes, WebSocket battle loop
  models.py          # Pydantic request/response models
  classifier.py      # Attack-surface keyword classification
  llm.py             # LLM clients (Anthropic, Gemini, OpenAI)
  branch_battle.py   # Clone repo, create branch, apply patches, push
  pr_bot.py          # PR publishing helpers
  repo_audit.py      # Static repo security audit (secrets, patterns)
  repo_scanner.py    # Repo file scanning utilities
  website_audit.py   # HTTP header/content checks for live URLs
  agents/
    attacker.py
    defender.py
frontend/
  src/
    App.jsx
    components/      # LandingScreen, BattleArena, ArenaHud, ReportView, …
    game/            # PhaserGame.js, GooFighter.js
    hooks/           # useWebSocket.js
    utils/           # backendUrl, battleRows, cweMap, scanPayload, …
    constants/       # LLM model picker options
    assets/          # Sprites, sounds, images
```
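Attack-surface keyword classification (in the spirit of `backend/classifier.py`) can be sketched as a keyword-to-category lookup. The categories, keywords, and function name below are hypothetical, not the project's actual rules:

```python
# Hypothetical keyword map; the real classifier.py may use different categories.
KEYWORDS = {
    "injection": ("execute(", "subprocess", "eval(", "os.system"),
    "secrets": ("api_key", "password", "token", "secret"),
    "deserialization": ("pickle.loads", "yaml.load"),
}

def classify(snippet: str) -> list[str]:
    """Return the attack-surface categories whose keywords appear in a code snippet."""
    lowered = snippet.lower()
    return [cat for cat, words in KEYWORDS.items() if any(w in lowered for w in words)]

cats = classify('password = "hunter2"; os.system(cmd)')
```

A cheap pass like this lets the attacker agent aim its probes before any LLM call is made.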
```
cd backend
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
cd ..
uvicorn backend.main:app --reload --port 8000
```

Note: `uvicorn` must run from the repo root because `main.py` uses `from backend.…` imports.
```
cd frontend
npm install
npm run dev
```

Then open the local Vite URL (usually http://localhost:5173), review the landing page, and click Release the Goose.
- Agents can run in live LLM mode (Anthropic Claude, Google Gemini, or OpenAI as fallback) or deterministic simulated mode.
- Users can pick provider/model per patrol and toggle extended thinking to show richer attacker/defender reasoning in the live feed.
- Default patrol mode is now branch battle: each round attempts to apply real file patches, commit, and push to a battle branch.
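The branch-battle round described above (clone, branch, patch, commit, push) can be outlined as a git command sequence. This is a hypothetical sketch, not the actual `backend/branch_battle.py`; the commands are built as lists but not executed here:

```python
def battle_branch_commands(repo_url: str, branch: str, round_no: int) -> list[list[str]]:
    """Build the git command sequence for one branch-battle round (illustrative only)."""
    return [
        ["git", "clone", repo_url, "workdir"],
        ["git", "-C", "workdir", "checkout", "-b", branch],
        # ...the round's file patches would be applied to workdir here...
        ["git", "-C", "workdir", "commit", "-am", f"patrol round {round_no}: apply defender patch"],
        ["git", "-C", "workdir", "push", "origin", branch],
    ]

cmds = battle_branch_commands("https://github.com/acme/app.git", "goose/battle-1", 1)
```

Each command could then be run with `subprocess.run(cmd, check=True)`; keeping the patches on a dedicated branch means a failed round never touches the default branch.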
- Landing screen styles live in `frontend/src/components/LandingScreen.css`.
- Core arena/report styles live in `frontend/src/index.css`.
- After each patrol, export a machine-readable defense bundle from the UI and hand it to an implementation agent to apply/push hardening changes.
- You can also publish an automatic branch/PR directly from a completed patrol by providing a GitHub token (or setting `GITHUB_TOKEN` on the backend).
- You can run a repo security audit (`/battle/{id}/security-audit` endpoint) to detect secrets, risky code patterns, and dependency hygiene issues with a scored report.
- A website defense report endpoint (`/battle/{id}/website-defense-report`) can scan live URLs for HTTP header and content issues when a `website_url` is provided with the scan.
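The secret-detection part of the repo audit can be sketched as a regex pass over file contents. The rule names and patterns below are illustrative assumptions in the spirit of `backend/repo_audit.py`, not its actual rule set:

```python
import re

# Hypothetical secret patterns; the real repo_audit.py may check more rules.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{16,}['\"]"),
}

def audit_text(text: str) -> list[dict]:
    """Return one finding per pattern match, with a line number for the report."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append({"rule": name, "line": lineno})
    return findings

findings = audit_text('cfg = 1\napi_key = "abcdef0123456789abcd"\n')
```

Line numbers in each finding are what lets the report view link a scored issue back to its exact location in the scanned repo.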
To run attacker/defender as live models, set API keys in backend environment before starting FastAPI:
```
export ANTHROPIC_API_KEY=...

# optional alternatives
export GEMINI_API_KEY=...
# or
export GOOGLE_API_KEY=...

# OpenAI fallback (used when the primary provider is unavailable)
export OPENAI_API_KEY=...
```

Optional per-role overrides:

```
export ATTACKER_LLM_PROVIDER=anthropic   # or gemini
export ATTACKER_LLM_MODEL=claude-opus-4-1
export DEFENDER_LLM_PROVIDER=gemini      # or anthropic
export DEFENDER_LLM_MODEL=gemini-2.5-pro
```

For branch-battle mode, you also need:

```
export GITHUB_TOKEN=...   # token must be able to push branches to the target repo
```

Then choose Agent Runtime = Live LLMs in the UI.
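Provider resolution from these variables could work roughly as follows; this is a hypothetical sketch of the priority order (per-role override, then configured primary, then OpenAI fallback, then simulated mode), not the actual logic in `backend/llm.py`:

```python
def resolve_provider(role: str, env: dict[str, str]) -> str:
    """Pick an LLM provider for a role from environment-style settings (illustrative)."""
    override = env.get(f"{role.upper()}_LLM_PROVIDER")
    if override:
        return override                      # per-role override wins
    if env.get("ANTHROPIC_API_KEY"):
        return "anthropic"
    if env.get("GEMINI_API_KEY") or env.get("GOOGLE_API_KEY"):
        return "gemini"
    if env.get("OPENAI_API_KEY"):
        return "openai"                      # fallback provider
    return "simulated"                       # deterministic mode when no keys are set

provider = resolve_provider("attacker", {"GEMINI_API_KEY": "x"})
```

Falling through to `"simulated"` is what keeps patrols runnable with no API keys configured at all.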
You can also use a local env file:
```
cp backend/.env.example backend/.env
```

Then fill in keys in `backend/.env` (this file is gitignored).