shriyaabalaji/hackhackgoose
GooseGuard

GooseGuard is an AI security patrol platform in which attacker and guardian agents simulate real exploit/mitigation loops against your codebase in real time, visualized as a goose standoff.

  • A rogue goose probes for weaknesses in your code.
  • A guardian goose responds with mitigations and patch guidance.
  • A goose patrol arena visualizes every exchange and updates live security health.
  • A modern landing page introduces the workflow before users start a scan.

Product flow

  1. Open the landing page and start a scan with a GitHub repository URL.
  2. Watch live attacker-vs-guardian patrol rounds stream into the arena and telemetry panels.
  3. Review prioritized vulnerabilities and remediation guidance in the report view.
  4. Export fixes or publish a PR from completed patrol output.
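
Each patrol round streamed in step 2 arrives as a structured event. The field names below are assumptions for illustration; the real shapes live in backend/models.py and may differ:

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical shape of one streamed patrol-round event; the actual
# Pydantic models in backend/models.py may use different fields.
@dataclass
class PatrolRoundEvent:
    round: int          # 1-based round counter
    attacker_move: str  # probe description from the rogue goose
    defender_move: str  # mitigation from the guardian goose
    health: int         # repo security health after this round, 0-100

event = PatrolRoundEvent(
    round=1,
    attacker_move="Probe: possible SQL injection in /search",
    defender_move="Mitigation: switch to parameterized queries",
    health=82,
)
print(json.dumps(asdict(event)))
```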

Why this format works

Most security scanners produce static reports that are hard to interpret quickly. GooseGuard turns findings into a patrol simulation that is easier to understand and act on:

  • See risk movement live instead of reading a long PDF.
  • Track impact with health bars so non-security stakeholders can follow quickly.
  • Capture practical fixes in plain language after each probe/defense exchange.

Stack

  • Frontend: React 19 + Vite 8 + Phaser 3
  • Backend: FastAPI + WebSocket
  • Patrol model: round-based attacker/defender simulation with event streaming
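
The round-based patrol model can be sketched as a simple loop that emits one event per round and updates a health score. This is a deterministic illustration of the event-streaming idea, not the actual battle loop in backend/main.py:

```python
import random

def run_patrol(rounds: int = 3, seed: int = 42) -> list[dict]:
    """Deterministic sketch of the round-based attacker/defender loop."""
    rng = random.Random(seed)
    health = 100
    events = []
    for n in range(1, rounds + 1):
        damage = rng.randint(5, 15)         # attacker probe lands
        recovered = rng.randint(0, damage)  # defender patches part of it
        health = max(0, health - damage + recovered)
        events.append({"round": n, "damage": damage,
                       "recovered": recovered, "health": health})
    return events

for ev in run_patrol():
    print(ev)
```

In the real platform each round's damage/recovery would come from the attacker and defender agents rather than a seeded RNG, and events would stream over the WebSocket instead of printing.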

Project Layout

backend/
  main.py            # FastAPI app, routes, WebSocket battle loop
  models.py          # Pydantic request/response models
  classifier.py      # Attack-surface keyword classification
  llm.py             # LLM clients (Anthropic, Gemini, OpenAI)
  branch_battle.py   # Clone repo, create branch, apply patches, push
  pr_bot.py          # PR publishing helpers
  repo_audit.py      # Static repo security audit (secrets, patterns)
  repo_scanner.py    # Repo file scanning utilities
  website_audit.py   # HTTP header/content checks for live URLs
  agents/
    attacker.py
    defender.py
frontend/
  src/
    App.jsx
    components/      # LandingScreen, BattleArena, ArenaHud, ReportView, …
    game/            # PhaserGame.js, GooFighter.js
    hooks/           # useWebSocket.js
    utils/           # backendUrl, battleRows, cweMap, scanPayload, …
    constants/       # LLM model picker options
    assets/          # Sprites, sounds, images
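
The attack-surface keyword classification handled by classifier.py could look roughly like the following. The categories and patterns here are hypothetical, chosen only to show the technique:

```python
import re

# Illustrative keyword map; classifier.py's real categories may differ.
ATTACK_SURFACE_KEYWORDS = {
    "injection": [r"\beval\(", r"\bexec\(", r"SELECT .* \+"],
    "secrets": [r"api[_-]?key", r"password\s*="],
    "deserialization": [r"pickle\.loads", r"yaml\.load\("],
}

def classify_snippet(code: str) -> list[str]:
    """Return attack-surface categories whose keywords match the snippet."""
    hits = []
    for category, patterns in ATTACK_SURFACE_KEYWORDS.items():
        if any(re.search(p, code, re.IGNORECASE) for p in patterns):
            hits.append(category)
    return hits

print(classify_snippet("eval(user_input)"))  # → ['injection']
```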

Run Locally

1) Start backend

cd backend
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
cd ..
uvicorn backend.main:app --reload --port 8000

Note: uvicorn must run from the repo root because main.py uses from backend.… imports.

2) Start frontend

cd frontend
npm install
npm run dev

Then open the local Vite URL (usually http://localhost:5173), review the landing page, and click Release the Goose.

Notes

  • Agents can run in live LLM mode (Anthropic Claude, Google Gemini, or OpenAI as fallback) or deterministic simulated mode.
  • Users can pick provider/model per patrol and toggle extended thinking to show richer attacker/defender reasoning in the live feed.
  • Default patrol mode is now branch battle: each round attempts to apply real file patches, commit, and push to a battle branch.
  • Landing screen styles live in frontend/src/components/LandingScreen.css.
  • Core arena/report styles live in frontend/src/index.css.
  • After each patrol, export a machine-readable defense bundle from the UI and hand it to an implementation agent to apply/push hardening changes.
  • You can also publish an automatic branch/PR directly from a completed patrol by providing a GitHub token (or setting GITHUB_TOKEN on the backend).
  • You can run a repo security audit (/battle/{id}/security-audit endpoint) to detect secrets, risky code patterns, and dependency hygiene issues with a scored report.
  • A website defense report endpoint (/battle/{id}/website-defense-report) can scan live URLs for HTTP header and content issues when a website_url is provided with the scan.
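
The secret detection done by the repo security audit can be sketched as a pattern scan over file text. The patterns below are illustrative only; repo_audit.py's real rules are more extensive:

```python
import re

# A few illustrative secret patterns (not the actual repo_audit.py rules).
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
    "private_key_header": re.compile(
        r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[tuple[str, int]]:
    """Return (pattern_name, line_number) pairs for every match."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings
```

A real audit would also score findings and check dependency hygiene, as the endpoint's report does.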

Live LLM Agent Setup

To run the attacker and defender as live models, set API keys in the backend environment before starting FastAPI:

export ANTHROPIC_API_KEY=...
# optional alternatives
export GEMINI_API_KEY=...
# or
export GOOGLE_API_KEY=...
# OpenAI fallback (used when primary provider is unavailable)
export OPENAI_API_KEY=...

Optional per-role overrides:

export ATTACKER_LLM_PROVIDER=anthropic   # or gemini
export ATTACKER_LLM_MODEL=claude-opus-4-1
export DEFENDER_LLM_PROVIDER=gemini      # or anthropic
export DEFENDER_LLM_MODEL=gemini-2.5-pro
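
The per-role overrides above suggest a lookup like the following; this is a sketch of the resolution order (env var, then default), not the actual code in backend/llm.py:

```python
import os

# Defaults mirror the example values above; the real defaults may differ.
DEFAULTS = {
    "attacker": ("anthropic", "claude-opus-4-1"),
    "defender": ("gemini", "gemini-2.5-pro"),
}

def resolve_llm(role: str) -> tuple[str, str]:
    """Pick provider/model for a role from env, falling back to defaults."""
    default_provider, default_model = DEFAULTS[role]
    provider = os.environ.get(f"{role.upper()}_LLM_PROVIDER", default_provider)
    model = os.environ.get(f"{role.upper()}_LLM_MODEL", default_model)
    return provider, model
```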

For branch-battle mode, you also need:

export GITHUB_TOKEN=... # token must be able to push branches to target repo

Then choose Agent Runtime = Live LLMs in the UI.

You can also use a local env file:

cp backend/.env.example backend/.env

Then fill in keys in backend/.env (this file is gitignored).
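
The .env file uses simple KEY=VALUE lines. A minimal parser for that format (hypothetical; the backend may rely on a library such as python-dotenv instead) looks like:

```python
def parse_env_file(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, ignoring blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env

sample = "# LLM keys\nANTHROPIC_API_KEY=sk-test\nGEMINI_API_KEY='g-test'\n"
print(parse_env_file(sample))
# → {'ANTHROPIC_API_KEY': 'sk-test', 'GEMINI_API_KEY': 'g-test'}
```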
