
CoDD — Coherence-Driven Development



🚀 Get started in 60 seconds

```bash
pip install codd-dev
```

```bash
# Inside your project root
codd init --suggest-lexicons --llm-enhanced      # AI picks the lexicons that fit
codd elicit                                      # AI finds gaps in your requirements
codd dag verify --auto-repair --max-attempts 10  # AI fixes coherence violations
```

That's it. Three commands, three feedback loops, one coherent project.

Real-world: dogfooded against a Next.js + Prisma + PostgreSQL LMS. See Case study.


✨ What it does

| Command | One-line summary |
| --- | --- |
| 🔍 `codd elicit` | LLM finds specification holes in your requirements, scoped against industry-standard lexicons (BABOK, OWASP, WCAG, PCI DSS, ISO 25010, …). |
| 🔄 `codd diff` | Detects drift between requirements and the actual implementation (brownfield-friendly). |
| 🛠️ `codd dag verify --auto-repair` | Validates the requirements → design → implementation → tests DAG; an LLM proposes patches when violations appear and the loop retries until SUCCESS or MAX_ATTEMPTS. |
| 📦 38 lexicon plug-ins | Industry standards bundled as opt-in coverage axes — Web (WCAG / OWASP / Web Vitals / WebAuthn / forms / SEO / PWA / browser-compat / responsive), Mobile (HIG / Material 3 / a11y / MASVS), Backend (REST / GraphQL / gRPC / events), Data (SQL / JSON Schema / event sourcing / governance), Ops (CI/CD / Kubernetes / Terraform / observability / DORA), Compliance (ISO 27001 / HIPAA / PCI DSS / GDPR / EU AI Act), Process (ISO 25010 / 29119 / DDD / 12-factor / i18n / model cards / API rate-limit), and Methodology (BABOK). |
| 🌐 `codd brownfield` | Extract → diff → elicit pipeline: point CoDD at an existing codebase and it reverse-engineers requirements, finds drift, and surfaces gaps in one shot. |
| 🎯 `codd init --suggest-lexicons --llm-enhanced` | LLM reads your code/docs, identifies data types and function traits, and recommends which lexicons to install (with confidence + reasoning). |
| 📊 `codd lexicon list/install/diff` + `codd coverage report` | Manage plug-ins and produce JSON / Markdown / self-contained HTML coverage matrices. |
| 🛡️ CI gate | A `.github/workflows/codd_coverage.yml` template plus the `codd coverage check` exit code turn coverage regressions into merge blockers. |
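
The CI gate boils down to an exit code. A minimal sketch of that gate logic — the `coverage_gate` function and matrix shape are illustrative assumptions, not CoDD's actual internals:

```python
# Illustrative sketch — NOT CoDD's real implementation.
# A coverage gate: compare the current coverage matrix against a baseline
# and return a non-zero exit code on any regression, so CI can block the merge.
def coverage_gate(baseline: dict[str, float], current: dict[str, float]) -> int:
    regressions = [
        axis for axis, score in baseline.items()
        if current.get(axis, 0.0) < score  # axis dropped below its baseline
    ]
    if regressions:
        print(f"coverage regression on: {', '.join(sorted(regressions))}")
        return 1  # non-zero exit -> CI fails -> merge blocked
    return 0
```

New axes appearing in `current` never fail the gate; only regressions against the baseline do.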

🎨 Visual flow

```mermaid
flowchart LR
    R["Requirements (.md)"] --> E["codd elicit"]
    E -->|gap findings| H{HITL: approve / reject}
    H -->|[x]| L["project_lexicon.yaml + requirements TODOs"]
    H -->|[r]| I["ignored_findings.yaml"]
    L --> V["codd dag verify --auto-repair"]
    V -->|violation| AR["LLM patch propose → apply"]
    AR --> V
    V -->|SUCCESS| D["✅ deploy gate passes"]
    AR -->|max attempts| P["PARTIAL_SUCCESS: unrepairable surfaced honestly"]
```
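
The verify → patch → re-verify loop in the diagram can be sketched as a plain retry loop. The function names here are hypothetical stand-ins for the real verifier and LLM patcher:

```python
# Illustrative sketch of the auto-repair loop — names are hypothetical.
def auto_repair(verify, propose_and_apply_patch, max_attempts: int = 10) -> str:
    for _ in range(max_attempts):
        violations = verify()          # check the requirements -> ... -> tests DAG
        if not violations:
            return "SUCCESS"           # deploy gate passes
        propose_and_apply_patch(violations)  # LLM proposes a fix, then we retry
    # Anything still broken after max_attempts is surfaced, not hidden.
    return "SUCCESS" if not verify() else "PARTIAL_SUCCESS"
```

The key property is the honest terminal state: the loop either converges to SUCCESS or reports PARTIAL_SUCCESS rather than silently swallowing unrepairable violations.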

Brownfield path:

```mermaid
flowchart LR
    Code["Existing codebase"] --> X["codd extract"]
    X --> DIFF["codd diff (drift)"]
    DIFF --> EL["codd elicit (coverage gaps)"]
    EL --> H{HITL gate}
    H --> Apply["codd elicit apply"]
    Apply --> V["codd dag verify"]
```

📊 Case study: real-world LMS

A Next.js + Prisma + PostgreSQL multi-tenant LMS (≈30 design docs, 12 DB tables, RLS-enforced isolation):

| Stage | Result |
| --- | --- |
| `codd init --suggest-lexicons --llm-enhanced` | LLM detected data types (PII / payment / video) and function traits (auth / payment / public REST) and recommended 15 lexicons, 9 of which the human had already chosen — confirming the heuristic. |
| `codd elicit` (10 lexicons loaded, scope=system_implementation, phase=mvp) | 70 findings across web a11y / data governance / SQL / security / Web Vitals / WebAuthn / API / process. Business-tier dimensions (KPI, UAT detail, risk register) auto-filtered out. |
| `codd dag verify --auto-repair` | Started with 16 unrepairable violations; after targeted core fixes (deployment chain auto-discovery, runtime-state auto-binding, mock harness no-op, scope/phase filter) the same project now reaches PASS or amber WARN with deploy allowed. |
| VPS smoke (`/`, `/login`, `/api/health`) | All 3 endpoints return 200 OK. |

Across the whole pipeline, zero lines of CoDD core changed for this project — every project-specific concern lives in `project_lexicon.yaml` or in `codd_plugins/` (Generality Gate, Layers A / B / C).


🌟 Why CoDD exists

"Write only functional requirements and constraints. Code is generated, repaired, and verified automatically."

Most "AI-assisted dev" tools focus on the generation side. CoDD focuses on the constraint side: the LLM is most useful when it has a precise picture of what must be true. CoDD provides that picture as a DAG that links every artifact, plus a plug-in surface that lets industry standards (BABOK / WCAG / OWASP / PCI / ISO …) supply the constraints mechanically.

When something breaks the DAG, an LLM proposes a patch, the loop re-verifies, and either reaches SUCCESS or surfaces what is structurally unrepairable — honestly.
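That "precise picture" is reachability over the artifact DAG: anything downstream of a change must be re-verified. A toy sketch under stated assumptions (the edge list and IDs are illustrative, not CoDD's real data model):

```python
# Illustrative sketch — a toy artifact DAG, not CoDD's actual data model.
from collections import deque

EDGES = {  # requirement -> design -> implementation -> tests
    "REQ-1": ["DES-1"],
    "DES-1": ["IMPL-auth.py"],
    "IMPL-auth.py": ["TEST-auth"],
}

def impacted(changed: str, edges: dict[str, list[str]]) -> set[str]:
    """Everything downstream of a changed artifact must be re-verified."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for child in edges.get(node, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen
```

Changing a requirement invalidates the whole downstream chain; changing an implementation file only invalidates its tests.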

Generality Gate (three-layer architecture)

| Layer | Where stack-specific names live | Examples |
| --- | --- | --- |
| A — Core | Nowhere. Zero `react`, `django`, `Stripe`, LMS literals. | `codd/elicit/`, `codd/dag/`, `codd/lexicon_cli/` |
| B — Templates | Generic placeholders only. | `codd/templates/*.j2`, `codd/templates/lexicon_schema.yaml` |
| C — Plug-ins | Free to name anything. | `codd_plugins/lexicons/*/`, `codd_plugins/stack_map.yaml` |

This is what lets CoDD ship one core that works for Next.js, Django, FastAPI, Rails, Go services, mobile apps, ML model cards — and that lets contributors add a lexicon without touching the core.
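One way to enforce Layer A mechanically is a literal scan over core sources. This sketch is an assumption, not the project's actual gate — the forbidden list and function name are invented for illustration:

```python
# Illustrative sketch of a Layer-A check — not the project's actual gate code.
import re

FORBIDDEN = ["react", "django", "stripe", "prisma"]  # stack-specific literals

def generality_violations(core_source: str) -> list[str]:
    """Return every forbidden stack literal found in a core source file."""
    lowered = core_source.lower()
    return [word for word in FORBIDDEN
            if re.search(rf"\b{re.escape(word)}\b", lowered)]
```

Run over `codd/` in CI, an empty result means the core stayed stack-agnostic; any hit names the literal that leaked out of Layer C.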


🧭 Roadmap

  • v2.10.0 (current) — Lexicon-driven completeness, 38 plug-ins, LLM-enhanced init, scope/phase filter, auto-repair across the full DAG.
  • v2.11.0 (next) — Sprint-less codd implement (--design <path> --output <dir> directly; implementation_plan.md parser removed). See migration guide.

🤝 Contributing

CoDD is shaped by the following people:

  • @yohey-w — Maintainer / Architect
  • @Seika86 — Sprint regex insight (PR #11)
  • @v-kato — Brownfield reproduction reports (Issues #17 / #18 / #19)
  • @dev-komenzar — `source_dirs` bug reproduction (Issue #13)

External issues, PRs, and lexicon proposals are welcome — see Issues.


📚 Documentation

  • CHANGELOG.md — every release with quality metrics
  • docs/ — architecture notes
  • codd --help — full CLI reference

📦 Hook Integration

CoDD ships hook recipes for editor and Git workflows:

  • Claude Code PostToolUse hook recipe for running CoDD checks after file edits
  • Git pre-commit hook recipe for blocking commits when coherence checks fail

Recipes live under codd/hooks/recipes/.
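As a sketch of the Git recipe (the hook body is an assumption — the shipped recipe under `codd/hooks/recipes/` may differ), a pre-commit hook only needs to propagate the check's exit code:

```python
# Illustrative pre-commit hook sketch — the shipped recipe may differ.
# Installed as .git/hooks/pre-commit, Git blocks the commit on non-zero exit:
# the hook would end with `sys.exit(commit_gate())`.
import subprocess
import sys

def commit_gate(run_check=None) -> int:
    """Run the coherence check; a non-zero exit code blocks the commit."""
    if run_check is None:
        run_check = lambda: subprocess.call(["codd", "dag", "verify"])
    code = run_check()
    if code != 0:
        print("coherence check failed — commit blocked", file=sys.stderr)
    return code
```

`run_check` is injected so the gate logic is testable without `codd` on PATH.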


License

MIT — see LICENSE.

About

CoDD: Coherence-Driven Development — a Claude Code plugin for cross-artifact change impact analysis. When code changes, CoDD traces the impact, detects violations, and produces evidence for merge decisions.