Research-backed empirical diagnostics for JavaScript and TypeScript codebases
Detect performance anti-patterns, track regressions over time, and reproduce the benchmark evidence behind every rule.
Code Evolution Lab is a monorepo for the public tooling around empirical software diagnostics. It packages the research from liangk/empirical-study into practical developer tools: a CLI for local analysis, a GitHub Action for pull request checks, a reusable core engine, and replayable benchmark suites for reproducibility.
The project focuses on performance patterns that have been studied empirically rather than stylistic lint rules. Today the public packages cover three rule families:
- Loop performance anti-patterns — nested loops, sequential await, repeated regex/JSON work inside loops
- Memory leak patterns — missing cleanup in React, Vue, Angular, timers, listeners, and subscriptions
- Missing Prisma indexes — foreign-key, filter, sort, and composite index gaps
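As a concrete illustration of the first family, a sequential `await` inside a loop serializes requests that could run concurrently, so total latency grows with the number of iterations. The sketch below uses a hypothetical `fetchUser` helper (not part of this repo) to contrast the anti-pattern with a `Promise.all` rewrite:

```typescript
// Hypothetical stand-in for any independent async call (e.g. a network fetch).
async function fetchUser(id: number): Promise<{ id: number }> {
  return { id };
}

// Anti-pattern: each iteration awaits before the next request starts,
// so the calls run one after another instead of overlapping.
async function loadUsersSequentially(ids: number[]): Promise<{ id: number }[]> {
  const users: { id: number }[] = [];
  for (const id of ids) {
    users.push(await fetchUser(id)); // sequential await inside a loop
  }
  return users;
}

// Fix: start all requests up front and await them together.
async function loadUsersConcurrently(ids: number[]): Promise<{ id: number }[]> {
  return Promise.all(ids.map((id) => fetchUser(id)));
}
```

Both functions return the same data; the concurrent version simply overlaps the waiting time, which is the measurable cost these rules target.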
| Feature | Description |
|---|---|
| CLI diagnostics | Run code-evolution-lab analyze, scan, compare, and replay locally |
| Evidence-backed rules | Findings are derived from completed empirical studies, controlled benchmarks, and corpus scans |
| Temporal comparison | Capture a baseline snapshot and fail CI if code health regresses |
| GitHub integration | Post diff-aware diagnostics on pull requests with the GitHub Action |
| Reusable core engine | Programmatic API for custom analysis workflows and reporting |
| Replayable studies | Re-run the underlying benchmark suites locally to inspect the evidence yourself |
| Web interface | Angular-based UI for interactive workflows and project visualization |
```bash
# Analyze the current project
npx code-evolution-lab analyze .

# Capture a baseline snapshot before refactoring
npx code-evolution-lab scan

# Compare after changes to catch regressions
npx code-evolution-lab compare
```

See packages/cli/ for full CLI documentation.
- Node.js 18+ (recommended: 20+)
- PostgreSQL 14+
- npm 9+
```bash
git clone https://github.com/liangk/code-evolution-lab.git
cd code-evolution-lab
npm install
```

```bash
cd backend
npm install
cp .env.example .env
# Edit .env with your database credentials
npm run prisma:migrate
npm run start:api
```

```bash
cd apps/web
npm install
npm start
```

Access the application at http://localhost:8201.
```bash
cd packages/cli
npm run build
node bin/code-evolution-lab.js analyze ../..
```

```
code-evolution-lab/
├── apps/
│   └── web/                  # Angular UI for interactive workflows
│       ├── src/
│       │   ├── app/
│       │   │   ├── components/    # UI components
│       │   │   ├── services/      # API services
│       │   │   └── guards/        # Route guards
│       │   └── environments/      # Environment configs
│       └── angular.json
│
├── backend/                  # Express.js API server and legacy platform services
│   ├── src/
│   │   ├── api/              # REST API routes
│   │   ├── analyzer/         # Code parsing (Babel AST)
│   │   ├── detectors/        # Issue detection
│   │   │   ├── n1-query-detector.ts
│   │   │   ├── inefficient-loop-detector.ts
│   │   │   ├── memory-leak-detector.ts
│   │   │   └── large-payload-detector.ts
│   │   ├── generators/       # Solution generation
│   │   │   ├── evolutionary-engine.ts
│   │   │   ├── fitness-calculator.ts
│   │   │   └── mutation-operators.ts
│   │   └── cli.ts            # Command-line interface
│   └── prisma/               # Database schema
│
├── packages/                 # Public tooling packages
│   ├── core-engine/          # Shared detection engine + reporters + baseline logic
│   ├── cli/                  # Published npm CLI: code-evolution-lab
│   ├── replay/               # Reproducible benchmark studies
│   └── github-action/        # Pull request diagnostics action
│
└── docs/                     # Documentation
    ├── getting-started/
    ├── architecture/
    ├── backend/
    ├── api/
    ├── frontend/
    └── reference/
```
| Package | What it is for |
|---|---|
| packages/cli | Local project analysis, baseline snapshots, regression comparison, and study replay |
| packages/core-engine | Programmatic detection engine for custom tooling and integrations |
| packages/github-action | Diff-aware pull request diagnostics in GitHub Actions |
| packages/replay | Bundled benchmark suites that reproduce the empirical study workloads |
This repository connects three layers that are often separate in tooling projects:
- Empirical research — benchmark studies and corpus analysis in liangk/empirical-study
- Detection engine — reusable rules and reporting in @code-evolution/core-engine
- Developer workflows — CLI, GitHub Action, replay tooling, and web UI
The goal is to help developers answer practical questions with evidence:
- Which patterns in this codebase have measurable performance cost?
- Did this refactor improve things or introduce regressions?
- Can I reproduce the benchmark that justified this rule?
```http
POST /api/analyze
Content-Type: application/json

{
  "code": "async function fetchUsers() { ... }",
  "generateSolutions": true
}
```

Response:

```json
{
  "success": true,
  "score": 72.5,
  "totalIssues": 5,
  "results": [
    {
      "detectorName": "n1-query-detector",
      "issues": [
        {
          "type": "n_plus_1_query",
          "severity": "critical",
          "title": "N+1 Query detected",
          "solutions": [...]
        }
      ]
    }
  ]
}
```

Full documentation available in /docs:
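On the client side, the documented response shape can be given explicit types and inspected before acting on it. This is an illustrative sketch: the interfaces below mirror only the fields shown in the sample above, and no additional fields of the real API are assumed.

```typescript
// Types mirroring the fields of the documented /api/analyze response sample.
interface Issue {
  type: string;
  severity: string;
  title: string;
  solutions: unknown[];
}

interface DetectorResult {
  detectorName: string;
  issues: Issue[];
}

interface AnalyzeResponse {
  success: boolean;
  score: number;
  totalIssues: number;
  results: DetectorResult[];
}

// Collect critical issues across all detectors from a parsed response.
function criticalIssues(res: AnalyzeResponse): Issue[] {
  return res.results.flatMap((r) =>
    r.issues.filter((i) => i.severity === "critical")
  );
}

// Example payload matching the documented sample response.
const sample: AnalyzeResponse = {
  success: true,
  score: 72.5,
  totalIssues: 5,
  results: [
    {
      detectorName: "n1-query-detector",
      issues: [
        {
          type: "n_plus_1_query",
          severity: "critical",
          title: "N+1 Query detected",
          solutions: [],
        },
      ],
    },
  ],
};
```

A real call would obtain the payload via `fetch("/api/analyze", { method: "POST", ... })` and parse the JSON body into `AnalyzeResponse` before filtering.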
- Quick Start Guide
- Installation
- Configuration
- System Architecture
- REST API Reference
- CLI Commands
- Issue Type Catalog
| Layer | Technology |
|---|---|
| Frontend | Angular 21, TypeScript, SCSS |
| Backend | Express.js, TypeScript, Prisma |
| Database | PostgreSQL |
| Code Analysis | Babel Parser, AST Traversal |
| Authentication | JWT, OAuth (Google, GitHub) |
```bash
cd apps/web
npm run build
# Deploy dist/web/browser to Netlify
```

The project includes railway.toml for easy deployment:

```toml
[build]
builder = "nixpacks"

[deploy]
startCommand = "npm run start:api"
```

- Fork the repository
- Create a feature branch: git checkout -b feature/my-feature
- Commit changes: git commit -am 'Add new feature'
- Push to branch: git push origin feature/my-feature
- Submit a Pull Request
MIT License — see LICENSE for details.
Ko-Hsin Liang
- GitHub: @liangk
Built with ❤️ for better code performance