# RepoAtlas

RepoAtlas is a local-first repository analysis app that generates a structured Repo Brief for onboarding, reviews, and architecture understanding.
It analyzes repository files without executing them and produces:
- Folder Map: recursive directory tree
- Architecture Map: interactive ELK-based dependency graph with pan and zoom
- Start Here: ranked reading path with signal-based explanations
- Danger Zones: risk-ranked hotspots with metric breakdowns
- Run and Contribute: extracted run commands, key docs, and CI indicators
- Export: full report as PDF, PNG, or Markdown
Deep analysis is currently implemented for TypeScript/JavaScript, Python, and Java repositories.
The primary workflow is zip upload through the web UI. RepoAtlas extracts the archive, analyzes the repository, stores the report, and returns a report ID that the UI can load or export.
## Table of Contents

- Features
- How It Works
- Architecture
- Tech Stack
- Requirements
- Quick Start
- Usage
- API Reference
- Configuration
- Project Structure
- Development
- Testing
- Fixtures
- Limits and Behavior
- Security Notes
- Libraries and Licenses
- License
## Features

- Single-input workflow: upload a zip of your repo and generate a report
- Deterministic scoring: Start Here and Danger Zones are derived from measurable repo signals
- Multi-language packs: TS/JS, Python, and Java packs provide deeper static analysis
- Interactive visualization: pan and zoom dependency view with ELK layout
- Portable exports:
  - Client-side full report export to PDF and PNG
  - Server-side Markdown export via `GET /api/reports/:id/export/md`
- Report persistence: report JSON on disk (`reports/`) or Vercel Blob when deployed with `BLOB_READ_WRITE_TOKEN`
## How It Works

- A user uploads a zip file from the web UI.
- `POST /api/analyze` receives the file, saves it to a temp path, and starts analysis.
- Repo ingest extracts the zip to a temporary workspace.
- The indexing pipeline collects:
  - folder tree
  - file metadata and language hints
  - key docs and CI config signals
  - runnable commands from `package.json` scripts
- Language packs for TS/JS, Python, and Java compute imports, entrypoints, complexity, and proximity.
- Scoring computes:
  - `start_here` ranking
  - `danger_zones` risk score
- The report is saved and returned by report ID.
- The UI loads the report and supports export.
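Because the scoring step is deterministic, it can be thought of as a pure function over the indexed signals. The sketch below illustrates that idea only: the signal names, weights, and function names are assumptions made here, not RepoAtlas's actual formula.

```typescript
// Illustrative only: signal names and weights are assumptions,
// not RepoAtlas's actual scoring formula.
interface FileSignals {
  path: string;
  complexity: number;    // e.g. complexity reported by a language pack
  fanIn: number;         // how many files import this one
  isEntrypoint: boolean; // detected entrypoint (main, index, app)
  isDoc: boolean;        // README, CONTRIBUTING, etc.
}

// Higher score = earlier in the suggested reading path.
function startHereScore(f: FileSignals): number {
  return (f.isDoc ? 50 : 0) + (f.isEntrypoint ? 30 : 0) + Math.min(f.fanIn, 20);
}

// Higher score = riskier hotspot.
function dangerScore(f: FileSignals): number {
  return f.complexity * 2 + f.fanIn;
}

function rank(files: FileSignals[], score: (f: FileSignals) => number): string[] {
  return [...files].sort((a, b) => score(b) - score(a)).map((f) => f.path);
}
```

Since the inputs are measurable repo signals rather than render-time heuristics, two runs over the same zip produce the same ranking.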
## Architecture

- Flow: zip upload or JSON `zipRef` -> ingest -> analyzer -> storage -> API returns report ID -> UI fetches and exports by report ID
- Runtime Architecture Map UI: interactive dependency graph using ELK layout with pan and zoom controls
- Markdown artifact rendering: Mermaid syntax is used only in exported markdown artifacts, not as the runtime graph renderer
- Frontend: Next.js App Router, React, Tailwind CSS
- API routes: `POST /api/analyze`, `GET /api/reports/:id`, `GET /api/reports/:id/export/md`
- Analyzer: in-process TypeScript module
- Storage: report JSON on filesystem (
reports/) or Vercel Blob whenBLOB_READ_WRITE_TOKENis set - Temp workspace: OS temp directory per analysis run
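Because Mermaid is used only in exported markdown artifacts, the exporter just has to serialize dependency edges into Mermaid flowchart syntax. A minimal sketch of that serialization (the function name and edge shape are assumptions, not RepoAtlas's real report schema):

```typescript
// Hypothetical edge shape; RepoAtlas's actual report schema may differ.
interface Edge { from: string; to: string; }

// Mermaid node IDs cannot contain characters like '/' or '.', so paths are
// sanitized into IDs while the readable path is kept as the node label.
function toMermaid(edges: Edge[]): string {
  const id = (p: string) => p.replace(/[^A-Za-z0-9_]/g, "_");
  const lines = edges.map(
    (e) => `  ${id(e.from)}["${e.from}"] --> ${id(e.to)}["${e.to}"]`
  );
  return ["graph TD", ...lines].join("\n");
}
```

Embedding the resulting string in a fenced ```` ```mermaid ```` block makes the exported markdown render as a diagram on hosts that support Mermaid.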
## Tech Stack

- Application framework: Next.js 14, React 18, TypeScript 5
- Styling: Tailwind CSS, PostCSS, Autoprefixer
- Graph and layout: `elkjs`, `react-zoom-pan-pinch`
- Export: `html2canvas`, `jspdf`, Markdown formatter
- Testing: Vitest
- Linting: ESLint via `next lint`
## Requirements

- Node.js 18+
- npm 9+
- Windows, macOS, or Linux with local filesystem and temp directory access
## Quick Start

```bash
npm install
npm run dev
```

Open http://localhost:3000, upload a zip of your repository, and click Analyze Repository.
## Usage

- Open the homepage
- Upload a zip of your repository, for example from GitHub: Code -> Download ZIP
- View generated tabs:
- Overview
- Folder Map
- Architecture Map
- Start Here
- Danger Zones
- Run and Contribute
- Export
- PDF: full report snapshot export
- PNG: full report snapshot export
- Markdown: `GET /api/reports/:id/export/md`, also available from UI export controls
## API Reference

Primary upload flow:

```bash
curl -X POST http://localhost:3000/api/analyze \
  -F "file=@/path/to/repo.zip"
```

Testing or CLI flow:

```bash
curl -X POST http://localhost:3000/api/analyze \
  -H "Content-Type: application/json" \
  -d "{\"zipRef\":\"C:/path/to/local/repo-or-fixture\"}"
```

After analysis, fetch the report JSON:

```bash
curl http://localhost:3000/api/reports/<report-id>
```

Export the report as Markdown:

```bash
curl -OJ http://localhost:3000/api/reports/<report-id>/export/md
```

These routes are implemented by the files in `src/app/api/**/route.ts`:
| Route file | Methods | Public endpoint |
|---|---|---|
| `src/app/api/analyze/route.ts` | `POST` | `/api/analyze` |
| `src/app/api/reports/[id]/route.ts` | `GET` | `/api/reports/:id` |
| `src/app/api/reports/[id]/export/md/route.ts` | `GET` | `/api/reports/:id/export/md` |
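The same flows can be scripted from Node.js 18+, which ships `fetch` and `FormData` globally. This is a hedged client sketch, not an official SDK: the endpoint paths come from the table above, while the helper names are invented here.

```typescript
// Hypothetical helpers around the documented endpoints.
function reportUrl(baseUrl: string, reportId: string, markdown = false): string {
  const base = `${baseUrl}/api/reports/${encodeURIComponent(reportId)}`;
  return markdown ? `${base}/export/md` : base;
}

// Upload a zip and return the report ID from the JSON response.
async function analyzeZip(baseUrl: string, zip: Blob, filename: string): Promise<string> {
  const form = new FormData();
  form.append("file", zip, filename); // the route also accepts the field name "zip"
  const res = await fetch(`${baseUrl}/api/analyze`, { method: "POST", body: form });
  if (!res.ok) throw new Error(`analyze failed with status ${res.status}`);
  const body = (await res.json()) as { reportId: string };
  return body.reportId;
}
```

A caller would pair them as `await fetch(reportUrl(base, await analyzeZip(base, zip, "repo.zip")))` against a running dev server.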
### POST /api/analyze

- Accepts `multipart/form-data` with a single zip file in `file` or `zip`
- Also accepts JSON with `zipRef` for local testing
- Max upload size: 100 MB
Example JSON body:

```json
{
  "zipRef": "C:/path/to/local/repo-or-fixture"
}
```

Success response:

```json
{
  "reportId": "uuid"
}
```

Common error codes exposed by the current route:

- `INVALID_INPUT`
- `ZIP_NOT_FOUND`
- `REPO_TOO_LARGE`
- `TIMEOUT`
- `ANALYSIS_FAILED`
Common statuses:

- `200` on success
- `400` for malformed payloads or unsupported content type
- `404` when JSON `zipRef` does not exist
- `413` when upload exceeds 100 MB
- `500` for unexpected failures
- `504` when analysis exceeds 120 seconds
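The error codes and statuses above pair up one-to-one, which suggests a simple mapping table. The sketch below reflects that pairing as inferred from this README; the function name and the exact contents of the project's `errors.ts` are assumptions.

```typescript
// Assumed pairing of typed error codes to HTTP statuses, inferred from
// the lists above; RepoAtlas's real errors.ts may differ.
type AnalyzeErrorCode =
  | "INVALID_INPUT"
  | "ZIP_NOT_FOUND"
  | "REPO_TOO_LARGE"
  | "TIMEOUT"
  | "ANALYSIS_FAILED";

const STATUS_BY_CODE: Record<AnalyzeErrorCode, number> = {
  INVALID_INPUT: 400,   // malformed payload or unsupported content type
  ZIP_NOT_FOUND: 404,   // JSON zipRef path does not exist
  REPO_TOO_LARGE: 413,  // upload exceeds 100 MB
  TIMEOUT: 504,         // analysis exceeded 120 seconds
  ANALYSIS_FAILED: 500, // unexpected failure
};

function statusFor(code: AnalyzeErrorCode): number {
  return STATUS_BY_CODE[code];
}
```

Keeping the mapping in one table makes it easy for clients to branch on the status alone while logging the typed code for diagnostics.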
### GET /api/reports/:id

Returns a previously generated report by ID.

Common statuses:

- `200` with full report JSON
- `400` for invalid report IDs
- `404` when the report does not exist
### GET /api/reports/:id/export/md

Returns the report as downloadable Markdown with `text/markdown` content type.

Common statuses:

- `200` with markdown body and download headers
- `400` for invalid report IDs
- `404` when the report does not exist
## Configuration

- Vercel production: set `BLOB_READ_WRITE_TOKEN`
- Local development: `REPORTS_DIR` is optional when not using Blob storage and defaults to `<project-root>/reports`
- Local Blob testing: `BLOB_READ_WRITE_TOKEN` can also be set locally if you want to exercise Blob storage

No `.env` file is required for local development by default.
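The rules above imply a simple precedence: Blob storage when the token is present, otherwise the filesystem with an optional directory override. A sketch of that precedence follows; the function is hypothetical, and only the environment variable names come from this README.

```typescript
import * as path from "node:path";

type StorageChoice =
  | { kind: "blob"; token: string }
  | { kind: "filesystem"; dir: string };

// env defaults to process.env; it is injectable to keep the logic testable.
function chooseStorage(env: Record<string, string | undefined> = process.env): StorageChoice {
  // A Blob token always wins, matching the Vercel production setup.
  if (env.BLOB_READ_WRITE_TOKEN) {
    return { kind: "blob", token: env.BLOB_READ_WRITE_TOKEN };
  }
  // REPORTS_DIR is optional and defaults to <project-root>/reports.
  const dir = env.REPORTS_DIR ?? path.join(process.cwd(), "reports");
  return { kind: "filesystem", dir };
}
```

Injecting `env` rather than reading `process.env` directly lets a unit test exercise both branches without mutating the real environment.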
## Project Structure

```
src/
  app/
    api/
      analyze/route.ts
      reports/[id]/route.ts
      reports/[id]/export/md/route.ts
  analyzer/
    packs/
    index.ts
    pipeline.ts
    scoring.ts
  components/
  lib/
    ingest.ts
    storage.ts
    export.ts
    errors.ts
  types/
    report.ts
fixtures/
reports/
```

`reports/` is created at runtime when filesystem storage is used.
## Development

```bash
npm run dev        # Start Next.js dev server
npm run build      # Build for production
npm run start      # Run production build
npm run lint       # Run ESLint
npm run test       # Run Vitest once
npm run test:watch # Run Vitest in watch mode
```

## Testing

- Unit and integration-style tests are written with Vitest
- Coverage includes analyzer packs, scoring, ingest, API routes, and report export flows

Run:

```bash
npm run test
```

## Fixtures

Fixture repositories in `fixtures/`:

- `fixtures/repo-ts`
- `fixtures/repo-python`
- `fixtures/repo-java`
- `fixtures/repo-java-maven`
- `fixtures/repo-docs-only`

These are used for local regression checks and analyzer test coverage.
## Limits and Behavior

Current enforced or expected limits:
- Analysis timeout: 120 seconds
- Repository size guard: approximately 100 MB
- File indexing cap: 10,000 files
- Directory map depth cap: 10
When analysis cannot perform a deep language pass, warnings are added to the report output.
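The file and depth caps can be enforced in a single tree walk that stops descending past depth 10 and stops indexing past 10,000 files, recording a warning instead of failing. An illustrative sketch follows; the tree shape and function are assumptions made here, not the pipeline's real code.

```typescript
// Hypothetical in-memory tree node; the real indexer walks the filesystem.
interface Node { name: string; children?: Node[]; }

const MAX_FILES = 10_000; // file indexing cap
const MAX_DEPTH = 10;     // directory map depth cap

function indexTree(root: Node): { files: string[]; warnings: string[] } {
  const files: string[] = [];
  const warnings: string[] = [];
  const warn = (msg: string) => {
    if (!warnings.includes(msg)) warnings.push(msg);
  };
  const walk = (node: Node, prefix: string, depth: number): void => {
    if (depth > MAX_DEPTH) {
      warn("depth cap reached"); // skip deeper levels, keep going elsewhere
      return;
    }
    if (!node.children) {
      if (files.length >= MAX_FILES) {
        warn("file cap reached"); // stop indexing, keep the partial list
        return;
      }
      files.push(prefix + node.name);
      return;
    }
    for (const child of node.children) walk(child, `${prefix}${node.name}/`, depth + 1);
  };
  walk(root, "", 0);
  return { files, warnings };
}
```

Surfacing caps as warnings rather than errors matches the behavior described above: the report is still produced, just annotated as incomplete.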
## Security Notes

- RepoAtlas performs static file analysis only
- It does not execute target repository code
- Temporary workspaces are cleaned up after analysis
- Input and known failure modes are mapped to typed API errors
## Libraries and Licenses

The following are the direct libraries currently declared in `package.json`.

Runtime dependencies:

- `next`: application framework and server runtime
- `react`, `react-dom`: UI rendering
- `elkjs`: graph layout engine for the architecture map
- `react-zoom-pan-pinch`: pan and zoom controls for graph navigation
- `html2canvas`: DOM capture for image and PDF export
- `jspdf`: PDF generation
- `mermaid`: Markdown export diagram syntax generation
- `adm-zip`: zip extraction for uploaded repositories
- `@vercel/blob`: optional report storage when deployed on Vercel

Development dependencies:

- `typescript`: type checking and TS tooling
- `vitest`: test runner
- `eslint`, `eslint-config-next`: linting
- `tailwindcss`, `postcss`, `autoprefixer`: styling pipeline
- `@types/node`, `@types/react`, `@types/react-dom`, `@types/adm-zip`: TypeScript definitions
Third-party dependencies are distributed under their own licenses. Check each package's npm page or repository for license details.
## License

This project is licensed under the MIT License. See LICENSE for the full text.