You inherited a codebase with tons of files and zero tests. Where do you even start?
Litmus tells you. Two commands, one ranked list — start testing where it actually matters.
```shell
dotnet tool install --global dotnet-litmus
dotnet-litmus scan
```

That's it. Litmus finds your solution, runs your tests, collects coverage, and hands you a prioritized action plan. No config files, no dashboards, no setup.
No tests yet? Even better — that's exactly what Litmus is for:
```shell
dotnet-litmus scan --no-coverage
```

Not another dashboard. Not a wall of warnings. A clear answer to "what should I test first?"
```
── Act Now ──────────────────────────────────────────────────────────────────────
 Rank  File                           Commits  Coverage  Complexity  Coupling   Risk  Priority
    1  Services/OrderService.cs            47       12%          94       Low   High      High
    2  Services/ReportFormatter.cs         22       31%          67       Low   High      High
── Next Sprint ──────────────────────────────────────────────────────────────────
    3  Controllers/PaymentGateway.cs       31        8%         118  Very High  High    Medium
── Monitor ──────────────────────────────────────────────────────────────────────
    4  Data/LegacyDbSync.cs                41        0%         201  Very High  High       Low

4 files analyzed. 2 high-priority (start today), 1 medium-priority (next sprint).
2 high-risk file(s) need seam introduction before testing.
```
- **Act Now** — high risk, low coupling. Write tests today.
- **Next Sprint** — high risk, but tangled. Introduce seams first, then test.
- **Monitor** — keep an eye on it, but don't start here.
Notice how PaymentGateway.cs has higher risk than OrderService.cs but lands in "Next Sprint"? That's Litmus telling you: "Yes, it's dangerous — but it's too entangled to test right now. Introduce seams first."
That's the insight you can't get from coverage reports alone.
Most tools tell you what's untested. Litmus tells you where to start — and what's blocking you.
It cross-references four signals that no single tool combines:
| Signal | The question it answers |
|---|---|
| Git churn | Is this file changing often? (high churn = high blast radius) |
| Code coverage | Is anyone testing it? |
| Cyclomatic complexity | How many paths can break? |
| Coupling analysis | Can you actually write a test for it today? |
That last one is the key. Litmus uses Roslyn to detect unseamed dependencies — things like `new HttpClient()`, `DateTime.Now`, concrete constructor params — that make a file impossible to unit test without refactoring first. Then it adjusts the priority accordingly.
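Litmus does this with real Roslyn syntax analysis. Purely as an illustration of the idea, a naive textual scan for a few such patterns might look like this (illustrative Python, not how Litmus is implemented, and far cruder than a syntax tree):

```python
# Substring patterns that suggest a hard-wired ("unseamed") dependency.
# A crude stand-in for Roslyn-based detection -- real syntax analysis
# avoids false positives inside strings and comments.
SEAM_PATTERNS = {
    "new HttpClient(": "constructs its own HttpClient",
    "DateTime.Now": "reads the system clock directly",
    "File.ReadAllText(": "touches the filesystem directly",
}

def find_seam_issues(csharp_source: str) -> list[str]:
    """Return a description for each hard-wired dependency found."""
    return [why for needle, why in SEAM_PATTERNS.items()
            if needle in csharp_source]

source = """
public class OrderService {
    public void Process() {
        var client = new HttpClient();
        var stamp = DateTime.Now;
    }
}
"""
print(find_seam_issues(source))
# -> ['constructs its own HttpClient', 'reads the system clock directly']
```

A file that trips these checks can't be unit tested as-is; each finding points at a dependency that needs a seam (an interface, an injected clock, etc.) first.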
The result: a ranked list ordered by practical testability, not just risk.
```shell
dotnet-litmus scan --detailed
```
```
 1  Services/OrderService.cs  47  12%  94  Low  High  High
      ProcessOrder   —  50%  25
      ValidateInput  —   0%  18
```

See exactly which methods inside a high-risk file need attention first.
```shell
dotnet-litmus scan --output baseline.json    # save a snapshot
dotnet-litmus scan --baseline baseline.json  # compare later
```

A Delta column shows what improved, what degraded, and what's new.
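Conceptually, the baseline comparison is a per-file diff of risk between two snapshots. A minimal sketch in Python, assuming a hypothetical export shape with `file` and `risk` fields (the real JSON schema may differ):

```python
# Illustrative only: the "file"/"risk" keys are assumptions, not
# Litmus's documented export format.
def delta(baseline: list[dict], current: list[dict]) -> dict[str, str]:
    old = {r["file"]: r["risk"] for r in baseline}
    new = {r["file"]: r["risk"] for r in current}
    report = {}
    for f, risk in new.items():
        if f not in old:
            report[f] = "new"
        elif risk < old[f]:
            report[f] = "improved"
        elif risk > old[f]:
            report[f] = "degraded"
    return report

baseline = [{"file": "OrderService.cs", "risk": 1.4}]
current = [{"file": "OrderService.cs", "risk": 0.9},
           {"file": "NewFeature.cs", "risk": 0.5}]
print(delta(baseline, current))
# -> {'OrderService.cs': 'improved', 'NewFeature.cs': 'new'}
```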
```shell
dotnet-litmus scan --explain                      # plain-English reasons per file
dotnet-litmus scan --output report.html           # shareable HTML with sortable table
dotnet-litmus scan --format json | jq '.[].file'  # pipe JSON to your tools
dotnet-litmus scan --output results.csv           # CSV for spreadsheets
```

- Auto-detects your solution file — just run from the project root
- One command does everything — tests, coverage, analysis, report
- Grouped output — Act Now / Next Sprint / Monitor
- Seam detection — knows when a file is too entangled to test directly
- Baseline comparison — track how test debt changes over time
- Method-level drill-down — pinpoint the riskiest methods
- Plain-English explanations — `--explain` tells you why each file ranks where it does
- Multiple formats — table, JSON, CSV, HTML
- CI quality gate — `--fail-on-threshold` breaks the build on risk regressions
- Flexible coverage — works with coverlet or dotnet-coverage
- No tests? No problem — `--no-coverage` works without any test projects
Litmus fits naturally into CI pipelines. Track test debt over time, catch regressions, and share reports with the team.
```yaml
# .github/workflows/litmus.yml
- name: Install Litmus
  run: dotnet tool install --global dotnet-litmus
- name: Run analysis
  run: dotnet-litmus scan --output report.json --quiet
- name: Quality gate
  run: dotnet-litmus scan --fail-on-threshold 1.0 --quiet
```

**Important:** Use `fetch-depth: 0` in your checkout step — Litmus needs full git history for churn analysis.
Full GitHub Actions example with baseline tracking
```yaml
name: Litmus Analysis
on: [push]

jobs:
  litmus:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'

      - name: Install Litmus
        run: dotnet tool install --global dotnet-litmus

      - name: Download previous baseline
        uses: actions/download-artifact@v4
        with:
          name: litmus-baseline
        continue-on-error: true

      - name: Run analysis
        run: |
          if [ -f baseline.json ]; then
            dotnet-litmus scan --output report.json --baseline baseline.json
          else
            dotnet-litmus scan --output report.json
          fi

      - name: Save as next baseline
        uses: actions/upload-artifact@v4
        with:
          name: litmus-baseline
          path: report.json
```

| CI flag | Purpose |
|---|---|
| `--quiet` | Suppress console output — only exit code and file export |
| `--output report.json` | Machine-readable export |
| `--output report.html` | Shareable HTML report |
| `--baseline previous.json` | Detect regressions between runs |
| `--fail-on-threshold 1.0` | Fail the build if any file exceeds a risk score |
| `--no-color` | Clean logs without ANSI codes |
Litmus analyzes your codebase in two phases:
**Phase 1 — Risk Score:** How dangerous is it to leave this file untested?

```
RiskScore = Churn × (1 - Coverage) × (1 + Complexity)
```

**Phase 2 — Starting Priority:** Can you actually test it today?

```
StartingPriority = RiskScore × (1 - Coupling)
```

A file with Very High coupling gets its priority reduced — not because it's safe, but because you need to introduce seams before you can test it. High risk + low coupling = start here.
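Taken together, the two formulas can be sketched in a few lines of illustrative Python (not Litmus's actual code; the 0–1 input scales used here are an assumption):

```python
def risk_score(churn: float, coverage: float, complexity: float) -> float:
    """Phase 1: how dangerous is it to leave this file untested?"""
    return churn * (1 - coverage) * (1 + complexity)

def starting_priority(risk: float, coupling: float) -> float:
    """Phase 2: can you actually test it today?"""
    return risk * (1 - coupling)

# Two hypothetical files with inputs normalized to 0..1: the second is
# riskier, but heavy coupling pushes it down the starting list.
order_service = risk_score(churn=0.5, coverage=0.12, complexity=0.4)
payment_gw    = risk_score(churn=0.4, coverage=0.08, complexity=0.9)

print(round(order_service, 3), round(starting_priority(order_service, coupling=0.1), 3))
print(round(payment_gw, 3),    round(starting_priority(payment_gw,    coupling=0.8), 3))
# -> 0.616 0.554
# -> 0.699 0.14
```

Note how the second file has the higher risk score but the lower starting priority: exactly the PaymentGateway-vs-OrderService situation in the sample output above.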
For the full scoring methodology, seam detection signals, and architecture details, see ARCHITECTURE.md.
SonarQube monitors code quality. Litmus answers a different question: "I just inherited this codebase — where do I start testing?"
| SonarQube | Litmus | |
|---|---|---|
| Goal | Broad code quality monitoring | Prioritized test starting list |
| Signals | Static analysis rules, coverage % | Git churn + coverage + complexity + seam detection |
| Output | Dashboard of issues | Ranked action plan: start here, plan next, introduce seams first |
| Setup | Server, database, CI integration | `dotnet tool install`, run from terminal |
| Delta tracking | Paid tier for branch analysis | `--baseline` flag (free, built-in) |
| Cost | Free tier limited; paid for full | Free and open source |
They complement each other well. Use SonarQube for ongoing quality gates; use Litmus to prioritize where to invest testing effort.
| Command | Description |
|---|---|
| `dotnet-litmus scan` | Run tests, collect coverage, and analyze — all in one step |
| `dotnet-litmus analyze` | Analyze using an existing Cobertura XML coverage file |
**Shared options (both commands)**

| Option | Default | Description |
|---|---|---|
| `--solution` | auto-detect | Path to `.sln` or `.slnx` |
| `--since` | 1 year ago | Git history cutoff (e.g., `2025-01-01`) |
| `--top` | 20 | Number of files to display |
| `--exclude` | — | Glob pattern(s) to exclude (repeatable) |
| `--output` | — | Export to `.json`, `.csv`, or `.html` |
| `--baseline` | — | Previous JSON export for delta comparison |
| `--format` | table | Stdout format: `table`, `json`, `csv`, `html` |
| `--detailed` | false | Method-level drill-down for top files |
| `--explain` | false | Plain-English annotations per file |
| `--no-group` | false | Flat table instead of grouped output |
| `--verbose` | false | Show intermediate scores |
| `--quiet` | false | Suppress all output except errors |
| `--fail-on-threshold` | — | Exit code 1 if any score exceeds this (0.0–2.0) |
| `--no-color` | false | Disable colored output |
**scan-only options**

| Option | Default | Description |
|---|---|---|
| `--tests-dir` | solution dir | Directory or project for `dotnet test` |
| `--no-coverage` | false | Skip tests — analyze by churn, complexity, and coupling only |
| `--coverage-tool` | coverlet | Coverage collector: `coverlet` or `dotnet-coverage` |
| `--timeout` | 10 | Max minutes for test execution |
**analyze-only options**

| Option | Default | Description |
|---|---|---|
| `--coverage` | required | Path to Cobertura XML coverage file |
```shell
# From NuGet (recommended)
dotnet tool install --global dotnet-litmus

# From a local build
dotnet pack Litmus/Litmus.csproj -c Release
dotnet tool install --global --add-source Litmus/bin/Release dotnet-litmus

# Or run without installing
dotnet run --project Litmus -- scan
```

- .NET 8 SDK or later (.NET 9, .NET 10 supported)
- `git` on PATH
- For `scan`: test projects need `coverlet.collector` (or use `--coverage-tool dotnet-coverage`)
- For `scan --no-coverage`: no test setup needed at all
**No solution file found**

Run from the directory with your `.sln`/`.slnx`, or specify it explicitly:

```shell
dotnet-litmus scan --solution path/to/MyApp.sln
```

**Tests fail and no coverage is generated**

Coverage can't be collected from failed test runs. Fix failing tests first. If tests pass but no coverage appears, add the coverlet collector:

```shell
dotnet add <test-project> package coverlet.collector
```

Or switch to dotnet-coverage (no package reference needed):

```shell
dotnet tool install --global dotnet-coverage
dotnet-litmus scan --coverage-tool dotnet-coverage
```

**Scan hangs during test execution**

Usually caused by coverlet. Try, in order:

1. Switch to `dotnet-coverage`: `dotnet-litmus scan --coverage-tool dotnet-coverage`
2. Upgrade `coverlet.collector` to the latest version
3. Increase the timeout: `dotnet-litmus scan --timeout 30`
4. Generate coverage separately and use `analyze`
**Default file exclusions**
These patterns are always excluded to filter auto-generated noise:
`*.Designer.cs`, `*.g.cs`, `*.g.i.cs`, `*.generated.cs`, `*AssemblyInfo.cs`, `*GlobalUsings.g.cs`, `*.xaml.cs`, `**/Migrations/*.cs`, `*ModelSnapshot.cs`, `Program.cs`, `Startup.cs`, `**/obj/**`, `**/bin/**`, `**/wwwroot/**`
Add more with --exclude.
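The effect of these globs can be approximated with Python's `fnmatch` (a rough sketch; Litmus's actual matcher and its `**` semantics may differ — `fnmatch`'s `*` also crosses `/`, so `**` behaves like `*` here):

```python
from fnmatch import fnmatch

# A small subset of the default exclusions, for illustration.
DEFAULT_EXCLUDES = ["*.Designer.cs", "*.g.cs", "**/obj/**", "Program.cs"]

def is_excluded(path: str, patterns=DEFAULT_EXCLUDES) -> bool:
    """True if the path matches any exclusion glob."""
    return any(fnmatch(path, p) for p in patterns)

files = ["Services/OrderService.cs", "Forms/Main.Designer.cs",
         "Services/obj/Debug/Temp.cs", "Program.cs"]
print([f for f in files if not is_excluded(f)])
# -> ['Services/OrderService.cs']
```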
| Code | Meaning |
|---|---|
| `0` | Success |
| `1` | Error — validation failure, test failure, runtime error, or `--fail-on-threshold` exceeded |
Contributions are welcome! Here's how to get started:
```shell
git clone https://github.com/ebrahim-s-ebrahim/litmus.git
cd litmus
dotnet build Litmus.slnx
dotnet test Litmus.slnx
```

Litmus eats its own dog food — the CI pipeline runs `dotnet-litmus analyze` on itself after every push.

Before submitting a PR:

- Run `dotnet test Litmus.slnx` and ensure all tests pass
- If adding a new feature, include tests in `Litmus.Tests/`
- Keep the architecture documented — see ARCHITECTURE.md