Home
ewyct edited this page May 4, 2026 · 6 revisions
Free, open-source visual regression testing with AI-generated tests.
Record it. Test it. Ship it.
Lastest is a self-hosted visual regression testing platform that records your tests, writes them with AI, runs them anywhere, and fixes them when they break -- all in one tool.
1. Point it at your app
2. Record your user flows (point-and-click, no code)
3. AI generates resilient test code with multi-selector fallback
4. Run on remote runners or in an embedded browser container (EB setup required)
5. Screenshots are compared with three diff engines (pixelmatch, SSIM, Butteraugli)
6. Review and approve visual changes -- or let AI auto-classify them
When self-hosted, your data stays on your server and your screenshots never leave your infra.
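Step 3's multi-selector fallback can be sketched as follows. This is a minimal illustration of the idea, not Lastest's actual generated code: the function and page names here are hypothetical.

```python
# Illustrative sketch: each recorded element keeps several candidate
# selectors, and the test tries them in priority order until one matches,
# so a changed id or test attribute does not immediately break the test.

def find_with_fallback(page, selectors):
    """Return the first element matched by any selector, in priority order."""
    for selector in selectors:
        element = page.get(selector)  # stand-in for a real locator lookup
        if element is not None:
            return element
    raise LookupError(f"No selector matched: {selectors}")

# A generated step might carry selectors from most to least specific.
fake_page = {"text=Sign in": "<button>"}  # stand-in for a live page
element = find_with_fallback(
    fake_page,
    ["#login-btn", "[data-testid=login]", "text=Sign in"],
)
```

In real generated tests the lookups would go through Playwright locators; the fallback order (id, test attribute, visible text) is what makes the tests resilient to markup churn.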

| Create Tests | Run Tests | Review |
| --- | --- | --- |
| Record manually, AI-assisted, or Play Agent auto | Embedded Browser or remote / CI | Approve / Reject changes |
| One-time cost (AI optional) | Zero AI per run (pure Playwright) | New baseline saved |
- Create: Build tests your way -- record manually, let AI generate from a URL or spec, or let the Play Agent autonomously scan your entire app.
- Run: Execute tests in an Embedded Browser pod (default), on remote runners, or in CI/CD. No AI needed -- pure Playwright execution. Local Playwright on the host is no longer supported; the EB stack is required.
- Compare: New screenshots are diffed against baselines using your chosen engine. Optional DOM-diff fallback catches structural changes when pixel comparison is inconclusive.
- Review: Visual diffs are classified. Approve intentional changes -- they become the new baseline.
- Fix: When tests break, AI can propose fixes or the Play Agent can fix and re-run autonomously.
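The Compare step boils down to counting pixels that differ beyond a tolerance. A minimal pixelmatch-style sketch (real engines work in a perceptual color space and handle anti-aliasing, which this deliberately omits):

```python
def pixel_diff_ratio(baseline, candidate, threshold=16):
    """Fraction of pixels whose max per-channel delta exceeds `threshold`.

    `baseline` and `candidate` are same-sized lists of (r, g, b) tuples.
    """
    if len(baseline) != len(candidate):
        raise ValueError("images must have the same dimensions")
    changed = sum(
        1
        for a, b in zip(baseline, candidate)
        if max(abs(x - y) for x, y in zip(a, b)) > threshold
    )
    return changed / len(baseline)

base = [(255, 255, 255)] * 4
new = [(255, 255, 255), (255, 255, 255), (255, 255, 255), (0, 0, 0)]
ratio = pixel_diff_ratio(base, new)  # one of four pixels changed: 0.25
```

A run's sensitivity setting then decides whether that ratio counts as a visual change worth reviewing.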
- First run: screenshot becomes the baseline
- Every run after: new screenshot is SHA256-hashed -- if it matches the baseline, instant pass. If it differs, the diff engine runs and you review the change.
- AI costs are one-time: AI is only used during test creation and fixing. Running tests uses zero AI.
- No per-screenshot pricing on self-hosted: every run is unlimited regardless of volume.
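The baseline fast path above amounts to a byte-level hash comparison before any diff engine runs. A minimal sketch (the function name is hypothetical):

```python
import hashlib

def check_run(baseline_png: bytes, new_png: bytes) -> str:
    """SHA256-compare first; only invoke a diff engine when bytes differ."""
    if hashlib.sha256(new_png).digest() == hashlib.sha256(baseline_png).digest():
        return "pass"          # byte-identical screenshot: instant pass
    return "needs-review"      # differs: run the configured diff engine

result = check_run(b"\x89PNG-bytes", b"\x89PNG-bytes")  # "pass"
```

Hashing is cheap relative to pixel comparison, which is why unchanged pages cost essentially nothing per run.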
- Getting Started -- Installation and first steps
- Creating Tests -- Three ways to create tests
- Running Tests -- Embedded, remote, and CI execution
- Visual Diffing -- Diff engines and sensitivity settings
- AI Configuration -- AI providers and settings
- CI/CD Integration -- GitHub Actions, CLI runner, Smart Run
- Remote Runners -- Distributed test execution
- Docker Deployment -- Production deployment
- Settings Reference -- All configuration options
- Environment Variables -- Environment variable reference
- GitHub Integration -- OAuth, webhooks, PR comments
- GitLab Integration -- OAuth, MR comments, webhooks
- Google Sheets Integration -- Test data from spreadsheets
- Custom Webhooks -- Build result notifications
- VSCode Extension API -- VS Code extension and IDE API
- MCP Server -- AI agent integration (29 tools)
- Agent Monitoring -- Real-time Play Agent tracking
- Scheduled Runs -- Cron-based automated builds
- Bug Reports -- In-app bug reporting with auto-context
- Gamification -- Beat the Bot scoring, leaderboard, achievements
- API Tokens -- Long-lived Bearer tokens for MCP, CI, and integrations
- Test Migration -- Move tests between Lastest instances