Explore, Query, and Replicate Data
Run SQL, migrate data, and keep systems in sync — all in one tool.


One query. Multiple sources.
SQL Editor

SELECT *
FROM read_parquet('orders.parquet') o
JOIN postgres.public.customers c
  ON o.customer_id = c.id
LIMIT 10;

Join a Parquet file with a live PostgreSQL table — no ETL pipeline, no staging database.
What you can do in one workflow
Explore schemas and data, run a snapshot migration, then keep changes flowing with CDC.

Explore
Explore and validate
- Browse databases, local files, and S3 in one tree
- Filter and edit rows with server-side pagination
- ER diagrams and schema comparison

Move
Run migration with schema conversion
- Migrate between database types with automatic mapping
- Pull from databases, files, or S3; write to databases or files
- Track progress table-by-table

Keep in sync
Keep systems in sync with CDC
- Capture INSERT, UPDATE, and DELETE from binlog/WAL
- Pause/resume with lifecycle status tracking
- Validate runs with logs and history
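On the source side, CDC from PostgreSQL reads changes from the WAL via logical replication. A minimal sketch of the server-side prerequisites, assuming a PostgreSQL source; the publication name `cdc_pub`, the table list, and the `replicator` role are illustrative:

```sql
-- Prerequisite in postgresql.conf: wal_level = logical (restart required)

-- Publish the tables whose INSERT/UPDATE/DELETE events should be captured
CREATE PUBLICATION cdc_pub FOR TABLE public.customers, public.orders;

-- The connecting user needs the REPLICATION attribute
ALTER ROLE replicator WITH REPLICATION;
```

For MySQL sources, the equivalent prerequisite is row-based binary logging (`binlog_format = ROW`).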

Connect databases, files, and object storage
Use databases as sources and targets. Treat files and S3 as queryable datasets.
MySQL
PostgreSQL
Amazon RDS
Google Cloud SQL
Azure Database
S3 (AWS, MinIO, etc.)
CSV
JSONL
Parquet
Snowflake (coming soon)
More connectors added regularly.
Run anywhere
Desktop for local work. Docker for servers. Same UI and workflow.
Docker
Self-hosted deployment with Docker Compose.
Cloud Images
Deploy marketplace images to AWS, Azure, GCP, or DigitalOcean.
Start exploring, querying, and replicating
The IDE is free. Streams run on a pay-as-you-go plan.