Built for Billion-Record
Industrial Workloads
Racing telemetry. Smart cities. Mining sensors. Medical devices.
When you have billions of sensor readings and milliseconds matter, Arc delivers.
docker run -d -p 8000:8000 \
-e STORAGE_BACKEND=local \
-v arc-data:/app/data \
ghcr.io/basekick-labs/arc:latest

Single binary. No dependencies. Production-ready in minutes.
Industrial IoT Generates Massive Data
100M+ sensor readings in a single race. 10B infrastructure events daily in a smart city.
Traditional time-series databases can't keep up. Arc can.
Real Industrial IoT Scale:
Race telemetry: 100M+ sensor readings in 3 hours
Smart city: 10B infrastructure sensor events daily
Mining operation: Billions of equipment telemetry points
Medical devices: Real-time patient monitoring at scale
Built for billion-record workloads.
DuckDB SQL. No Proprietary Query Language.
Not a custom DSL. Not a query language that changes every major version.
Not vendor lock-in through proprietary syntax.
Just DuckDB-powered SQL with window functions, CTEs, and joins.
WITH readings AS (
  SELECT
    device_id,
    facility_name,
    timestamp,
    AVG(temperature) AS avg_temp,
    MAX(pressure) AS peak_pressure,
    STDDEV(vibration) AS vibration_stddev
  FROM iot_sensors
  WHERE timestamp > NOW() - INTERVAL '24 hours'
    AND facility_id IN ('mining_site_42', 'plant_7')
  GROUP BY device_id, facility_name, timestamp
)
SELECT
  device_id,
  facility_name,
  AVG(avg_temp) OVER (
    PARTITION BY device_id
    ORDER BY timestamp
    ROWS BETWEEN 10 PRECEDING AND CURRENT ROW
  ) AS temp_moving_avg,
  peak_pressure,
  vibration_stddev
FROM readings
WHERE peak_pressure > 850 OR vibration_stddev > 2.5;

If you know SQL, you know Arc. Powered by DuckDB.
Window Functions
Moving averages, ranking, and complex aggregations built-in
CTEs & Subqueries
Break down complex analysis into readable, composable parts
JOINs Across Sensors
Correlate temperature, pressure, vibration data across devices
Parquet files you actually own
Your data lives in standard Parquet files on S3, MinIO, or local disk.
Arc disappears tomorrow? You still own your data.
Query it with DuckDB, ClickHouse, Snowflake, or any tool that reads Parquet.
This is what "portable data" actually means.
S3 / MinIO / Local
Store anywhere you want
Standard Parquet
Industry standard format
You Own It
No vendor lock-in, ever
11.8M records/sec sustained
High-throughput metrics ingestion via MessagePack columnar format.
No degradation. No memory leaks. Just stable performance.
Hardware: M3 Max 14-core, 36GB RAM
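The core of the columnar ingestion idea can be sketched in a few lines of Python. Field names below are illustrative, and the final serialization step (`msgpack.packb` on the pivoted batch) is noted in a comment rather than shown; check Arc's docs for the actual wire format:

```python
# Instead of sending one message per reading, the high-throughput path
# batches readings and pivots them into one array per column, which
# compresses and parses far better than row-by-row text protocols.
rows = [
    {"host": "server01", "ts": 1700000000, "value": 0.64},
    {"host": "server02", "ts": 1700000000, "value": 0.71},
]

def to_columnar(rows):
    """Pivot a list of row dicts into a dict of column lists."""
    return {key: [r[key] for r in rows] for key in rows[0]}

batch = to_columnar(rows)
# Serialize with msgpack.packb(batch) and POST to Arc's write endpoint.
print(batch["value"])  # [0.64, 0.71]
```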
See Arc in Action
Live demos running on Arc. Real data, real-time ingestion, sub-second queries.
Vessel Tracking
Real-time AIS data from vessels in Singapore's Strait of Malacca. Live position updates every 30 seconds.
Flight Tracking
Live ADS-B data tracking aircraft over New York City. Real-time altitude, speed, and position updates.
Weather Tracking
Multi-city weather monitoring across Buenos Aires, London, and Tokyo. Temperature, humidity, and pressure trends.
System Monitoring
Real-time Docker container metrics for the Arc database. CPU, memory, network, and disk I/O tracking.
Migrate from your existing time-series database
We help with migration at no cost.
Arc speaks InfluxDB Line Protocol natively.
Point Telegraf at Arc. Dual-write during migration. Cut over when ready.
No agent changes. No downtime. No data loss.
Multiple InfluxDB 1.x Enterprise customers already testing migration.
Already running TimescaleDB?
We'll help you migrate at no cost.
Keep your SQL queries. Arc uses standard DuckDB SQL with window functions, CTEs, and joins.
Own your data in Parquet. No vendor lock-in. Query with any tool. 10-50x faster queries after compaction.
Already running QuestDB?
We'll help you migrate at no cost.
- Broader SQL support. Full window functions, CTEs, complex joins: features QuestDB doesn't support.
- Better ecosystem integration. Native Grafana datasource, VSCode extension, Apache Superset dialect.
Features That Matter for IoT
Grafana Integration
Official Grafana datasource plugin. Build dashboards for sensor data, equipment telemetry, and facility monitoring.
Setup guide ->

VSCode Extension
Full-featured database manager. Query editor with autocomplete. Notebooks for analysis. CSV import.
Install from marketplace ->

Automatic Compaction
Small files merge into optimized 512MB Parquet files. 10-50x faster queries with zero configuration.
Retention Policies
Time-based lifecycle management. Keep 7 days of raw data, 90 days of rollups, 2 years of aggregates.
GDPR-Compliant Deletion
Precise record deletion with file rewrites. No tombstones. No query overhead.
Write-Ahead Log (WAL)
Optional durability for zero data loss. Disabled by default for maximum throughput.
Multi-Database Architecture
Organize by facility, device type, or environment. Isolated namespaces for multi-tenant deployments.
Apache Superset Integration
Native dialect for BI dashboards. Connect your existing visualization tools.
Learn more ->

Frequently Asked Questions
Is Arc production-ready?
Yes. Version 25.12.1 is stable and ready for production use.
Self-hosted Arc is production-ready now.
Enterprise features available for large deployments.
How does Arc compare to ClickHouse?
ClickHouse wins on raw analytical performance.
Arc wins on operational simplicity and data portability.
- ClickHouse is a distributed system requiring cluster management.
- Arc runs on a single node with object storage (S3/MinIO).
- ClickHouse uses a proprietary MergeTree format.
- Arc uses standard Parquet files.
How does Arc compare to InfluxDB?
- 12x faster ingestion via MessagePack columnar protocol vs InfluxDB Line Protocol.
- Portable Parquet files vs proprietary TSM format. Query with any tool.
- Standard SQL vs Flux (which InfluxData deprecated).
- InfluxDB 1.x Enterprise customers are stuck on deprecated software. Arc provides a migration path with Line Protocol compatibility.
How much does storing sensor data cost?
- Parquet compression gives 3-5x reduction vs raw sensor data.
- S3 storage costs ~$0.023/GB/month.
- 1TB of sensor data = ~$23/month in storage.
- Significantly cheaper than proprietary IoT platforms.
- Built-in retention policies automatically delete old data.
- Or keep everything: storage is cheap, queries are fast.
Does Arc support high availability?
Single-node architecture in the open source version.
Arc Enterprise includes clustering and high availability.
For OSS: Run primary + standby with object storage replication.
WAL enabled for zero data loss during failover.
What authentication and security features are included?
API token authentication is included in all versions.
Grafana integration supports Azure AD OAuth.
Arc Enterprise includes RBAC, SSO, and multi-tenancy.
How do I migrate from InfluxDB?
1. Point Telegraf at Arc (Line Protocol compatible)
2. Dual-write to both systems during transition
3. Verify data in Arc matches InfluxDB
4. Update Grafana dashboards to the Arc data source
5. Cut over when ready
Migration tooling and documentation: Migration guide ->
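As an illustration of the dual-write step, a Telegraf config might add a second InfluxDB output pointing at Arc. This is a sketch, not verified configuration: the URLs, database name, and token handling are placeholders to check against the migration guide.

```toml
# Existing InfluxDB output stays in place during the transition.
[[outputs.influxdb]]
  urls = ["http://influxdb:8086"]
  database = "telegraf"

# Second output dual-writes the same metrics to Arc's
# InfluxDB-compatible line-protocol endpoint (URL is a placeholder;
# see the migration guide for the exact endpoint and auth setup).
[[outputs.influxdb]]
  urls = ["http://arc:8000"]
  database = "default"
  skip_database_creation = true
```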
Pricing
Start free with open source. Scale with enterprise features when you need them.
Open Source
- 11.8M records/sec ingestion
- Full SQL query engine (DuckDB)
- Parquet storage (S3, GCS, local)
- Docker and Kubernetes ready
- Community support (Discord)
Enterprise
Starting price for up to 8 cores. Clustering, RBAC, and dedicated support.
- Everything in Open Source
- Horizontal clustering and HA
- Role-based access control (RBAC)
- Tiered storage and auto-aggregation
- Dedicated support and SLAs
Enterprise Features
Clustering
Horizontal scaling with automatic data distribution. Query routing and load balancing across nodes.
Security
Fine-grained RBAC with database and table-level permissions. LDAP/SAML integration available.
Data Management
Automated retention policies, continuous queries for aggregation, and tiered storage for cost optimization.
Get Started
Open Source Installation
docker run -d -p 8000:8000 \
-e STORAGE_BACKEND=local \
-v arc-data:/app/data \
ghcr.io/basekick-labs/arc:latest

curl -X POST http://localhost:8000/api/v1/write/line-protocol \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: text/plain" \
-H "x-arc-database: default" \
--data-binary "cpu,host=server01 value=0.64"

curl -X POST http://localhost:8000/api/v1/query \
-H "Authorization: Bearer YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{"sql":"SELECT * FROM cpu LIMIT 10"}'Arc Enterprise
For organizations requiring clustering, RBAC, and dedicated support.
Multi-node clustering and high availability
Role-based access control (RBAC) and SSO
Dedicated support and SLAs
Commercial licensing available
enterprise@basekick.net
Ready to handle billion-record workloads?
Deploy Arc in minutes. Own your data in Parquet.