Proxmox Backup Hosting.
Native PBS protocol. No abstraction layers.

Hosted Proxmox Backup Server infrastructure running on ZFS. Chunk-based deduplication, HMAC-verified integrity, client-side AES-256 encryption. API access included.

✓ Native PBS protocol · ✓ ZFS + chunk dedup · ✓ HMAC verification

Start Free — 100GB included

Powered by remote-backups.com — managed PBS hosting, German datacenters.

The technical version of Proxmox backup hosting.

We don't abstract away the PBS internals. You get real PBS infrastructure with ZFS ARC caching, chunk-level dedup, and full API access.

Chunk-based deduplication on ZFS

PBS splits backups into variable-length chunks stored exactly once per datastore. On ZFS with ARC, hot chunks stay in RAM. Cross-VM deduplication applies automatically — identical OS base layers are stored once, regardless of how many VMs share them.
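The dedup behavior described above can be sketched in a few lines. This is illustrative only: real PBS uses variable-length, content-defined chunking and a content-addressed chunk store on disk, not the fixed chunks and in-memory dict shown here.

```python
# Sketch of chunk-level deduplication: identical chunks are stored once,
# no matter how many backups reference them. (Hypothetical helper, not PBS code.)
import hashlib

class ChunkStore:
    """Stores each unique chunk exactly once, keyed by its digest."""
    def __init__(self):
        self.chunks = {}  # digest -> chunk bytes

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self.chunks.setdefault(digest, data)  # already-known chunks are not stored again
        return digest

store = ChunkStore()
# Two "VMs" sharing the same OS base layer:
vm1 = [b"debian-base-layer", b"vm1-app-data"]
vm2 = [b"debian-base-layer", b"vm2-app-data"]
refs = [store.put(c) for c in vm1 + vm2]
print(len(refs), "chunk references,", len(store.chunks), "chunks stored")
# 4 chunk references, 3 chunks stored
```

The shared base layer produces four references but only three stored chunks, which is exactly why identical VM templates cost almost nothing extra.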

HMAC-verified chunk integrity

Every chunk is HMAC-SHA256-verified on write and again on restore, so silent corruption is caught before it ever reaches your restored data. Verification runs against stored chunk hashes — no re-downloading required.
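The verify-on-write, verify-on-restore flow looks like this in miniature. The key handling is a stand-in — in real PBS the keyed digest is tied to the datastore and encryption setup — but the detection mechanism is the same: recompute the tag, compare, flag mismatches.

```python
# Sketch of keyed chunk verification with HMAC-SHA256.
# KEY is a demo placeholder, not how PBS manages keys.
import hmac
import hashlib

KEY = b"demo-datastore-key"

def chunk_tag(chunk: bytes) -> str:
    """Tag computed once on write and stored alongside the chunk."""
    return hmac.new(KEY, chunk, hashlib.sha256).hexdigest()

def verify(chunk: bytes, stored_tag: str) -> bool:
    """Recompute and compare in constant time; False means corruption."""
    return hmac.compare_digest(chunk_tag(chunk), stored_tag)

chunk = b"some chunk payload"
tag = chunk_tag(chunk)             # computed on write
assert verify(chunk, tag)          # clean chunk passes on restore
corrupted = b"some chunk payl0ad"  # a single flipped byte
assert not verify(corrupted, tag)  # caught before it reaches the restore
```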

API access for automation

Your PBS datastore exposes the full PBS REST API. Query backup task status, trigger verification jobs, read chunk statistics, or integrate into your monitoring stack.
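A minimal client for that API needs little more than an API token header. The host and token below are placeholders; the `PBSAPIToken=user@realm!tokenid:secret` authorization format and the `{"data": ...}` response wrapper are standard PBS conventions.

```python
# Sketch of calling the PBS REST API with an API token (stdlib only).
# HOST and TOKEN are placeholders for your datastore's credentials.
import json
import urllib.request

HOST = "https://pbs.example.com:8007"      # 8007 is the default PBS API port
TOKEN = "automation@pbs!monitor:SECRET"    # user@realm!tokenid:secret

def pbs_request(path: str) -> urllib.request.Request:
    # PBS API tokens go in the Authorization header, no login ticket needed.
    return urllib.request.Request(
        HOST + path,
        headers={"Authorization": f"PBSAPIToken={TOKEN}"},
    )

def pbs_get(path: str) -> dict:
    # PBS wraps every JSON payload in {"data": ...}.
    with urllib.request.urlopen(pbs_request(path)) as resp:
        return json.load(resp)["data"]

# e.g. recent tasks for your monitoring stack (needs a live endpoint):
# tasks = pbs_get("/api2/json/nodes/localhost/tasks?limit=10")
```

Because API tokens can be scoped and revoked independently of the user, a read-only monitoring token is the usual choice for dashboards.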

How Proxmox backup hosting works

1
Provision a PBS datastore

Register and create a datastore. ZFS-backed on dedicated hardware. Credentials delivered immediately.

2
Connect via native PBS remote protocol

Add the endpoint as a PBS remote from your existing PBS, or as direct storage in Proxmox VE. Standard TLS, your encryption keys. No agents.

3
Query dedup stats and run verification

Use the PBS API or our dashboard to check your chunk dedup ratio, schedule verification runs, and monitor backup health over time.

Frequently Asked Questions

Technical questions about Proxmox backup hosting

Does deduplication apply across backups from multiple source PBS instances?

Deduplication is per-datastore. If you sync from multiple local PBS instances into the same hosted datastore, chunks shared between those backups are stored only once. This applies to identical VM templates and shared OS base chunks across environments.

How do I check my effective dedup ratio?

Use the PBS API endpoint GET /api2/json/admin/datastore/{store}/gc-status, or the PBS web UI under Datastore → GC Status. This shows total data size vs. stored size, giving your effective dedup ratio. We also surface this in our dashboard.
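Turning that response into a ratio is one division. The field names below (`index-data-bytes` for logical data referenced by backups, `disk-bytes` for unique chunk bytes on disk) match what recent PBS versions report, but verify them against your server's actual gc-status output.

```python
# Dedup ratio from a gc-status response: logical data vs. bytes on disk.
# Field names assumed from recent PBS versions; the sample values are made up.
def dedup_ratio(gc_status: dict) -> float:
    logical = gc_status["index-data-bytes"]   # total data all backups reference
    physical = gc_status["disk-bytes"]        # unique chunk bytes actually stored
    return logical / physical if physical else 1.0

sample = {"index-data-bytes": 3_200_000_000_000, "disk-bytes": 800_000_000_000}
print(f"dedup ratio: {dedup_ratio(sample):.1f}x")  # dedup ratio: 4.0x
```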

How does verification catch silent corruption?

PBS computes an HMAC-SHA256 for each chunk on write. On verification, it recomputes the hash and compares. Mismatched chunks are flagged immediately. Because this runs against stored chunk hashes rather than re-downloading encrypted data, it catches disk-level corruption early without decrypting anything.

How much overhead do encrypted transfers add?

PBS uses TLS 1.3 for all connections. For LAN or high-bandwidth links, TLS overhead is negligible. For WAN connections, compression is applied before encryption by default, reducing transferred bytes. Large initial backups can be seeded locally and imported, avoiding the first full transfer over a slow WAN link.

Should I enable compression on the client?

Compression is controlled on the PBS client side per backup job. The hosted datastore accepts any client-side compression setting. For already-compressed data (databases, archives), disabling compression can speed up backup jobs without meaningfully increasing storage usage.
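The effect is easy to demonstrate. PBS clients use zstd; the standard library's zlib stands in here, but the principle is the same: high-entropy input (which is what already-compressed data looks like) doesn't shrink, so the CPU time spent compressing it buys nothing.

```python
# Why compression helps repetitive data but not already-compressed data:
# random bytes model a compressed database dump or archive.
import os
import zlib

text = b"backup " * 10_000          # highly repetitive, compresses very well
random_ish = os.urandom(len(text))  # high entropy, like pre-compressed data

for label, data in [("repetitive", text), ("high-entropy", random_ish)]:
    out = zlib.compress(data)
    print(f"{label}: {len(data)} -> {len(out)} bytes")
```

Running this shows the repetitive input collapsing to a tiny fraction of its size while the random input stays essentially the same length, which is why skipping compression for such payloads is a pure speed win.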

Hosted PBS with the internals you care about.

Native protocol. ZFS. HMAC verification. From €8.50/TB. 100GB free.

Start Free

✓ Native PBS protocol · ✓ ZFS + chunk dedup · ✓ HMAC verification