dump Command in Linux With Examples (Deep, Practical Guide)

Backups usually fail in one of two ways: you never had one, or you had one that you never tested. I learned the second lesson the hard way on a recovery where ‘we run backups nightly’ really meant ‘we run a script that writes something somewhere.’ If you maintain Linux systems long enough, you’ll eventually meet the older, filesystem-aware tools that were built for reliability on Unix filesystems—dump is one of them.

dump backs up an entire filesystem (not just a directory tree) and can do true incrementals using dump levels. That makes it a useful tool when you’re on classic ext2/ext3 filesystems and you want predictable, repeatable backups that track changes over time. It’s also the kind of tool that will happily do the wrong thing if you point it at the wrong device, so I’ll show you safe, reproducible examples using a loopback image rather than your real disks.

By the end, you’ll know what dump actually stores, how the level system works, how to restore (including quick verification restores), and how I’d automate it in a 2026 workflow with encryption and offsite copies—while staying honest about where dump is a poor fit.

What dump Actually Backs Up (and What It Doesn’t)

At a high level, dump reads raw filesystem structures and writes them into a dump archive. The big mental model shift is this:

  • File-copy backup tools (like tar or rsync) walk paths.
  • dump works at the filesystem level, reading inodes and blocks.

That difference matters.

What you typically get with dump:

  • Accurate preservation of permissions, ownership, timestamps, hard links, and directory metadata.
  • Incremental backups based on dump ‘levels’ (0–9), so after a baseline full dump you can capture only files that changed since the last lower-level dump.
  • Consistent behavior for classic ext2/ext3 filesystems.

What you should not assume:

  • Cross-filesystem support. Classic dump is intended for ext2/ext3 and is not a universal backup tool for XFS, FAT, ReiserFS, etc. (Some modern packages can handle ext4, but I treat that as ‘verify in your distro, then test restores’—not as a given.)
  • Snapshot semantics. If the filesystem is changing while you back it up (busy databases, log churn, containers writing constantly), dump is not magically crash-consistent. You can improve consistency by backing up from a snapshot (LVM snapshot, VM snapshot, storage snapshot) or by quiescing services.

I also want to call out a practical detail: dump backups are usually restored with the companion tool restore. In day-to-day ops, I treat dump and restore as a pair—if restore can’t read it, you don’t have a backup.

There’s another consequence of filesystem-level backups that people sometimes miss: dump knows about inodes, not your intentions. If you accidentally back up the wrong device (or the wrong layer, like the parent block device instead of a partition), dump will still produce a file that looks like a backup. It just won’t be the backup you needed.

So my rule is simple: every dump job starts with a device selection step and ends with a restore verification step.

Syntax and the Options You Actually Use

The generic syntax is:

dump [options] files-to-dump

In real life, files-to-dump is commonly a filesystem device like /dev/sdb1 (or a mounted filesystem’s backing block device), and your options decide the dump level, destination, and bookkeeping.

A practical ‘core options’ table:

| Option | What I use it for | Notes |
|---|---|---|
| -0..-9 | Set dump level | 0 is full; 1-9 are incrementals |
| -f | Where the dump is written | File, device, or - for stdout |
| -u | Update /etc/dumpdates | Critical if you rely on incrementals |
| -W / -w | Show filesystems needing backup | Helpful for status/reporting |
| -a | Auto-size for disk output | Useful when not writing to tape |
| -b | Dump record size | Default often 10KB; tune for throughput |
| -B | Records per volume | More relevant for multi-volume/tape workflows |
| -z | zlib compression | Default compression level often 2 |
| -j | bzip2 compression (if supported) | Slower, sometimes smaller |
| -S | Estimate dump size | Great for planning storage |

If you run dump with no args, it prints usage, supported options, and the version.

Two more flags I use in real environments (not always, but often):

  • Verbose output: helpful for logs and postmortems when something fails mid-run.
  • A conservative I/O plan: dump can be fast enough to hurt you on busy hosts. I’d rather finish in 45 minutes reliably than in 12 minutes that intermittently trips storage latency alerts.

Install and Safety Setup (Without Touching Your Real Disks)

The most common mistake I see with filesystem tools is experimenting on real partitions. Don’t. You can practice everything dump/restore does using a loopback image file.

The sequence below creates a 2GB ext3 filesystem inside a file, mounts it, writes realistic data, and then backs it up.

# 1) Create an image file (2 GiB)

mkdir -p ~/lab/dump-demo

cd ~/lab/dump-demo

truncate -s 2G fs-ext3.img

# 2) Format it as ext3 (requires e2fsprogs)

mkfs.ext3 -F fs-ext3.img

# 3) Attach to a loop device and mount

sudo mkdir -p /mnt/dump-demo

sudo mount -o loop fs-ext3.img /mnt/dump-demo

# 4) Put some data in it

sudo mkdir -p /mnt/dump-demo/{etc,home/app/logs,var/lib/app}

sudo sh -c 'echo APP_ENV=production > /mnt/dump-demo/etc/app.env'

sudo sh -c 'dd if=/dev/urandom of=/mnt/dump-demo/var/lib/app/blob.bin bs=1M count=64 status=progress'

sudo sh -c 'for day in 2026-01-29 2026-01-30 2026-01-31; do echo "$day api requests=12345" >> /mnt/dump-demo/home/app/logs/access.log; done'

# 5) Flush writes

sync

Now you have a safe ‘filesystem device’ to practice with: it’s mounted at /mnt/dump-demo, and the underlying filesystem is in fs-ext3.img.

To find the corresponding loop device (optional but useful):

sudo losetup -a | rg 'fs-ext3.img'

When I do demos, I often keep it simple and run dump directly against the image file, which behaves like a block device for this purpose.

Before You Run dump on a Real Server

I know the whole point is production backups, but I always do a quick pre-flight. It prevents the classic failure modes (backing up the wrong thing, writing backups into the filesystem being backed up, or generating files nobody can restore).

My pre-flight checklist:

  • Confirm the filesystem type: dump is about filesystems, not directories.
  • Identify the correct source device reliably (I’ll show a safe method below).
  • Ensure the destination directory is not inside the filesystem you are backing up.
  • Confirm you have enough space for at least the worst-case backup size.
  • Decide how you will verify restores (at minimum: list contents; ideally: extract and diff a few known files).
  • Decide what ‘success’ means operationally (exit code, checksum, and a log line you can alert on).
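The space check from that list is easy to automate. Here's a minimal, runnable sketch; the destination path and size threshold are assumptions (in real use the threshold would come from `dump -S`):

```shell
# Hypothetical pre-flight: is there enough free space at the destination?
DEST=${DEST:-/tmp/dump-preflight}   # assumed backup destination
NEED_KB=${NEED_KB:-1024}            # assumed worst-case size in KB (e.g. from `dump -S`)
mkdir -p "$DEST"
# POSIX df output: second line, fourth column is available KB
avail_kb=$(df -Pk "$DEST" | awk 'NR==2 {print $4}')
if [ "$avail_kb" -lt "$NEED_KB" ]; then
  echo "ABORT: only ${avail_kb}KB free at $DEST, need ${NEED_KB}KB" >&2
  exit 1
fi
echo "preflight ok: ${avail_kb}KB free at $DEST"
```

Failing loudly before the dump starts is much cheaper than a truncated archive.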

If you’re backing up anything with a database, I also decide what consistency means for that system:

  • If I can take a snapshot: I take a snapshot.
  • If I can’t: I at least quiesce or use application-native backup tools for the data directory.

That sounds like ‘extra work,’ but it’s cheaper than discovering during an incident that your backup is logically inconsistent.

Finding the Right Filesystem Device Safely

This is the part that bites people. On Linux you have multiple layers that all look like ‘devices’:

  • A disk like /dev/sda
  • A partition like /dev/sda1
  • An LVM logical volume like /dev/mapper/vg0-root
  • A device-mapper target for encryption like /dev/mapper/cryptroot

You typically want the block device that actually holds the filesystem you care about (often the partition, LV, or decrypted mapping).

The safest workflow I use:

1) Start from a mount point you trust.

findmnt -no SOURCE,FSTYPE,TARGET /var

This tells you the source device and filesystem type for /var.

2) Cross-check with lsblk so you understand what it really is.

lsblk -f

3) If you’re scripting, take the mountpoint as input and derive the source device, rather than hardcoding /dev/sdb1 in a script that someone later runs on a different host.

If you do choose to hardcode, at least add a sanity check that the target is the expected FSTYPE and is mounted where you think it is.

For example, I might enforce:

  • Source device must be ext2, ext3, or whatever I’ve tested.
  • The device must currently be mounted at the mountpoint I’m expecting.

That’s not foolproof, but it prevents the most catastrophic operator mistakes.
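A sketch of that enforcement as a reusable function (the function name is mine; it assumes `findmnt` from util-linux is available):

```shell
# check_source MOUNTPOINT EXPECTED_FSTYPE
# Prints the backing device on success; refuses to proceed otherwise.
check_source() {
  mnt=$1; want=$2
  got=$(findmnt -no FSTYPE "$mnt") || { echo "not a mountpoint: $mnt" >&2; return 2; }
  if [ "$got" != "$want" ]; then
    echo "refusing: $mnt is $got, expected $want" >&2
    return 1
  fi
  findmnt -no SOURCE "$mnt"   # the device you would hand to dump
}

# Hypothetical usage: device=$(check_source /var ext3) && dump -0u -f /backups/var.dump "$device"
```

The point is that the script derives the device from a mountpoint you trust, instead of trusting a hardcoded /dev path.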

Full Backups: Level 0 Done Right

A level 0 dump is a full backup of the filesystem. The pattern I recommend is:

  • Write the dump to a dedicated backup directory that is NOT inside the filesystem you’re backing up.
  • Include a date/time and level in the filename.
  • Verify with restore -t (table of contents) right after.

Here’s a full backup of our demo filesystem image:

mkdir -p ~/lab/dump-demo/backups

# Full backup (level 0), update dumpdates (-u), output to a file (-f)

sudo dump -0u -f ~/lab/dump-demo/backups/dump-level0-2026-02-03.dump fs-ext3.img

A few details I care about in production:

  • I include -u only when I’m intentionally maintaining incrementals. If you’re experimenting, -u can be confusing because it updates /etc/dumpdates on your host.
  • I often add -a when writing to disk files (not tape), because it skips tape-size calculations:

sudo dump -0au -f ~/lab/dump-demo/backups/dump-level0-2026-02-03.dump fs-ext3.img

Now verify the dump file is readable and see what it contains:

# List the backup contents (does not restore yet)

sudo restore -t -f ~/lab/dump-demo/backups/dump-level0-2026-02-03.dump | head

If restore -t fails, stop right there. A backup you can’t list is a backup you can’t restore.

Writing to stdout (useful for pipelines)

dump can write to standard output if you pass -f -. That’s the foundation for compression/encryption/offsite workflows:

sudo dump -0a -f - fs-ext3.img > ~/lab/dump-demo/backups/dump-level0.raw

In practice, I almost never store raw dumps uncompressed or unencrypted, but it’s a good baseline to understand the data flow.

Incremental Backups: Levels 1–9 and /etc/dumpdates

The dump ‘level’ system is the main reason people still reach for dump.

  • Level 0 is a full baseline.
  • A level 1 backup includes everything changed since the last level 0.
  • A level 2 backup includes everything changed since the last level 1 (or lower) backup.
  • …and so on up to level 9.
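The rule "changed since the last lower-level dump" can be simulated with plain file mtimes; in this sketch the marker file stands in for the timestamp dump records in /etc/dumpdates:

```shell
demo=$(mktemp -d); cd "$demo"
touch -d '2026-02-01 02:00' level0.marker   # when the level 0 ran
touch -d '2026-01-30 12:00' unchanged.txt   # older than the baseline
touch -d '2026-02-02 09:00' changed.txt     # modified after the baseline
# A level 1 includes only files newer than the level 0 marker:
find . -maxdepth 1 -type f -newer level0.marker ! -name level0.marker
# prints ./changed.txt
```

dump does this at the inode level rather than via `find`, but the selection logic is the same shape.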

A common schedule looks like this:

  • Weekly full: level 0 (Sunday early morning)
  • Daily incremental: level 1 (Mon–Sat)

You can build more elaborate plans (monthly level 0, weekly level 1, daily level 2), but I only do that if the restore time math works out.

The role of /etc/dumpdates

When you pass -u, dump records metadata about the backup in /etc/dumpdates. That file is what makes ‘changed since the last lower-level dump’ work consistently across runs.

Two commands help you reason about the schedule:

# Filesystems that need backing up (based on dumpdates)

dump -W

# Similar output, sometimes more verbose/interactive depending on build

dump -w

A practical note: /etc/dumpdates becomes part of your backup system’s state. I treat it like configuration:

  • It should be readable by the ops team.
  • It should be backed up (yes, meta-backups matter).
  • It should be consistent across the job host(s). If you run backups from multiple places, you need to be very deliberate, or your incremental logic becomes nonsense.

Incremental demo: change a few files, then run level 1

Let’s modify the filesystem content:

sudo sh -c 'echo FEATURE_FLAG_NEW_CHECKOUT=true >> /mnt/dump-demo/etc/app.env'

sudo sh -c 'echo "2026-02-03 api requests=24109" >> /mnt/dump-demo/home/app/logs/access.log'

sudo sh -c 'dd if=/dev/urandom of=/mnt/dump-demo/var/lib/app/blob.bin bs=1M count=8 seek=32 conv=notrunc status=progress'

sync

Now run a level 1 dump:

sudo dump -1au -f ~/lab/dump-demo/backups/dump-level1-2026-02-03.dump fs-ext3.img

If you list both archives with restore -t, you’ll notice:

  • The level 0 contains everything.
  • The level 1 contains only the files (and required directories/metadata) that changed since the level 0.

This is the point where I tell teams: incrementals are only as good as your restore drill. Make sure you can actually rebuild a filesystem from:

1) the last full, plus

2) the chain of incrementals you expect to apply.

If any incremental is missing or corrupted, your recovery point might jump back further than you think.

Understanding dump levels as ‘restore math’

Here’s how I think about levels operationally:

  • A level 0 is expensive once, but it resets complexity.
  • Each incremental reduces daily storage and runtime but adds restore steps.

So I choose a schedule by modeling the restore path:

  • If I do weekly level 0 and daily level 1, the worst case restore is: 1 full + up to 6 incrementals (7 archives).
  • If I do monthly level 0, weekly level 1, daily level 2, the worst case restore is: 1 full + 1 weekly + up to 6 daily (8 archives).

The worst-case archive counts are similar (7 vs 8), but the two plans spread work and storage differently.

I pick based on what I can restore under pressure with minimal human error. In most teams, that’s fewer steps.

A practical naming convention

In the field, I want filenames that answer three questions in a single glance:

  • What system/filesystem is this?
  • What level is it?
  • What timestamp?

Example:

  • db01-root-l0-2026-02-02T0200Z.dump
  • db01-root-l1-2026-02-03T0200Z.dump
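Generating a name like that is one line of shell; the host and filesystem label here are the hypothetical db01/root from the examples above:

```shell
host=db01   # short hostname (hypothetical)
fs=root     # label for the filesystem (hypothetical)
level=0
ts=$(date -u +%Y-%m-%dT%H%MZ)   # UTC, lexically sortable
name="${host}-${fs}-l${level}-${ts}.dump"
echo "$name"
```

Using UTC and zero-padded fields matters: it makes the names sort correctly with nothing but `sort`.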

Even better, store a small manifest file next to the dumps that records:

  • dump level
  • filesystem UUID
  • host
  • dump command line
  • checksum

That metadata saves time when you’re restoring under pressure.

Restoring: Fast Verification and Real Recovery

I judge backup systems by restore behavior. I want:

  • Fast proof the archive is readable.
  • A predictable process to restore individual files.
  • A predictable process to rebuild the whole filesystem.

restore supports a few common modes:

  • restore -t: list contents
  • restore -x: extract files
  • restore -i: interactive restore (browse/select)

Quick verification: list and spot-check

sudo restore -t -f ~/lab/dump-demo/backups/dump-level1-2026-02-03.dump | rg 'etc/app.env|access.log'

To extract a single file to your current directory:

mkdir -p ~/lab/dump-demo/restore-out

cd ~/lab/dump-demo/restore-out

# Extract a specific path from the dump

sudo restore -x -f ~/lab/dump-demo/backups/dump-level0-2026-02-03.dump ./etc/app.env

# After restore, the file appears as ./etc/app.env relative to the working dir

cat etc/app.env

Interactive restore for ‘find it and pull it back’ work

If you’re recovering one or two files and you don’t remember the exact path, interactive mode is my default:

cd ~/lab/dump-demo/restore-out

sudo restore -i -f ~/lab/dump-demo/backups/dump-level0-2026-02-03.dump

Inside the prompt, you can use commands like ls, cd, add, extract, and quit. (Run help inside the session to see the exact command set available in your build.)

Rebuilding a filesystem (conceptual process)

A full bare-metal restore is more involved and environment-specific, but the core idea is:

1) Create/format the target filesystem.

2) Mount it somewhere like /mnt/target.

3) cd /mnt/target.

4) Run restore to extract the level 0.

5) Apply incrementals in order (level 1, then 2, etc.).

In a lab (not on your live host), the flow looks like this:

# Create a new empty image to restore into

cd ~/lab/dump-demo

truncate -s 2G fs-ext3-restore.img

mkfs.ext3 -F fs-ext3-restore.img

sudo mkdir -p /mnt/dump-restore

sudo mount -o loop fs-ext3-restore.img /mnt/dump-restore

# Restore into that mount point

cd /mnt/dump-restore

sudo restore -x -f ~/lab/dump-demo/backups/dump-level0-2026-02-03.dump

# Then apply incrementals in order

sudo restore -x -f ~/lab/dump-demo/backups/dump-level1-2026-02-03.dump

sync

sudo umount /mnt/dump-restore

If you’re building a real DR procedure, write it down as a runbook and test it quarterly. The muscle memory matters.

Restoring an Incremental Chain Without Surprises

This is the part I expand in every runbook because it’s where humans make mistakes.

My ‘no surprises’ restore rules:

  • Always restore into an empty filesystem. (If you restore into a dirty mount, you’ll get a mixture of old and new state unless you’re extremely careful.)
  • Always apply dumps in the correct order.
  • Always record which archives were applied and when.

A practical pattern:

1) Put the required archives into a single directory.

2) Sort them by the timestamp you encoded in the filename.

3) Restore the level 0 first.

4) Apply each incremental in order.

5) Verify a handful of known files and permissions.
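Step 2 needs no parsing if you used the ISO-timestamp naming convention: the timestamps sort lexically, and as long as the chain starts with its own level 0 (which predates its incrementals), a plain sort yields the apply order:

```shell
chain=$(mktemp -d); cd "$chain"
# Stand-in archive names following the convention from earlier
touch db01-root-l1-2026-02-04T0200Z.dump \
      db01-root-l0-2026-02-02T0200Z.dump \
      db01-root-l1-2026-02-03T0200Z.dump
ls *.dump | sort     # level 0 first, then incrementals in time order
```

If the sorted list doesn't start with a level 0, stop: your chain is incomplete.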

Even in a lab, I keep a tiny checklist file like:

  • Applied: db01-root-l0-2026-02-02T0200Z
  • Applied: db01-root-l1-2026-02-03T0200Z
  • Verified: etc/app.env, var/lib/app/blob.bin hash, and directory ownership

I do this because in a real incident, multiple people may touch the restore. A simple log of what happened prevents confusion and accidental rework.

Performance, Compression, and Media Controls

dump comes from a world where tape drives and multi-volume backups were common, but many of those knobs still matter for speed and storage.

Block size (-b) and records per volume (-B)

  • -b changes the size (in kilobytes) per dump record.
  • -B sets how many records fit per volume.

On disk targets (files), you usually don’t need -B. On tape-like workflows, it’s part of planning.

Here’s an example that changes the block size:

sudo dump -0au -b 20 -f ~/lab/dump-demo/backups/dump-level0-b20-2026-02-03.dump fs-ext3.img

If your storage is fast, larger records can improve throughput. If you’re on slow disks or a busy system, the gains may be small. I usually benchmark a few settings rather than guessing.

Estimating size before you run (-S)

When you’re deciding retention and offsite copy size, -S is handy:

sudo dump -S fs-ext3.img

I treat the estimate as a planning number, not a promise.

Compression: built-in (-z / -j) vs external

dump can compress blocks before writing them:

# zlib compression (level 2 is often the default)

sudo dump -0au -z2 -f ~/lab/dump-demo/backups/dump-level0-z2-2026-02-03.dump fs-ext3.img

When I’m designing pipelines, I often prefer external compression because it’s easier to swap algorithms and tune threads. For example, zstd is a common pick in 2026 for speed/ratio balance:

# Write to stdout, compress with zstd, store as .zst

sudo dump -0a -f - fs-ext3.img | zstd -T0 -6 -o ~/lab/dump-demo/backups/dump-level0-2026-02-03.dump.zst

I also like external compression because it keeps dump focused on one job: reading the filesystem correctly.

Encryption (don’t store raw dumps)

If the filesystem contains credentials, customer data, or anything regulated, encrypt the backup at rest. A simple pattern is: dump -> compress -> encrypt -> write.

Using age (widely used for file encryption):

# Example recipient key file: ~/.config/age/recipients.txt

sudo dump -0a -f - fs-ext3.img \
| zstd -T0 -6 \
| age -R ~/.config/age/recipients.txt \
> ~/lab/dump-demo/backups/dump-level0-2026-02-03.dump.zst.age

In a team setting, I store recipient keys in the team's secrets system and rotate them like any other credential.

Snapshot Backups: How I Make dump Safer on Busy Systems

dump is happiest when the filesystem is stable while it reads it. Real systems are rarely stable.

So if the data matters, I take a snapshot and dump the snapshot.

There are multiple ways to do that, depending on your stack:

  • LVM snapshots: straightforward for LVM-based installations.
  • VM snapshots: if the filesystem lives inside a VM disk.
  • Storage snapshots: if you’re on a SAN/NAS or cloud volume that supports it.

A common LVM approach (conceptual):

  • Create snapshot LV for the target LV.
  • Mount the snapshot read-only.
  • Run dump against the snapshot device.
  • Unmount and remove the snapshot.

The key operational win is that your backup reads a point-in-time view, even while the live filesystem continues changing.

If you can’t snapshot, I at least consider quiescing high-churn services. For example:

  • Pause ingestion for a minute.
  • Rotate logs.
  • Use database-native backups for /var/lib/postgresql, /var/lib/mysql, etc.

I’m not claiming snapshots solve everything (they have their own failure modes and performance impact), but they change your backup from ‘maybe consistent’ to ‘deliberately consistent.’

Automation Patterns I’d Use in 2026

Even if you love dump, your backup system should look modern:

  • Automated scheduling
  • Clear retention
  • Checksums and verification
  • Offsite copies
  • Alerting when runs fail

A simple backup script with checksums and retention

Here’s a small, runnable shell script that:

  • runs a level 0 or level 1 dump
  • compresses and encrypts
  • writes a checksum
  • writes a tiny manifest
  • deletes older backups beyond a retention window
  • prevents overlapping runs

#!/usr/bin/env bash
set -euo pipefail
umask 077

# Source: device or image file that contains an ext filesystem
SOURCE_FS=${SOURCE_FS:-/var/backups/fs-ext3.img}

# Destination
BACKUP_DIR=${BACKUP_DIR:-/var/backups/dumps}

# Encryption recipients file for age
RECIPIENTS=${RECIPIENTS:-/etc/backup/age-recipients.txt}

# Identity (private key) used only for the read-back verification step
AGE_IDENTITY=${AGE_IDENTITY:-/etc/backup/age-identity.txt}

# Level: 0 for full, 1 for daily incremental (you can extend as needed)
LEVEL=${1:-0}

# Retention policy (days)
RETENTION_DAYS_FULL=${RETENTION_DAYS_FULL:-30}
RETENTION_DAYS_INCR=${RETENTION_DAYS_INCR:-14}

# Locking to avoid overlapping runs
LOCK_FILE=${LOCK_FILE:-/var/lock/dump-backup.lock}

mkdir -p "$BACKUP_DIR"

TS=$(date -u +%Y-%m-%dT%H%M%SZ)
HOST=$(hostname -s 2>/dev/null || hostname)
OUT_BASE="$BACKUP_DIR/${HOST}-l${LEVEL}-${TS}.dump.zst.age"
OUT_SHA256="$OUT_BASE.sha256"
OUT_MANIFEST="$OUT_BASE.manifest"

# Optional: sanity checks
if [[ ! -e "$SOURCE_FS" ]]; then
  echo "ERROR: SOURCE_FS not found: $SOURCE_FS" >&2
  exit 2
fi
if [[ ! -r "$RECIPIENTS" ]]; then
  echo "ERROR: age recipients file not readable: $RECIPIENTS" >&2
  exit 2
fi

# Run the backup with a lock
exec 9>"$LOCK_FILE"
if ! flock -n 9; then
  echo "ERROR: another backup run is in progress" >&2
  exit 3
fi

echo "START dump level=$LEVEL source=$SOURCE_FS out=$OUT_BASE" >&2

# Note about -u:
# - If you rely on dump's incremental logic, you usually want -u
# - It updates /etc/dumpdates, so be intentional about where this runs
sudo dump -${LEVEL}a -u -f - "$SOURCE_FS" \
  | zstd -T0 -6 \
  | age -R "$RECIPIENTS" \
  > "$OUT_BASE"

sha256sum "$OUT_BASE" > "$OUT_SHA256"

{
  echo "host=$HOST"
  echo "timestamp_utc=$TS"
  echo "level=$LEVEL"
  echo "source=$SOURCE_FS"
  echo "output=$OUT_BASE"
  echo "sha256_file=$OUT_SHA256"
  echo "dump_command=sudo dump -${LEVEL}a -u -f - $SOURCE_FS"
  echo "pipeline=zstd -T0 -6 | age -R $RECIPIENTS"
} > "$OUT_MANIFEST"

# Fast verification: ensure restore can read the archive
# (Decrypt -> decompress -> restore -t; decryption needs the identity key)
if ! age -d -i "$AGE_IDENTITY" "$OUT_BASE" \
  | zstd -d -q \
  | sudo restore -t -f - >/dev/null; then
  echo "ERROR: restore verification failed for $OUT_BASE" >&2
  exit 4
fi

echo "OK dump verified: $OUT_BASE" >&2

# Retention: delete old files based on whether they're full or incremental
# This assumes filenames include '-l0-' or '-l1-'
find "$BACKUP_DIR" -type f -name "*-l0-*.dump.zst.age" -mtime "+$RETENTION_DAYS_FULL" -delete
find "$BACKUP_DIR" -type f -name "*-l0-*.dump.zst.age.sha256" -mtime "+$RETENTION_DAYS_FULL" -delete
find "$BACKUP_DIR" -type f -name "*-l0-*.dump.zst.age.manifest" -mtime "+$RETENTION_DAYS_FULL" -delete
find "$BACKUP_DIR" -type f -name "*-l1-*.dump.zst.age" -mtime "+$RETENTION_DAYS_INCR" -delete
find "$BACKUP_DIR" -type f -name "*-l1-*.dump.zst.age.sha256" -mtime "+$RETENTION_DAYS_INCR" -delete
find "$BACKUP_DIR" -type f -name "*-l1-*.dump.zst.age.manifest" -mtime "+$RETENTION_DAYS_INCR" -delete

echo "DONE" >&2

A few practical notes about this script:

  • I verify the archive immediately by doing a read-only listing (restore -t). That catches ‘encrypted garbage’, ‘truncated output’, and ‘restore can’t parse it’ failures.
  • The verification uses a pipeline that mirrors how you will restore: decrypt, decompress, then restore.
  • I deliberately write a manifest. When you’re restoring weeks later, the manifest reduces guesswork.
  • I keep retention simple. You can get fancy, but simple is reliable.

Scheduling: cron works, systemd timers are nicer

Cron is fine. I’ve used it for years. But on modern distributions, systemd timers are easier to make robust because you can:

  • Set timeouts
  • Capture logs cleanly
  • Automatically retry
  • Control concurrency

A conceptual systemd approach:

  • A dump-backup.service that runs the script.
  • A dump-backup.timer that schedules it.

What I care about operationally is not the scheduler, but the behavior:

  • The job must not overlap.
  • The job must log enough to debug.
  • Failures must alert someone.
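A minimal sketch of that pair of units (the paths, unit names, and schedule are assumptions, not a drop-in config):

```ini
# /etc/systemd/system/dump-backup.service
[Unit]
Description=Filesystem dump backup

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/dump-backup.sh 1
TimeoutStartSec=2h

# /etc/systemd/system/dump-backup.timer
[Unit]
Description=Nightly schedule for dump-backup

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Type=oneshot plus the script's own flock gives you the no-overlap behavior, and `journalctl -u dump-backup.service` covers the logging requirement.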

Offsite copies: don’t confuse ‘encrypted’ with ‘safe’

Encryption protects confidentiality. Offsite copies protect availability.

I like an offsite pattern that includes:

  • One copy local for fast restores.
  • One copy offsite for disaster scenarios.
  • One copy that is hard to delete (immutability or write-once retention).

How you implement offsite depends on your environment:

  • rsync to another server
  • object storage with lifecycle policies
  • a backup repository tool that supports retention and pruning

If you do your own file copies, I strongly recommend storing:

  • the dump archive
  • its checksum file
  • its manifest

…and verifying the checksum after transfer.
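That verification is standard `sha256sum -c` on the receiving side; the filenames here are stand-ins:

```shell
offsite=$(mktemp -d); cd "$offsite"
echo 'pretend this is a dump archive' > sample.dump   # stand-in payload
sha256sum sample.dump > sample.dump.sha256            # written at backup time
sha256sum -c sample.dump.sha256                       # run again after the transfer
```

If the check fails after transfer, treat the offsite copy as nonexistent and re-send.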

Common Pitfalls (and How I Avoid Them)

Here are the mistakes I see repeatedly, plus what I do instead.

Pitfall 1: backing up the wrong device

Symptom: you can restore, but the data is not what you expected.

Fix: derive the device from findmnt on a known mountpoint and validate FSTYPE before running dump.

Pitfall 2: writing backups into the filesystem being backed up

Symptom: backups grow the filesystem, the filesystem changes while being backed up, and you get churn or even a runaway loop.

Fix: store backups on a different filesystem. I enforce this in scripts (destination path cannot be under the mountpoint).
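The enforcement is pure string logic, so it's cheap to add to any wrapper (both paths are hypothetical):

```shell
MNT=/mnt/dump-demo        # filesystem being backed up (assumption)
DEST=/var/backups/dumps   # backup destination (assumption)
# Normalize trailing slashes, then test for path containment
case "${DEST%/}/" in
  "${MNT%/}/"*) echo "ABORT: destination is inside the source filesystem" >&2; bad=1 ;;  # a real wrapper would exit here
  *) echo "destination ok"; bad=0 ;;
esac
```

For belt-and-braces, you can also compare the filesystems with `findmnt` so a bind mount doesn't fool the string check.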

Pitfall 3: trusting incrementals without a restore drill

Symptom: you have a chain of dumps, but one is corrupted or missing. Restore fails mid-way.

Fix: schedule periodic full restores into a test image or VM. At minimum, verify each archive with restore -t at creation time.

Pitfall 4: assuming dump equals application-consistent

Symptom: the filesystem restores, but the database won’t start or data is inconsistent.

Fix: use snapshots, quiesce, or application-native backup tooling for databases.

Pitfall 5: not recording metadata

Symptom: months later you don’t know what host, device, or mountpoint an archive corresponds to.

Fix: include the host and mountpoint/device in the filename, plus a manifest file.

Pitfall 6: forgetting the human factor

Symptom: restore steps exist ‘in someone’s head.’ During an incident, that person is unavailable.

Fix: write a runbook and practice it. The runbook should be executable by someone who didn’t write it.

Practical Scenarios: When I Reach for dump (and When I Don’t)

I like dump when:

  • The filesystem is ext2/ext3 (and ext4 only if I’ve explicitly validated on my distro).
  • I need true filesystem-level incrementals.
  • I want a straightforward ‘backup the whole filesystem’ approach.
  • I have a clean snapshot mechanism.

I avoid dump when:

  • The filesystem isn’t a good match (XFS, ZFS, btrfs, etc.).
  • I need cross-platform restores.
  • I need file-level backups across many mounts with exclude patterns and application awareness.
  • The environment is container-heavy and the meaningful data lives in databases/object storage instead of local filesystems.

In other words: dump is great when you treat it as a filesystem specialist, not as a universal backup solution.

Alternatives and How They Compare

There’s no single best tool, so I choose based on what I’m protecting and how I’ll restore.

| Tool style | Examples | Strengths | Weaknesses |
|---|---|---|---|
| Filesystem-level backup | dump/restore | True incrementals by level; preserves inode metadata well | Filesystem support is limited; needs careful device selection; not inherently application-consistent |
| Archive-based | tar | Simple; portable; great for a directory tree | Incrementals are more manual; can miss hardlink semantics unless careful; path-based |
| Sync-based | rsync | Great for mirroring; efficient over network | Not a ‘backup’ unless versioned; deletions propagate; needs snapshot/versioning for rollback |
| Modern backup repos | e.g., deduplicating backup tools | Encryption, dedupe, retention, prune, easy restores | More moving parts; repo corruption is a consideration; learning curve |
| Filesystem-native snapshot/replication | ZFS, btrfs snapshots | Crash-consistent snapshots; replication built-in | Ties you to a filesystem; operational complexity |

When I say ‘modern backup repos’, I’m not dismissing dump. I’m saying: dump can be one component in a modern system, but you should be honest about the trade-offs.

A Practical Backup Drill Checklist (What I Actually Test)

If you want to be confident, test what you’ll do on the worst day.

Here’s a drill I like because it’s repeatable:

1) Create a fresh empty filesystem image (or a disposable VM disk).

2) Restore last full + incrementals into it.

3) Verify:

– A couple of known files exist and match expected contents.

– Ownership and permissions on a few directories look right.

– A binary file hash matches if you have a reference.

4) Time the restore. That becomes your baseline RTO estimate.

5) Write down:

– which dumps were used

– how long it took

– what failed or surprised you
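For step 4, a timestamp pair around each restore command is enough, and appending the result to a log gives you the record step 5 asks for (the `sleep` stands in for the actual restore):

```shell
drill=$(mktemp -d); cd "$drill"
start=$(date +%s)
sleep 1                          # stand-in for: restore -x -f <level 0 archive>
end=$(date +%s)
echo "step=restore_full seconds=$((end - start))" >> drill.log
cat drill.log
```

Over a few quarters, those logged durations become a realistic RTO estimate instead of a guess.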

If you do this quarterly, your backups stop being a comforting story and start being an operational capability.

Final Thoughts

I still like dump because it’s honest: it’s a filesystem tool that does filesystem backups. When you pair it with a safety-first workflow (snapshots, careful device selection, encrypted pipelines, immediate restore verification, and periodic full restore drills), it can be a reliable part of a backup strategy.

But I don’t romanticize it. If your environment is modern and heterogeneous—multiple filesystems, containers everywhere, data in managed services—then a filesystem-only tool may not match what you need to recover.

My bias is simple: pick the tool that makes restores boring. If dump does that for your use case, use it well and test it. If it doesn’t, be willing to move on.
