
LVM Snapshots for Consistent Server Backups on Linux

Backing up a database while it is running creates a problem most people don’t think about until it bites them. A regular file copy of a live PostgreSQL or MariaDB data directory can catch files mid-write, producing a backup that looks fine but won’t actually restore. LVM snapshots solve this by freezing the filesystem state in under a tenth of a second, giving you a consistent copy to back up while the database keeps accepting writes.

Original content from computingforgeeks.com - post 165205

This guide walks through the full workflow: setting up a dedicated LVM volume for database storage, creating snapshots during active writes, and proving that the snapshot data is consistent. We cover both PostgreSQL and MariaDB with their respective quiesce methods, automated backup scripting with systemd timers, and snapshot sizing for production workloads. SELinux stays enforcing throughout.

Tested April 2026 on Rocky Linux 10.1 (kernel 6.12, SELinux enforcing) and Ubuntu 24.04.4 LTS with PostgreSQL 17.9 and MariaDB 10.11.15

Prerequisites

  • OS: Rocky Linux 10 / AlmaLinux 10 or Ubuntu 24.04 LTS
  • A dedicated disk for LVM (at least 20 GB). This can be a second virtual disk, a spare physical drive, or unpartitioned space
  • Root or sudo access
  • Basic LVM familiarity (physical volumes, volume groups, logical volumes)

How LVM Snapshots Work

An LVM snapshot is a copy-on-write (COW) image of a logical volume at a specific point in time. When you create a snapshot, LVM does not copy the entire volume. It records a marker and begins tracking changes. Every time a block on the original volume is about to be overwritten, LVM copies the old block to the snapshot’s COW area first, then allows the write. Reads from the snapshot check the COW area; if the block was changed, the snapshot returns the saved copy. If it was not changed, the read goes to the original volume.

Snapshot creation takes a fraction of a second regardless of volume size because it only writes metadata. In our testing, creating a 3 GB snapshot on a 15 GB volume took 0.097 seconds. Removal time depends on how much COW data accumulated.

The snapshot needs its own storage space for the COW area. Size it based on how much data will change during the backup window. For a 15-minute database backup, 10 to 20 percent of the origin volume is typical. If the snapshot runs out of space, it becomes permanently invalid and returns I/O errors. There is no recovery from an overflowed snapshot; you must remove it and create a new one.
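The COW mechanism is easy to observe directly. This sketch assumes the datavg/datalv layout set up later in this guide and uses a throwaway scratch file under /data; the Data% column climbs as origin blocks are overwritten and their old contents are copied to the snapshot:

```shell
# Create a disposable snapshot, write to the origin, and watch COW usage grow
sudo lvcreate -L 1G -s -n cowtest /dev/datavg/datalv
sudo dd if=/dev/zero of=/data/cowfill bs=1M count=200 conv=fsync
sudo lvs datavg -o lv_name,data_percent   # cowtest Data% rises with each overwritten block
sudo rm -f /data/cowfill
sudo lvremove -f /dev/datavg/cowtest
```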

Thick vs Thin Snapshots

LVM supports two snapshot types. Traditional thick snapshots pre-allocate a fixed COW area and are simpler to reason about. Thin provisioning snapshots share a pool and allocate space on demand, which is more space-efficient but riskier: if the thin pool fills up, every volume in the pool goes read-only, not just one snapshot.

  Feature                Thick (Traditional)                 Thin Provisioning
  COW space              Pre-allocated at creation           Allocated on demand from pool
  Overflow impact        Only the snapshot is lost           All volumes in the pool freeze
  Write overhead         3x to 78x depending on chunk size   Lower, shared metadata
  Multiple snapshots     Each needs its own COW space        Efficient, shared pool
  Snapshot of snapshot   Not supported                       Supported
  Best for               Short-lived backup windows          Frequent snapshots, dev/test

For production database backups where you create a snapshot, back up, and remove it within minutes, thick snapshots are the safer choice. This guide uses thick snapshots throughout.
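For contrast, a minimal thin-provisioned setup looks like the sketch below. The pool and volume names are hypothetical and not used elsewhere in this guide; note that thin snapshots need no size argument and are skipped at activation by default:

```shell
# Thin pool and thin volume; snapshots draw space from the shared pool
sudo lvcreate -L 10G --thinpool thinpool datavg
sudo lvcreate -V 15G --thin -n thinlv datavg/thinpool
# Thin snapshot: no -L flag, space comes from the pool on demand
sudo lvcreate -s -n thinsnap datavg/thinlv
# Thin snapshots have the activation-skip flag set; -K activates anyway
sudo lvchange -ay -K datavg/thinsnap
```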

Set Up the LVM Data Volume

Production databases should live on a dedicated LVM volume, separate from the OS. This gives you control over snapshot sizing and prevents backup operations from affecting root filesystem space. We will use a second disk (/dev/vdb in our VM; adjust the device name to match your hardware).

Create the physical volume, volume group, and logical volume:

sudo pvcreate /dev/vdb
sudo vgcreate datavg /dev/vdb
sudo lvcreate -L 15G -n datalv datavg

This creates a 15 GB logical volume on a 20 GB disk, leaving roughly 5 GB free in the volume group for snapshots. Verify with vgs:

sudo vgs datavg

The VFree column confirms how much space remains for snapshots:

  VG     #PV #LV #SN Attr   VSize   VFree
  datavg   1   1   0 wz--n- <20.00g <5.00g

Format the volume with XFS (Rocky/RHEL) or ext4 (Ubuntu/Debian), create the mount point, and mount it:

On Rocky Linux 10 / AlmaLinux 10 / RHEL 10:

sudo mkfs.xfs /dev/datavg/datalv
sudo mkdir -p /data
sudo mount /dev/datavg/datalv /data

On Ubuntu 24.04 / Debian 13:

sudo mkfs.ext4 /dev/datavg/datalv
sudo mkdir -p /data
sudo mount /dev/datavg/datalv /data

Add the mount to /etc/fstab so it persists across reboots:

echo '/dev/datavg/datalv /data xfs defaults 0 0' | sudo tee -a /etc/fstab

Replace xfs with ext4 on Ubuntu/Debian systems.

SELinux Contexts (Rocky/RHEL Only)

SELinux enforcing mode requires the correct file context on custom data directories. Without it, PostgreSQL and MariaDB will fail to start or access their data files. Set the database contexts before installing:

sudo semanage fcontext -a -t postgresql_db_t "/data/pgsql(/.*)?"
sudo semanage fcontext -a -t mysqld_db_t "/data/mysql(/.*)?"

After creating directories and populating data, apply the contexts:

sudo restorecon -Rv /data/pgsql
sudo restorecon -Rv /data/mysql

Verify no SELinux denials after starting the services:

sudo ausearch -m avc -ts recent

A clean result shows <no matches>. Ubuntu and Debian use AppArmor, which does not require any special configuration for LVM operations or custom data directories.

Install PostgreSQL on the LVM Volume

Install PostgreSQL 17 from the official repository. On Rocky Linux 10:

sudo dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-10-x86_64/pgdg-redhat-repo-latest.noarch.rpm
sudo dnf install -y postgresql17-server postgresql17

On Ubuntu 24.04:

sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
curl -fsSL https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/postgresql.gpg
sudo apt update
sudo apt install -y postgresql-17

Initialize the data directory on the LVM volume instead of the default location:

sudo mkdir -p /data/pgsql
sudo chown postgres:postgres /data/pgsql
sudo -u postgres /usr/pgsql-17/bin/initdb -D /data/pgsql

On Ubuntu, the binary path is /usr/lib/postgresql/17/bin/initdb instead.

On Rocky Linux, apply the SELinux contexts (set up earlier) and configure the systemd override:

sudo restorecon -Rv /data/pgsql
sudo mkdir -p /etc/systemd/system/postgresql-17.service.d
echo -e "[Service]\nEnvironment=PGDATA=/data/pgsql" | sudo tee /etc/systemd/system/postgresql-17.service.d/override.conf
sudo systemctl daemon-reload

Start PostgreSQL and verify it is running from the LVM volume:

sudo systemctl enable --now postgresql-17
sudo systemctl status postgresql-17

The output confirms the custom data directory:

● postgresql-17.service - PostgreSQL 17 database server
     Active: active (running)
    Drop-In: /etc/systemd/system/postgresql-17.service.d
             └─override.conf
   Main PID: 2306 (postgres)
             ├─2306 /usr/pgsql-17/bin/postgres -D /data/pgsql

Create a test database with sample data for the snapshot tests:

sudo -u postgres psql -c "CREATE DATABASE backuptest;"
sudo -u postgres psql backuptest -c "
CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    customer_name VARCHAR(100),
    amount NUMERIC(10,2),
    created_at TIMESTAMP DEFAULT now()
);
INSERT INTO orders (customer_name, amount)
SELECT 'customer_' || i, (random() * 1000)::numeric(10,2)
FROM generate_series(1, 50000) AS i;"

Verify the baseline:

sudo -u postgres psql backuptest -c "SELECT count(*) FROM orders;"

You should see 50,000 rows:

 count
-------
 50000
(1 row)

Snapshot a Live PostgreSQL Database

The basic snapshot backup workflow has four steps: flush dirty buffers with a checkpoint, create the snapshot, back up from the mounted snapshot, then remove it. PostgreSQL handles crash recovery automatically when you restore from a snapshot, so the data is safe even without calling pg_backup_start(). The WAL replay brings the database to a consistent state, just like recovering from a power failure.

Run a checkpoint to flush dirty pages to disk:

sudo -u postgres psql -c "CHECKPOINT;"

Create the LVM snapshot immediately after:

sudo lvcreate -L 3G -s -n datasnap /dev/datavg/datalv

That completes in under a second:

  Logical volume "datasnap" created.

Check the snapshot status. Data% shows how much of the COW area has been used:

sudo lvs datavg

With no writes since the snapshot, usage sits at zero:

  LV       VG     Attr       LSize  Pool Origin Data%  Meta%
  datalv   datavg owi-aos--- 15.00g
  datasnap datavg swi-a-s---  3.00g      datalv 0.00

Mount the snapshot read-only. XFS requires the nouuid option because the snapshot has the same UUID as the origin:

sudo mkdir -p /mnt/snap
sudo mount -o ro,nouuid /dev/datavg/datasnap /mnt/snap

On ext4 (Ubuntu/Debian), drop the nouuid flag since ext4 does not enforce UUID uniqueness on mount:

sudo mount -o ro /dev/datavg/datasnap /mnt/snap

Back up the snapshot contents with tar:

sudo mkdir -p /backup
sudo tar czf /backup/pgsql-$(date +%F).tar.gz -C /mnt/snap .

In our test, tar compressed 102 MB of PostgreSQL data into a 9.1 MB archive in 1.2 seconds.
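Before deleting the snapshot, a quick sanity check that the archive is complete and readable costs very little:

```shell
# List the first few entries, then verify the whole archive decompresses cleanly
sudo tar tzf /backup/pgsql-$(date +%F).tar.gz | head
sudo tar tzf /backup/pgsql-$(date +%F).tar.gz > /dev/null && echo "archive OK"
```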

Unmount and remove the snapshot as soon as the backup completes. Every write to the origin triggers COW overhead while a snapshot exists, so leaving snapshots around degrades performance:

sudo umount /mnt/snap
sudo lvremove -f /dev/datavg/datasnap

Prove Data Consistency During Active Writes

The real test of a snapshot backup is whether it captures consistent data while the database is under write load. We ran a continuous insert loop against PostgreSQL, created a snapshot mid-write, then started a temporary PostgreSQL instance on the snapshot data to verify integrity.

Start a background process that inserts rows one at a time (each insert is its own transaction):

for i in $(seq 1 30000); do
  sudo -u postgres psql -d backuptest -q -c \
    "INSERT INTO orders (customer_name, amount) VALUES ('txn_$i', $((RANDOM % 1000)));"
done &

While writes are running, flush and snapshot:

sudo -u postgres psql -c "CHECKPOINT;"
sudo lvcreate -L 3G -s -n datasnap /dev/datavg/datalv

Mount the snapshot and copy the data to a temporary directory (a read-only mount cannot run a PostgreSQL instance directly):

sudo mkdir -p /mnt/snap /tmp/snap-verify
sudo mount -o ro,nouuid /dev/datavg/datasnap /mnt/snap
sudo cp -a /mnt/snap/pgsql /tmp/snap-verify/
sudo chown -R postgres:postgres /tmp/snap-verify/pgsql
sudo rm -f /tmp/snap-verify/pgsql/postmaster.pid

The postmaster.pid file must be removed because it references the running instance. Start a temporary PostgreSQL on port 5433:

sudo -u postgres /usr/pgsql-17/bin/pg_ctl \
  -D /tmp/snap-verify/pgsql \
  -o "-p 5433 -c logging_collector=off -c unix_socket_directories=/tmp" \
  -w start

PostgreSQL detects the unclean shutdown and automatically replays the WAL:

LOG:  database system was not properly shut down; automatic recovery in progress
LOG:  redo starts at 0/66404E0
LOG:  redo done at 0/6643078
LOG:  database system is ready to accept connections

Query the snapshot database to check the frozen row count:

sudo -u postgres psql -h /tmp -p 5433 -d backuptest -c "SELECT count(*) FROM orders;"

Compare with the live database (which has been accumulating writes since the snapshot):

sudo -u postgres psql -d backuptest -c "SELECT count(*) FROM orders;"

Our results:

  Source                               Row count
  Snapshot (frozen point-in-time)      300,380
  Live DB (3 seconds after snapshot)   300,612
  Live DB (after writer finished)      300,678

The snapshot captured exactly 300,380 rows, all with unique IDs and valid timestamps. The live database continued accepting writes without interruption. An integrity check on the snapshot confirmed zero null values and no duplicate keys. This is the core value of LVM snapshots: a consistent, point-in-time copy taken in 0.097 seconds while production traffic continues.
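The integrity check mentioned above can be reproduced with queries along these lines against the temporary instance (a sketch; the column names match the orders table created earlier):

```shell
# The null count should come back 0, and the duplicate query should return no rows
sudo -u postgres psql -h /tmp -p 5433 -d backuptest -c \
  "SELECT count(*) FROM orders WHERE customer_name IS NULL OR amount IS NULL;"
sudo -u postgres psql -h /tmp -p 5433 -d backuptest -c \
  "SELECT id FROM orders GROUP BY id HAVING count(*) > 1;"
```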

Clean up the temporary instance and snapshot:

sudo -u postgres /usr/pgsql-17/bin/pg_ctl -D /tmp/snap-verify/pgsql stop
sudo rm -rf /tmp/snap-verify
sudo umount /mnt/snap
sudo lvremove -f /dev/datavg/datasnap

MariaDB Snapshot Backup with BACKUP STAGE

MariaDB offers a more granular lock mechanism than the traditional FLUSH TABLES WITH READ LOCK. The BACKUP STAGE commands (available since MariaDB 10.4.1) minimize the impact on production traffic by only blocking commits at the moment the snapshot is taken, not during the entire preparation phase.

Install MariaDB and configure its data directory on the LVM volume. On Rocky Linux 10:

sudo dnf install -y mariadb-server

On Ubuntu 24.04:

sudo apt install -y mariadb-server

Configure the custom data directory. On Rocky, apply the SELinux context first (shown in the earlier section), then create the config override. On Ubuntu, place the override file in /etc/mysql/mariadb.conf.d/ instead of /etc/my.cnf.d/:

sudo mkdir -p /data/mysql
sudo chown mysql:mysql /data/mysql
echo -e "[mysqld]\ndatadir=/data/mysql" | sudo tee /etc/my.cnf.d/custom-datadir.cnf
sudo mysql_install_db --user=mysql --datadir=/data/mysql
sudo restorecon -Rv /data/mysql
sudo systemctl enable --now mariadb

Create a test database:

sudo mariadb -e "
CREATE DATABASE backuptest;
USE backuptest;
CREATE TABLE transactions (
    id INT AUTO_INCREMENT PRIMARY KEY,
    account_name VARCHAR(100),
    amount DECIMAL(10,2),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
) ENGINE=InnoDB;
INSERT INTO transactions (account_name, amount)
SELECT CONCAT('acct_', seq), ROUND(RAND() * 1000, 2)
FROM seq_1_to_50000;"

The BACKUP STAGE workflow runs inside a single MariaDB session. The system command executes the snapshot creation from within the same connection that holds the lock:

sudo mariadb -e "
BACKUP STAGE START;
BACKUP STAGE FLUSH;
BACKUP STAGE BLOCK_DDL;
BACKUP STAGE BLOCK_COMMIT;
system sudo lvcreate -L 3G -s -n datasnap /dev/datavg/datalv;
BACKUP STAGE END;"

Each stage does progressively less work: START signals backup tools to prepare, FLUSH flushes non-transactional tables, BLOCK_DDL prevents schema changes, and BLOCK_COMMIT pauses transaction commits just long enough to take the snapshot. The entire sequence completes in under a second for most workloads.

Mount, backup, and clean up using the same steps as the PostgreSQL section:

sudo mount -o ro,nouuid /dev/datavg/datasnap /mnt/snap
sudo tar czf /backup/mariadb-$(date +%F).tar.gz -C /mnt/snap mysql/
sudo umount /mnt/snap
sudo lvremove -f /dev/datavg/datasnap

Legacy Alternative: FLUSH TABLES WITH READ LOCK

Older MariaDB or MySQL versions without BACKUP STAGE support use the heavier FLUSH TABLES WITH READ LOCK. This blocks all writes (including InnoDB) until you release it. Keep the lock duration to an absolute minimum:

sudo mariadb -e "
FLUSH TABLES WITH READ LOCK;
system sudo lvcreate -L 3G -s -n datasnap /dev/datavg/datalv;
UNLOCK TABLES;"

Prefer BACKUP STAGE when available. It blocks only transaction commits at the snapshot moment, not all writes throughout the preparation phase.

Automated Backup Script

Wrapping the snapshot workflow in a script lets you schedule it with a systemd timer and get consistent logging. Save this as /usr/local/bin/lvm-snapshot-backup.sh:

#!/bin/bash
# LVM snapshot backup for PostgreSQL
# Adjust VG, LV, SNAP_SIZE, and BACKUP_DIR for your environment

set -euo pipefail

VG="datavg"
LV="datalv"
SNAP_NAME="backup-snap-$(date +%Y%m%d-%H%M%S)"
SNAP_SIZE="3G"
SNAP_DEV="/dev/$VG/$SNAP_NAME"
MOUNT="/mnt/snap"
BACKUP_DIR="/backup"
LOG="/var/log/lvm-backup.log"

log() { echo "$(date '+%F %T') $1" | tee -a "$LOG"; }

mkdir -p "$BACKUP_DIR" "$MOUNT"

log "Starting backup"

# Step 1: Checkpoint
log "Running PostgreSQL CHECKPOINT..."
SECONDS=0
sudo -u postgres psql -c "CHECKPOINT;" >> "$LOG" 2>&1
log "Checkpoint done (${SECONDS}s)"

# Step 2: Create snapshot
log "Creating snapshot $SNAP_NAME ($SNAP_SIZE)..."
SECONDS=0
lvcreate -L "$SNAP_SIZE" -s -n "$SNAP_NAME" "/dev/$VG/$LV" >> "$LOG" 2>&1
log "Snapshot created (${SECONDS}s)"

# Step 3: Mount
mount -o ro,nouuid "$SNAP_DEV" "$MOUNT"
log "Snapshot mounted at $MOUNT"

# Step 4: Check snapshot usage
USAGE=$(lvs "$SNAP_DEV" -o data_percent --noheadings | tr -d ' ')
log "Snapshot COW usage: ${USAGE}%"

# Step 5: Backup
ARCHIVE="$BACKUP_DIR/pgsql-$(date +%F_%H%M%S).tar.gz"
log "Creating archive: $ARCHIVE"
SECONDS=0
tar czf "$ARCHIVE" -C "$MOUNT" .
log "Archive complete (${SECONDS}s, $(du -h "$ARCHIVE" | cut -f1))"

# Step 6: Cleanup
umount "$MOUNT"
lvremove -f "$SNAP_DEV" >> "$LOG" 2>&1
log "Snapshot removed"

# Step 7: Retention (keep last 7 days)
find "$BACKUP_DIR" -name "pgsql-*.tar.gz" -mtime +7 -delete
log "Retention applied (7 days)"

log "Backup complete: $ARCHIVE"

Make it executable:

sudo chmod +x /usr/local/bin/lvm-snapshot-backup.sh

Create a systemd service and timer. The service unit:

sudo vi /etc/systemd/system/lvm-snapshot-backup.service

Add the following:

[Unit]
Description=LVM Snapshot Backup for PostgreSQL
After=postgresql-17.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/lvm-snapshot-backup.sh
User=root

The timer unit runs the backup daily at 2 AM:

sudo vi /etc/systemd/system/lvm-snapshot-backup.timer

Add the following:

[Unit]
Description=Daily LVM Snapshot Backup

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target

Enable and start the timer:

sudo systemctl daemon-reload
sudo systemctl enable --now lvm-snapshot-backup.timer
sudo systemctl list-timers lvm-snapshot-backup.timer

Test it manually first:

sudo systemctl start lvm-snapshot-backup.service
sudo journalctl -u lvm-snapshot-backup.service --no-pager

Snapshot Sizing and Auto-Extend

The snapshot COW area must hold every original block that gets overwritten during the backup window. Undersizing means losing the snapshot mid-backup. Oversizing wastes disk space. The right size depends on your write rate and how long the backup takes.

For a rough estimate: if your database writes 500 MB/hour and the backup takes 15 minutes, the snapshot needs at least 125 MB of COW space. Double that for safety. For most database backups that complete in under 30 minutes, 15 to 20 percent of the origin volume is a safe starting point.
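That arithmetic is easy to script. A minimal sketch (the numbers are the example's, not measurements from your system):

```shell
#!/bin/bash
# Rough COW sizing: change rate (MB/hour) x backup window (minutes), doubled for safety
WRITE_MB_PER_HOUR=500
BACKUP_MINUTES=15
NEED=$(( WRITE_MB_PER_HOUR * BACKUP_MINUTES / 60 ))
RECOMMENDED=$(( NEED * 2 ))
echo "Minimum COW space: ${NEED} MB"                # 125 MB for these inputs
echo "Recommended (2x safety): ${RECOMMENDED} MB"   # 250 MB
```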

Monitor snapshot usage during the backup with:

sudo lvs -o lv_name,lv_size,data_percent,origin datavg

Auto-Extend as a Safety Net

LVM can automatically grow a snapshot before it overflows. Edit /etc/lvm/lvm.conf and uncomment these two settings in the activation section:

sudo vi /etc/lvm/lvm.conf

Find and set:

snapshot_autoextend_threshold = 70
snapshot_autoextend_percent = 20

This tells LVM to grow the snapshot by 20% when it reaches 70% capacity. The minimum threshold is 50; the default of 100 disables auto-extend entirely. The lvm2-monitor service (or monitoring service on some distributions) must be running for auto-extend to trigger:

sudo systemctl enable --now lvm2-monitor

Auto-extend only works if the volume group has free space. If the VG is full, the snapshot still overflows. Monitor VG free space alongside snapshot usage in production.
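A simple free-space check can run alongside the backup job. This sketch (threshold in GiB, adjust for your environment) uses awk because vg_free is often fractional:

```shell
# Warn when the VG lacks headroom for snapshot auto-extend
FREE=$(sudo vgs datavg -o vg_free --noheadings --units g --nosuffix | tr -d ' ')
THRESHOLD=2
if ! awk -v f="$FREE" -v t="$THRESHOLD" 'BEGIN { exit (f >= t) ? 0 : 1 }'; then
  echo "WARNING: datavg has only ${FREE}G free for snapshot growth"
fi
```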

What Happens When a Snapshot Overflows

An overflowed snapshot is permanently invalid. The kernel disables it and logs:

device-mapper: snapshot: Invalidating snapshot: Unable to allocate exception.

Reads from the snapshot return I/O errors. There is no way to repair it. Remove the snapshot with lvremove, increase the size or enable auto-extend, and create a new one. This is the single biggest pitfall with LVM snapshots, and the reason you should never leave a snapshot running longer than necessary.
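A backup script can detect this condition before trusting the archive. This sketch reads the snapshot's attribute string; the fifth character is the state field, and LVM sets it to a capital I when the snapshot is invalidated:

```shell
# Abort if the snapshot overflowed at any point during the backup
ATTR=$(sudo lvs --noheadings -o lv_attr /dev/datavg/datasnap | tr -d ' ')
if [ "${ATTR:4:1}" = "I" ]; then
  echo "Snapshot overflowed during backup - archive is untrustworthy" >&2
  exit 1
fi
```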

Performance Tuning: Chunk Size Matters

The snapshot chunk size controls how large each COW copy unit is. The default on most distributions is 4 KB, which creates significant write amplification: every 4 KB write triggers a 4 KB read-copy-write cycle. Benchmarks show that larger chunk sizes dramatically reduce write overhead.

  Chunk size       Write overhead with active snapshot   Notes
  4 KB (default)   Up to 78x slower                      Worst case for random writes
  64 KB            ~3x slower                            Moderate improvement
  256 KB           ~1.1x (near native)                   Good balance
  512 KB           ~1x (no measurable penalty)           Best throughput, uses more COW space

Create snapshots with a larger chunk size using the -c flag:

sudo lvcreate -c 512K -L 3G -s -n datasnap /dev/datavg/datalv

Verify the chunk size on an existing snapshot:

sudo lvs -o lv_name,lv_size,chunk_size,origin datavg

The output confirms the chunk size:

  LV       LSize Chunk   Origin
  datasnap 3.00g 512.00k datalv

The tradeoff is that larger chunks use more COW space per write (a 1-byte change still copies an entire 512 KB chunk). For short-lived backup snapshots, the extra space is negligible and the performance improvement is significant. Update the chunk size in your backup script if database write performance matters during the backup window.

Rocky Linux 10 vs Ubuntu 24.04: Key Differences

  Item                          Rocky Linux 10                            Ubuntu 24.04
  Default VG name (installer)   rl                                        ubuntu-vg
  LVM2 version                  2.03.32                                   2.03.22
  Default filesystem            XFS                                       ext4
  Snapshot mount flag           -o nouuid required (XFS)                  No special flag needed (ext4)
  Security framework            SELinux (enforcing), requires semanage    AppArmor, no LVM-specific
                                fcontext for custom paths                 action needed
  PostgreSQL package            postgresql17-server                       postgresql-17
  PostgreSQL binary path        /usr/pgsql-17/bin/                        /usr/lib/postgresql/17/bin/
  MariaDB package               mariadb-server (10.11.x from OS repo)     mariadb-server (10.11.x from OS repo)
  Firewall                      firewalld                                 ufw

The LVM snapshot workflow itself is identical across distributions. The differences are in package names, file paths, the filesystem mount flags, and the mandatory access control system. Ubuntu 24.04 allocates only about 100 GB for its root LV by default even on larger disks, which conveniently leaves free space in the VG for snapshots without needing a second disk.

SELinux After Restoring from a Snapshot Backup

When you mount a snapshot, the files retain their original SELinux contexts. This is fine for read-only backup operations. However, after restoring data from a snapshot backup to a new location, you must re-apply the correct contexts:

sudo restorecon -Rv /data/pgsql
sudo restorecon -Rv /data/mysql

Skipping this step causes SELinux denials when the database service tries to access the restored files. Always run restorecon as part of your restore procedure on RHEL-based systems.

PostgreSQL pg_backup_start() vs CHECKPOINT

PostgreSQL 15 renamed the backup functions from pg_start_backup() / pg_stop_backup() to pg_backup_start() / pg_backup_stop(), and removed the exclusive backup mode entirely. The non-exclusive mode requires the calling session to stay connected from start to stop, which complicates scripting.

For LVM snapshot backups, pg_backup_start() is not strictly required. A CHECKPOINT followed by an immediate snapshot produces crash-consistent data that PostgreSQL recovers from by replaying WAL, just like after a power failure. The PostgreSQL documentation confirms this: “The database server can be stopped, if desired, but need not be… the backup will appear as if the server crashed.” Use pg_backup_start() / pg_backup_stop() when you need to track which WAL segments to archive alongside the backup for point-in-time recovery.
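When you do need WAL tracking, the non-exclusive API can still wrap an LVM snapshot from a single psql session, using the client's \! shell escape so the same connection that opened the backup also triggers the snapshot. A hedged sketch; it assumes the postgres user may run lvcreate via sudo (otherwise run the lvcreate from a second root terminal while the session stays open):

```shell
sudo -u postgres psql <<'SQL'
-- Open a non-exclusive base backup with a fast checkpoint, snapshot, then close it
SELECT pg_backup_start('lvm-snap', fast => true);
\! sudo lvcreate -L 3G -s -n datasnap /dev/datavg/datalv
SELECT * FROM pg_backup_stop();
SQL
```

The row returned by pg_backup_stop() includes the backup label and stop LSN, which tell you which WAL segments must be archived alongside the snapshot archive.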
