Linux Storage Management

Part 0 – Environment Setup and Package Management

This part prepares your system so every subsequent part works consistently. The tools covered here are the substrate. Nothing in Parts 1–5 functions without these foundations in place.

1. Core Concepts of Package Management

Repositories and Mirrors

Every Linux distribution pulls packages from one or more repositories – remote servers hosting compiled binaries, libraries, and metadata. A mirror is a full copy of a repository hosted in another location; closer mirrors mean faster downloads.

The package manager reads a local index (cached metadata) and resolves dependencies before downloading anything. Always update the index before installing anything on a fresh system.
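
The index refresh is a one-liner on each of the distros covered in this manual:

```shell
apk update            # Alpine -- refresh the apk index
sudo pacman -Sy       # Arch -- refresh only (prefer pacman -Syu to avoid partial upgrades)
sudo apt update       # Ubuntu / Debian
sudo dnf makecache    # Rocky / RHEL
```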

Key repository concepts:

  • Main / Core – officially supported, stable packages.
  • Community / Universe / EPEL – community-maintained or extended packages. Often needed for tools like ZFS on some distros.
  • Extra / Multilib – larger or supplementary packages.
  • AUR (Arch only) – user-submitted source recipes, not binary packages.

Package Naming Differences Across Distros

The same tool often has slightly different package names across distributions. The table below covers the tools used throughout this manual:

Tool / Library     Alpine          Arch            Ubuntu / Debian   Rocky / RHEL
LVM2               lvm2            lvm2            lvm2              lvm2
mdadm              mdadm           mdadm           mdadm             mdadm
cryptsetup         cryptsetup      cryptsetup      cryptsetup        cryptsetup
btrfs tools        btrfs-progs     btrfs-progs     btrfs-progs       btrfs-progs
XFS tools          xfsprogs        xfsprogs        xfsprogs          xfsprogs
ext2/3/4 tools     e2fsprogs       e2fsprogs       e2fsprogs         e2fsprogs
ZFS                zfs             zfs-dkms        zfsutils-linux    via ZFS on Linux
SMART tools        smartmontools   smartmontools   smartmontools     smartmontools
NVMe CLI           nvme-cli        nvme-cli        nvme-cli          nvme-cli
parted             parted          parted          parted            parted
NFS utilities      nfs-utils       nfs-utils       nfs-common        nfs-utils
SSHFS              sshfs           sshfs           sshfs             fuse-sshfs
NTFS driver        ntfs-3g         ntfs-3g         ntfs-3g           ntfs-3g (EPEL)
exFAT tools        exfatprogs      exfatprogs      exfatprogs        exfatprogs
FAT32 tools        dosfstools      dosfstools      dosfstools        dosfstools
iSCSI initiator    open-iscsi      open-iscsi      open-iscsi        iscsi-initiator-utils
pv (progress)      pv              pv              pv                pv
ncdu               ncdu            ncdu            ncdu              ncdu
ripgrep            ripgrep         ripgrep         ripgrep           ripgrep
tree               tree            tree            tree              tree
rename (Perl)      perl-rename     perl-rename     rename            prename
colordiff          colordiff       colordiff       colordiff         colordiff

Base vs Optional Utilities

  • Base (always install): util-linux, e2fsprogs, xfsprogs, btrfs-progs, lvm2, mdadm, smartmontools, parted
  • Situational (install when needed): cryptsetup, nvme-cli, nfs-utils, sshfs, ntfs-3g, zfs
  • Lab / dev only: pv, ncdu, colordiff, tree, ripgrep

On Alpine (typical for containers and constrained systems), be surgical. Every package has a footprint cost.

Verifying Installed Tools

Before assuming a tool is available, verify it:

which lvm                          # returns path if found in $PATH
command -v mdadm                   # POSIX-safe check (works in scripts)
type cryptsetup                    # shows type: file, alias, builtin, function
lvm --version
mdadm --version
zpool --version

Check if a package is installed:

# Alpine
apk info lvm2

# Arch
pacman -Qi lvm2

# Ubuntu / Debian
dpkg -l lvm2
apt list --installed 2>/dev/null | grep lvm2

# Rocky / RHEL
rpm -q lvm2
dnf list installed lvm2

2. Alpine Linux Setup

Alpine uses apk – the Alpine Package Keeper. It is fast, minimal, and well-suited to containers and constrained environments.

apk update
apk upgrade

apk add \
  util-linux \
  e2fsprogs \
  xfsprogs \
  btrfs-progs \
  lvm2 \
  mdadm \
  cryptsetup \
  smartmontools \
  parted \
  nvme-cli \
  blkid \
  sgdisk

# ZFS
apk add zfs zfs-libs

# iSCSI
apk add open-iscsi

# Network filesystems
apk add nfs-utils sshfs

# Filesystem extras
apk add ntfs-3g exfatprogs dosfstools

# Utilities
apk add pv ncdu tree ripgrep colordiff perl-rename

# Search
apk search lvm
apk search -v lvm          # verbose (shows description)

# Remove
apk del package-name
apk del --purge package-name    # remove config files too

Alpine uses OpenRC, not systemd. Service management commands differ (see Section 6). Many tools that are preinstalled on other distros – including bash, shadow, util-linux sub-utilities – are absent or minimal on Alpine. The shell defaults to ash (busybox). Install bash explicitly if your scripts require it:

apk add bash

LVM on Alpine requires the lvm OpenRC service to be added to the boot runlevel (see Section 6) – there are no systemd units.

3. Arch Linux Setup

Arch uses pacman. Always do a full system upgrade before installing new packages. Installing onto a partial upgrade causes dependency problems.

sudo pacman -Syu

sudo pacman -S \
  util-linux \
  e2fsprogs \
  xfsprogs \
  btrfs-progs \
  lvm2 \
  mdadm \
  cryptsetup \
  smartmontools \
  parted \
  nvme-cli

# ZFS -- requires DKMS and kernel headers
sudo pacman -S linux-headers
yay -S zfs-dkms zfs-utils           # from AUR (requires yay or another AUR helper)

# Or use archzfs repo:
# Add [archzfs] to /etc/pacman.conf, then:
sudo pacman -S zfs-linux

# iSCSI
sudo pacman -S open-iscsi

# Network filesystems
sudo pacman -S nfs-utils sshfs

# Filesystem extras
sudo pacman -S ntfs-3g exfatprogs dosfstools f2fs-tools

# Utilities
sudo pacman -S pv ncdu tree ripgrep colordiff perl-rename

# Search
pacman -Ss lvm
pacman -Qi lvm2                    # installed package info
pacman -Ql lvm2                    # list files in package

# Remove
sudo pacman -R package-name
sudo pacman -Rs package-name       # remove with unused dependencies
sudo pacman -Rns package-name      # remove, deps, and config files

ZFS on Arch is not in the official repositories due to licence incompatibility with the Linux kernel. Use either the AUR (zfs-dkms) or the unofficial archzfs repository. DKMS recompiles the ZFS kernel module whenever the kernel is updated.

After a kernel upgrade on Arch, always verify:

sudo dkms status

If the zfs module is not listed as built for the new kernel, rebooting will leave ZFS pools unmountable.
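
If the module is missing for the running kernel, a rebuild sketch (dkms autoinstall builds every registered module against the current kernel):

```shell
sudo dkms status          # shows each module and the kernels it is built for
sudo dkms autoinstall     # rebuild any module missing for the running kernel
sudo modprobe zfs         # confirm the module now loads
```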

4. Rocky Linux (RHEL-Family) Setup

Rocky Linux uses dnf. The RHEL ecosystem prioritises LVM (default installer stack), XFS (default filesystem), and enterprise features like Stratis and VDO.

sudo dnf update -y

sudo dnf install -y \
  util-linux \
  e2fsprogs \
  xfsprogs \
  lvm2 \
  mdadm \
  cryptsetup \
  smartmontools \
  parted \
  nvme-cli

# Btrfs availability depends on version. On Rocky 9 it is present but limited.
# On RHEL 9, it is deprecated.
sudo dnf install -y btrfs-progs    # if available

# Enable EPEL (Extra Packages for Enterprise Linux)
sudo dnf install -y epel-release
sudo dnf update -y

# After EPEL
sudo dnf install -y ntfs-3g pv ncdu ripgrep colordiff

# ZFS
sudo dnf install -y epel-release
sudo dnf install -y https://zfsonlinux.org/epel/zfs-release-2-3$(rpm --eval "%{dist}").noarch.rpm
sudo dnf install -y kernel-devel zfs
sudo modprobe zfs

# Persist ZFS module load
echo "zfs" | sudo tee /etc/modules-load.d/zfs.conf

# Stratis -- layered storage management
sudo dnf install -y stratisd stratis-cli

# VDO -- Virtual Data Optimiser (dedup + compression)
# Note: on Rocky 9 and RHEL 9, VDO is integrated into LVM as lvm-vdo.
# The standalone vdo package is for older releases.
sudo dnf install -y lvm2     # lvm-vdo is included in lvm2 on RHEL 9+

# Optional tools
sudo dnf install -y \
  nfs-utils \
  fuse-sshfs \
  iscsi-initiator-utils \
  dosfstools \
  exfatprogs \
  tree \
  colordiff

# Remove
sudo dnf remove package-name
sudo dnf autoremove

XFS is the default filesystem on Rocky/RHEL. All standard LVM + XFS tooling is battle-tested here.

5. Ubuntu / Debian Setup

Ubuntu uses apt. ZFS is better-supported here than on most distros – the kernel module ships prebuilt with Canonical's kernels, and the userspace tools are available directly from universe.

sudo apt update && sudo apt upgrade -y

sudo apt install -y \
  util-linux \
  e2fsprogs \
  xfsprogs \
  btrfs-progs \
  lvm2 \
  mdadm \
  cryptsetup \
  smartmontools \
  parted \
  nvme-cli \
  gdisk

# ZFS -- fully supported on Ubuntu, no external repo required
sudo apt install -y zfsutils-linux

# Network filesystems
sudo apt install -y nfs-common sshfs cifs-utils

# iSCSI
sudo apt install -y open-iscsi

# Filesystem extras
sudo apt install -y ntfs-3g exfatprogs dosfstools f2fs-tools

# Utilities
sudo apt install -y pv ncdu tree ripgrep colordiff rename

# Remove
sudo apt remove package-name
sudo apt purge package-name           # remove + config files
sudo apt autoremove

Ubuntu’s mdadm install prompts for email configuration. You can safely skip this or configure it later in /etc/mdadm/mdadm.conf. After any mdadm change, update the initramfs:

sudo update-initramfs -u
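
A typical sequence after creating or changing an array – capture the array definitions, then rebuild the initramfs (the ARRAY lines written by --scan are what the boot-time assembler reads):

```shell
# Append current array definitions to mdadm.conf, then refresh the initramfs
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
```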

6. Service Management Differences

Storage daemons – LVM monitor, mdadm monitor, iSCSI – must be enabled to start at boot or to run in the background.

systemd (Arch, Rocky, Ubuntu)

# LVM
sudo systemctl enable --now lvm2-monitor

# mdadm RAID monitor
sudo systemctl enable --now mdmonitor

# iSCSI initiator
sudo systemctl enable --now iscsid

# Stratis (Rocky)
sudo systemctl enable --now stratisd

# Check status
sudo systemctl status lvm2-monitor
sudo systemctl status mdmonitor

Key systemd commands:

systemctl start <service>
systemctl stop <service>
systemctl enable <service>           # enable at boot
systemctl disable <service>
systemctl enable --now <service>     # enable + start in one command
systemctl status <service>
systemctl is-active <service>
journalctl -u <service>              # logs for a specific service
journalctl -u <service> -f           # follow logs live

OpenRC (Alpine)

# LVM
sudo rc-update add lvm boot
sudo rc-service lvm start

# mdadm
sudo rc-update add mdadm boot
sudo rc-service mdadm start

# iSCSI
sudo rc-update add iscsid boot
sudo rc-service iscsid start

Key OpenRC commands:

rc-service <service> start
rc-service <service> stop
rc-service <service> restart
rc-service <service> status
rc-update add <service> <runlevel>   # enable at runlevel
rc-update del <service>
rc-update show                       # list all enabled services

OpenRC runlevels:

Runlevel   Purpose
sysinit    Very early boot (kernel, cgroups)
boot       Core system services (LVM, mdadm, fsck)
default    Normal operation (networking, daemons)
shutdown   Services run during shutdown

LVM and mdadm belong in the boot runlevel so they initialise before filesystems are mounted.

7. Lab Environment Recommendations

When learning or testing storage operations, never experiment directly on production disks. Use virtual disks – they are disposable, recreatable, and allow safe experimentation with destructive commands like mdadm --zero-superblock or dd.

# 1GB zeroed image
dd if=/dev/zero of=disk1.img bs=1M count=1024

# Sparse image (no actual disk space used until written)
truncate -s 1G disk1.img

# Pre-allocate without sparse
fallocate -l 1G disk1.img
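
To see which creation method actually allocated space, compare apparent size with allocated blocks – a sketch using a sparse image:

```shell
truncate -s 1G disk1.img            # sparse: large apparent size, no allocation
du -h --apparent-size disk1.img     # logical size: 1.0G
du -h disk1.img                     # allocated blocks: ~0 until data is written
```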

Attach Images as Loop Devices

Loop devices make image files appear as block devices:

sudo losetup -fP disk1.img         # attach and detect partitions
sudo losetup -l                    # see which loop device was assigned
sudo losetup /dev/loop0 disk1.img  # attach to specific loop device
sudo losetup -d /dev/loop0         # detach
sudo losetup -D                    # detach all

After attaching, the image appears as /dev/loop0 (or similar) and can be partitioned, formatted, added to RAID, and mounted exactly like a physical disk.

Multi-Disk Lab Setup

# Create 4 x 512MB virtual disks
for i in 1 2 3 4; do
  truncate -s 512M disk${i}.img
  sudo losetup -fP disk${i}.img
done

sudo losetup -l
# /dev/loop0, /dev/loop1, /dev/loop2, /dev/loop3

# Clean up all at once
for i in 0 1 2 3; do sudo losetup -d /dev/loop${i}; done
rm -f disk{1..4}.img

Safety Practices for Lab Work

# Always check which device you're targeting before destructive commands
lsblk
sudo blkid

# Double-check the loop device is your image, not a real disk
sudo losetup -l | grep disk1.img

# Keep a second terminal showing lsblk output as a sanity check
watch -n 1 lsblk

Part 1 – Block Storage and Device Management

This part covers everything from the moment a disk is plugged in to the point where it is ready for a filesystem, RAID array, or LVM layer. Nothing in later parts is possible without the concepts and tools here.

1. Storage Concepts and Terminology

The Storage Stack

Before touching any command, understand where you are in the stack at any given time:

Physical Disk (e.g. /dev/sda)
  └── Partition (e.g. /dev/sda1, /dev/sda2)
        └── RAID Array (e.g. /dev/md0)       ← optional
              └── LUKS Container              ← optional
                    └── LVM Physical Volume
                          └── Volume Group
                                └── Logical Volume (e.g. /dev/vg0/lv-data)
                                      └── Filesystem (ext4, xfs, btrfs...)
                                            └── Mount Point (/mnt/data)

Each layer is optional. You can format a raw disk directly, or build the full stack for maximum flexibility and resilience.

Block Devices vs Character Devices

Linux exposes hardware through two device categories:

  • Block devices (b in ls -la /dev/) – transfer data in fixed-size blocks. All storage disks, SSDs, NVMe drives, and loop devices are block devices. They support random access and buffered I/O. Examples: /dev/sda, /dev/nvme0n1, /dev/loop0.
  • Character devices (c in ls -la /dev/) – transfer data as a stream, without buffering. Examples: /dev/tty, /dev/urandom, /dev/null.

Storage operations exclusively concern block devices.
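
Shell test operators make the distinction scriptable – -b is true for block device nodes, -c for character device nodes:

```shell
# Check device node types without parsing ls output
if [ -c /dev/null ]; then echo "/dev/null is a character device"; fi
if [ -b /dev/sda ]; then echo "/dev/sda is a block device"; fi   # only prints if the disk exists
```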

Sector Size, Block Size, Alignment

  • Sector – the smallest addressable unit on a disk. Traditionally 512 bytes on older drives; modern drives use 4096-byte (4K) sectors, sometimes presenting a 512-byte logical sector for compatibility. Check with sudo smartctl -i /dev/sda.
  • Block – the filesystem’s minimum allocation unit. Typically 4096 bytes for ext4/xfs. A file of 1 byte occupies one full block on disk.
  • Alignment – partitions should start on sector boundaries aligned to 1MiB to ensure optimal performance on both 512-byte and 4K drives. parted handles this automatically when using percentage-based sizes (1MiB 100%).

Misalignment causes performance degradation – the disk must read and write extra sectors for every I/O operation.

Device Names and Persistent Identifiers

Device names assigned by the kernel are not stable. /dev/sda today may be /dev/sdb after a reboot if hardware detection order changes.

Name Type                   Example                            Stable?
Kernel device name          /dev/sda, /dev/nvme0n1             No
UUID                        /dev/disk/by-uuid/a1b2-...         Yes
Label                       /dev/disk/by-label/DATA            Yes*
Disk ID                     /dev/disk/by-id/ata-Samsung_...    Yes
Partition UUID (PARTUUID)   /dev/disk/by-partuuid/...          Yes

Labels are stable but must be unique – two disks with the same label cause collisions. Always use UUIDs in /etc/fstab, GRUB config, and mdadm configuration.
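
To pull just the UUID for a config entry (the value shown is illustrative):

```shell
sudo blkid -s UUID -o value /dev/sdb1      # prints only the UUID
# Then reference it in /etc/fstab:
# UUID=<value-from-above>  /mnt/data  ext4  defaults  0  2
```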

2. Device Discovery and Inspection

lsblk – List Block Devices

The first command to run on any system. Shows all block devices in a tree structure.

lsblk                                              # basic tree view
lsblk -f                                           # include filesystem info
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT,UUID     # custom columns
lsblk -d                                           # disks only (no partitions)
lsblk -l                                           # flat list (no tree)
lsblk -J                                           # JSON output (scripting)
lsblk -p                                           # full device paths
lsblk /dev/sda                                     # specific device only

Column       Meaning
NAME         Device name
SIZE         Size of the device
TYPE         disk, part, lvm, raid, loop, rom
FSTYPE       Filesystem type (if formatted)
MOUNTPOINT   Where it is mounted (if mounted)
UUID         Filesystem UUID
LABEL        Filesystem label
RO           Read-only flag
RM           Removable flag
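
The -J option pairs well with jq for scripting – a sketch (assuming jq is installed) that lists whole disks with no mountpoint:

```shell
# List whole disks that currently have no mounted filesystem
lsblk -J -o NAME,TYPE,MOUNTPOINT \
  | jq -r '.blockdevices[] | select(.type == "disk" and .mountpoint == null) | .name'
```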

blkid – Identify Block Devices

Shows UUIDs, labels, filesystem types, and partition types.

sudo blkid                         # all devices
sudo blkid /dev/sdb                # specific device
sudo blkid /dev/sdb1               # specific partition
sudo blkid -t TYPE=ext4            # filter by filesystem type
sudo blkid -o list                 # list format (more readable)
sudo blkid -o export /dev/sdb1     # shell-variable format (KEY=VALUE)

fdisk -l and parted -l

sudo fdisk -l                      # all disks
sudo fdisk -l /dev/sda             # specific disk

sudo parted -l                     # all disks, verbose
sudo parted /dev/sda print         # specific disk
sudo parted /dev/sda print free    # show unallocated space

Persistent Device Paths

ls -la /dev/disk/by-uuid/          # UUIDs → device symlinks
ls -la /dev/disk/by-id/            # hardware IDs → device symlinks
ls -la /dev/disk/by-label/         # labels → device symlinks
ls -la /dev/disk/by-partuuid/      # partition UUIDs

Use these paths in scripts and config files instead of /dev/sdX.

smartctl – SMART Disk Health

SMART reads health data from the drive’s internal sensors. Not available on all drives (especially some NVMe or USB-attached drives).

sudo smartctl -H /dev/sda          # quick health check
sudo smartctl -a /dev/sda          # full report
sudo smartctl -i /dev/sda          # drive identity and capabilities
sudo smartctl -t short /dev/sda    # start a short self-test (minutes)
sudo smartctl -t long /dev/sda     # start a long self-test (hours)
sudo smartctl -l selftest /dev/sda # view test results
sudo smartctl -A /dev/sda          # all SMART attributes

Critical SMART attributes to monitor:

ID    Attribute                  What it Means
5     Reallocated Sector Count   Sectors remapped due to errors. >0 is a warning.
187   Reported Uncorrectable     Errors the drive could not correct.
188   Command Timeout            Commands that timed out.
197   Current Pending Sector     Sectors waiting to be reallocated.
198   Offline Uncorrectable      Sectors failing offline checks.

Non-zero values in 187, 197, or 198 are serious warning signs. Replace the drive.
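
For scripted checks, the raw values of the critical attributes can be extracted directly – a sketch assuming the standard smartctl -A column layout (attribute ID in column 1, raw value in column 10):

```shell
# Print ID, name, and raw value for the five critical attributes
sudo smartctl -A /dev/sda \
  | awk '$1 ~ /^(5|187|188|197|198)$/ { print $1, $2, $10 }'
```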

Continuous SMART Monitoring with smartd

Running a one-off smartctl is useful for inspection, but real operational value comes from continuous monitoring. smartd is a daemon that polls drives on a schedule and sends alerts.

Configure /etc/smartd.conf:

# Monitor all drives, run short test weekly, long test monthly, email on failure
DEVICESCAN -a -o on -S on -s (S/../../6/02|L/../../1/02) -m admin@yourdomain.com -M exec /usr/share/smartmontools/smartd-runner

Enable and start:

sudo systemctl enable --now smartd   # systemd
sudo rc-update add smartd default    # OpenRC (Alpine)

Test that alerts work:

sudo smartd -q onecheck              # run one check cycle and exit

If your system does not have a local MTA, pipe alerts to a script or use msmtp as a relay. Verifying that the email path works before you need it is essential – a SMART alert that never arrives is the same as no alert at all.

NVMe-Specific Inspection

NVMe drives use a different interface. They appear as /dev/nvme0n1, where nvme0 is the controller and n1 is the namespace.

sudo nvme list                          # list all NVMe devices
sudo nvme id-ctrl /dev/nvme0            # controller identity
sudo nvme id-ns /dev/nvme0n1            # namespace info
sudo nvme smart-log /dev/nvme0          # NVMe SMART data
sudo nvme error-log /dev/nvme0          # error log

NVMe partitions: /dev/nvme0n1p1, /dev/nvme0n1p2, etc.

Reading dmesg for Storage Events

The kernel logs all storage events – disk detection, errors, RAID events, I/O failures.

sudo dmesg | grep -i "sd\|nvme\|ata\|scsi"    # disk-related messages
sudo dmesg | grep -i "error\|fail\|reset"      # errors
sudo dmesg | tail -50                          # recent events
sudo dmesg -T                                  # human-readable timestamps
sudo dmesg --follow                            # live stream
sudo dmesg -l err,crit                         # only errors and critical

When a disk is misbehaving, always check dmesg first. Look for patterns like blk_update_request: I/O error, ata1: COMRESET failed, or EXT4-fs error. These tell you which device is failing before SMART data even updates.

3. Partitioning

MBR vs GPT

Feature                    MBR                       GPT
Max disk size              2 TB                      9.4 ZB (effectively unlimited)
Max primary partitions     4 (or 3 + extended)       128
Partition table location   First 512 bytes of disk   Start and end of disk (backup)
UEFI support               Limited                   Native
Boot support               BIOS boot                 BIOS + UEFI
Redundancy                 None                      Primary + backup GPT header

Use GPT for all new systems. MBR is only relevant for legacy BIOS systems or disks under 2TB that must boot on very old hardware.

parted is non-interactive by default, supports GPT, and handles alignment correctly.

# Create a GPT partition table
sudo parted /dev/sdb mklabel gpt

# Single partition -- entire disk
sudo parted /dev/sdb mkpart primary 1MiB 100%

# Partition for LVM
sudo parted /dev/sdb mklabel gpt
sudo parted /dev/sdb mkpart primary 1MiB 100%
sudo parted /dev/sdb set 1 lvm on

# Partition for RAID
sudo parted /dev/sdb mklabel gpt
sudo parted /dev/sdb mkpart primary 1MiB 100%
sudo parted /dev/sdb set 1 raid on

# Multiple partitions
sudo parted /dev/sdb mklabel gpt
sudo parted /dev/sdb mkpart primary 1MiB 512MiB      # EFI or boot
sudo parted /dev/sdb mkpart primary 512MiB 100%       # data
sudo parted /dev/sdb set 1 esp on
sudo parted /dev/sdb set 2 lvm on

# Verify alignment
sudo parted /dev/sdb align-check optimal 1    # returns: 1 aligned

gdisk – GPT-Specific Interactive Tool

sudo gdisk /dev/sdb

# Commands inside gdisk:
# n → new partition
# p → print partition table
# t → change partition type
# L → list known partition types
# w → write and exit
# q → quit without saving

Common type codes:

Code   Type
8300   Linux filesystem
8200   Linux swap
8e00   Linux LVM
fd00   Linux RAID
ef00   EFI System
ef02   BIOS boot

Inform the Kernel of Partition Changes

sudo partprobe /dev/sdb                  # re-read partition table
sudo blockdev --rereadpt /dev/sdb        # alternative
sudo udevadm settle                      # wait for udev events to complete

If the disk is in use (e.g. mounted), partprobe may fail. Reboot in that case.

4. Swap

Swap is secondary memory space on disk. When RAM is exhausted, the kernel moves inactive memory pages to swap. It is not a replacement for RAM – it is a pressure valve.

Swap Partitions

sudo parted /dev/sdb mkpart primary linux-swap 1MiB 4GiB

sudo mkswap /dev/sdb1                    # create swap signature
sudo swapon /dev/sdb1                    # activate swap

# Get UUID and add to fstab
sudo blkid /dev/sdb1
# fstab entry:
UUID=xxxx-xxxx  none  swap  defaults  0  0

Swap Files

# Method 1: fallocate (fast, not suitable for Btrfs)
sudo fallocate -l 2G /swapfile

# Method 2: dd (works everywhere, slower)
sudo dd if=/dev/zero of=/swapfile bs=1M count=2048

sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

swapon --show
free -h

# fstab
/swapfile  none  swap  defaults  0  0

On Btrfs, swap files require copy-on-write to be disabled (the chattr +C attribute) and must not be on a snapshotted subvolume:

sudo btrfs subvolume create /swap
sudo chattr +C /swap                        # disable CoW
sudo fallocate -l 2G /swap/swapfile
sudo chmod 600 /swap/swapfile
sudo mkswap /swap/swapfile
sudo swapon /swap/swapfile

Managing Swap

swapon --show                              # show active swap devices
swapon -a                                  # activate all swap in /etc/fstab
swapoff /swapfile                          # deactivate
swapoff -a                                 # deactivate all swap
free -h                                    # show RAM and swap usage
cat /proc/swaps                            # raw swap info

Swappiness

vm.swappiness controls how aggressively the kernel swaps. Range: 0–200 (0–100 on kernels before 5.8).

  • 0 – avoid swapping unless absolutely necessary
  • 10 – common production recommendation for most servers
  • 60 – default on most distros

cat /proc/sys/vm/swappiness                # check current value
sudo sysctl vm.swappiness=10               # set temporarily

# Persist
echo "vm.swappiness=10" | sudo tee /etc/sysctl.d/99-swappiness.conf
sudo sysctl -p /etc/sysctl.d/99-swappiness.conf

On ZFS systems, set swappiness to 0 or very close to it. ZFS uses RAM aggressively for its ARC (Adaptive Replacement Cache). A high swappiness value causes the kernel to swap out memory that ZFS needs for ARC, which undermines ZFS performance without improving anything. The recommended value for ZFS systems is 0 on Linux.

echo "vm.swappiness=0" | sudo tee /etc/sysctl.d/99-swappiness.conf

When Swap Helps and When It Hurts

Swap helps on systems with modest, bursty memory usage. It absorbs spikes without crashing. Hibernation requires swap at least as large as RAM.

Swap hurts on databases (PostgreSQL, MySQL – paging causes severe latency spikes), high-throughput containers, NVMe/SSD systems under heavy write workloads (write amplification and wear), and systems under constant memory pressure where swap just delays the OOM killer.

5. Block I/O Fundamentals

Read/Write Paths

When an application writes data, the path through the kernel is:

Application
  → VFS (Virtual Filesystem Switch)
    → Filesystem layer (ext4, xfs, btrfs)
      → Page cache (RAM buffer)
        → Block layer
          → I/O scheduler
            → Device driver
              → Hardware (disk)

fsync() and sync force the page cache to flush to disk. Without them, data may appear written but is still in RAM, vulnerable to power loss.
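
dd can request the flush itself with conv=fsync, and sync flushes everything system-wide:

```shell
# Write 1 MiB and fsync it before dd exits -- on stable storage once dd returns
dd if=/dev/zero of=flushtest.bin bs=1M count=1 conv=fsync
# Flush all remaining dirty pages for every filesystem
sync
```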

I/O Schedulers

Check the current scheduler for a device:

cat /sys/block/sda/queue/scheduler
# Example output: [mq-deadline] kyber bfq none
# The bracketed name is active.

Set a scheduler temporarily:

echo "mq-deadline" | sudo tee /sys/block/sda/queue/scheduler
echo "none" | sudo tee /sys/block/nvme0n1/queue/scheduler

Persist via udev rule in /etc/udev/rules.d/60-ioscheduler.rules:

ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="mq-deadline"
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"
ACTION=="add|change", KERNEL=="nvme[0-9]*", ATTR{queue/scheduler}="none"
ACTION=="add|change", KERNEL=="vd[a-z]", ATTR{queue/scheduler}="none"

Note: rotational==0 (SSDs) uses none, not mq-deadline. NVMe hardware manages its own queuing and wrapping it in a software scheduler adds overhead without benefit.

Device Type      Recommended Scheduler   Reason
NVMe SSD         none                    NVMe hardware handles queuing
SATA SSD         none                    Hardware queue management
HDD (spinning)   mq-deadline             Reduces seek time via merge/sort
Virtual disk     none                    Hypervisor handles I/O ordering

Read-Ahead

Read-ahead pre-fetches data the kernel expects will be needed next. Increase it for sequential workloads (backup, large file servers). Keep it low for random-access workloads (databases).

sudo blockdev --getra /dev/sda             # get current read-ahead (in 512-byte sectors)
sudo blockdev --setra 2048 /dev/sda        # set to 1MB (2048 * 512B sectors)
sudo blockdev --setra 256 /dev/sda         # set to 128KB (lower for SSDs/random I/O)

Basic Performance Observation

# iostat -- per-device I/O statistics
# Install: apk add sysstat / pacman -S sysstat / apt install sysstat / dnf install sysstat
iostat                             # snapshot
iostat -x 2                        # extended stats, refresh every 2 seconds
iostat -x -d 5 /dev/sda           # specific device, 5-second intervals

# Key iostat columns:
# r/s, w/s     -- reads and writes per second
# rkB/s, wkB/s -- read and write bandwidth in KB/s
# await        -- average I/O wait time in milliseconds
# %util        -- device utilisation (100% = saturated)

# iotop -- per-process I/O (requires root)
sudo iotop
sudo iotop -o                      # only show processes doing I/O

# blktrace -- low-level block I/O tracing (advanced)
sudo blktrace -d /dev/sda -w 10    # trace for 10 seconds (writes sda.blktrace.* files)
blkparse -i sda.blktrace.0         # analyse output

Part 2 – Storage Protection and Volume Design

This part covers the abstraction layers between physical media and filesystems: LVM for flexibility, RAID for redundancy, LUKS for encryption, iSCSI for network block storage, and modern unified stacks.

1. Logical Volume Management (LVM)

What LVM Is and Why It Exists – A Plain Explanation

Without LVM, when you format a disk partition, you get a fixed block of storage. If you need more space, you are stuck. You cannot resize it easily. You cannot span it across multiple disks. You cannot snapshot it before making changes.

LVM solves all of this by inserting a flexible abstraction layer between raw disks and the filesystems that sit on top.

The analogy is a bank. Your raw disks are cash deposits. LVM is the bank. You deposit your disks into the bank, the bank pools the money, and then you take out loans (logical volumes) of whatever size you need – and you can adjust the loan later without touching the underlying deposits.

The three layers:

Physical Volumes (PVs) → Volume Groups (VGs) → Logical Volumes (LVs)
         ↑                        ↑                       ↑
   "deposits"              "the bank pool"            "your loans"

PV (Physical Volume): A disk or partition that has been initialised for LVM use. The raw ingredient. You run pvcreate on it and LVM stamps metadata onto it.

VG (Volume Group): One or more PVs pooled together. Think of this as the bank’s total reserves. The VG has a total size equal to the sum of all the PVs inside it. You can add more PVs to a VG later to grow it.

LV (Logical Volume): A slice of the VG that you carve out and use. This is what you format and mount. An LV can be resized up or down (within the constraints of the VG). It can be snapshotted. It can be migrated to a different PV without downtime.

Worked example: You have three 1TB disks. You create three PVs, pool them into a VG (total 3TB), and carve out three LVs: one 500GB for /data, one 200GB for /var/log, and one 100GB for a staging area. The remaining 2.2TB sits in the VG as free space, available to expand any LV on demand without taking the system down.
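
The worked example above, as commands (device names /dev/sdb–/dev/sdd are illustrative):

```shell
# Three 1TB disks -> one 3TB pool -> three LVs, rest stays free in the VG
sudo pvcreate /dev/sdb /dev/sdc /dev/sdd
sudo vgcreate vg0 /dev/sdb /dev/sdc /dev/sdd
sudo lvcreate -L 500G -n lv-data vg0
sudo lvcreate -L 200G -n lv-log vg0
sudo lvcreate -L 100G -n lv-staging vg0
sudo vgs vg0        # VFree shows the ~2.2T still unallocated
```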

1.1 Physical Volumes

# Initialise a disk or partition as a PV
sudo pvcreate /dev/sdb
sudo pvcreate /dev/sdb1
sudo pvcreate /dev/sdb /dev/sdc /dev/sdd         # multiple at once

# Display PV information
sudo pvs                                          # summary table
sudo pvdisplay                                    # verbose
sudo pvdisplay /dev/sdb                           # specific PV
sudo pvscan                                       # scan and list all PVs

# Remove a PV (must first remove from VG)
sudo pvremove /dev/sdb

1.2 Volume Groups

# Create a VG from one or more PVs
sudo vgcreate vg0 /dev/sdb
sudo vgcreate vg0 /dev/sdb /dev/sdc              # multi-disk VG
sudo vgcreate -s 4M vg0 /dev/sdb                # custom PE size (default = 4MB)

# Display VG information
sudo vgs                                          # summary
sudo vgdisplay                                    # verbose
sudo vgdisplay vg0

# Extend VG by adding a PV
sudo vgextend vg0 /dev/sdc

# Reduce VG -- move data off a PV before removing it
sudo pvmove /dev/sdc                              # move all extents off the PV
sudo vgreduce vg0 /dev/sdc                        # remove empty PV from VG
sudo pvremove /dev/sdc

# Activate / deactivate
sudo vgchange -ay vg0                             # activate
sudo vgchange -an vg0                             # deactivate

# Rename
sudo vgrename vg0 vg-data

# Export and import (for moving between machines)
sudo vgexport vg0                                 # on source
sudo vgimport vg0                                 # on destination
sudo vgchange -ay vg0

# Remove VG (must have no LVs)
sudo vgremove vg0

A note on pvmove: pvmove is the correct way to evacuate data from a disk before removing it from a VG. However, it can fail or be interrupted – particularly on large arrays under load. An interrupted pvmove leaves data in a mid-migration state. If the operation is interrupted:

sudo pvmove --abort            # abort the interrupted move
# Then investigate and retry
sudo pvmove /dev/sdc

Always run pvmove in a tmux session or with nohup on production systems so that a lost terminal does not orphan it.

sudo pvmove -b /dev/sdc        # run in background
# Monitor progress
watch -n 1 'sudo pvs'

1.3 Logical Volumes

# Create LV -- absolute size
sudo lvcreate -L 20G -n lv-data vg0

# Create LV -- percentage of free space
sudo lvcreate -l 100%FREE -n lv-data vg0
sudo lvcreate -l 80%FREE -n lv-data vg0

# Create LV -- percentage of total VG
sudo lvcreate -l 50%VG -n lv-data vg0

# Display LV information
sudo lvs                                          # summary
sudo lvdisplay                                    # verbose
sudo lvdisplay /dev/vg0/lv-data
sudo lvscan

# Remove an LV
sudo lvremove /dev/vg0/lv-data

1.4 Formatting and Mounting LVs

The LV is accessible via two equivalent paths:

/dev/vg0/lv-data
/dev/mapper/vg0-lv--data           ← same device, two names
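The doubled hyphen is device-mapper's escaping rule: any "-" inside a VG or LV name is written as "--" in the /dev/mapper name, while a single "-" joins VG to LV. A minimal sketch of the mapping (dm_name is a hypothetical helper, not an LVM tool):

```shell
# Reproduce device-mapper's name mangling for LVM devices.
# A "-" inside a VG or LV name becomes "--"; a single "-" separates the two.
dm_name() {
  local vg lv
  vg=$(printf '%s' "$1" | sed 's/-/--/g')
  lv=$(printf '%s' "$2" | sed 's/-/--/g')
  printf '/dev/mapper/%s-%s\n' "$vg" "$lv"
}

dm_name vg0 lv-data       # -> /dev/mapper/vg0-lv--data
```

This is why tools that speak raw device-mapper, such as dmsetup, expect the doubled form.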
# Format
sudo mkfs.ext4 /dev/vg0/lv-data
sudo mkfs.xfs /dev/vg0/lv-data
sudo mkfs.btrfs /dev/vg0/lv-data

# Mount
sudo mkdir -p /mnt/data
sudo mount /dev/vg0/lv-data /mnt/data

# Persist in /etc/fstab (get UUID first)
sudo blkid /dev/vg0/lv-data

# fstab entry -- use either the device-path form or the UUID form, not both
/dev/vg0/lv-data  /mnt/data  ext4  defaults,noatime  0  2
UUID=xxxx-xxxx    /mnt/data  ext4  defaults,noatime  0  2
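When provisioning many volumes, extracting the UUID from blkid's output can be scripted. A sketch using a captured sample line (the UUID below is made up for illustration):

```shell
# In production: line=$(sudo blkid /dev/vg0/lv-data)
line='/dev/mapper/vg0-lv--data: UUID="0a1b2c3d-1111-2222-3333-444455556666" TYPE="ext4"'

# Pull out the value of the UUID="..." field
uuid=$(printf '%s' "$line" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')

# Emit a ready-to-paste fstab entry
printf 'UUID=%s  /mnt/data  ext4  defaults,noatime  0  2\n' "$uuid"
```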

1.5 Resizing LVs

One of LVM’s primary advantages is online resizing. You can extend a mounted, live filesystem without taking anything down.

Extend (online – filesystem stays mounted):

sudo lvextend -L +10G /dev/vg0/lv-data           # extend by 10G
sudo lvextend -L 50G /dev/vg0/lv-data            # extend to total of 50G
sudo lvextend -l +100%FREE /dev/vg0/lv-data      # use all free VG space
sudo lvextend -L +10G -r /dev/vg0/lv-data        # extend LV + resize filesystem together (recommended)

Resize filesystem separately (after extending LV):

sudo resize2fs /dev/vg0/lv-data                  # ext4 (online)
sudo xfs_growfs /mnt/data                        # xfs (online, must be mounted)
sudo btrfs filesystem resize max /mnt/data        # btrfs (online)

Shrink (ext4 only – must be unmounted):

sudo umount /mnt/data
sudo e2fsck -f /dev/vg0/lv-data                 # mandatory before shrink
sudo resize2fs /dev/vg0/lv-data 15G             # shrink filesystem first
sudo lvreduce -L 15G /dev/vg0/lv-data           # then shrink LV
sudo mount /dev/vg0/lv-data /mnt/data

XFS cannot be shrunk. If you need a smaller XFS volume, back up, destroy, and recreate.

1.6 LVM Snapshots

Snapshots capture an LV’s state at a point in time using copy-on-write – only changed blocks are duplicated. The snapshot itself consumes space only as the source LV changes. If the snapshot’s allocated space fills completely, it becomes invalid.

Use case: You are about to run a risky database migration. Take a snapshot before you start. If the migration corrupts data, you merge the snapshot to restore the LV to its pre-migration state instantly, without restoring from tape.

# Create a snapshot (5G budget for changed data)
sudo lvcreate -s -n lv-data-snap -L 5G /dev/vg0/lv-data

# Mount snapshot read-only (to inspect or copy data from it)
sudo mount -o ro /dev/vg0/lv-data-snap /mnt/snap

# Check snapshot usage (if this hits 100%, the snapshot is invalidated)
sudo lvdisplay /dev/vg0/lv-data-snap | grep "Allocated"
sudo lvs -o +snap_percent

# Restore LV from snapshot (merges snapshot back into origin -- consumes the snapshot)
sudo umount /mnt/data
sudo lvconvert --merge /dev/vg0/lv-data-snap
sudo lvchange -an /dev/vg0/lv-data
sudo lvchange -ay /dev/vg0/lv-data

# Remove snapshot (if you no longer need it)
sudo lvremove /dev/vg0/lv-data-snap

1.7 LVM Thin Provisioning

Thin provisioning lets you allocate more virtual space than physically exists. LVs only consume space as data is written. This is the same concept cloud providers use – they sell more storage than they physically have, betting that not all customers will use their full allocation at once.

Use case: You are running 10 development VMs, each claiming a 50GB disk (500GB total). In practice, the VMs each only use 5–10GB. With thin provisioning, you can create a 100GB thin pool and issue ten 50GB thin LVs. All VMs work normally, and you only buy more disks when actual usage approaches the pool limit.

# Create a thin pool
sudo lvcreate -L 100G --thinpool tp0 vg0

# Create thin LVs from the pool
sudo lvcreate -V 20G --thin -n lv-app1 vg0/tp0
sudo lvcreate -V 20G --thin -n lv-app2 vg0/tp0
sudo lvcreate -V 20G --thin -n lv-app3 vg0/tp0
# Total virtual: 60G. Pool: 100G. Volumes only consume what they write.

# Monitor usage -- critical to watch this so the pool does not silently fill
sudo lvs -o +data_percent,metadata_percent

# Extend pool when it fills
sudo lvextend -L +50G vg0/tp0
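A cron-able watcher only needs the data_percent field. A sketch, with the pool name vg0/tp0 and the 80% threshold as assumptions; parse_pct is fed a captured sample value here so the logic is visible without a live pool:

```shell
#!/bin/sh
# Strip whitespace and the fractional part from lvs's data_percent field.
parse_pct() { printf '%s' "$1" | tr -d ' ' | cut -d. -f1; }

THRESHOLD=80
# In production: USED=$(parse_pct "$(sudo lvs --noheadings -o data_percent vg0/tp0)")
USED=$(parse_pct '  42.17')

if [ "$USED" -ge "$THRESHOLD" ]; then
  echo "thin pool vg0/tp0 at ${USED}% -- extend it with lvextend"
else
  echo "thin pool vg0/tp0 at ${USED}% -- healthy"
fi
```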

1.8 LVM Striping and Mirroring

LVM can stripe or mirror across PVs without mdadm.

# Striped LV across 2 PVs (performance -- splits writes across both disks)
sudo lvcreate -L 20G -n lv-stripe -i 2 -I 64 vg0
# -i 2: stripe across 2 PVs
# -I 64: 64KB stripe size

# Mirrored LV (redundancy -- writes identical data to both PVs)
sudo lvcreate -L 20G -n lv-mirror -m 1 vg0
# -m 1: one mirror (two copies total)

1.9 LVM Cache (dm-cache)

LVM cache allows you to place a fast SSD in front of a slow HDD as a caching tier. Frequently accessed data lives on the SSD; the HDD holds the full dataset. The cache is transparent to the filesystem above it.

Use case: You have a large 8TB spinning disk array for a media server. Reads are slow. You add a 250GB NVMe SSD as a cache. The most-accessed files are automatically served from the SSD at NVMe speeds while the full dataset remains on the HDDs.

# Assume vg0 contains /dev/sdb (HDD) and /dev/sdc (SSD)

# Create the slow data LV
sudo lvcreate -L 4T -n lv-data vg0 /dev/sdb

# Create the cache pool on the SSD
sudo lvcreate -L 200G -n cache-pool vg0 /dev/sdc

# Attach the cache pool to the data LV
sudo lvconvert --type cache --cachepool vg0/cache-pool vg0/lv-data

# Check cache hit statistics
sudo lvdisplay vg0/lv-data | grep -i cache
sudo dmsetup status vg0-lv--data

1.10 Moving Data Between PVs

Use before removing a disk from a VG:

sudo pvmove /dev/sdb                    # move all data off /dev/sdb
sudo pvmove /dev/sdb /dev/sdc           # move to a specific destination PV
sudo pvmove -b /dev/sdb                 # run in background

# Monitor progress
sudo lvs -a -o +devices
watch -n 1 'sudo pvs'

1.11 LVM Metadata and Recovery

# Scan for all PVs, VGs, LVs
sudo pvscan
sudo vgscan
sudo lvscan

# Metadata is backed up automatically here
ls /etc/lvm/archive/
ls /etc/lvm/backup/

# Restore VG from metadata backup
sudo vgcfgrestore -f /etc/lvm/archive/vg0_00001.vg vg0

# Display metadata
sudo vgcfgbackup -f /tmp/vg0-backup vg0
cat /tmp/vg0-backup

2. Software RAID with mdadm

What RAID Is and Why It Exists – A Plain Explanation

Hard drives fail. SSDs fail. When they do, data is gone unless you have redundancy. RAID (Redundant Array of Independent Disks) is a system that spreads data across multiple drives in a structured way so that the failure of one or more drives does not mean data loss.

Think of your data as a book that needs to survive a disaster. Different RAID levels are different strategies for protecting that book:

RAID 0 – Striping (no protection): You tear the book in half and store each half in a different warehouse. You can read or write twice as fast because two workers work simultaneously. But if either warehouse burns down, you lose the book entirely – half is gone, the other half is useless. RAID 0 doubles performance but doubles risk. It is not actually redundant at all; calling it “RAID” is historical.

RAID 1 – Mirroring: You keep an identical copy of the book in two warehouses simultaneously. Every time you write a new chapter, both warehouses get it at the same time. If one warehouse burns down, the full book still exists in the other. Reads can be served from either warehouse (faster reads). Writes go to both (same speed as a single drive). You lose half your raw capacity to redundancy. RAID 1 is the simplest real redundancy.

RAID 5 – Striping with distributed parity: You have three warehouses. The book is split across warehouses 1 and 2. Warehouse 3 stores “parity” – a mathematical summary of the other two. If warehouse 1 burns, you can reconstruct its contents by combining warehouse 2 and the parity in warehouse 3. Critically, the parity is not always in the same warehouse – it rotates across all three drives, so no single drive is the bottleneck. You can lose any one drive and survive. Minimum three drives. You lose the capacity of one drive to parity.
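The "mathematical summary" is plain XOR. A toy demonstration with two single-byte data blocks (the values are arbitrary): XOR-ing the surviving block with the parity block reproduces the lost block.

```shell
d1=$(( 0x5A ))           # data block on disk 1
d2=$(( 0x3C ))           # data block on disk 2
parity=$(( d1 ^ d2 ))    # parity block on disk 3

# Disk 1 fails: rebuild its block from the survivors.
rebuilt=$(( parity ^ d2 ))

[ "$rebuilt" -eq "$d1" ] && echo "block reconstructed"
```

The same identity, applied per block across whole drives, is all RAID 5 reconstruction is.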

RAID 6 – Double parity: Same idea as RAID 5, but with two sets of parity distributed across all drives. You can lose any two drives simultaneously and still have all your data. Minimum four drives. You lose two drives’ worth of capacity. Write performance is lower than RAID 5 because two parity values must be computed on every write.

RAID 10 – Mirror + Stripe: You have two pairs of mirrored warehouses, and you stripe your data across both pairs. Reads and writes are fast (striping). Each pair can survive the loss of one warehouse (mirroring). You need at least four drives and lose half your capacity to mirroring, but the performance is the best of any redundant RAID level.

Practical guidance on which level to choose:

Scenario                                          Recommendation       Why
Boot drive, small critical data                   RAID 1               Simple, reliable, easy to recover
General server storage, moderate budget           RAID 5 (3+ drives)   Good balance of space and protection
High data durability requirement                  RAID 6 (4+ drives)   Survives two simultaneous failures
High performance + redundancy, budget available   RAID 10 (4+ drives)  Fastest writes, survives one failure per mirror pair
Scratch space, maximum speed, data is disposable  RAID 0               No protection, don't use for anything you can't lose

The rebuild window problem: When a RAID 5 drive fails, the array is degraded. It still works, but if a second drive fails during the rebuild (which can take hours on large drives), all data is lost. Larger drives mean longer rebuilds mean higher risk of a second failure during rebuild. On very large arrays (8TB+ per drive), RAID 6 is strongly recommended over RAID 5 for this reason.

RAID Levels Summary Table

Level     Min Disks   Redundancy        Read   Write    Use Case
RAID 0    2           None              Fast   Fast     Performance, scratch space
RAID 1    2           1 disk failure    Fast   Slower   Boot, small critical data
RAID 5    3           1 disk failure    Fast   Medium   General purpose, space-efficient
RAID 6    4           2 disk failures   Fast   Slower   High durability requirements
RAID 10   4           1 per mirror      Fast   Fast     High performance + redundancy
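The capacity overheads implied by the table can be computed directly. A sketch (usable is a hypothetical helper; sizes in whole TB, all drives assumed equal):

```shell
# Usable capacity for N equal drives of S TB at a given RAID level.
usable() {
  level=$1; n=$2; s=$3
  case $level in
    0)  echo $(( n * s )) ;;        # striping: full capacity, no protection
    1)  echo "$s" ;;                # mirroring: one drive's worth
    5)  echo $(( (n - 1) * s )) ;;  # one drive of parity
    6)  echo $(( (n - 2) * s )) ;;  # two drives of parity
    10) echo $(( n / 2 * s )) ;;    # half lost to mirrors
  esac
}

usable 5 4 8    # 4 x 8TB in RAID 5  -> 24
usable 6 6 8    # 6 x 8TB in RAID 6  -> 32
usable 10 4 8   # 4 x 8TB in RAID 10 -> 16
```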

2.1 Creating RAID Arrays

Prepare disks (zero any previous RAID superblock):

sudo mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sdd /dev/sde
sudo wipefs -a /dev/sdb                        # wipe all filesystem signatures

Always start with clean disks. Old superblocks cause unexpected array assembly at boot.

RAID 1 – Mirroring (two disks, survive one failure):

sudo mdadm --create /dev/md0 \
  --level=1 \
  --raid-devices=2 \
  /dev/sdb /dev/sdc

RAID 5 – Striping with parity (three disks, survive one failure):

sudo mdadm --create /dev/md0 \
  --level=5 \
  --raid-devices=3 \
  /dev/sdb /dev/sdc /dev/sdd

# RAID 5 with a hot spare (automatically starts rebuilding on failure)
sudo mdadm --create /dev/md0 \
  --level=5 \
  --raid-devices=3 \
  --spare-devices=1 \
  /dev/sdb /dev/sdc /dev/sdd /dev/sde

A hot spare is a drive that sits idle in the array. The moment a member drive fails, mdadm automatically begins rebuilding onto the spare. This eliminates the time you would otherwise spend noticing the failure, sourcing a replacement, and initiating the rebuild manually. On systems where drives fail often, hot spares pay for themselves.

RAID 6 – Double parity (four disks, survive two failures):

sudo mdadm --create /dev/md0 \
  --level=6 \
  --raid-devices=4 \
  /dev/sdb /dev/sdc /dev/sdd /dev/sde

RAID 10 – Mirror + Stripe (four disks, survive one failure per mirror pair):

sudo mdadm --create /dev/md0 \
  --level=10 \
  --raid-devices=4 \
  /dev/sdb /dev/sdc /dev/sdd /dev/sde

RAID 0 – Striping only (two disks, no redundancy):

sudo mdadm --create /dev/md0 \
  --level=0 \
  --raid-devices=2 \
  /dev/sdb /dev/sdc

Wait for initial sync before using the array for important data:

watch -n 1 cat /proc/mdstat
sudo mdadm --detail /dev/md0

2.2 Saving the RAID Configuration

Without this step, the array may not assemble correctly at boot.

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

# Update initramfs (distro-specific)
sudo update-initramfs -u                # Ubuntu / Debian
sudo mkinitcpio -P                      # Arch
sudo dracut --force                     # Rocky / RHEL
# Alpine: configuration is read from /etc/mdadm.conf at boot via OpenRC

2.3 Inspecting RAID

cat /proc/mdstat                        # overview of all RAID arrays
sudo mdadm --detail --scan              # full config of all arrays
sudo mdadm --detail /dev/md0            # detail of a specific array
sudo mdadm --examine /dev/sdb           # examine a member disk

RAID status indicators in /proc/mdstat:

Symbol   Meaning
U        Disk up (healthy)
_        Disk missing/failed
F        Disk failed
S        Spare

[3/2] [U_U] means a 3-disk RAID with one disk missing. The array is degraded and needs immediate attention.
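Spotting degradation can be automated by looking for a "_" inside the status brackets. A sketch (check_mdstat is a hypothetical helper, fed sample mdstat lines rather than the live file):

```shell
# Report "degraded" if any position inside the [UUU] brackets is "_".
check_mdstat() {
  if printf '%s' "$1" | grep -q '\[[U_]*_[U_]*\]'; then
    echo degraded
  else
    echo healthy
  fi
}

check_mdstat '7813772288 blocks level 5 [3/2] [U_U]'   # -> degraded
check_mdstat '7813772288 blocks level 5 [3/3] [UUU]'   # -> healthy
```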

2.4 Managing Array Members

sudo mdadm --add /dev/md0 /dev/sde          # add as spare
sudo mdadm --add /dev/md0 /dev/sdb          # re-add after replacement

sudo mdadm --fail /dev/md0 /dev/sdb         # mark as failed (required before remove)
sudo mdadm --remove /dev/md0 /dev/sdb       # remove failed disk
sudo mdadm /dev/md0 --remove detached       # force remove detached disk

2.5 Disk Failure and Replacement

When a disk fails, the array degrades. Act promptly – while the array is degraded it is no longer protected against a second failure.

# Identify the failure
cat /proc/mdstat
sudo mdadm --detail /dev/md0
sudo dmesg | grep -i "error\|fail\|reset" | tail -30
sudo smartctl -H /dev/sdb                   # check health of suspected disk

# Remove the failed disk
sudo mdadm --fail /dev/md0 /dev/sdb
sudo mdadm --remove /dev/md0 /dev/sdb

# Physically replace the disk, then add the replacement
sudo mdadm --add /dev/md0 /dev/sdb

# Monitor rebuild
watch -n 2 cat /proc/mdstat

Rebuild speed depends on array size and disk speed. Expect hours for large arrays. During rebuild, the array is degraded – a second failure is catastrophic on RAID 5.
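"Expect hours" can be made concrete with a back-of-envelope estimate: drive size divided by sustained rebuild rate. The 150 MB/s rate below is an assumption; real rates vary with drive type and concurrent load.

```shell
size_tb=8          # size of the replaced drive
rate_mbs=150       # assumed sustained rebuild rate in MB/s

# 1 TB treated as 1,000,000 MB for a rough figure
seconds=$(( size_tb * 1000000 / rate_mbs ))
echo "approx $(( seconds / 3600 )) hours"
```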

2.6 Growing an Array

# Add a new disk to expand a RAID 5 or RAID 6 array
sudo mdadm --add /dev/md0 /dev/sde
sudo mdadm --grow /dev/md0 --raid-devices=4

# Watch reshape progress
cat /proc/mdstat

# Then resize the filesystem
sudo resize2fs /dev/md0                      # ext4
sudo xfs_growfs /mnt/data                   # xfs (mounted)
sudo btrfs filesystem resize max /mnt/data  # btrfs

2.7 Stopping and Removing Arrays

sudo mdadm --stop /dev/md0
sudo mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sdd
# Manually remove the ARRAY line from /etc/mdadm/mdadm.conf

2.8 RAID Monitoring and Alerts

In /etc/mdadm/mdadm.conf:

MAILADDR admin@yourdomain.com

Enable monitoring:

# systemd
sudo systemctl enable --now mdmonitor

# OpenRC (Alpine)
sudo rc-update add mdadm default
sudo rc-service mdadm start

Important: The MAILADDR directive sends alerts via the local MTA. If your system does not have a working mail transfer agent (Postfix, msmtp, etc.), the email never leaves the machine and you will not receive alerts. Always verify the mail path works:

echo "Test alert" | mail -s "RAID test" admin@yourdomain.com

If your system has no MTA, configure msmtp as a lightweight relay, or use mdadm --monitor with a custom --program that calls a script to send alerts via another channel (curl to a Slack webhook, for example).
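A sketch of such a hook script, assuming a Slack-style webhook (the URL is a placeholder; mdadm invokes the program with the event name, the array device, and sometimes a component device):

```shell
#!/bin/sh
# /usr/local/bin/raid-alert.sh
# Wire up with: mdadm --monitor --scan --program=/usr/local/bin/raid-alert.sh
# Arguments from mdadm: $1=event (e.g. Fail, DegradedArray), $2=array, $3=component (optional)

format_msg() {
  printf 'mdadm: %s on %s (component: %s)\n' "$1" "$2" "${3:-none}"
}

# Only post when invoked with arguments (i.e. by mdadm)
if [ -n "$1" ]; then
  MSG=$(format_msg "$1" "$2" "$3")
  curl -s -X POST -H 'Content-type: application/json' \
    --data "{\"text\": \"${MSG}\"}" \
    "https://hooks.slack.com/services/REPLACE/WITH/YOURS"
fi
```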

3. Encryption with LUKS

Threat Model and Goals

LUKS (Linux Unified Key Setup) provides full-disk or partition-level encryption. It protects data at rest – if a disk is physically stolen, its contents are unreadable without the key. It does not protect a running, mounted system.

Use cases: laptops and portable storage, encrypted containers for sensitive data, RAID members before LVM, LVM LVs containing sensitive datasets.

3.1 LUKS Overview

LUKS stores key slots and encryption metadata in a header at the beginning of the device. Each key slot holds a different passphrase or key file – LUKS1 provides 8 slots, LUKS2 up to 32. The actual data is encrypted with a master key that is itself encrypted by the slot keys.

# Install cryptsetup
apk add cryptsetup          # Alpine
sudo pacman -S cryptsetup   # Arch
sudo apt install cryptsetup # Ubuntu
sudo dnf install cryptsetup # Rocky

3.2 Encrypting a Partition

# Format with LUKS2 (default)
sudo cryptsetup luksFormat /dev/sdb1

# Specify cipher explicitly
sudo cryptsetup luksFormat --type luks2 --cipher aes-xts-plain64 --key-size 512 /dev/sdb1

# Open (unlock) the container -- mapped to /dev/mapper/cryptdata
sudo cryptsetup open /dev/sdb1 cryptdata

# Format the unlocked device
sudo mkfs.ext4 /dev/mapper/cryptdata

# Mount
sudo mount /dev/mapper/cryptdata /mnt/data

# Close (lock)
sudo umount /mnt/data
sudo cryptsetup close cryptdata

3.3 Encrypting a RAID Array

sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

sudo cryptsetup luksFormat /dev/md0
sudo cryptsetup open /dev/md0 md0_crypt

# Create LVM on top
sudo pvcreate /dev/mapper/md0_crypt
sudo vgcreate vg0 /dev/mapper/md0_crypt
sudo lvcreate -L 50G -n lv-data vg0
sudo mkfs.ext4 /dev/vg0/lv-data

3.4 Key Management

# Show key slots
sudo cryptsetup luksDump /dev/sdb1

# Add a new passphrase (to a second key slot)
sudo cryptsetup luksAddKey /dev/sdb1

# Add a key file
sudo dd if=/dev/urandom of=/etc/luks-keyfile bs=512 count=4
sudo chmod 400 /etc/luks-keyfile
sudo cryptsetup luksAddKey /dev/sdb1 /etc/luks-keyfile

# Remove a passphrase (specify which slot)
sudo cryptsetup luksKillSlot /dev/sdb1 1

# Test a passphrase without opening
sudo cryptsetup luksOpen --test-passphrase /dev/sdb1

3.5 Unlocking at Boot

Add to /etc/crypttab:

# <name>     <device>              <keyfile>            <options>
# Option 1: prompt for the passphrase at boot
cryptdata    UUID=xxxx-xxxx        none                 luks
# Option 2: unlock automatically with the key file created earlier
cryptdata    UUID=xxxx-xxxx        /etc/luks-keyfile    luks,key-slot=1

Rebuild initramfs after editing crypttab:

sudo update-initramfs -u          # Ubuntu
sudo mkinitcpio -P                # Arch
sudo dracut --force               # Rocky

3.6 systemd-cryptsetup and TPM Unlocking

On modern systemd systems, systemd-cryptsetup handles LUKS at boot automatically when entries exist in /etc/crypttab. For servers where you want automatic unlocking without typing a passphrase at boot (but still with encryption at rest for theft protection), you can enroll the LUKS key into the system’s TPM chip.

# Requires systemd 248+ and a TPM 2.0 chip
sudo systemd-cryptenroll --tpm2-device=auto /dev/sdb1

# Update crypttab to use TPM
# cryptdata  UUID=xxxx  none  luks,tpm2-device=auto

The TPM releases the key only when the system boots with the same firmware state as when the key was enrolled, providing protection against physical theft while allowing passwordless boot on the legitimate machine.

4. Network Block Storage (iSCSI)

iSCSI (Internet Small Computer Systems Interface) presents block devices over a network. A remote disk appears locally as if it were a physical drive.

  • Initiator – the client that connects to remote storage.
  • Target – the server that exports block storage.
  • IQN (iSCSI Qualified Name) – unique identifier. Format: iqn.YYYY-MM.com.domain:identifier
  • LUN (Logical Unit Number) – a specific block device exported by a target.
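A quick sanity check of the IQN shape can catch typos before login attempts fail. A sketch (is_iqn is a hypothetical helper; the pattern covers the common iqn.YYYY-MM.reversed.domain:identifier form, not every variant the standard allows):

```shell
is_iqn() {
  printf '%s' "$1" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+:[A-Za-z0-9._-]+$'
}

is_iqn "iqn.2024-01.com.example:storage1" && echo "looks valid"
```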

4.1 iSCSI Initiator Setup

# Install
apk add open-iscsi                # Alpine
sudo pacman -S open-iscsi         # Arch
sudo apt install open-iscsi       # Ubuntu
sudo dnf install iscsi-initiator-utils  # Rocky

# Enable and start
sudo systemctl enable --now iscsid
sudo rc-update add iscsid default    # OpenRC

# Discover targets
sudo iscsiadm --mode discovery --type sendtargets --portal 192.168.1.100

# Login to a target
sudo iscsiadm --mode node --targetname iqn.2024-01.com.example:storage1 --portal 192.168.1.100 --login

# Logout
sudo iscsiadm --mode node --targetname iqn.2024-01.com.example:storage1 --portal 192.168.1.100 --logout

# List active sessions
sudo iscsiadm --mode session

# Make login persistent
sudo iscsiadm --mode node --targetname iqn.2024-01.com.example:storage1 --portal 192.168.1.100 --op update --name node.startup --value automatic

4.2 iSCSI Target Setup with targetcli

sudo apt install targetcli-fb    # Ubuntu
sudo dnf install targetcli       # Rocky

sudo targetcli

Inside targetcli:

/> backstores/block create name=disk1 dev=/dev/sdb
/> iscsi/ create iqn.2024-01.com.example:target1
/> iscsi/iqn.2024-01.com.example:target1/tpg1/luns create /backstores/block/disk1
/> iscsi/iqn.2024-01.com.example:target1/tpg1/acls create iqn.2024-01.com.example:initiator1
/> saveconfig
/> exit

sudo systemctl enable --now target

5. Modern Storage Stacks

5.1 ZFS

ZFS is a fully integrated storage stack: volume management, RAID, and filesystem in one layer. It does not separate these concerns the way mdadm + LVM + ext4 does. ZFS’s model:

Disks → vdevs → zpool → datasets
  • vdev – virtual device: one or more disks in a RAID configuration. The redundancy unit. A mirror vdev is RAID 1. A raidz1 vdev is roughly RAID 5.
  • zpool – collection of vdevs. The storage pool.
  • dataset – filesystem, volume, or snapshot inside a zpool.

ZFS has built-in checksumming, compression, snapshots, send/receive replication, and scrubbing. It is opinionated and powerful. The trade-off is RAM: ZFS ARC (Adaptive Replacement Cache) is aggressive about using available RAM for caching. Give ZFS at least 8GB RAM; 1GB per TB of storage is a common production rule of thumb.

Creating ZPools:

sudo zpool create tank /dev/sdb                              # single disk (no redundancy)
sudo zpool create tank mirror /dev/sdb /dev/sdc             # RAID-1 equivalent
sudo zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd   # RAID-5 equivalent
sudo zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde  # RAID-6 equivalent

# RAID-10 equivalent (mirror pairs, striped)
sudo zpool create tank \
  mirror /dev/sdb /dev/sdc \
  mirror /dev/sdd /dev/sde

# With custom mount point
sudo zpool create -m /mnt/tank tank /dev/sdb /dev/sdc

# Add log (ZIL) device for sync write performance
sudo zpool add tank log mirror /dev/sde /dev/sdf

# Add cache (L2ARC) device for read performance
sudo zpool add tank cache /dev/sdg

# Add hot spare
sudo zpool add tank spare /dev/sdh

Pool Management:

sudo zpool list
sudo zpool list -v                 # with vdev detail
sudo zpool status
sudo zpool status -x               # only pools with problems

sudo zpool export tank             # export (safe disconnect)
sudo zpool import tank             # import (reconnect)
sudo zpool import -d /dev/disk/by-id tank   # import using stable device IDs
sudo zpool destroy tank            # permanently destroy pool

ZFS Datasets:

sudo zfs create tank/data
sudo zfs create tank/home

# ZFS volume (raw block device -- for VMs, databases, iSCSI)
sudo zfs create -V 20G tank/vol0
# Accessible as /dev/zvol/tank/vol0

sudo zfs list
sudo zfs list -t all               # include snapshots and volumes
sudo zfs list -r tank

sudo zfs mount tank/data
sudo zfs umount tank/data
sudo zfs mount -a                  # mount all datasets

sudo zfs destroy tank/data
sudo zfs destroy -r tank/home      # recursive

ZFS Properties:

sudo zfs get all tank
sudo zfs get all tank/data

# Compression
sudo zfs set compression=lz4 tank
sudo zfs set compression=zstd tank/data

# Disable access time updates
sudo zfs set atime=off tank

# Record size (tune by workload)
sudo zfs set recordsize=1M tank/data        # large sequential files
sudo zfs set recordsize=8K tank/db          # databases

# Quotas
sudo zfs set quota=100G tank/home/user      # hard limit
sudo zfs set refquota=50G tank/home/user    # quota excluding snapshots
sudo zfs set reservation=20G tank/home/user # guaranteed minimum space

# Deduplication (RAM-intensive: ~5GB RAM per 1TB deduplicated)
sudo zfs set dedup=on tank

ZFS Snapshots:

sudo zfs snapshot tank/data@snap1
sudo zfs snapshot -r tank@2026-04-22                     # recursive

sudo zfs list -t snapshot
sudo zfs rollback tank/data@snap1
sudo zfs rollback -Rf tank/data@snap1                    # force, delete newer

sudo zfs clone tank/data@snap1 tank/data-clone           # writable copy

sudo zfs destroy tank/data@snap1

# Send and receive (backup / replication)
# Note: zfs send transfers data unencrypted unless you use encrypted datasets.
# Always wrap in SSH for network transfers.
sudo zfs send tank/data@snap1 | sudo zfs receive backup/data
sudo zfs send -i tank/data@snap1 tank/data@snap2 | sudo zfs receive backup/data

# Remote replication over SSH (encrypted in transit)
sudo zfs send tank/data@snap1 | ssh remotehost sudo zfs receive backup/data

# Compressed send (-c) -- note: receiving pool must support compatible compression
sudo zfs send -c tank/data@snap1 | ssh remotehost sudo zfs receive backup/data

Automated snapshot script with retention:

#!/bin/bash
DATASET="tank/data"
DATE=$(date +%F-%H%M)
sudo zfs snapshot "${DATASET}@${DATE}"
# Retain the most recent 30 snapshots, destroy older ones.
# -H suppresses the header line (which would otherwise be fed to zfs destroy);
# -d 1 limits the listing to this dataset's own snapshots, not descendants'.
sudo zfs list -H -t snapshot -o name -s creation -d 1 "${DATASET}" \
  | head -n -30 \
  | xargs -r -I{} sudo zfs destroy {}

ZFS Scrub:

sudo zpool scrub tank
sudo zpool scrub -s tank          # stop scrub
sudo zpool status tank            # view scrub progress

Run scrubs on a cron schedule: monthly for read-heavy pools, weekly for critical data.
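A sketch of a monthly schedule via a system crontab drop-in (the path, timing, and pool name are assumptions; adjust the cadence per the guidance above):

```shell
# /etc/cron.d/zfs-scrub -- scrub "tank" at 02:00 on the 1st of every month
0 2 1 * * root /usr/sbin/zpool scrub tank
```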

ZFS Disk Replacement:

sudo zpool replace tank /dev/sdb /dev/sde    # replace sdb with sde
sudo zpool status tank                        # watch resilver progress
sudo zpool online tank /dev/sdb              # online a device
sudo zpool offline tank /dev/sdb             # offline for hot-swap

5.2 Btrfs

Btrfs is a copy-on-write filesystem with built-in RAID, snapshots, and compression.

Btrfs RAID 5 and RAID 6 stability warning: The kernel’s Btrfs RAID 5 and RAID 6 implementation has had long-standing data-loss bugs. The Btrfs documentation itself advises against using RAID 5/6 in production as of current kernel versions. For production use, restrict Btrfs RAID to RAID 1 and RAID 10, which are stable. If you need RAID 5/6 semantics, either run a single-device Btrfs filesystem on top of an mdadm RAID 5/6 array, or use ZFS raidz1/raidz2 instead.

Btrfs RAID (stable levels only):

sudo mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc         # RAID-1 (stable)
sudo mkfs.btrfs -m raid10 -d raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde  # RAID-10 (stable)

# Add device to mounted Btrfs
sudo btrfs device add /dev/sdd /mnt/btrfs
sudo btrfs balance start /mnt/btrfs          # redistribute data

# Remove a device
sudo btrfs device delete /dev/sdb /mnt/btrfs

Btrfs Subvolumes:

sudo btrfs subvolume create /mnt/btrfs/data
sudo btrfs subvolume create /mnt/btrfs/home
sudo btrfs subvolume list /mnt/btrfs

sudo mount -o subvol=data /dev/sdb /mnt/data
sudo mount -o subvolid=256 /dev/sdb /mnt/data

sudo btrfs subvolume delete /mnt/btrfs/data

Btrfs Snapshots:

# Read-only snapshot
sudo btrfs subvolume snapshot -r /mnt/btrfs/data /mnt/btrfs/data-snap-$(date +%F)

# Writable snapshot
sudo btrfs subvolume snapshot /mnt/btrfs/data /mnt/btrfs/data-snap

# Roll back
sudo btrfs subvolume delete /mnt/btrfs/data
sudo mv /mnt/btrfs/data-snap /mnt/btrfs/data

Btrfs Compression:

sudo mount -o compress=zstd /dev/sdb /mnt/btrfs

# fstab
UUID=xxxx  /mnt/btrfs  btrfs  defaults,compress=zstd,noatime  0  2

# Compress existing data
sudo btrfs filesystem defragment -r -czstd /mnt/btrfs

Btrfs Scrub and Inspection:

sudo btrfs scrub start /mnt/btrfs
sudo btrfs scrub status /mnt/btrfs

sudo btrfs filesystem show /mnt/btrfs
sudo btrfs filesystem usage /mnt/btrfs
sudo btrfs device stats /mnt/btrfs
sudo btrfs check /dev/sdb                # offline check (unmounted only)

5.3 Stratis (Rocky / RHEL)

Stratis is a high-level storage manager that wraps LVM thin provisioning and XFS. It simplifies pool creation and filesystem management for RHEL environments. It is appropriate when you want a managed storage experience similar to ZFS but within the Red Hat ecosystem. Stratis does not yet offer the same depth of features as ZFS (no scrubbing, no send/receive, limited snapshot automation), and it has less community tooling. It is a good fit for straightforward RHEL storage provisioning; it is not a replacement for ZFS or a full RAID solution.

sudo dnf install -y stratisd stratis-cli
sudo systemctl enable --now stratisd

# Create a pool
sudo stratis pool create pool1 /dev/sdb

# Add a filesystem
sudo stratis filesystem create pool1 fs1
sudo mount /dev/stratis/pool1/fs1 /mnt/data

# List
sudo stratis pool list
sudo stratis filesystem list
sudo stratis blockdev list

# Snapshot
sudo stratis filesystem snapshot pool1 fs1 fs1-snap

5.4 VDO (Virtual Data Optimiser) – Rocky / RHEL

VDO provides inline deduplication and compression at the block level. On RHEL 9 and Rocky 9, VDO is integrated directly into LVM as lvm-vdo. The standalone vdo package applies to RHEL 8 and older.

RHEL 9 / Rocky 9 (lvm-vdo):

# lvm-vdo is included in the lvm2 package on RHEL 9+
# Create a VDO LV directly through LVM
sudo lvcreate --type vdo -L 100G -V 1T -n lv-vdo vg0

sudo mkfs.xfs /dev/vg0/lv-vdo
sudo mount /dev/vg0/lv-vdo /mnt/vdo

# Check stats
sudo lvs -o +data_percent,vdo_compression_state vg0

RHEL 8 (standalone vdo):

sudo dnf install -y vdo kmod-kvdo
sudo systemctl enable --now vdo

sudo vdo create --name=vdo1 --device=/dev/sdb --vdoLogicalSize=1T
sudo mkfs.xfs /dev/mapper/vdo1
sudo mount /dev/mapper/vdo1 /mnt/vdo
sudo vdostats --human-readable

6. Combining RAID and LVM

mdadm RAID provides redundancy at the device level. LVM on top provides flexibility. This combination is the standard production approach: RAID protects your data from disk failure, and LVM lets you resize and manage volumes without repartitioning.

# 1. Create RAID array
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# 2. Wait for initial sync
watch cat /proc/mdstat

# 3. Create LVM PV on the RAID device
sudo pvcreate /dev/md0

# 4. Create VG
sudo vgcreate vg0 /dev/md0

# 5. Create LVs
sudo lvcreate -L 50G -n lv-data vg0
sudo lvcreate -L 20G -n lv-logs vg0

# 6. Format and mount
sudo mkfs.ext4 /dev/vg0/lv-data
sudo mkfs.xfs /dev/vg0/lv-logs
sudo mount /dev/vg0/lv-data /mnt/data
sudo mount /dev/vg0/lv-logs /var/log

7. Storage Layering Patterns

Choosing the right stack is a design decision. The technology follows the requirement.

Pattern A – Full Stack (Maximum Flexibility + Redundancy)

Disk → Partition → RAID (mdadm) → LUKS → LVM → Filesystem

Use when: fault tolerance, flexible LV resizing, and encryption are all required. Enterprise servers, sensitive data stores.

Pattern B – RAID + LVM (No Encryption)

Disk → Partition → RAID (mdadm) → LVM → Filesystem

Use when: redundancy and flexibility without encryption overhead. Most production servers.

Pattern C – LVM Only (Single Disk)

Disk → Partition → LVM → Filesystem

Use when: single disk but you want snapshot capability and easy resizing. Development VMs, workstations.

Pattern D – ZFS or Btrfs Directly

Disk → ZFS Pool or Btrfs Filesystem

Use when: you want the integrated stack – RAID, snapshots, compression, checksums in one layer. Backup servers, NAS, container storage pools.

Pattern E – Thin and Ephemeral

Disk → LVM Thin Pool → Thin LVs

Use when: container environments, virtualisation hosts, environments where many LVs exist but only some are active at once.

Requirement                          Recommended Stack
Max redundancy + flexibility         RAID + LUKS + LVM
Simplicity + integrity checksums     ZFS
Built-in snapshots + compression     ZFS or Btrfs (RAID 1/10 only)
RHEL/Rocky enterprise                LVM + XFS (or Stratis)
Container host (Incus, Docker)       LVM thin, ZFS, or Btrfs
Portable encrypted storage           LUKS on a partition
Backup / NAS appliance               ZFS (raidz2 + send/receive)

Part 3 – Filesystems and Mounting

Mounting is the act of attaching a filesystem to a directory in the Linux tree. Until a filesystem is mounted, its contents are inaccessible – the block device itself is still visible to the kernel, but the files on it are not. The mount point is the junction where a device’s filesystem becomes accessible.

1. Filesystem Creation

Choosing a Filesystem

| Filesystem | Max File Size | Max Volume | Journalled | Best For |
|---|---|---|---|---|
| ext4 | 16 TB | 1 EB | Yes | General purpose, widest support |
| XFS | 8 EB | 8 EB | Yes | Large files, high throughput |
| Btrfs | 16 EB | 16 EB | CoW | Snapshots, compression, RAID |
| vFAT/FAT32 | 4 GB | 2 TB | No | Removable media, compatibility |
| exFAT | 16 EB | 128 PB | No | Large files on removable media |
| ZFS | 16 EB | 256 ZB | CoW | Integrated RAID + filesystem |

General rules: ext4 is the safest default – widely supported, mature, predictable. XFS is better for large files and high-concurrency workloads, and is the default on Rocky/RHEL. Btrfs is appropriate when you need snapshots or integrated RAID at the filesystem layer (with RAID 1 or 10 only). vFAT and exFAT are only for interoperability with Windows or macOS.

mkfs.ext4

sudo mkfs.ext4 /dev/sdb1
sudo mkfs.ext4 -L DATA /dev/sdb1                           # with label
sudo mkfs.ext4 -b 4096 /dev/sdb1                          # 4096-byte blocks (default)
sudo mkfs.ext4 -m 1 /dev/sdb1                             # reserve 1% for root (default 5%)
sudo mkfs.ext4 -E lazy_itable_init=0 /dev/sdb1            # full init (slower, safer)
sudo mkfs.ext4 -F /dev/sdb1                               # force (skips safety checks)
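
The flags above are easiest to learn without risking a real disk. A minimal sketch, assuming e2fsprogs (mkfs.ext4, tune2fs) is installed – with -F, mkfs.ext4 accepts a regular file as its target, so no root access or spare device is needed:

```shell
# Practice target: a sparse 64 MiB image file, not a real device.
workdir=$(mktemp -d)
truncate -s 64M "$workdir/disk.img"

# -q quiet, -F allow a non-block-device target, -L set a label
mkfs.ext4 -q -F -L DATA "$workdir/disk.img"

# Inspect the result exactly as you would a real filesystem
tune2fs -l "$workdir/disk.img" | grep -i 'volume name'

rm -rf "$workdir"
```

The same image can later be mounted with -o loop to test fstab options or fsck behaviour in a sandbox.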

mkfs.xfs

sudo mkfs.xfs /dev/sdb1
sudo mkfs.xfs -L DATA /dev/sdb1                           # with label
sudo mkfs.xfs -f /dev/sdb1                                # force (overwrite existing)
sudo mkfs.xfs -b size=4096 /dev/sdb1                      # block size
sudo mkfs.xfs -d agcount=4 /dev/sdb1                      # allocation groups (parallelism)

mkfs.btrfs

sudo mkfs.btrfs /dev/sdb1
sudo mkfs.btrfs -L DATA /dev/sdb1
sudo mkfs.btrfs /dev/sdb /dev/sdc                        # span two devices
sudo mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc     # RAID-1 (stable)

mkfs.vfat / mkfs.exfat

sudo mkfs.vfat /dev/sdc1
sudo mkfs.vfat -F 32 -n USBDRIVE /dev/sdc1
sudo mkfs.exfat /dev/sdc1
sudo mkfs.exfat -n USBDRIVE /dev/sdc1

Filesystem Labels

# ext4
sudo e2label /dev/sdb1 DATA
sudo tune2fs -L DATA /dev/sdb1

# XFS (unmounted)
sudo xfs_admin -L DATA /dev/sdb1

# Btrfs
sudo btrfs filesystem label /mnt/btrfs NEWLABEL

# View label
sudo blkid /dev/sdb1

2. Mounting and Unmounting

sudo mkdir -p /mnt/data
sudo mount /dev/sdb1 /mnt/data

# With explicit filesystem type
sudo mount -t ext4 /dev/sdb1 /mnt/data
sudo mount -t ntfs-3g /dev/sdb1 /mnt/windows
sudo mount -t vfat /dev/sdc1 /mnt/usb

# Mount options
sudo mount -o ro /dev/sdb1 /mnt/data              # read-only
sudo mount -o noatime /dev/sdb1 /mnt/data         # no access time updates (performance)
sudo mount -o noexec /dev/sdb1 /mnt/data          # prevent binary execution
sudo mount -o nosuid /dev/sdb1 /mnt/data          # ignore SUID bits
sudo mount -o nodev /dev/sdb1 /mnt/data           # ignore device files
sudo mount -o sync /dev/sdb1 /mnt/data            # synchronous writes (safe, slow)
sudo mount -o ro,noexec,nosuid /dev/sdb1 /mnt     # combine multiple options

# Remount without unmounting
sudo mount -o remount,rw /mnt/data
sudo mount -o remount,ro /mnt/data

Show Currently Mounted Filesystems

mount -l
cat /proc/mounts
findmnt                            # tree view of mount points
findmnt -t ext4,xfs
findmnt /mnt/data

Unmounting

sudo umount /mnt/data              # by mount point
sudo umount /dev/sdb1              # by device
sudo umount -l /mnt/data           # lazy: detach now, clean up when not busy
sudo umount -f /mnt/data           # force (use only if lazy fails and data is safe)

If unmount fails with “target is busy”:

lsof +f -- /mnt/data               # show open files on the mount point
fuser -vm /mnt/data                # show all processes using the mount point
cd ~                               # ensure you are not inside the mount point yourself
fuser -km /mnt/data                # kill processes using the mount point (careful)
sudo umount /mnt/data

3. Persistent Mount Configuration (fstab)

/etc/fstab defines which filesystems mount automatically at boot. Always use UUIDs, not device names – device names like /dev/sda1 can change between reboots; UUIDs are stable.

sudo blkid /dev/sdb1               # get UUID

fstab Entry Format

UUID=<uuid>   <mount-point>   <fstype>   <options>   <dump>   <pass>

| Field | Purpose |
|---|---|
| UUID | Device identifier (stable) |
| mount-point | Directory where the filesystem is attached |
| fstype | Filesystem type |
| options | Mount options (defaults, noatime, ro, etc.) |
| dump | Backup flag – almost always 0 |
| pass | fsck check order – 0=skip, 1=root filesystem, 2=other |
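
The six-field shape can be machine-checked before an entry is trusted. A minimal sketch using awk on hypothetical example lines – real verification should still finish with sudo mount -a:

```shell
# Flag fstab-style lines that do not have exactly six fields,
# or whose dump/pass fields are not numeric. Comments are skipped.
awk 'NF && $1 !~ /^#/ {
    if (NF != 6)
        print "line " NR ": expected 6 fields, got " NF
    else if ($5 !~ /^[0-9]+$/ || $6 !~ /^[0-9]+$/)
        print "line " NR ": dump/pass must be numeric"
    else
        print "line " NR ": ok"
}' <<'EOF'
UUID=a1b2c3d4-e5f6-7890-abcd-ef1234567890  /mnt/data  ext4  defaults,noatime  0  2
UUID=deadbeef-0000-0000-0000-000000000000  /mnt/bad   ext4  defaults
EOF
```

Run the same awk program against /etc/fstab itself; here the first sample line passes and the second is flagged for having only four fields.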

Common fstab Examples

# Standard ext4 data drive
UUID=a1b2c3d4-e5f6-7890-abcd-ef1234567890  /mnt/data     ext4      defaults,noatime         0  2

# XFS data drive
UUID=aaaabbbb-cccc-dddd-eeee-ffffffffffff  /mnt/data     xfs       defaults,noatime         0  2

# Btrfs with compression
UUID=deadbeef-1234-5678-abcd-000000000001  /mnt/btrfs    btrfs     defaults,compress=zstd   0  2

# Btrfs subvolume
UUID=deadbeef-1234-5678-abcd-000000000001  /home         btrfs     defaults,subvol=home     0  2

# NTFS Windows partition (read-write)
UUID=ABCD1234EF567890  /mnt/windows  ntfs-3g  defaults,uid=1000,gid=1000           0  0

# exFAT USB drive
UUID=5A21-B3C4          /mnt/usb      exfat    defaults,uid=1000,gid=1000,noexec   0  0

# Swap partition
UUID=xxxx-xxxx           none          swap     defaults                             0  0

# Swap file
/swapfile                none          swap     defaults                             0  0

# tmpfs (RAM filesystem)
tmpfs  /tmp  tmpfs  defaults,noatime,mode=1777,size=1G  0  0

# Bind mount
/home/user/data  /var/www/html  none  bind  0  0

# Optional drive (will not halt boot if absent)
UUID=xxxx  /mnt/external  ext4  defaults,nofail,x-systemd.device-timeout=5s  0  2

# NFS (defer until network is up)
192.168.1.100:/srv/nfs/share  /mnt/nfs  nfs  defaults,_netdev,noatime  0  0

The nofail option is critical for external or optional disks. Without it, a missing disk prevents the system from booting entirely.

Verify fstab Without Rebooting

sudo mount -a
sudo findmnt --verify              # stricter static check of fstab syntax (util-linux)

If these report no errors, your fstab entries are valid. Fix errors before rebooting – a broken fstab can prevent boot entirely.

Boot-Time Mount Ordering

  • 0 – skip fsck entirely (swap, network mounts, ZFS, Btrfs)
  • 1 – check first (only the root filesystem)
  • 2 – check after root (all other local filesystems)

Network mounts must use _netdev so they are deferred until networking is up.

4. systemd Mount Handling

For removable drives, automounting, and complex dependency ordering, systemd mount units are cleaner than fstab entries.

.mount Units

Create /etc/systemd/system/mnt-data.mount:

[Unit]
Description=Data Drive
After=local-fs.target

[Mount]
What=/dev/disk/by-uuid/a1b2c3d4-e5f6-7890-abcd-ef1234567890
Where=/mnt/data
Type=ext4
Options=defaults,noatime

[Install]
WantedBy=multi-user.target

The unit file name must match the mount point path with / replaced by -. /mnt/data becomes mnt-data.mount.
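
The canonical tool for this translation is systemd-escape; a rough pure-shell equivalent for simple paths (no spaces or special characters) looks like this:

```shell
# Canonical form: systemd-escape --path --suffix=mount /mnt/data
# Simplified sketch for plain paths only:
path=/mnt/data
unit="$(printf '%s' "${path#/}" | tr / -).mount"   # strip leading /, turn / into -
echo "$unit"                                       # prints: mnt-data.mount
```

Paths containing dashes or non-ASCII characters need the real systemd-escape, which applies C-style escaping that this sketch does not attempt.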

sudo systemctl daemon-reload
sudo systemctl enable --now mnt-data.mount
systemctl status mnt-data.mount

.automount Units

Automount mounts on first access and unmounts after idle time. When using automount, the corresponding .mount unit must exist but must not be enabled directly – enabling both causes them to conflict.

Create /etc/systemd/system/mnt-data.automount:

[Unit]
Description=Automount Data Drive

[Automount]
Where=/mnt/data
TimeoutIdleSec=60

[Install]
WantedBy=multi-user.target

Enable only the automount unit, not the mount unit:

sudo systemctl enable --now mnt-data.automount
# Do not run: systemctl enable mnt-data.mount

Dependencies and Boot Integration

[Unit]
Description=Database Volume
After=local-fs.target
Before=postgresql.service

[Install]
RequiredBy=postgresql.service

Note that RequiredBy= is an [Install]-section directive, not a [Unit] one – it takes effect when the unit is enabled. Alternatively, add RequiresMountsFor=/path/to/mount to the service that needs the volume.

For network-dependent mounts:

[Unit]
After=network-online.target
Wants=network-online.target

5. Network Filesystems

NFS – Network Filesystem

# Install NFS utilities
apk add nfs-utils
sudo pacman -S nfs-utils
sudo apt install nfs-common
sudo dnf install nfs-utils

# Mount a remote NFS share
sudo mount -t nfs 192.168.1.100:/srv/nfs/share /mnt/nfs
sudo mount -t nfs -o ro,noatime 192.168.1.100:/srv/nfs/share /mnt/nfs

NFS server setup:

sudo apt install nfs-kernel-server     # Ubuntu
sudo dnf install nfs-utils             # Rocky

# Configure /etc/exports
# Grant read-write access to the 192.168.1.0/24 subnet.
# root_squash (the default) maps remote root to nobody, preventing root on the
# client from acting as root on the NFS server. This is the safe default.
# Use no_root_squash only if you have a specific, understood reason.
echo "/srv/nfs/share  192.168.1.0/24(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports

sudo exportfs -ra
sudo systemctl enable --now nfs-server

CIFS / SMB – Windows Network Shares

sudo apt install cifs-utils
sudo dnf install cifs-utils
sudo pacman -S cifs-utils

# Mount
sudo mount -t cifs //192.168.1.100/Share /mnt/smb \
  -o username=user,password=pass,uid=1000,gid=1000

# In fstab -- use a credentials file, never put passwords in fstab directly
//192.168.1.100/Share  /mnt/smb  cifs  credentials=/etc/samba/credentials,uid=1000  0  0

Credentials file /etc/samba/credentials:

username=user
password=pass
domain=WORKGROUP
sudo chmod 600 /etc/samba/credentials

The credentials file must not be readable by the user whose UID is specified in the mount options. On a multi-user system, verify:

ls -la /etc/samba/credentials    # should be -rw------- root root
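
This check is easy to automate. A sketch using a throwaway file in place of /etc/samba/credentials:

```shell
# Refuse to proceed unless the credentials file is exactly mode 600.
cred=$(mktemp)                       # stand-in for /etc/samba/credentials
chmod 600 "$cred"

mode=$(stat -c %a "$cred")
if [ "$mode" = "600" ]; then
    echo "permissions ok"
else
    echo "refusing: $cred is mode $mode, want 600" >&2
fi
rm -f "$cred"
```

A guard like this belongs at the top of any mount script that reads a credentials file.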

SSHFS – Mount Remote Directory over SSH

apk add sshfs
sudo pacman -S sshfs
sudo apt install sshfs
sudo dnf install fuse-sshfs

# Mount
sshfs user@server:/remote/path /mnt/remote
sshfs -o IdentityFile=~/.ssh/id_ed25519 user@server:/path /mnt/remote

# Unmount
fusermount -u /mnt/remote
sudo umount /mnt/remote

# fstab
user@server:/path  /mnt/remote  fuse.sshfs  defaults,_netdev,IdentityFile=/home/user/.ssh/id_ed25519  0  0

Authentication and Permissions Issues

| Issue | Cause | Solution |
|---|---|---|
| Permission denied on NFS | UID mismatch between client/server | Match UIDs on client and server; no_root_squash only as a last resort |
| CIFS authentication failure | Wrong credentials or SMB version | Specify vers=3.0 in mount options |
| SSHFS: Connection refused | SSH not running or wrong port | ssh -p PORT user@server to test first |
| Mount succeeds but files locked | File locking protocol issues | Use the nolock option for NFS |

6. Special and Virtual Filesystems

tmpfs – RAM Filesystem

Data lives in RAM (or swap if RAM is pressured). Lost on reboot.

sudo mount -t tmpfs -o size=512M tmpfs /mnt/ram

# fstab
tmpfs  /tmp  tmpfs  defaults,noatime,mode=1777,size=1G  0  0

Use cases: /tmp, /dev/shm, ephemeral build directories, container overlay workdirs.

Bind Mounts

Expose a directory at a second location in the tree without copying data.

sudo mount --bind /home/user/data /var/www/html

# Read-only bind mount
sudo mount --bind /path/src /path/dst
sudo mount -o remount,ro,bind /path/dst

# fstab
/home/user/data  /var/www/html  none  bind  0  0

Use cases: sharing directories into containers, remapping paths for services.

ISO / Disk Image Mounting

sudo mount -o loop /path/to/image.iso /mnt/iso
sudo mount -t iso9660 -o loop /path/to/image.iso /mnt/iso
sudo mount -o loop /path/to/image.img /mnt/img

OverlayFS

Layers multiple directories into a unified view. Used by Docker and Incus for container filesystems.

sudo mount -t overlay overlay \
  -o lowerdir=/lower,upperdir=/upper,workdir=/work \
  /mnt/merged
  • lowerdir – read-only base layer (image)
  • upperdir – read-write layer (container changes)
  • workdir – internal working directory (must be on same filesystem as upperdir)

procfs and sysfs

# /proc -- kernel's process and system information
sudo mount -t proc proc /proc

# /sys -- kernel's device and driver interface
sudo mount -t sysfs sysfs /sys

# Useful in chroot environments
sudo mount --bind /proc /mnt/chroot/proc
sudo mount --bind /sys /mnt/chroot/sys
sudo mount --bind /dev /mnt/chroot/dev

7. Filesystem Integrity and Repair

Disk Imaging Before Repair

If a filesystem fails to mount and the data is important:

  1. Do not write to the device. Every write risks overwriting recoverable data.
  2. Image the disk first:
# dd -- simple and universal
sudo dd if=/dev/sdb of=/backup/sdb.img bs=4M status=progress

# ddrescue -- strongly preferred for failing or damaged drives
# Install: apt install gddrescue / dnf install ddrescue / pacman -S ddrescue
# ddrescue handles read errors gracefully: it skips bad sectors on the first
# pass, retries them later, and maintains a log file so the process can be
# resumed if interrupted. dd stops on the first read error.
sudo ddrescue /dev/sdb /backup/sdb.img /backup/sdb.log

# If ddrescue was interrupted, resume from where it left off
sudo ddrescue /dev/sdb /backup/sdb.img /backup/sdb.log

# Once the image is complete, run fsck on the image, not the original
sudo e2fsck /backup/sdb.img
sudo mount -o loop /backup/sdb.img /mnt/recovery

ext4 – fsck / e2fsck

Always run fsck on unmounted filesystems. Running it on a mounted, writable filesystem can cause severe corruption – e2fsck warns before proceeding on a mounted device; do not override the warning.

sudo e2fsck -f /dev/sdb1                 # force check
sudo e2fsck -p /dev/sdb1                 # automatic repair (non-interactive)
sudo e2fsck -y /dev/sdb1                 # answer yes to all prompts
sudo e2fsck -n /dev/sdb1                 # dry run (no changes)

# If superblock is corrupt
sudo dumpe2fs /dev/sdb1 | grep "Backup superblock"
sudo e2fsck -b 32768 /dev/sdb1           # use backup superblock at block 32768

XFS – xfs_repair

sudo xfs_repair -n /dev/sdb1             # dry run
sudo xfs_repair /dev/sdb1               # repair
sudo xfs_repair -L /dev/sdb1            # force log zeroing (last resort)

# If XFS is dirty (journal needs replay) -- mount and unmount to replay first
sudo mount /dev/sdb1 /mnt/tmp
sudo umount /mnt/tmp
# Then run xfs_repair

# Grow XFS (must be mounted)
sudo xfs_growfs /mnt/data

XFS cannot be shrunk. Backup and restore if you need a smaller XFS volume.

Btrfs

sudo btrfs check /dev/sdb                # check (read-only, unmounted)
sudo btrfs check --repair /dev/sdb       # repair (dangerous -- image the disk first)
sudo btrfs rescue super-recover /dev/sdb # recover from corrupt superblock
sudo btrfs restore /dev/sdb /mnt/recovery  # attempt data rescue to another location

8. Mount Troubleshooting

| Error | Likely Cause | Solution |
|---|---|---|
| target is busy | Process has files open on mount point | lsof +f -- /mnt to find and close processes; cd ~ first |
| permission denied | Not running as root | Use sudo |
| unknown filesystem type | Kernel driver not installed | Install: ntfs-3g, exfatprogs, etc. |
| can't read superblock | Corrupt filesystem or wrong type | sudo fsck /dev/sdb1; verify type with blkid |
| mount point does not exist | Target directory missing | sudo mkdir -p /mnt/target |
| write-protected | Mounted read-only or drive locked | Use -o rw; check physical write-protect switch |
| fstab breaks boot | Bad entry in /etc/fstab | Boot to recovery; edit the file; add nofail to optional entries |
| NFS: stale file handle | Server-side path changed | Unmount, remount; check server exports |
| SSHFS: disconnect on idle | SSH keepalive not configured | Add ServerAliveInterval=15 to SSH config |

# Find what is blocking an unmount
lsof +f -- /mnt/data
fuser -vm /mnt/data
fuser -km /mnt/data                # kill blocking processes (careful)

# Check boot-time mount failures
journalctl -b | grep -i "mount\|fstab\|failed"
sudo dmesg | grep -i "mount\|failed\|error"
systemctl status <unit>.mount

Part 4 – Files, Permissions, and Metadata

Every command here operates on inodes. An inode is the kernel’s internal record for a file: it stores permissions, ownership, size, timestamps, and pointers to data blocks. The filename is just a directory entry mapping a human-readable string to an inode number.

1. File and Directory Basics

pwd                        # absolute path of current directory
pwd -P                     # resolve symlinks (physical path)

cd /etc/nginx              # absolute path
cd nginx                   # relative path
cd ..                      # parent directory
cd -                       # previous directory
cd ~                       # home directory

ls – List Directory Contents

ls                         # basic listing
ls -l                      # long format: permissions, owner, size, date
ls -la                     # include hidden files
ls -lh                     # human-readable sizes
ls -lt                     # sort by modification time, newest first
ls -ltr                    # sort by modification time, oldest first
ls -lS                     # sort by size, largest first
ls -i                      # show inode numbers
ls -F                      # append type indicators

Long format column order: permissions linkcount owner group size date name

-rwxr-xr-x  1  user  users  4096  Apr 09 12:00  script.sh
│└┬┘└┬┘└┬┘  │    │      │     │        │
│ u  g  o   │    │      │     │        └ modification time
│           │    │      │     └ size in bytes
│           │    │      └ group
│           │    └ owner
│           └ hard link count
└ file type (- file, d dir, l link, c char dev, b block dev, p pipe, s socket)

tree – Visual Directory Tree

tree                           # current directory
tree /etc/nginx
tree -a                        # include hidden files
tree -d                        # directories only
tree -L 2                      # max depth of 2 levels
tree -h                        # human-readable sizes
tree -p                        # show permissions
tree -I "*.log|*.tmp"          # exclude patterns

Creating and Copying Files

touch file.txt                                 # create empty file or update timestamp
touch -t 202601011200 file.txt                 # set specific timestamp (YYYYMMDDHHMM)

mkdir -p /a/b/c/d                              # create full path, no error if exists
mkdir -m 755 mydir                             # set permissions at creation
mkdir -p ~/projects/{web,api,docs,scripts}     # brace expansion

echo "hello" > file.txt                        # create/overwrite
echo "more" >> file.txt                        # append
cat > file.txt << 'EOF'
Line one
Line two
EOF

cp source.txt dest.txt
cp -r sourcedir/ destdir/                     # recursive
cp -a sourcedir/ destdir/                     # archive: preserve all metadata (preferred)
cp -u source.txt dest.txt                     # copy only if source is newer
cp -i source.txt dest.txt                     # interactive (ask before overwrite)

-a is equivalent to -dR --preserve=all and preserves symlinks, permissions, ownership, and timestamps.

mv and rm

mv oldname.txt newname.txt
mv file.txt /path/to/directory/
mv -i file.txt /destination/                  # interactive
mv -n file.txt /destination/                  # no-clobber

# Rename multiple files
rename 's/\.txt$/.md/' *.txt                  # Perl rename
rename -n 's/old/new/' *                      # dry run

rm file.txt
rm -r directory/                              # recursive
rm -rf directory/                             # force recursive -- no undo
rm -i file.txt                                # interactive (ask before each)

rm -rf is permanent and irreversible. Always echo a path before deleting it. Quote variables to prevent glob expansion.

rm -rf "$tmpdir"                              # safe (quoted variable)
find /path -name "*.log" -ls                  # preview what matches
find /path -name "*.log" -delete              # then delete

2. Inodes and Links

What an Inode Is

An inode stores everything about a file except its name: permissions and ownership, file size, timestamps (access, modification, change), and pointers to data blocks. The filename is stored in a directory entry – a mapping of name to inode number.

ls -i file.txt                         # show inode number
stat file.txt                          # full inode details
df -i                                  # inode usage across filesystems

Inode exhaustion is a real failure mode: a filesystem can run out of inodes before running out of disk space. This happens when many small files are created (mail spools, cache directories). If df -i shows IUse% near 100%, you are out of inodes even if df -h shows space available.

A hard link is a second directory entry pointing to the same inode. The data is not duplicated.

ln source.txt hardlink.txt
ls -li source.txt hardlink.txt         # same inode number, linkcount = 2

Constraints: cannot cross filesystem boundaries, cannot link to directories.

A symlink is a file containing a path to another file. It can cross filesystems and point to directories.

ln -s /path/to/target symlink
ln -sf /new/target symlink             # force (overwrite existing)

ls -la symlink                         # shows -> target
readlink symlink                       # print target path
readlink -f symlink                    # print absolute, fully resolved path

# Find broken symlinks
find /path -xtype l

# Remove a symlink
rm symlink
# Do NOT use rm -r on a symlink pointing to a directory -- it recurses into the target
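
The inode mechanics above can be observed directly in a scratch directory:

```shell
dir=$(mktemp -d) && cd "$dir"
echo data > source.txt
ln source.txt hardlink.txt           # second name for the same inode
ln -s source.txt symlink.txt         # separate inode holding a path

stat -c '%i %h %n' source.txt hardlink.txt   # identical inode, link count 2
readlink -f symlink.txt                      # fully resolved target path

rm source.txt
cat hardlink.txt                     # data survives: the inode still has one link
find . -xtype l                      # ./symlink.txt is now broken

cd / && rm -rf "$dir"
```

Deleting source.txt only removed one directory entry; the data disappears when the link count reaches zero, while the symlink is left pointing at a name that no longer resolves.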

3. Ownership and Permissions

Permission Notation

-rwxr-xr--  1  user  group  4096  Apr 09  file.sh
│└┬┘└┬┘└┬┘
│ u  g  o
└ file type

r = 4 (read)
w = 2 (write)
x = 1 (execute for files; enter and list for directories)

chmod – Change Permissions

# Symbolic mode
chmod u+x file.sh
chmod g-w file.txt
chmod a+r file.txt                     # add read for all (a = ugo)
chmod u=rwx,g=rx,o= file

# Octal mode
chmod 644 file.txt                     # -rw-r--r--
chmod 755 script.sh                    # -rwxr-xr-x
chmod 600 private.key                  # -rw-------
chmod 700 ~/.ssh

Correct recursive approach – files and directories need different permissions:

find /var/www/html -type f -exec chmod 644 {} +
find /var/www/html -type d -exec chmod 755 {} +

chmod -R 644 /var/www/html is wrong – it sets directories to 644, removing execute, making them inaccessible.

chown and chgrp

sudo chown user file.txt
sudo chown user:group file.txt
sudo chown :www-data file.txt
sudo chown -R www-data:www-data /var/www/html

sudo chgrp docker /var/lib/myapp
sudo chgrp -R docker /var/lib/myapp

umask

umask defines which permission bits are masked off (removed) from newly created files and directories. It is a bitwise mask, not an arithmetic subtraction.

umask                          # show current umask (e.g. 022)
umask 027                      # new files = 640, new dirs = 750
umask 077                      # maximum restrictive: files = 600, dirs = 700

# Default creation permissions before umask:
# Files: 666 (no execute by default)
# Directories: 777

# With umask 022: files = 644, dirs = 755
# With umask 027: files = 640, dirs = 750

echo "umask 027" >> ~/.bashrc  # persist
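
The effect is easy to verify in a subshell, which leaves the caller's umask untouched:

```shell
dir=$(mktemp -d)
(
    umask 027
    cd "$dir"
    touch newfile                 # 666 & ~027 = 640
    mkdir newdir                  # 777 & ~027 = 750
    stat -c '%a %n' newfile newdir
)
rm -rf "$dir"
```

Because the umask change happens inside ( ), the shell you ran this from keeps its original umask afterwards.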

4. Advanced Permission Controls

SUID, SGID, Sticky Bit

chmod u+s file                 # setuid: file runs as its owner, not the caller
chmod g+s directory/           # setgid: new files in directory inherit the group
chmod +t /tmp                  # sticky bit: only the file's owner can delete it
chmod 4755 file                # setuid + 755
chmod 2755 directory/          # setgid + 755
chmod 1777 /tmp                # sticky + rwxrwxrwx (standard /tmp permissions)

Find setuid/setgid files (security audit):

find / -perm -4000 -type f 2>/dev/null    # setuid files
find / -perm -2000 -type f 2>/dev/null    # setgid files
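
The special bits show up directly in the mode string, which makes them easy to verify. A sketch using directories you own, so no root is required:

```shell
dir=$(mktemp -d) && cd "$dir"

mkdir shared && chmod 2775 shared   # setgid directory
mkdir drop   && chmod 1777 drop     # sticky directory, like /tmp

stat -c '%A %n' shared drop
# setgid appears as 's' in the group execute slot (drwxrwsr-x),
# sticky as 't' in the final execute slot (drwxrwxrwt)

cd / && rm -rf "$dir"
```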

ACLs – Access Control Lists

ACLs extend standard permissions with per-user and per-group entries.

apk add acl
sudo apt install acl
sudo dnf install acl

getfacl file.txt                                # view ACL
setfacl -m u:alice:rw file.txt                  # give user alice read-write
setfacl -m g:devs:rx /srv/app                  # give group devs read-execute
setfacl -m d:u:alice:rw /srv/shared            # default ACL: new files inherit
setfacl -x u:alice file.txt                    # remove a specific entry
setfacl -b file.txt                            # remove all ACLs
setfacl -R -m u:alice:rw /srv/shared          # recursive
getfacl source.txt | setfacl --set-file=- dest.txt  # copy ACLs

Immutable and Append-Only Flags

sudo chattr +i file.txt             # immutable: cannot modify, delete, rename, or link
sudo chattr -i file.txt             # remove immutable flag
sudo chattr +a logfile.txt          # append-only: can only append, not overwrite
lsattr file.txt
lsattr -R directory/

5. Search and Space Usage

# Find by name
find /path -name "filename.txt"
find /path -name "*.conf"
find /path -iname "*.conf"                    # case-insensitive
find / -name "nginx.conf" 2>/dev/null         # suppress permission errors

# Find by type
find /path -type f                            # files only
find /path -type d                            # directories only
find /path -type l                            # symlinks only

# Find by size
find /path -size +100M
find /path -size -1k
find /path -empty

# Find by time
find /path -mtime -7                          # modified in last 7 days
find /path -mtime +30                         # modified more than 30 days ago
find /path -mmin -60                          # modified in last 60 minutes
find /path -newer reference.txt

# Find by ownership and permissions
find /path -user user
find /path -group www-data
find /path -perm -o+w                         # world-writable (security risk)
find /path -nouser                            # orphaned files

# Find with actions
find /path -name "*.log" -delete
find /path -name "*.py" -exec chmod 644 {} \;
find /path -name "*.tmp" -exec rm {} +        # batch (faster)
find /path -name "*.log" -print0 | xargs -0 rm

# Combining conditions
find /path -name "*.log" -mtime +30 -delete
find /path \( -name "*.log" -o -name "*.tmp" \)    # OR condition
find /path -name "*.conf" ! -name "default.conf"   # NOT condition
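
These predicates compose predictably. A sketch exercised against a throwaway tree (the backdating relies on GNU touch):

```shell
dir=$(mktemp -d) && cd "$dir"
mkdir -p logs
echo x > logs/app.log
echo y > logs/app.tmp
echo z > keep.conf
touch -d '40 days ago' logs/app.log        # backdate one file

find . -name '*.log' -mtime +30            # only the backdated log matches
find . \( -name '*.log' -o -name '*.tmp' \) | sort
find . -name '*.conf' ! -name 'default.conf'

cd / && rm -rf "$dir"
```
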
locate – Indexed Filename Search

locate searches a prebuilt index rather than walking the filesystem, so it is far faster than find – but results are only as fresh as the last updatedb run.

sudo updatedb                          # update the index
locate nginx.conf
locate -i nginx.conf                   # case-insensitive
locate -n 10 "*.conf"                  # limit results
locate -b "nginx.conf"                 # basename match only

du – Disk Usage

du -h file.txt
du -sh directory/                      # total size of directory
du -sh *                               # size of everything in current dir
du -sh * | sort -rh | head -20         # top 20 largest items
du --max-depth=1 -h /var               # one level deep
du -a /var | sort -rh | head -20

# ncdu -- interactive browser
ncdu /
ncdu /var

df – Disk Free

df -h                                  # human-readable sizes
df -hT                                 # include filesystem type
df -i                                  # inode usage
df -h /mnt/data

6. Quotas

Quotas limit disk usage by user or group at the filesystem level.

ext4 Quota Setup

# Add usrquota and/or grpquota to mount options in /etc/fstab
UUID=xxxx  /mnt/data  ext4  defaults,usrquota,grpquota  0  2

sudo mount -o remount /mnt/data

sudo quotacheck -cum /mnt/data         # -c create quota files, -u user quotas, -m skip read-only remount
sudo quotacheck -cgm /mnt/data         # -g group quotas
sudo quotaon /mnt/data

XFS Quota Setup

XFS handles quota accounting internally. Enable at mount time:

UUID=xxxx  /mnt/data  xfs  defaults,usrquota,grpquota,prjquota  0  2

No separate quotacheck needed.

Project quotas on XFS allow directory-level quotas (rather than per-user). They require /etc/projects and /etc/projid:

# /etc/projects -- maps project ID to directory
echo "1:/var/www/myapp" | sudo tee -a /etc/projects

# /etc/projid -- maps project ID to a name
echo "myapp:1" | sudo tee -a /etc/projid

# Initialise and set the quota
sudo xfs_quota -x -c 'project -s myapp' /mnt/data
sudo xfs_quota -x -c 'limit -p bsoft=5g bhard=6g myapp' /mnt/data

Setting Quotas

sudo edquota -u user                   # opens quota editor for user
sudo edquota -g devs                   # group quota

# Set quota non-interactively
# Note: block limits are in kilobytes, not megabytes or gigabytes.
# 1G = 1048576 KB. Do not use human-readable suffixes here.
sudo setquota -u user 1048576 1572864 0 0 /mnt/data
# format: <user> <soft-blocks-KB> <hard-blocks-KB> <soft-inodes> <hard-inodes> <filesystem>

Quota fields:

  • Soft limit – threshold that triggers a grace period warning.
  • Hard limit – absolute ceiling that cannot be exceeded.
  • Grace period – time allowed over soft limit before enforced as hard limit.

Viewing Quota Usage

sudo quota -u user
sudo quota -g devs
sudo repquota /mnt/data                # full quota report for a filesystem
sudo repquota -a                       # all quota-enabled filesystems

7. Archival and Synchronisation

tar – Tape Archive

# Create archives
tar -cvf archive.tar files/
tar -czvf archive.tar.gz files/        # gzip compressed
tar -cJvf archive.tar.xz files/        # xz compressed (best ratio)
tar -caf archive.tar.zst files/        # zstd (auto-detect from extension)

# Extract archives
tar -xvf archive.tar
tar -xzvf archive.tar.gz
tar -xJvf archive.tar.xz
tar -xvf archive.tar -C /destination/
tar -xvf archive.tar specific/file     # extract specific file

# List contents without extracting
tar -tvf archive.tar
tar -tzvf archive.tar.gz

# Exclude files
tar -czvf archive.tar.gz source/ --exclude="*.log" --exclude="*.tmp"

# Incremental backup
tar -czvf full.tar.gz --listed-incremental=backup.snar source/
tar -czvf inc.tar.gz  --listed-incremental=backup.snar source/   # next run = incremental

Incremental backup important notes: The snapshot file (backup.snar) must be preserved between runs. Losing it breaks the incremental chain – all subsequent runs become full backups. Restoring from an incremental set requires restoring the full backup first, then each incremental in creation order. Verify archive integrity regularly:

tar -tJvf archive.tar.xz > /dev/null && echo "OK" || echo "CORRUPT"
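
A complete incremental cycle, sketched in a scratch directory to show what each run captures (GNU tar assumed):

```shell
dir=$(mktemp -d) && cd "$dir"
mkdir src backup
echo one > src/a.txt

# Run 1: state file absent -> full backup, snapshot state recorded
tar -czf backup/full.tar.gz --listed-incremental=backup/state.snar src

# Run 2: only changes since run 1 are archived
echo two > src/b.txt
tar -czf backup/inc.tar.gz --listed-incremental=backup/state.snar src

tar -tzf backup/inc.tar.gz    # lists src/ and src/b.txt, but not src/a.txt
cd / && rm -rf "$dir"
```

Restoring means extracting full.tar.gz first, then inc.tar.gz on top – and both runs depended on the same state.snar, which is why losing it breaks the chain.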

Compression Tools

# gzip
gzip file.txt                  # compress (replaces original)
gzip -k file.txt               # keep original
gzip -d file.txt.gz            # decompress
gunzip file.txt.gz
zcat file.txt.gz               # read without decompressing

# xz (best ratio, slowest)
xz file.txt
xz -k file.txt
xz -T0 file.txt                # use all CPU cores

# zstd (recommended for speed+ratio balance)
zstd file.txt
zstd -d file.txt.zst
zstd -T0 file.txt              # use all threads (the count must be attached to -T)
zstd -19 file.txt              # maximum compression

# zip / unzip
zip archive.zip file1 file2
zip -r archive.zip directory/
unzip archive.zip
unzip archive.zip -d /dest/
unzip -l archive.zip           # list contents

rsync – Efficient Copy and Sync

rsync transfers only files that have changed and, when syncing to a remote host, only the changed portions of each file (the delta algorithm). It is the correct tool for large file operations, remote copies, and backups.

rsync -av source/ dest/                         # archive + verbose
rsync -avz source/ user@remote:/dest/           # compress over network
rsync -avzP source/ user@remote:/dest/          # progress bar + partial resume
rsync -av --delete source/ dest/                # mirror (delete files absent from source)
rsync -avn source/ dest/                        # dry run (no changes)
rsync -av --exclude='*.log' source/ dest/
rsync -e "ssh -p 2222" source/ user@host:/dest/
rsync -av --bwlimit=1000 source/ dest/          # throttle to 1 MB/s
rsync -av --checksum source/ dest/              # compare by checksum not timestamp
rsync --link-dest=/backup/prev source/ /backup/new/  # incremental with hard links

Note on permissions: rsync with -a faithfully replicates source permissions, including incorrect ones. If the source has wrong permissions, rsync will replicate the error. rsync is not a permission-correction tool. Correct permissions before syncing, or apply a find + chmod pass after.

rsync flags reference:

-a    archive (recursive, links, perms, times, group, owner, device)
-r    recursive
-v    verbose
-z    compress
-P    progress + partial (resume interrupted transfers)
-n    dry run
-u    skip files newer on destination
-c    checksum comparison
-H    preserve hard links
-x    don't cross filesystem boundaries
--delete   delete destination files not in source
--link-dest incremental backup: hard-link unchanged files from previous backup

Backup Patterns

Simple timestamped backup:

tar -cJvf /backup/data-$(date +%F).tar.xz /mnt/data

Incremental rsync with hard links:

#!/bin/bash
set -euo pipefail
PREV="/backup/current"                 # symlink to the most recent backup
NEW="/backup/$(date +%F)"
rsync -avP --link-dest="$PREV" /mnt/data/ "$NEW/"
rm -f "$PREV"
ln -s "$NEW" "$PREV"                   # repoint the symlink at the new snapshot

Restore verification:

cd /original && sha256sum file > /tmp/checksums.txt
rsync -a /original/ /restored/
cd /restored && sha256sum -c /tmp/checksums.txt    # relative path, so the restored copy is checked

8. File Descriptors and Redirection

Every process has three standard file descriptors:

FD   Name     Default     Description
0    stdin    keyboard    Input to the process
1    stdout   terminal    Standard output
2    stderr   terminal    Error and diagnostic output

command > file.txt                     # redirect stdout (overwrite)
command >> file.txt                    # redirect stdout (append)
command 2> error.txt                   # redirect stderr
command 2>> error.txt                  # append stderr
command > out.txt 2>&1                 # redirect both stdout and stderr
command &> out.txt                     # bash shorthand
command 2>&1 | tee out.txt             # redirect both and also print to terminal

command > /dev/null                    # discard stdout
command > /dev/null 2>&1               # discard everything

command < input.txt
command << 'EOF'
inline input content
EOF
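The quoting of the heredoc delimiter matters: a quoted 'EOF' passes the body through literally, while a bare EOF expands variables and command substitutions inside it. A quick sketch:

```shell
NAME=world

cat <<'EOF'      # quoted delimiter: no expansion
hello $NAME
EOF

cat <<EOF        # unquoted delimiter: $NAME expands
hello $NAME
EOF
```

The first cat prints `hello $NAME` verbatim; the second prints `hello world`.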

Pipes

command1 | command2                    # stdout of command1 → stdin of command2
command1 | tee output.txt | command2  # write to file and pass through
command1 | tee -a output.txt          # append to file

Process Substitution

diff <(sort file1) <(sort file2)       # diff two sorted files without temp files
wc -l <(find / -name "*.conf")         # count matched files from find

Combining Commands Safely

mkdir /mnt/target && mount /dev/sdb1 /mnt/target    # && : run second if first succeeds
mount /dev/sdb1 /mnt/target || echo "Mount failed"  # || : run second if first fails
command1 ; command2                                  # ; : run both regardless
(cd /some/dir && do_something)                       # subshell: changes don't affect shell
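&& and || branch on the exit status of the preceding command, which is also available in $?; a minimal demonstration:

```shell
# && runs the right side only on success; || only on failure
true  && echo "success branch"
false || echo "failure branch"

# $? holds the exit status of the most recent command
false || true
echo "status of 'false || true' is $?"     # 0, because || rescued the failure
```

This is why the mount example above is safe: the mount never runs if mkdir failed.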

Part 5 – Text, Logs, and Stream Processing

Storage systems produce logs, configuration files, error output, and structured data. This part covers the tools to view, search, edit, parse, compare, and inspect all of it.

1. Viewing and Paging

cat – Concatenate and Print

cat file.txt
cat -n file.txt                        # with line numbers
cat -A file.txt                        # show non-printing characters
cat -s file.txt                        # squeeze multiple blank lines into one
cat file1.txt file2.txt > combined.txt # concatenate to a new file
tac file.txt                           # print in reverse (last line first)

less – Interactive Pager

less is the standard pager for navigating large files. It does not load the entire file into memory.

less file.txt
less +G file.txt                       # start at end of file
less +F file.txt                       # follow mode (like tail -f)
less -N file.txt                       # show line numbers
less -S file.txt                       # don't wrap long lines
less -i file.txt                       # case-insensitive search

# Inside less:
# j / k           -- scroll down / up by line
# d / u           -- scroll down / up by half-page
# f / b           -- scroll forward / back one full page
# g / G           -- jump to start / end
# /pattern        -- search forward
# ?pattern        -- search backward
# n / N           -- next / previous search match
# q               -- quit
# F               -- follow mode (like tail -f, quit with Ctrl+C)

Use less in preference to more – it is strictly more capable and available everywhere.

head and tail – File Ends

head file.txt                          # first 10 lines
head -n 20 file.txt
head -c 100 file.txt                   # first 100 bytes

tail file.txt                          # last 10 lines
tail -n 50 file.txt
tail -f file.txt                       # follow: print new lines as they are appended
tail -F file.txt                       # follow + reopen if file is rotated (good for logs)
tail -f -n 0 file.txt                  # follow from end only (no existing output)

# Print specific lines
head -n 20 file.txt | tail -n 10       # lines 11-20

wc – Word Count

wc file.txt                            # lines, words, bytes
wc -l file.txt                         # line count only
wc -c file.txt                         # byte count
wc -L file.txt                         # length of longest line
wc -l *.conf | sort -n                 # sort by line count

2. Searching Text

grep "pattern" file.txt
grep -r "pattern" /path/               # recursive
grep -i "pattern" file.txt             # case-insensitive
grep -v "pattern" file.txt             # invert (lines NOT matching)
grep -n "pattern" file.txt             # show line numbers
grep -c "pattern" file.txt             # count matching lines only
grep -l "pattern" *.conf               # list files with matches
grep -L "pattern" *.conf               # list files WITHOUT matches
grep -w "word" file.txt                # whole-word match
grep -A 3 "pattern" file.txt           # 3 lines after match
grep -B 3 "pattern" file.txt           # 3 lines before match
grep -C 3 "pattern" file.txt           # 3 lines context (before + after)
grep -m 5 "pattern" file.txt           # stop after 5 matches
grep -o "pattern" file.txt             # print only the matching part
grep -E "pat1|pat2" file.txt           # extended regex (OR)
grep -F "literal.string" file.txt      # fixed string (not regex) -- faster
grep -q "pattern" file.txt             # quiet mode (exit code only)

# Multiple patterns
grep -e "pattern1" -e "pattern2" file.txt

Useful patterns for storage/sysadmin work:

grep -r "UUID=" /etc/fstab
grep -i "fail\|error\|warn" /var/log/syslog
dmesg | grep -i "error\|fail\|reset"
journalctl -b | grep -i "failed"
grep -v "^#\|^$" /etc/nginx/nginx.conf        # config without comments/blanks
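The -q flag from the reference above is the scripting workhorse: it prints nothing and communicates only through its exit code, so it drops straight into conditionals. A sketch against a throwaway file (contents are illustrative):

```shell
f=$(mktemp)
printf 'UUID=1234-ABCD / ext4 defaults 0 1\n' > "$f"

# grep -q produces no output; the if tests its exit status
if grep -q '^UUID=' "$f"; then
    echo "fstab-style file uses UUIDs"
fi
rm -f "$f"
```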

ripgrep (rg) – Faster grep

ripgrep is significantly faster than grep on large codebases and log directories. It respects .gitignore by default and uses parallelism.

rg "pattern"                          # search current directory recursively
rg "pattern" /path/
rg -i "pattern"                       # case-insensitive
rg -l "pattern"                       # files with matches only
rg -v "pattern"                       # invert match
rg -w "word"                          # whole-word match
rg -t py "import"                     # search only .py files
rg -T log "error"                     # exclude .log files
rg -A 3 "pattern"
rg -C 3 "pattern"
rg -U "multi.line.pattern"            # multiline matching
rg --hidden "pattern"                 # include hidden files
rg --no-ignore "pattern"              # don't honour .gitignore / ignore files
rg "pattern" --json                   # JSON output for scripting
rg -F "literal.string"                # fixed string
Fixed strings vs regex:

# This searches for any character followed by "log"
grep ".log" file.txt

# This searches literally for ".log"
grep -F ".log" file.txt

# Use -F when searching for file paths, IPs, URLs, or strings with metacharacters

3. Stream Editing with sed

sed processes text line by line, applying editing commands.

Substitution

sed 's/old/new/' file.txt              # replace first occurrence per line
sed 's/old/new/g' file.txt             # replace all occurrences (global)
sed 's/old/new/gi' file.txt            # global + case-insensitive
sed -i 's/old/new/g' file.txt          # in-place edit
sed -i.bak 's/old/new/g' file.txt      # in-place + create .bak backup
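When the pattern or replacement contains slashes (paths, URLs), any character can stand in for / as the s delimiter, which avoids escaping every slash:

```shell
# | as the delimiter keeps the paths readable
echo "/var/log/old/app.log" | sed 's|/var/log|/srv/log|'
# prints /srv/log/old/app.log
```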

Line Selection and Deletion

sed -n '5p' file.txt                   # print only line 5
sed -n '1,10p' file.txt                # print lines 1-10
sed '5d' file.txt                      # delete line 5
sed '/pattern/d' file.txt              # delete lines matching pattern
sed '/^$/d' file.txt                   # remove blank lines
sed '/^#/d' file.txt                   # remove comment lines

Address Ranges

sed -n '/START/,/END/p' file.txt       # print between two patterns
sed '/START/,/END/d' file.txt          # delete between two patterns
sed '10,20s/old/new/g' file.txt        # substitute only on lines 10-20

Text Transformations

sed 's/^/  /' file.txt                 # indent every line
sed 's/^[ \t]*//' file.txt             # remove leading whitespace
sed 's/[[:space:]]*$//' file.txt       # remove trailing whitespace

Practical sed Patterns

# Remove comments and blank lines from a config file
sed '/^#/d; /^$/d' nginx.conf

# Extract value from a key=value file
sed -n 's/^username=//p' config.txt

# Replace a specific line in a file
sed -i '5s/.*/new content for line 5/' file.txt

# Append a line after a match
sed '/pattern/a\new line to append' file.txt
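A common combination of these patterns is an idempotent key=value update: rewrite the line if the key exists, leave everything else alone. Demonstrated on a throwaway file (contents are illustrative):

```shell
f=$(mktemp)
printf 'username=old\ntimeout=30\n' > "$f"

# Replace the whole line that sets the key, whatever its current value
sed -i 's/^username=.*/username=new/' "$f"

grep '^username=' "$f"     # prints username=new
rm -f "$f"
```

Running the sed a second time changes nothing, which makes it safe in provisioning scripts.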

4. Field and Column Processing

awk – Pattern Scanning and Processing

awk splits each line into fields on whitespace by default. Fields are $1, $2, …, $NF (last field).

awk '{print $1}' file.txt              # print first field
awk '{print $NF}' file.txt             # print last field
awk -F: '{print $1}' /etc/passwd       # custom field separator (colon)
awk -F, '{print $2}' data.csv          # CSV second column

# Row filtering
awk 'NR==5' file.txt                   # print line 5
awk 'NR>=5 && NR<=10' file.txt
awk '/pattern/' file.txt
awk '!/pattern/' file.txt              # NOT matching

# Arithmetic
awk '{sum += $2} END {print sum}' file.txt
awk '{sum += $2} END {print sum/NR}' file.txt   # average column 2
awk '$3 > 100' file.txt
awk '$1 == "ERROR"' file.txt
awk -F: '$3 >= 1000 {print $1}' /etc/passwd    # non-system users

Practical awk patterns:

# Sum sizes from ls -l output
ls -l | awk '{sum += $5} END {print sum/1024/1024 " MB"}'

# Process df output -- show filesystems over 80% full
df -h | awk 'NR>1 && $5+0 > 80 {print $1, "is", $5, "full"}'

# Extract top IPs from nginx access log
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20

# Count failed SSH logins by IP
grep "Failed password" /var/log/auth.log | awk '{print $11}' | sort | uniq -c | sort -rn
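The sort | uniq -c | sort -rn idiom above can also be done in a single awk pass with an associative array, which avoids sorting the full input and can be much cheaper on very large logs:

```shell
# One-pass frequency count; only the small summary is sorted at the end
printf 'alpha\nbeta\nalpha\n' \
  | awk '{count[$0]++} END {for (k in count) print count[k], k}' \
  | sort -rn
# prints: 2 alpha, then 1 beta
```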

cut – Column Extraction

cut -d: -f1 /etc/passwd                # delimiter=:, field 1
cut -d: -f1,3 /etc/passwd              # fields 1 and 3
cut -d, -f2,4 data.csv
cut -c1-10 file.txt                    # characters 1-10

tr – Translate Characters

tr 'a-z' 'A-Z' < file.txt             # lowercase to uppercase
tr -d '\r' < winfile.txt               # remove carriage returns (Windows → Unix)
tr -d ' ' < file.txt                   # remove all spaces
tr -s ' ' < file.txt                   # squeeze multiple spaces into one
tr '\n' ' ' < file.txt                 # join all lines into one

paste and column

paste file1.txt file2.txt              # merge side by side (tab-separated)
paste -d, file1.txt file2.txt          # comma-delimited
paste -s file.txt                      # merge all lines into one (serial)

column -t -s: /etc/passwd              # tabulate using : as separator
df -h | column -t                      # align df output
column -t -s, data.csv                 # tabulate CSV

5. Sorting, Comparing, and Deduplicating

sort

sort file.txt                          # alphabetical
sort -r file.txt                       # reverse
sort -n file.txt                       # numeric
sort -k2 -n file.txt                   # sort by second field numerically
sort -t: -k3 -n /etc/passwd            # colon-separated, sort by UID
sort -u file.txt                       # sort and remove duplicates
sort -h file.txt                       # human-readable size sort (1K, 5M, 2G)

# Sort du output
du -sh * | sort -rh                    # largest first, human-readable

uniq – Deduplicate Adjacent Lines

uniq only deduplicates adjacent identical lines. Always sort first.

sort file.txt | uniq                   # remove duplicates
sort file.txt | uniq -c                # count occurrences
sort file.txt | uniq -d                # show only duplicate lines
sort file.txt | uniq -u                # show only unique lines
sort file.txt | uniq -c | sort -rn     # most frequent lines first
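Because uniq works line by line, frequency counts over words need the input split into one word per line first, which tr handles:

```shell
# Word frequency: split on spaces, sort so duplicates are adjacent, count
echo "disk error disk ok disk error" \
  | tr ' ' '\n' | sort | uniq -c | sort -rn
# most frequent word ("disk", count 3) comes first
```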

comm – Compare Two Sorted Files

comm file1.txt file2.txt               # three columns: only-in-1, only-in-2, in-both
comm -23 file1.txt file2.txt           # lines only in file1
comm -13 file1.txt file2.txt           # lines only in file2
comm -12 file1.txt file2.txt           # lines in both files

Both files must be sorted.
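The sorted-input requirement pairs naturally with process substitution (covered earlier): sort each input on the fly without temp files. A throwaway-file sketch:

```shell
a=$(mktemp); b=$(mktemp)
printf 'beta\nalpha\n'  > "$a"     # deliberately unsorted
printf 'gamma\nalpha\n' > "$b"

# -12 suppresses columns 1 and 2, keeping only the intersection
comm -12 <(sort "$a") <(sort "$b")     # prints alpha
rm -f "$a" "$b"
```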

diff – Line-by-Line Difference

diff file1.txt file2.txt               # standard diff output
diff -u file1.txt file2.txt            # unified diff (patch format -- most readable)
diff -y file1.txt file2.txt            # side-by-side
diff -r dir1/ dir2/                    # recursive directory diff
colordiff file1 file2                  # always coloured

# Generate and apply a patch
diff -u original.txt modified.txt > changes.patch
patch original.txt < changes.patch

# Visual diff
vimdiff file1.txt file2.txt

cmp – Byte-Level Comparison

cmp file1 file2                        # identical? (exit code)
cmp -l file1 file2                     # list all differing bytes
cmp -s file1 file2                     # silent (exit code only)

Use cmp for binary files where diff is not meaningful.

6. Logs and Structured Text

Log Filtering

# Traditional log files
tail -f /var/log/syslog
tail -f /var/log/auth.log
grep "ERROR" /var/log/syslog
grep -i "fail\|error\|warn" /var/log/syslog | tail -50

# journalctl (systemd)
journalctl                             # all logs
journalctl -b                          # current boot only
journalctl -b -1                       # previous boot
journalctl -u nginx                    # specific service
journalctl -f                          # follow (live)
journalctl -p err                      # errors only
journalctl -p warning..err             # warnings to errors
journalctl --since "2026-04-09 10:00"
journalctl --since "1 hour ago"
journalctl -n 50                       # last 50 lines
journalctl -o json-pretty              # readable JSON

Journal maintenance:

journalctl --disk-usage

# These commands affect only archived (rotated) journal files, not the active journal.
# The active journal is not touched until it is rotated.
journalctl --vacuum-size=500M          # trim archived journals to 500MB total
journalctl --vacuum-time=30d           # remove archived entries older than 30 days

Parsing Fields from Log Output

# Extract IPs from nginx access log
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -20

# Extract HTTP status codes
awk '{print $9}' /var/log/nginx/access.log | sort | uniq -c | sort -rn

# Find slowest requests (assumes request time is logged as the last field)
awk '{print $NF, $7}' /var/log/nginx/access.log | sort -rn | head -20

# Extract lines between two timestamps
# (the range only triggers if lines matching both patterns actually exist)
awk '/2026-04-09 10:00/,/2026-04-09 11:00/' /var/log/syslog

Extracting Records from Mixed Output

# Extract a block between markers
sed -n '/BEGIN CERTIFICATE/,/END CERTIFICATE/p' bundle.pem

# Extract all IP addresses
grep -oE '\b([0-9]{1,3}\.){3}[0-9]{1,3}\b' file.txt

# Extract all UUIDs
grep -oE '[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}' file.txt

# Extract today's errors from syslog
grep "$(date +%Y-%m-%d)" /var/log/syslog | grep -i "error"
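The extraction patterns above are worth sanity-checking against a known sample before running them over real logs; for the UUID pattern:

```shell
# The sample line and UUID here are made up for the check
echo "mounting UUID=1a2b3c4d-5e6f-4a8b-9c0d-1e2f3a4b5c6d at /mnt" \
  | grep -oE '[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}'
# prints the bare UUID with the surrounding text stripped
```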

7. Binary and Hex Inspection

xxd – Hex Dump and Reverse

xxd file.txt                           # standard hex dump
xxd -l 64 file.txt                     # first 64 bytes only
xxd -s 100 file.txt                    # start at offset 100
xxd -r dump.hex > binary.bin           # reverse hex dump back to binary
xxd -p file.txt                        # plain hex

hexdump

hexdump -C file.txt                    # canonical hex + ASCII (most readable)
hexdump -C -n 64 file.txt
hexdump -v file.txt                    # show all lines (no '*' squeezing of repeats)

Reading Non-Printable Data

file binaryfile                        # identify file type from magic bytes
file -i binaryfile                     # include MIME type
strings binaryfile                     # extract printable strings
strings -n 8 binaryfile                # strings at least 8 chars long

xxd binaryfile | head -20

File Signatures (Magic Bytes)

Hex Bytes            File Type
89 50 4E 47          PNG image
FF D8 FF             JPEG image
25 50 44 46          PDF (%PDF)
50 4B 03 04          ZIP / docx / jar
1F 8B                gzip
FD 37 7A 58 5A 00    xz
28 B5 2F FD          zstd
7F 45 4C 46          ELF binary

# Inspect the first bytes and confirm with file
xxd binaryfile | head -2
file binaryfile

# Test compressed archives for corruption
gzip -t archive.tar.gz && echo "OK" || echo "CORRUPT"
xz -t archive.tar.xz && echo "OK" || echo "CORRUPT"

Checksum Verification

sha256sum file.txt
sha512sum file.txt

# Generate checksum file
sha256sum file1 file2 file3 > checksums.sha256

# Verify
sha256sum -c checksums.sha256

# Compare checksums of two files
sha256sum file1.txt file2.txt
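For a scriptable yes/no answer on whether two files match, compare the digests directly (cmp -s is cheaper when both files are local; checksums win when the files live on different machines):

```shell
f1=$(mktemp); f2=$(mktemp)
echo "same content" | tee "$f1" > "$f2"

# Reading from stdin keeps the filename out of the output, so digests compare cleanly
if [ "$(sha256sum < "$f1")" = "$(sha256sum < "$f2")" ]; then
    echo "identical"
fi
rm -f "$f1" "$f2"
```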