Incus

Incus is a modern container and virtual machine manager. It is the community-maintained fork of LXD (after Canonical took LXD proprietary). It manages system containers (via LXC) and full hardware virtual machines (via QEMU) under one unified CLI.

The distinction between Incus and Docker is fundamental: Docker containers share the host kernel and are designed to run a single application process. Incus system containers run a full Linux OS userspace – init system, systemd, networking, multiple services – as if they were a lightweight virtual machine, but sharing the host kernel for efficiency.

Think of it this way: Docker containers are cells (specialised, single-function); Incus containers are organisms (full systems with their own processes, metabolism, and lifecycle).


1. Installation

Arch Linux (Primary Platform)

Incus is available in the official Arch Linux repositories (the extra repository).

sudo pacman -S incus

# Enable and start the Incus daemon
sudo systemctl enable --now incus.socket

# Add your user to the incus-admin group (manages without sudo)
sudo usermod -aG incus-admin $USER

# Apply the new group immediately without logging out
newgrp incus-admin

# Verify the daemon is running
incus version

Required kernel modules – Incus needs these for networking and container isolation:

# Check if they're loaded
lsmod | grep -E "br_netfilter|ip_tables|xt_conntrack"

# Load them (-a loads all listed modules; without it, extra names are treated as module parameters)
sudo modprobe -a br_netfilter ip_tables xt_conntrack

# Persist across reboots
echo -e "br_netfilter\nip_tables\nxt_conntrack" | sudo tee /etc/modules-load.d/incus.conf

Required sysctl settings:

sudo tee /etc/sysctl.d/99-incus.conf <<EOF
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
EOF
sudo sysctl --system
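As a quick sanity check after reloading, the IPv4 forwarding flag can be read back directly from /proc – the value should match what 99-incus.conf sets:

```shell
# Read the IPv4 forwarding flag back from /proc; expect "1" after sysctl --system
cat /proc/sys/net/ipv4/ip_forward
```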

Ubuntu / Debian

Ubuntu ships LXD by default. Remove it first to avoid conflicts.

# Remove LXD snap if present
sudo snap remove lxd

# Install Incus via the Zabbly repository (official Incus maintainer's repo)
sudo apt install curl gpg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.zabbly.com/key.asc | sudo gpg --dearmor -o /etc/apt/keyrings/zabbly.gpg

sudo tee /etc/apt/sources.list.d/zabbly-incus-stable.sources <<EOF
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/incus/stable
Suites: $(. /etc/os-release && echo ${VERSION_CODENAME})
Components: main
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/zabbly.gpg
EOF

sudo apt update
sudo apt install incus

# Enable and configure
sudo systemctl enable --now incus.socket
sudo usermod -aG incus-admin $USER
newgrp incus-admin

Rocky Linux / RHEL / Fedora

# Install Incus via Zabbly repository
sudo tee /etc/yum.repos.d/zabbly-incus-stable.repo <<EOF
[zabbly-incus-stable]
name=Zabbly Incus Stable
baseurl=https://pkgs.zabbly.com/incus/stable/el/\$releasever/\$basearch/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.zabbly.com/key.asc
EOF

sudo dnf install incus

# For Fedora, Incus may be in the default repos:
sudo dnf install incus incus-tools

sudo systemctl enable --now incus.socket
sudo usermod -aG incus-admin $USER
newgrp incus-admin

# SELinux: if you're running SELinux enforcing, you may need to set booleans
sudo setsebool -P container_manage_cgroup 1

2. Initial Setup: incus admin init

This is the most important step. incus admin init is an interactive wizard that configures the two core subsystems – storage and networking – plus, optionally, clustering and remote API access. Run it once, after installation.

incus admin init

Prompt 1: Clustering

Would you like to use clustering? (yes/no) [default=no]:

Answer: no for a single-node setup (the standard use case). Clustering is for production deployments across multiple physical hosts where instances can be distributed and migrated.


Prompt 2: Existing Network Bridge

Would you like to use an existing bridge or host interface? (yes/no) [default=no]:

Answer: no unless you have a pre-existing network bridge (e.g., br0) you want Incus to attach containers to. For most setups, Incus creates its own bridge (incusbr0).


Prompt 3: Storage Backend

Do you want to configure a new local storage pool? (yes/no) [default=yes]:

Answer: yes. This is where container disk images are stored.

Name of the new storage pool [default=default]:

Answer: default (or name it something meaningful like fast or nvme if you’re using a specific disk).

Name of the storage backend to use (btrfs, dir, lvm, zfs, ceph) [default=zfs]:

This is the most consequential choice. Here’s what each means:

  • zfs – Copy-on-write filesystem with snapshots, cloning, and compression. Best performance – use if ZFS is available.
  • btrfs – Copy-on-write with snapshots and subvolumes. Good alternative if ZFS is unavailable; native on many distros.
  • lvm – Logical Volume Manager; allocates a volume per container. Large-scale deployments; less overhead than ZFS.
  • dir – Plain directory storage; each container is a folder. Simplest – no special kernel modules; slowest snapshots.
  • ceph – Distributed storage cluster. Multi-node clustering only; skip for a single host.

Recommendations:

  • Arch Linux: Install zfs-dkms (from the AUR or the archzfs repository) and choose zfs, or use btrfs (already in the kernel).
  • Ubuntu: ZFS is supported natively; choose zfs.
  • Rocky Linux: btrfs or dir – ZFS on RHEL-based systems requires DKMS.
  • Low-powered / simple setup: dir – no special dependencies, always works.

If you choose zfs:

Create a new ZFS pool? (yes/no) [default=yes]:

Answer: yes

Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]:

  • no → Incus creates a ZFS pool backed by a file (stored in /var/lib/incus/). Good for testing.
  • yes → Incus uses an entire block device (e.g., /dev/sdb). Recommended for production; the whole disk is dedicated to Incus storage.

Size in GiB of the new loop device (1GiB minimum) [default=30GiB]:

For file-backed pools, set a sensible size – 30GiB is fine for testing. For production, use a dedicated disk.

If you choose btrfs:
Similar questions – can use a file-backed pool or a dedicated block device.

If you choose dir:

Create a new directory pool? (yes/no) [default=yes]:

Answer: yes. No further disk questions – it just uses /var/lib/incus/storage-pools/default/.


Prompt 4: Networking and Remote Access

Would you like to create a new local network bridge? (yes/no) [default=yes]:

Answer: yes – This creates the incusbr0 network bridge. All containers get an IP on this bridge’s subnet and can reach the internet via NAT. (Unlike LXD’s lxd init, Incus has no MAAS prompt – MAAS integration was dropped in the fork.)

What should the new bridge be called? [default=incusbr0]:

Answer: accept the default, incusbr0.

What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:

Answer: auto – Incus picks a private subnet (typically 10.x.x.1/24). Use none only if you don’t want IPv4 NAT networking.

What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]:

Answer: auto or none. IPv6 is fine to enable; none simplifies networking if you don’t need it.

Would you like the server to be available over the network? (yes/no) [default=no]:

Answer: no unless you need remote API access to this host; it can be enabled later with incus config set core.https_address.


Prompt 5: Image Updates

Would you like stale cached images to be updated automatically? (yes/no) [default=yes]:

Answer: yes – Keeps your base images (Ubuntu, Alpine, Debian, etc.) current. Incus updates them in the background.


Prompt 6: YAML Summary

Would you like a YAML "init" preseed to be printed? (yes/no) [default=no]:

Answer: yes if you want to save the configuration as a file for repeatable deployments (e.g., scripting server provisioning). The output is a preseed YAML you can replay on future machines with incus admin init --preseed < config.yaml.


Non-Interactive Preseed (For Automation)

Instead of the interactive wizard, you can provide a preseed file:

cat <<EOF | incus admin init --preseed
config: {}
networks:
  - name: incusbr0
    type: bridge
    config:
      ipv4.address: auto
      ipv4.nat: "true"
      ipv6.address: auto
      ipv6.nat: "true"
storage_pools:
  - name: default
    driver: btrfs
profiles:
  - name: default
    devices:
      eth0:
        name: eth0
        network: incusbr0
        type: nic
      root:
        path: /
        pool: default
        type: disk
EOF
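A preseed from an existing host can also be captured and replayed, which is handy for cloning a setup onto new machines. A sketch, assuming the --dump flag (carried over from lxd init) is available in your Incus version; the filename and host name below are placeholders:

```shell
# Dump the current daemon configuration as preseed YAML, then replay it elsewhere
incus admin init --dump > incus-preseed.yaml
scp incus-preseed.yaml new-host:
ssh new-host "incus admin init --preseed < incus-preseed.yaml"
```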

Post-Init Verification

# Confirm the storage pool was created
incus storage list

# Confirm the network bridge was created
incus network list

# Check the default profile (should have root disk + eth0 nic)
incus profile show default

# Test by launching a container
incus launch images:alpine/3.19 test-init
incus exec test-init -- sh -c "ip addr && ping -c 2 1.1.1.1"
incus delete test-init --force

3. Instance Management

Remotes and Images

List configured remote image servers.

incus remote list

List all images available on the public remote.

incus image list images:

Search for a specific distro.

incus image list images: alpine
incus image list images: ubuntu
incus image list images: arch
incus image list images: debian architecture=arm64   # filter by architecture

Show full image info including fingerprint and size.

incus image info images:ubuntu/24.04

Launching Instances

Launch a system container.

incus launch images:ubuntu/24.04 <name>
incus launch images:alpine/3.19 <name>
incus launch images:archlinux <name>
incus launch images:debian/12 <name>

Launch a VM instead of a container.

incus launch images:ubuntu/24.04 <name> --vm

Create without starting.

incus init images:debian/12 <name>

Launch with resource limits set immediately.

incus launch images:ubuntu/24.04 <name> \
  --config limits.cpu=2 \
  --config limits.memory=2GiB

Launch with a specific storage pool.

incus launch images:ubuntu/24.04 <name> --storage fast

Launch with a profile.

incus launch images:ubuntu/24.04 <name> --profile default --profile my-profile

Instance Lifecycle

incus list                           # list all instances
incus start <name>
incus stop <name>
incus stop <name> --force            # force stop (like pulling the power)
incus restart <name>
incus restart <name> --force
incus pause <name>                   # freeze instance in memory
incus resume <name>
incus delete <name>                  # delete (must be stopped first)
incus delete <name> --force          # stop and delete
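The lifecycle commands compose well with incus list’s machine-readable output. A sketch for stopping every running instance at once, assuming the csv format (-f) and column (-c, n = name, s = state) flags behave as in the LXD-family CLI:

```shell
# Stop every RUNNING instance: list name+state as CSV, filter, and feed names to incus stop
incus list -f csv -c ns | awk -F, '$2 == "RUNNING" {print $1}' | xargs -r -n1 incus stop
```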

Rename an instance.

incus rename <old-name> <new-name>

Copy an instance (creates a clone).

incus copy <source> <destination>
incus copy <source> <destination> --instance-only   # without snapshots

Move an instance to a different storage pool.

incus move <name> <name> --storage <pool>

4. Shell Access and Command Execution

Open an interactive root shell in a container.

incus exec <name> -- bash     # for Debian/Ubuntu/Arch containers
incus exec <name> -- sh       # for Alpine (no bash by default)

Execute a single command.

incus exec <name> -- apt-get update
incus exec <name> -- systemctl status nginx
incus exec <name> -- cat /etc/os-release

Run as a specific user.

incus exec <name> --user 1000 -- bash

Set environment variables for the command.

incus exec <name> --env MY_VAR=hello -- bash
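Because exec runs arbitrary commands non-interactively, launch and exec combine into simple provisioning scripts. A minimal sketch – the container name web1 is a placeholder, and the fixed sleep is a crude stand-in for waiting on DHCP:

```shell
#!/bin/sh
# Sketch: launch a Debian container and provision nginx inside it
set -e
incus launch images:debian/12 web1
sleep 5   # give the container time to obtain a network lease
incus exec web1 -- apt-get update
incus exec web1 -- apt-get install -y nginx
incus exec web1 -- systemctl enable --now nginx
```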

5. File Operations

Push a file into an instance.

incus file push /local/path/file.txt <name>/remote/path/

Push a directory recursively.

incus file push -r ./local-dir <name>/remote/path/

Pull a file from an instance.

incus file pull <name>/remote/path/file.txt /local/path/

Pull a directory recursively.

incus file pull -r <name>/remote/dir ./local-dir

Read a file directly to stdout.

incus file pull <name>/var/log/syslog -
incus file pull <name>/etc/nginx/nginx.conf - | less

Edit a file directly in-place.

incus file edit <name>/etc/nginx/nginx.conf
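Pull, push, and exec combine into a safe config round-trip: fetch a working copy, modify it on the host, validate, then reload. A sketch with illustrative paths and the hypothetical container name web1:

```shell
# Fetch a working copy of the config to the host
incus file pull web1/etc/nginx/nginx.conf ./nginx.conf
sed -i 's/worker_processes .*/worker_processes 4;/' ./nginx.conf
# Push it back, validate, and reload only if the syntax check passes
incus file push ./nginx.conf web1/etc/nginx/nginx.conf
incus exec web1 -- nginx -t && incus exec web1 -- systemctl reload nginx
```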

6. Snapshots

Snapshots are point-in-time copies of an instance’s disk state.

Create a snapshot.

incus snapshot create <name> <snapshot-name>
incus snapshot create web-server before-upgrade

List snapshots.

incus snapshot list <name>

Restore from a snapshot.

incus snapshot restore <name> <snapshot-name>

Delete a snapshot.

incus snapshot delete <name> <snapshot-name>

Create a new instance from a snapshot.

incus copy <name>/<snapshot-name> new-instance-name
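Snapshots shine as an undo button around risky operations. A sketch of a snapshot-guarded upgrade, reusing the web-server / before-upgrade names from the examples above:

```shell
# Take a snapshot, attempt the upgrade, and roll back automatically on failure
incus snapshot create web-server before-upgrade
if ! incus exec web-server -- sh -c "apt-get update && apt-get -y dist-upgrade"; then
    # The upgrade failed – restore the instance to its pre-upgrade state
    incus snapshot restore web-server before-upgrade
fi
```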

7. Configuration and Resource Limits

View current configuration.

incus config show <name>

Set a memory limit.

incus config set <name> limits.memory 2GiB
incus config set <name> limits.memory.enforce soft   # soft limit (only enforced under host memory pressure)

Set a CPU limit.

incus config set <name> limits.cpu 2
incus config set <name> limits.cpu.allowance 50%     # cap CPU time at 50% (soft limit, applied under load)

Set disk I/O limits. These live on the disk device rather than the instance config; since the root disk usually comes from the default profile, override it at the instance level.

incus config device override <name> root limits.read=100MB limits.write=100MB

Configure autostart on host boot.

incus config set <name> boot.autostart true
incus config set <name> boot.autostart.delay 10      # seconds after host boot
incus config set <name> boot.autostart.priority 5    # higher = starts first

Unset a config key.

incus config unset <name> limits.memory

Pass an environment variable into the container permanently.

incus config set <name> environment.MY_APP_ENV production

8. Networking

List all networks.

incus network list

Show info about the default bridge.

incus network info incusbr0
incus network show incusbr0

List devices (containers) connected to a network.

incus network list-leases incusbr0

Create a new network bridge.

incus network create my-bridge \
  ipv4.address=192.168.100.1/24 \
  ipv4.nat=true \
  ipv6.address=none
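A newly created bridge can be used directly at launch time via the --network flag. A quick sketch, with c1 as a placeholder name:

```shell
# Launch straight onto the custom bridge and confirm the address assignment
incus launch images:alpine/3.19 c1 --network my-bridge
incus list c1 -c n4   # the IPv4 column should show an address in 192.168.100.0/24
```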

Attach a container to a network.

incus network attach incusbr0 <name> eth0

Detach from a network.

incus network detach incusbr0 <name> eth0

Set a static IP for a container on a bridge. The eth0 device usually comes from the default profile, so override it at the instance level; restart the instance for the address to take effect.

incus config device override <name> eth0 ipv4.address=10.12.34.50

Port forwarding (proxy device):
Forward port 8080 on the host to port 80 inside the container.

incus config device add <name> proxy-http proxy \
  listen=tcp:0.0.0.0:8080 \
  connect=tcp:127.0.0.1:80

Forward SSH:

incus config device add <name> proxy-ssh proxy \
  listen=tcp:0.0.0.0:2222 \
  connect=tcp:127.0.0.1:22
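The forwards can be checked from the host side, assuming the container is actually running a web server and sshd on the connect ports:

```shell
# Hit the host-side listen ports; traffic is relayed into the container
curl -I http://127.0.0.1:8080    # returns the container web server's response headers
ssh -p 2222 root@127.0.0.1       # lands on the container's sshd
```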

Remove a proxy device:

incus config device remove <name> proxy-http

9. Storage

List storage pools.

incus storage list

Show info about a pool.

incus storage info default
incus storage show default

Create a new storage pool.

incus storage create fast btrfs source=/dev/sdb
incus storage create archive dir source=/mnt/archive

List volumes in a pool.

incus storage volume list default

Create a custom storage volume (for shared data between instances).

incus storage volume create default my-data

Attach a custom volume to an instance.

incus config device add <name> extra-storage disk \
  source=my-data \
  pool=default \
  path=/mnt/shared
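Custom filesystem volumes can be attached to several instances at once, which makes them a simple shared-data mechanism. A sketch with placeholder instance names app1 and app2:

```shell
# One volume, mounted at the same path in two instances
incus storage volume create default shared-data
incus config device add app1 shared disk source=shared-data pool=default path=/mnt/shared
incus config device add app2 shared disk source=shared-data pool=default path=/mnt/shared
# A file written from one instance is immediately visible in the other
incus exec app1 -- sh -c "echo hello > /mnt/shared/test.txt"
incus exec app2 -- cat /mnt/shared/test.txt
```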

10. Profiles

Profiles are reusable configuration templates – like a genotype you can apply to any instance.

List all profiles.

incus profile list

Show a profile’s configuration.

incus profile show default

Create a new profile.

incus profile create web-server

Edit a profile.

incus profile edit web-server

Apply a profile to an existing instance.

incus profile assign <name> default,web-server

Add a config key to a profile.

incus profile set web-server limits.memory 1GiB

Add a device to a profile.

incus profile device add web-server root disk pool=default path=/
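The per-key commands above can be collapsed into a single edit by feeding YAML on stdin, since profile edit accepts piped input when not attached to a terminal. A sketch – the limits and description are illustrative values, not recommendations:

```shell
# Define the whole web-server profile in one shot (values are illustrative)
incus profile create web-server 2>/dev/null || true
incus profile edit web-server <<EOF
description: Web hosts with modest limits
config:
  limits.cpu: "2"
  limits.memory: 1GiB
  boot.autostart: "true"
devices: {}
EOF
# Stack it on top of the default profile at launch
incus launch images:debian/12 web1 --profile default --profile web-server
```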

11. Monitoring and Diagnostics

Show detailed instance info (state, IPs, resources).

incus info <name>

Monitor instance console output in real-time.

incus console <name>
incus console <name> --show-log   # show boot log for VMs

Stream Incus lifecycle, operation, and logging events in real time.

incus monitor

Fetch Prometheus-style metrics for the host’s instances.

incus query /1.0/metrics

12. Cluster Operations (Multi-Node)

# List cluster nodes
incus cluster list

# Show info on a node
incus cluster show <node-name>

# Evacuate a node (migrate all instances away before maintenance)
incus cluster evacuate <node-name>

# Restore a node after maintenance
incus cluster restore <node-name>

13. Troubleshooting

  • Error: Failed to create socket – daemon not running. Fix: sudo systemctl start incus.socket
  • Error: Permission denied – user not in the incus-admin group. Fix: sudo usermod -aG incus-admin $USER, then re-login.
  • Container has no network – bridge not configured or kernel modules missing. Fix: check incus network list and verify the ip_forward sysctl.
  • VM won’t boot – KVM not available. Fix: confirm /dev/kvm exists and virtualization is enabled in the BIOS (lscpu | grep -i virtualization).
  • ZFS: pool not found – ZFS module not loaded. Fix: sudo modprobe zfs, or install zfs-dkms.
  • Image download stuck – slow connection or cache issue. Fix: incus image delete <fingerprint> and retry.

Check daemon logs:

sudo journalctl -u incus -f
sudo journalctl -u incus --since "1 hour ago"

Check a specific container’s systemd journal (for system containers):

incus exec <name> -- journalctl -xe