Homelab, Linux, JS & ABAP (~˘▾˘)~
 

[ZFS] Destroy snapshots

Snapshots in ZFS aren’t cumulative. Each snapshot only holds the difference between the filesystem at the time you took the snapshot and now.
That means if you have snapshots A, B and C, deleting A doesn’t affect the state of the remaining B and C. This is a common point of confusion when coming from other systems where you might have to consolidate snapshots to get to a consistent state.

This means you can delete snapshots out of the middle of the list without breaking the snapshots before or after the one you deleted. So if you have:

pool/dataset@snap1 
pool/dataset@snap2 
pool/dataset@snap3 
pool/dataset@snap4 
pool/dataset@snap5

You can safely run sudo zfs destroy pool/dataset@snap3, and snap1, snap2, snap4 and snap5 will all be perfectly fine afterwards.

You can estimate the amount of space reclaimed by deleting multiple snapshots by doing a dry run (-n) on zfs destroy like this:

sudo zfs destroy -nv pool/dataset@snap4%snap8
would destroy pool/dataset@snap4
would destroy pool/dataset@snap5
would destroy pool/dataset@snap6
would destroy pool/dataset@snap7
would destroy pool/dataset@snap8
would reclaim 25.2G
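Once the dry run looks right, the same command without -n actually destroys the snapshot range (same placeholder names as in the dry run above):

sudo zfs destroy -v pool/dataset@snap4%snap8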

List your snapshots (for a specific dataset simply use grep):

sudo zfs list -rt snapshot | grep pool/dataset

If you need to free some space, you can sort zfs snapshots by size:

zfs list -o name,used -s used -t snap
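To clean up a whole batch of snapshots at once, a small loop over zfs list works as well. This is just a sketch: pool/dataset and the grep pattern zfs-auto-snap_hourly are placeholders, and the echo lets you review the list before anything is actually destroyed.

for snap in $(zfs list -H -o name -t snapshot -r pool/dataset | grep zfs-auto-snap_hourly); do
    echo sudo zfs destroy "$snap"    # drop the echo once the list looks right
done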

[ZFS] Rollback LXC

Look for a specific snapshot of your LXC.

sudo zfs list -rt snapshot | grep data/lxc/subvol-110

I just want to roll back 2 hours, so I choose the snapshot with the timestamp 2019-12-05-1117.

...
data/lxc/subvol-110-disk-0@zfs-auto-snap_hourly-2019-12-05-0917   11,7M      -     24,2G  -
data/lxc/subvol-110-disk-0@zfs-auto-snap_hourly-2019-12-05-1017   11,9M      -     24,2G  -
data/lxc/subvol-110-disk-0@zfs-auto-snap_hourly-2019-12-05-1117   11,7M      -     24,2G  -
data/lxc/subvol-110-disk-0@zfs-auto-snap_hourly-2019-12-05-1217   11,8M      -     24,2G  -
data/lxc/subvol-110-disk-0@zfs-auto-snap_hourly-2019-12-05-1317   12,1M      -     24,2G  -

If there are one or more snapshots between the current state and the snapshot you want to roll back to, you have to add -r to the rollback command, which destroys all snapshots more recent than the one you roll back to.

sudo zfs rollback -r data/lxc/subvol-110-disk-0@zfs-auto-snap_hourly-2019-12-05-1117
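Since this is the root disk of a running container, I would stop the container before the rollback and start it again afterwards (assuming container ID 110 on Proxmox):

pct stop 110
sudo zfs rollback -r data/lxc/subvol-110-disk-0@zfs-auto-snap_hourly-2019-12-05-1117
pct start 110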

[Jellyfin] Deleting files on a mounted dataset inside LXC

If you have installed Jellyfin inside an LXC container and all your media is mounted from a ZFS dataset inside the container, it’s possible that you cannot delete files directly from the Jellyfin web UI. In this case, you have to add the user “jellyfin” to a group with write access on your dataset, in my case the group “nocin”.

usermod -a -G nocin jellyfin
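The new group membership only takes effect once the Jellyfin process has been restarted, so verify the groups and restart the service (assuming Jellyfin runs as the systemd service “jellyfin”):

id jellyfin
systemctl restart jellyfin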

[ZFS] Encryption

Native encryption in ZFS is supported since version 0.8.0. Check your current ZFS version with:

modinfo zfs                           

First activate the encryption feature on your pool:

zpool set feature@encryption=enabled pool_name

To get an overview of all pools with enabled encryption use the following command:

zpool get all | grep encryption

To create a new encrypted dataset with a passphrase:

zfs create -o encryption=aes-256-gcm -o keyformat=passphrase pool_name/dataset_name

Check the keystatus, the current encryption type and the mountpoint with the following commands:

zfs get keystatus pool_name/dataset_name
zfs get encryption pool_name/dataset_name
zfs list pool_name/dataset_name

Change the passphrase with:

zfs change-key pool_name/dataset_name

After a reboot you first have to load your key and then mount your dataset:

zfs load-key pool_name/dataset_name
zfs mount pool_name/dataset_name
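If you have several encrypted datasets, you can also load all available keys and mount everything in one go (-a applies the command to all datasets):

zfs load-key -a
zfs mount -a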

Unmount and unload your key:

zfs umount pool_name/dataset_name
zfs unload-key pool_name/dataset_name

If you are sharing this dataset via NFS, it may be necessary to restart the NFS service after mounting. I simply turn NFS sharing off and back on for the dataset.

zfs set sharenfs=off pool_name/dataset_name
zfs set sharenfs=on pool_name/dataset_name

[Proxmox] Mount dataset into LXC

Open the LXC config file in your favorite editor. In this case the container ID is 101:

nano /etc/pve/lxc/101.conf

Append a single line for each mountpoint you want to add. The first mountpoint is “mp0”, the second “mp1” and so on.

mp0: /data/music,mp=/mnt/nfs/music

First comes the source (my zpool “data”, followed by the dataset name “music”), then the destination inside the container, prefixed with “mp=”.
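Alternatively, the same mountpoint can be added with the pct command instead of editing the config file by hand (same container ID 101 and paths as above). Restart the container afterwards so the mount shows up inside.

pct set 101 -mp0 /data/music,mp=/mnt/nfs/music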

[ZFS] Basic Commands

Documentation: https://github.com/zfsonlinux/zfs/wiki/Admin-Documentation
Manual Pages: https://zfs.datto.com/man/
Milestones: https://github.com/zfsonlinux/zfs/milestones

modinfo zfs                               //check current ZFS version
zfs list                                  //list pool with datasets
zfs list -r pool                          //show all datasets in a pool with size and mountpoint
zfs list -r -o name,mountpoint,mounted    //check if datasets are mounted   
zpool status (pool)                       //show pool health and device status
zpool list                                //list all pools with size and capacity
zpool list -v                             //list pools including the underlying vdevs
zpool iostat (pool 1)                     //show I/O statistics, optionally for one pool with a refresh interval in seconds
zpool iostat -v                           //show I/O statistics per vdev

Activate NFS on dataset:

zfs set sharenfs=on pool/dataset
zfs get sharenfs pool/dataset
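sharenfs=on exports the dataset with default options. On Linux the property value is handed to exportfs, so the export can also be restricted, for example to a subnet (the address range here is just an example):

zfs set sharenfs='rw=@192.168.1.0/24' pool/dataset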

Useful commands when replacing a failed disk:

ls -l /dev/disk/by-id/                // list disk IDs
zdb                                   // display zpool debugging and consistency information
smartctl -a /dev/ada0                 // S.M.A.R.T. info
wipefs -a new_hdd                     // remove existing filesystem signatures (e.g. ext4)
zpool replace data old_hdd new_hdd    // replace the failed disk with the new one
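After the replace, ZFS resilvers the new disk; progress and any errors can be watched with:

zpool status -v data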

Scrub cronjob:

cat /etc/cron.d/zfsutils-linux 
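The package’s cronjob schedules a regular scrub; a scrub can also be started and checked manually at any time:

zpool scrub data                      // start a scrub of pool "data"
zpool status data                     // shows scrub progress and results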

Add and remove Log & L2ARC:

zpool add data log sda1
zpool add data cache sda2

zpool remove data sda1
zpool remove data sda2
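For the SLOG it’s common to use a mirrored log device, so a single SSD failure can’t lose in-flight sync writes. A sketch with sda1 and sdb1 as example partitions:

zpool add data log mirror sda1 sdb1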