07 September 2025

ZFS Backups

Here is the process I use to create local ZFS backups.


Manual Backup Procedure


  1. Create Snapshots
    • zfs snapshot -r storage/containers@backup-20240115
    • -r: for recursive
  2. Create a RaidZ1 Pool as our backup target
    • zpool create backup-pool raidz1 [DISK1] [DISK2] [DISK3] [DISK4]
  3. Send the initial snapshot
    • zfs send -R --raw storage/containers@backup-20240115 | zfs receive -o readonly=on backup-pool/containers
    • -R: replication stream; includes descendant datasets, snapshots, and properties
    • --raw: so that encrypted data is sent as-is without being decrypted (unencrypted data remains unencrypted)
    • -o readonly=on: so that the backups are not editable
  4. Send incremental snapshots
    • zfs send -R --raw -I storage/containers@backup-20240115 storage/containers@backup-20240210 | zfs receive backup-pool/containers
    • -I [prev-snapshot]: incremental stream of all snapshots since the specified snapshot
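
Steps 3 and 4 can be dry-run first. A minimal POSIX sh helper (hypothetical, with backup-pool assumed as the destination pool) that prints the exact pipeline for either case:

```shell
#!/bin/sh
# Hypothetical helper: prints the send/receive pipeline from steps 3 and 4
# without running it. The destination pool name (backup-pool) is an assumption.
build_send_cmd() {
  src=$1    # source dataset, e.g. storage/containers
  new=$2    # new snapshot name
  prev=$3   # previous snapshot name (empty for the initial send)
  dest="backup-pool/${src##*/}"
  if [ -n "$prev" ]; then
    # Incremental: all snapshots between the previous and the new one (-I)
    echo "zfs send -R --raw -I ${src}@${prev} ${src}@${new} | zfs receive ${dest}"
  else
    # Initial: full replication stream, received read-only
    echo "zfs send -R --raw ${src}@${new} | zfs receive -o readonly=on ${dest}"
  fi
}

build_send_cmd storage/containers backup-20240115
build_send_cmd storage/containers backup-20240210 backup-20240115
```

Pipe the output through sh (or paste it) once it looks right.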


Automating Backup Procedure

  1. Create the scripts below in /root/backup_scripts/ to help automate the process
    • snapshot.sh
    • snapshot-all.sh
    • backup.sh
    • backup-all.sh
  2. Set the snapshot-all.sh script on crontab to run monthly
    • crontab -e
    • 05 8 1 * * sh /root/backup_scripts/snapshot-all.sh
    • remember that the cron time is UTC
  3. Manually push the backups
    • nohup sh backup-all.sh >>all.log 2>&1 &
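
Because the cron time is UTC, it is worth double-checking what "05 8 1 * *" means locally. With GNU date (assumed available; America/New_York is just an example timezone):

```shell
# What the 08:05 UTC cron time looks like in a local timezone (GNU date assumed)
local_time=$(TZ=America/New_York date -d '2025-01-01 08:05 UTC' '+%H:%M')
echo "$local_time"   # 03:05 in winter (EST, UTC-5)
```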


Scripts

snapshot.sh

#!/bin/sh

DATASET=$1
zfs snapshot -r "${DATASET}@backup-$(date +%Y%m%d)"
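
The generated snapshot name follows a fixed dataset@backup-YYYYMMDD pattern; a quick sanity check of the name snapshot.sh would build for a given dataset:

```shell
# Reproduce the snapshot name snapshot.sh builds for a dataset (today's date)
DATASET=storage
snapname="${DATASET}@backup-$(date +%Y%m%d)"
echo "$snapname"
```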


snapshot-all.sh

#!/bin/sh

sh /root/backup_scripts/snapshot.sh rpool
sh /root/backup_scripts/snapshot.sh storage


backup.sh

#!/bin/sh

if [ $# -ne 2 ] ; then
  echo "Requires 2 args: source and destination"
  exit 1
fi

src=$1
dest=$2

snaplist=$(zfs list -t snapshot $src)
if [ $? -ne 0 ] ; then
  echo "No snapshots found for $src"
  exit 1
fi

bkpsnap=$(zfs list -t snapshot $dest 2>&1)
incremental=$?

if [ $incremental -eq 0 ] ; then
  incsnap=$(echo "$bkpsnap" | tail -n 1 | cut -f1 -d" " | cut -f2 -d"@")
  snap=$(echo "$snaplist" | grep "$incsnap" -A 1 | tail -n 1 | cut -f1 -d" " | cut -f2 -d"@")

  if [ "$incsnap" = "$snap" ] ; then
    echo "Backup is already up to date"
    exit 1
  fi

  echo "Sending incremental snap $src@$snap on top of $dest@$incsnap"
  echo "zfs send -R --raw -I $src@$incsnap $src@$snap | zfs receive $dest"
  zfs send -R --raw -I $src@$incsnap $src@$snap | zfs receive $dest
else
  snap=$(echo "$snaplist" | tail -n +2 | head -n 1 | cut -f1 -d" " | cut -f2 -d"@")
  echo "Sending initial snap $src@$snap to $dest"
  echo "zfs send -R --raw $src@$snap | zfs receive -o readonly=on $dest"
  zfs send -R --raw $src@$snap | zfs receive -o readonly=on $dest
fi

echo "Complete"
echo ""
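
The trickiest part of backup.sh is extracting snapshot names from zfs list output. A self-contained check of the same tail/cut/grep pipelines against sample output (dataset names, dates, and sizes are made up):

```shell
# Sample source-side list (what $snaplist holds): header line, then snapshots
# oldest first
snaplist='NAME                                 USED  AVAIL  REFER  MOUNTPOINT
storage/containers@backup-20240115   1.2G      -  40.1G  -
storage/containers@backup-20240210   300M      -  40.5G  -
storage/containers@backup-20240315   280M      -  40.8G  -'

# Sample backup-side list (what $bkpsnap holds)
bkpsnap='NAME                                     USED  AVAIL  REFER  MOUNTPOINT
backup-pool/containers@backup-20240115   1.2G      -  40.1G  -
backup-pool/containers@backup-20240210   300M      -  40.5G  -'

# Latest snapshot already on the backup: last line, first column, text after "@"
incsnap=$(echo "$bkpsnap" | tail -n 1 | cut -f1 -d" " | cut -f2 -d"@")
echo "$incsnap"

# Next snapshot on the source after $incsnap: the line following the match
snap=$(echo "$snaplist" | grep "$incsnap" -A 1 | tail -n 1 | cut -f1 -d" " | cut -f2 -d"@")
echo "$snap"
```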


backup-all.sh

#!/bin/sh

echo "========================================"
echo "Backup on $(date +"%Y-%m-%d %H:%M:%S")"
echo ""

zpool import backup-pool

sh /root/backup_scripts/backup.sh rpool/data backup-pool/data
sh /root/backup_scripts/backup.sh storage/containers backup-pool/containers
sh /root/backup_scripts/backup.sh storage/encrypted backup-pool/encrypted
sh /root/backup_scripts/backup.sh storage/lists backup-pool/lists

echo "Complete at $(date +"%Y-%m-%d %H:%M:%S")"

echo "Performing scrub"
zpool scrub -w backup-pool
echo "Complete at $(date +"%Y-%m-%d %H:%M:%S")"

zpool status backup-pool
zpool export backup-pool

echo "Spinning down hard drives"
hdparm -y /dev/disk/by-id/ata-WDC_WD60EFPX-68C5ZN0_WD-XXXXXXX
hdparm -y /dev/disk/by-id/ata-WDC_WD60EFPX-68C5ZN0_WD-XXXXXXX
hdparm -y /dev/disk/by-id/ata-WDC_WD60EFZX-68B3FN0_WD-XXXXXXX
hdparm -y /dev/disk/by-id/ata-WDC_WD60EFZX-68B3FN0_WD-XXXXXXX
echo "Hard drives spun down"
echo "========================================"


Appendix

Sources


GiGaPlus GP-S25-0802P Review

My Netgear JGS516PE PoE switch died. It was still providing PoE; however, it was no longer switching traffic, and all the lights were lit up amber. As a temporary solution, I was able to set up one AP on a PoE injector and plug in the Switch Flex Mini using USB-C. This allowed me to get the network up and running while looking for a replacement.

Since newer wireless access points are coming with 2.5 Gbps ports and requiring PoE+, I wanted the replacement switch to support both, and I preferred a rack-mountable solution.


Options


Decision

I went with the GiGaPlus because it wasn't much more expensive than the used Netgear and came with significant upgrades. It is also unmanaged, so it has a smaller attack surface and you don't have to worry about firmware updates from a smaller company.


My Experience

My original unit worked as expected until I tried switching the uplink to SFP+ using a 1 meter Sodola DAC cable. Then I started getting weird behavior, like Slack taking forever to load. I ran speed tests and was not getting my expected speed, with large swings during the test. I thought it might have to do with flow control, so I tried switching that on, but it did not help. I then tested the other SFP+ port and all of these issues went away.

Armed with this information, I contacted support. They asked me to test with iperf instead of an external speed test. I did this and sent them the results, which were even worse than the external speed test. They then confirmed it was defective and asked if I wanted my money back. I replied asking if they wanted their unit back and if they would provide a shipping label. I never received a response. I waited a few days and then went through Amazon returns, as I was still in the 30-day return window.


Review

Pros:

  • Low cost
  • Provides everything I need now and future expandability
  • 2 x 10 Gbps SFP+ ports
    • One for Uplink
    • One to Daisy Chain or connect to server
  • Plug and play with no configuration necessary
  • Forwards VLAN tagged traffic (No ability to add/modify tags though)

Cons:

  • No management interface
    • cannot see how close to PoE budget max you are
    • no ability to cut power to a port without unplugging
  • LED Status Lights are rather lacking
    • Power LED and Ports 9 and 10 activity are on the far right
    • No light to indicate which ports are supplying PoE
    • Green light indicates 2.5 Gbps
    • Orange light indicates 1 Gbps / 100 Mbps / 10 Mbps
    • No light indicates no connection
  • The original unit only had one functional SFP+ port; the replacement works perfectly

Other Thoughts:

  • Support is email-only and responded to my emails only once a day, likely due to the time difference
  • I would make sure that everything works within the 30-day Amazon return window so that you can go through Amazon returns instead of support.


Appendix

Research

Flow Control


06 September 2025

Replacing HDHomeRun tuner

My HDHomeRun Connect died in a lightning storm, possibly due to static build-up, so I bought a replacement HDHomeRun Flex Duo as well as a lightning arrester/suppressor with a ground.

Here are the ones that I went with:

  • $109.99 HDHomeRun Flex Duo: Link
  • $17.95 Proxicast Coaxial Lightning Arrester/Suppressor: Link

I had considered the 4-tuner model with ATSC 3.0 support, but the threat of the channels becoming encrypted and the increased price made me go with the duo.


Device Setup

  • Add the lightning arrester/suppressor between the antenna and TV tuner and attach a ground wire
  • Plug in antenna, ethernet, and power into the HDHomeRun
  • Assign the HDHomeRun a static IP in your router
  • Reboot the HDHomeRun
  • Update the firmware: http://hdhomerun.local


Preparing Docker

setup.yml should be a copy of the mythtv docker-compose.yml, but update the image to dheaps/mythbackend:setup and add VNC_PASS=<super_secret_pass> to the environment section. Some key portions are:

  • network_mode: host
    • otherwise mythtv-setup will not be able to detect your HDHomeRun
  • hostname: <your_hostname>
    • otherwise mythtv-setup will use whatever generated hostname docker feeds it
  • environment variable VNC_PASS
    • the password to connect to VNC
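
Putting the points above together, setup.yml might look something like this (the image tag and environment variable come from the steps above; the hostname and password values are placeholders, and the volumes should be copied from your existing docker-compose.yml):

```yaml
services:
  mythtv-setup:
    image: dheaps/mythbackend:setup
    network_mode: host          # required so mythtv-setup can detect the HDHomeRun
    hostname: mythtv-backend    # placeholder; use your real backend hostname
    environment:
      - VNC_PASS=super_secret_pass   # password for the VNC connection
    # plus the volumes and any other settings from your existing docker-compose.yml
```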


Starting Docker

First and most importantly, make sure your mythbackend is NOT running. I did this with:

  • docker compose down
  • docker ps

Next is starting the docker containers for mythtv-setup:

  • docker compose --file setup.yml up


Running Setup

Connect VNC to <ip_address>:5900. You will then be prompted with several questions.

  • Add user to mythtv group? NO
  • Ignore the error and continue? NO
  • Would you like to start the mythtv backend? NO
  • Would you like to run mythfilldatabase? NO

After answering the questions, I got dumped to the terminal.

On the command line run:

  • mythtv-setup.real

When done with setup, exit with ESC and make sure to save changes if prompted. Then close VNC and press Ctrl-C in the terminal running the docker container.


Appendix

Troubleshooting

  • mythweb: Unable to connect to 127.0.0.1:6543
    • in setup make sure that under General -> IP v4 address is set to the correct IP

Sources