12 October 2025

Changing Proxmox Boot Devices

I wanted to use the Intel SSDs as the boot devices in my replacement server. This meant I needed to migrate the existing boot pool back to the 2.5" HDDs.


Prep Work

Ensure that autoexpand is off

  • zpool get autoexpand rpool

NAME   PROPERTY    VALUE   SOURCE

rpool  autoexpand  off     default


Adding the 2.5" HDDs

For all the examples below, /dev/sda is one of the existing boot drives, and /dev/sde and /dev/sdf are the new drives.


Install the hard drives

  • Power off the system first if you don't have hot swap


Copy the partition tables

  • sfdisk -d /dev/sda | grep -v last-lba > ssd_part_table
  • sed -e '/^label-id:/d' -e 's/,\s*uuid=[-0-9A-F]*//g' ssd_part_table | sfdisk /dev/sde
  • sed -e '/^label-id:/d' -e 's/,\s*uuid=[-0-9A-F]*//g' ssd_part_table | sfdisk /dev/sdf
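The sed expressions above strip the label-id line and the per-partition uuid= fields so the new disks receive fresh identifiers when sfdisk writes the table. The transform can be sanity-checked on a sample dump line first (the values here are made up for illustration):

```shell
# A line in the format `sfdisk -d` emits (values are illustrative)
line='/dev/sda3 : start=2099200, size=1562425344, type=6A898CC3-1DD2-11B2-99A6-080020736631, uuid=ABCD1234-1111-2222-3333-44445555AAAA'

# The same substitution used above removes the ", uuid=..." field
cleaned=$(echo "$line" | sed -e 's/,\s*uuid=[-0-9A-F]*//g')
echo "$cleaned"
```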


Copy the BIOS data

  • dd if=/dev/sda1 of=/dev/sde1
  • dd if=/dev/sda1 of=/dev/sdf1
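dd reports little beyond its exit status, so it can be worth confirming the copy with cmp, which exits non-zero on the first differing byte. A file-based sketch of the same copy-and-verify pattern (on the real hardware, the if=/of= arguments would be the partitions above):

```shell
# Stand-ins for the source and destination partitions
src=$(mktemp)
dst=$(mktemp)
printf 'bios boot partition contents' > "$src"

# Copy, then verify the two are byte-identical
dd if="$src" of="$dst" status=none
cmp "$src" "$dst" && echo "copy verified"
```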


Copy the Boot

  • proxmox-boot-tool format /dev/sde2
  • proxmox-boot-tool format /dev/sdf2
  • proxmox-boot-tool init /dev/sde2
  • proxmox-boot-tool init /dev/sdf2


Add to the ZFS pool

  • zpool attach rpool ata-INTEL_SSDSC2BX800G4_XXXXXX-part3 /dev/disk/by-id/scsi-XXXXX1-part3
  • zpool attach rpool ata-INTEL_SSDSC2BX800G4_XXXXXX-part3 /dev/disk/by-id/scsi-XXXXX2-part3


Hardware Changes

  • turn off the server
  • remove the drives
  • ensure server still boots


Remove old drives from zpool

  • zpool status rpool
  • zpool detach rpool <old ssd id>
  • zpool detach rpool <old ssd id>


Remove old drives from boot

  • proxmox-boot-tool status
  • proxmox-boot-tool clean
  • proxmox-boot-tool status



Lenovo TS140 Hardware Upgrades

I decided to use my old server (Lenovo TS140) to replace my present one (Dell R730XD). There were two factors in my decision: (1) it was more stable and (2) it consumed less power, ~50W vs ~99W. However, before I could do that I wanted to make two key upgrades: (1) more than 4 cores/4 threads and (2) faster-than-gigabit networking.


Hardware Options

Processor:

  • E3-1225 v3 (Current Processor)
    • Links: Intel / Tech Power Up
    • 4 core 4 thread
    • Base: 3.2 GHz
    • Max Turbo: 3.6 GHz
    • IGP: Intel HD P4600
    • TDP: 84W
    • Single/Multi: 2014/5322
  • E3-1245 v3
    • Links: Intel / Tech Power Up
    • 4 core 8 thread
    • Base: 3.4 GHz
    • Max Turbo: 3.8 GHz
    • IGP: Intel HD P4600
    • TDP: 84W
    • Single/Multi: 2150/7044
  • E3-1275 v3
    • Links: Intel / Tech Power Up
    • 4 core 8 thread
    • Base: 3.5 GHz
    • Max Turbo: 3.9 GHz
    • IGP: Intel HD P4600
    • TDP: 84W (listed as 95W in some places)
    • Single/Multi: 2198/7203


NIC:
  • Used Intel X710-DA2
    • roughly $30 from eBay
    • 10 Gbps SFP+
    • Requires a PCIe x8 slot
  • Realtek RTL8125B NIC
    • roughly $17-20 from Amazon
    • 2.5 Gbps
    • Requires a PCIe x1 slot (PCIe 2.0 x1 provides up to 5 Gbps of bandwidth)
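For reference on that x1 figure: PCIe 2.0 signals at 5 GT/s per lane with 8b/10b line encoding, so only 8 of every 10 transferred bits carry data, and the usable rate per lane works out to

```latex
5\,\text{GT/s} \times \frac{8}{10} = 4\,\text{Gbit/s}
```

which still comfortably exceeds what a 2.5 Gbps NIC can saturate.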


Decision

I went with the E3-1275 v3 as it was roughly the same price as the E3-1245 v3 and was slightly faster.

I chose the Realtek RTL8125B from GiGaPlus as it would keep the x16 slot free in my TS140 for future expandability. I also only had a single free SFP+ port on my switch.


Appendix

Sources


04 October 2025

Clean-Up filenames created by Handbrake

I wanted a script to help me rename files that Handbrake creates.


Issues

There are mainly two issues that I come across:

  • Spaces that I want to replace with underscores
  • Unprintable characters that I want to remove
    • I determined that these were ASCII characters 001-037 (in octal)
    • Spot them with: ls -b directory_with_octal_chars_in_filename
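Both clean-ups can be tried on sample strings before touching any real files. This sketch assumes bash (for the ${var// /_} expansion) and GNU sed (for the \oNNN octal escapes used below):

```shell
# Space replacement via bash parameter expansion
name="My Movie (2024).mkv"
clean_spaces=${name// /_}
echo "$clean_spaces"        # My_Movie_(2024).mkv

# Control-character removal for octal 001-037 (GNU sed escapes)
bad=$(printf 'My\001Movie.mkv')
clean_ctrl=$(echo "$bad" | sed -e 's/[\o001-\o037]//g')
echo "$clean_ctrl"          # MyMovie.mkv
```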


Solution

Script (rename.sh)

#!/usr/bin/bash
directory=${1}
if [ -z "${directory}" ] ; then
  echo "Requires a path"
  exit 1
fi

# replace spaces with underscores
find "${directory}" -type f -name "* *" |
while IFS= read -r file; do
  mv "${file}" "${file// /_}"
done

# remove unprintable ASCII chars (octal 001-037)
find "${directory}" -type f -name "*"[$'\001'-$'\037']"*" |
while IFS= read -r file; do
  new_file=$(echo "$file" | sed -e 's/[\o001-\o037]//g')
  mv "$file" "$new_file"
done


Permissions

chmod +x rename.sh


Usage

./rename.sh Directory_To_Rename_Files_In


Appendix

Sources


07 September 2025

ZFS Backups

Here is the process that I use to create local zfs backups.


Manual Backup Procedure


  1. Create Snapshots
    • zfs snapshot -r storage/containers@backup-20240115
    • -r: for recursive
  2. Create a RaidZ1 Pool as our backup target
    • zpool create backup-pool raidz1 [DISK1] [DISK2] [DISK3] [DISK4]
  3. Send the initial snapshot
    • zfs send -R --raw storage/containers@backup-20240115 | zfs receive -o readonly=on backup-pool/containers
    • -R: for recursive
    • --raw: so that encrypted data is sent as-is without being decrypted (unencrypted data remains unencrypted)
    • -o readonly=on: so that the backups are not editable
  4. Send incremental snapshots
    • zfs send -R --raw -I storage/containers@backup-20240115 storage/containers@backup-20240210 | zfs receive backup-pool/containers
    • -I [prev-snapshot]: incremental data since specified snapshot


Automating Backup Procedure

  1. Created the below scripts to help automate in /root/backup_scripts/
    • snapshot.sh
    • snapshot-all.sh
    • backup.sh
    • backup-all.sh
  2. Set the snapshot-all.sh script on crontab to run monthly
    • crontab -e
    • 05 8 1 * * sh /root/backup_scripts/snapshot-all.sh
    • remember that the cron time is UTC
  3. Manually push the backups
    • nohup sh backup-all.sh >>all.log 2>&1 &
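As a reminder of how the crontab entry above breaks down (fields are minute, hour, day of month, month, day of week):

```shell
# Runs at 08:05 UTC on the 1st of every month:
# minute hour day-of-month month day-of-week command
05 8 1 * * sh /root/backup_scripts/snapshot-all.sh
```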


Scripts

snapshot.sh

#!/bin/sh

DATASET=$1
zfs snapshot -r ${DATASET}@backup-$(date "+%Y%m%d")


snapshot-all.sh

#!/bin/sh

sh /root/backup_scripts/snapshot.sh rpool
sh /root/backup_scripts/snapshot.sh storage


backup.sh

#!/bin/sh

if [ $# -ne 2 ] ; then
  echo "Requires 2 args source and destination"
  exit 1
fi

src=$1
dest=$2

snaplist=$(zfs list -t snapshot $src)
if [ $? -ne 0 ] ; then
  echo "No snapshots found for $src"
  exit 1
fi

bkpsnap=$(zfs list -t snapshot $dest 2>&1)
incremental=$?

if [ $incremental -eq 0 ] ; then
  incsnap=$(echo "$bkpsnap" | tail -n 1 | cut -f1 -d" " | cut -f2 -d"@")
  snap=$(echo "$snaplist" | grep "$incsnap" -A 1 | tail -n 1 | cut -f1 -d" " | cut -f2 -d"@")

  if [ "$incsnap" = "$snap" ] ; then
    echo "Backup is already up to date"
    exit 1
  fi

  echo "Sending incremental snap $src@$snap on top of $dest@$incsnap"
  echo "zfs send -R --raw -I $src@$incsnap $src@$snap | zfs receive $dest"
  zfs send -R --raw -I $src@$incsnap $src@$snap | zfs receive $dest
else
  snap=$(echo "$snaplist" | tail -n +2 | head -n 1 | cut -f1 -d" " | cut -f2 -d"@")
  echo "Sending initial snap $src@$snap to $dest"
  echo "zfs send -R --raw $src@$snap | zfs receive -o readonly=on $dest"
  zfs send -R --raw $src@$snap | zfs receive -o readonly=on $dest
fi

echo "Complete\n"


backup-all.sh

#!/bin/sh

echo "========================================"
echo "Backup on $(date +"%Y-%m-%d %H:%M:%S")"
echo ""

zpool import backup-pool

sh /root/backup_scripts/backup.sh rpool/data backup-pool/data
sh /root/backup_scripts/backup.sh storage/containers backup-pool/containers
sh /root/backup_scripts/backup.sh storage/encrypted backup-pool/encrypted
sh /root/backup_scripts/backup.sh storage/lists backup-pool/lists

echo "Complete at $(date +"%Y-%m-%d %H:%M:%S")"

echo "Performing scrub"
zpool scrub -w backup-pool
echo "Complete at $(date +"%Y-%m-%d %H:%M:%S")"

zpool status backup-pool
zpool export backup-pool

echo "Spinning down hard drives"
hdparm -y /dev/disk/by-id/ata-WDC_WD60EFPX-68C5ZN0_WD-XXXXXXX
hdparm -y /dev/disk/by-id/ata-WDC_WD60EFPX-68C5ZN0_WD-XXXXXXX
hdparm -y /dev/disk/by-id/ata-WDC_WD60EFZX-68B3FN0_WD-XXXXXXX
hdparm -y /dev/disk/by-id/ata-WDC_WD60EFZX-68B3FN0_WD-XXXXXXX
echo "Hard drives spun down"
echo "========================================"


Appendix

Sources


GiGaPlus GP-S25-0802P Review

My Netgear JGS516PE PoE switch died. It was still providing PoE; however, it was no longer switching traffic, and all the lights were lit up amber. As a temporary solution, I set up one AP on a PoE injector and powered the Switch Flex Mini over USB-C. This allowed me to get the network up and running while I looked for a replacement.

Since newer wireless access points are coming with 2.5 Gbps ports and requiring PoE+, I wanted the replacement switch to support those two things, and I also preferred a rack-mountable solution.


Options


Decision

I went with the GiGaPlus because it wasn't much more expensive than the used Netgear and came with significant upgrades. It is also unmanaged, so it presents a smaller attack surface and you don't have to worry about firmware updates from a smaller company.


My Experience

My original unit worked well and as expected until I tried switching the uplink to SFP+ using a 1 meter Sodola DAC cable. Then I started getting weird behavior, like Slack taking forever to load. I ran speed tests and was not getting my expected speed, with large swings during each test. I thought it might have to do with flow control, so I tried switching that on, but it did not help. I then tested the other SFP+ port, and all of these issues went away.

Armed with this information, I contacted support. They asked me to test with iperf instead of an external speed test. I did this and sent them the results, which were even worse than the external speed test. They then confirmed the unit was defective and asked if I wanted my money back. I replied asking if they wanted their unit back and whether they would provide a shipping label, but I never received a response. After waiting a few days, I went through Amazon returns, as I was still within the 30-day return window.


Review

Pros:

  • Low cost
  • Provides everything I need now and future expandability
  • 2 x 10 Gbps SFP+ ports
    • One for Uplink
    • One to Daisy Chain or connect to server
  • Plug and play with no configuration necessary
  • Forwards VLAN tagged traffic (No ability to add/modify tags though)

Cons:

  • No management interface
    • cannot see how close you are to the PoE budget max
    • no ability to cut power to a port without unplugging
  • LED Status Lights are rather lacking
    • Power LED and Ports 9 and 10 activity are on the far right
    • No light to indicate which ports are supplying PoE
    • Green light indicates 2.5 Gbps
    • Orange light indicates 1 Gbps / 100 Mbps / 10 Mbps
    • No light indicates no connection
  • The original unit only had one functional SFP+ port; the replacement works perfectly

Other Thoughts:

  • Support is only via email and responded only once a day to my emails, likely due to the time difference
  • I would make sure that everything works within the 30-day Amazon return period so that you can go through Amazon returns instead of support.


Appendix

Research

Flow Control


06 September 2025

Replacing HDHomeRun tuner

My HDHomeRun Connect died in a lightning storm, possibly due to static build-up, so I bought a replacement HDHomeRun Flex Duo as well as a lightning arrester/suppressor with a ground.

Here are the ones that I went with:

  • $109.99 HDHomeRun Flex Duo: Link
  • $17.95 Proxicast Coaxial Lightning Arrester/Suppressor: Link

I had considered the 4-tuner model with ATSC 3.0 support, but the threat of the channels becoming encrypted and the increased price made me go with the duo.


Device Setup

  • Add the lightning arrester/suppressor between the antenna and TV tuner and attach a ground wire
  • Plug in antenna, ethernet, and power into the HDHomeRun
  • Assign the HDHomeRun a static IP in your router
  • Reboot the HDHomeRun
  • Update the firmware: http://hdhomerun.local


Preparing Docker

setup.yml should be a copy of the mythtv docker-compose.yml, but with the image updated to dheaps/mythbackend:setup and VNC_PASS=<super_secret_pass> added to the environment section. Some key portions are:

  • network_mode: host
    • otherwise mythtv-setup will not be able to detect your HDHomeRun
  • hostname: <your_hostname>
    • otherwise mythtv-setup will use whatever generated hostname docker feeds it
  • environment variable VNC_PASS
    • the password to connect to VNC
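Putting those pieces together, setup.yml might look roughly like this. This is a sketch, not a verified compose file: the service name and volume paths are placeholders for whatever your existing docker-compose.yml already uses; only the image tag, network_mode, hostname, and VNC_PASS lines come from the notes above:

```yaml
services:
  mythbackend-setup:
    image: dheaps/mythbackend:setup
    network_mode: host          # required so mythtv-setup can detect the HDHomeRun
    hostname: <your_hostname>   # otherwise docker feeds it a generated hostname
    environment:
      - VNC_PASS=<super_secret_pass>   # password used to connect over VNC
    volumes:
      - /path/to/your/mythtv/data:/var/lib/mythtv   # placeholder paths
```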


Starting Docker

First, and most importantly, make sure your mythbackend is NOT running. I did this with:

  • docker compose down
  • docker ps

Next is starting the docker containers for mythtv-setup:

  • docker compose --file setup.yml up


Running Setup

Connect VNC to <ip_address>:5900. You will then be prompted with several questions.

  • Add user to mythtv group? NO
  • Ignore the error and continue? NO
  • Would you like to start the mythtv backend? NO
  • Would you like to run mythfilldatabase? NO

After answering the questions, I got dumped to the terminal.

On the command line run:

  • mythtv-setup.real

When done with setup, exit with ESC and make sure to save changes if prompted. Then close VNC and press Ctrl-C in the terminal running the docker container.


Appendix

Troubleshooting

  • mythweb: Unable to connect to 127.0.0.1:6543
    • in setup, make sure that under General the IPv4 address is set to the correct IP

Sources


25 June 2025

Kodi and Jellyfin Intro Skipper

I wanted to be able to skip the longer intro sequences like you can on many streaming services. In my search, I found a plugin and an addon to accomplish the task.


Installation

  • Add Intro-Skip Plugin to Jellyfin
    • Add the Repository
      • Jellyfin Web Interface -> Hamburger menu -> Dashboard
      • Click on "Catalog" under Plugins
      • Click on the settings wheel at the top
      • Click on the "+"
      • https://intro-skipper.org/manifest.json
    • Install the Addon
      • On the Catalog page scroll down to the Intro-Skipper section and install "Intro Skipper"
      • Restart Jellyfin to finish the installation
  • Add the Jellyskip Addon to Kodi

Appendix

Sources