04 January 2026

Portainer and Docker 29

Portainer was not able to access the local environment, showing the error "Failed loading environment The environment named local is unreachable". It turned out that Portainer was not yet compatible with Docker 29. Here are the steps I used to downgrade Docker to a compatible version.


Downgrade Docker

  • Check the version installed
    • apt list --installed docker-ce
  • Check for available versions
    • apt-cache policy docker-ce | head -n 30
  • Downgrade
    • DOCKER_VERSION="5:28.5.2-1~debian.12~bookworm"
    • apt-get install docker-ce-cli="$DOCKER_VERSION" docker-ce="$DOCKER_VERSION" docker-ce-rootless-extras="$DOCKER_VERSION"
  • Restart
    • shutdown -r now
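
To keep a later apt-get upgrade from pulling Docker 29 straight back in, the packages can also be held (a small sketch using apt-mark; unhold them again once a compatible Portainer release is out):

apt-mark hold docker-ce docker-ce-cli docker-ce-rootless-extras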




02 January 2026

Proxmox SATA DVD drive passthrough

I had a DVD movie that was only readable by my SATA DVD drive, which meant I needed to pass the drive through to my VM. However, it appears that SATA DVD passthrough to a VM is not currently possible without an iSCSI workaround. Since I only needed to do this for a single DVD, I didn't want anything that complicated. Instead, I decided to pass the drive through to an LXC and use it to create an ISO that I could then use in the VM.


LXC DVD drive passthrough

  1. Create a new LXC (I used the Debian version and upped the memory to 2 GiB and the storage to 8 GiB)
    • bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/debian.sh)"
  2. Passthrough the DVD drive
    1. Container -> Resources
    2. Add -> Device Passthrough
    3. Device Path: /dev/sr0 (or whatever number your DVD drive is)
    4. Click Add
  3. Install dvdbackup, libdvdcss2, and ddrescue
    1. apt-get install dvdbackup libdvd-pkg gddrescue  # libdvd-pkg provides libdvdcss2; gddrescue is GNU ddrescue
    2. dpkg-reconfigure libdvd-pkg  # builds and installs libdvdcss2
  4. Create the ISO
    1. dvdbackup -i /dev/sr0 -I  # unlock the DVD
    2. ddrescue -b2048 -d -r500 /dev/sr0 disc.iso disc.mapfile  # Create the ISO
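
To actually use the ISO in the VM, it still has to reach the Proxmox node's ISO storage. One way, run from the node itself, is to pull it out of the container with pct pull (the container ID 200 and the paths here are just examples; adjust for your setup):

pct pull 200 /root/disc.iso /var/lib/vz/template/iso/disc.iso

The ISO then shows up under the local storage and can be attached to the VM's CD/DVD drive.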





07 November 2025

Jellyfin LXC iGPU passthrough

As the server my Jellyfin LXC is running on now has an iGPU, I wanted to see if I could use it to enable hardware acceleration for transcoding.


Map the video and render groups

  • Proxmox root node
    • Find the group ids for video and render
      • cat /etc/group
      • For me it was 44 and 104
    • Allow the mapping of those 2 gids
      • vi /etc/subgid
      • Add
        • root:44:1
        • root:104:1
    • Determine the name of the card and render device
      • ls -al /dev/dri
      • For example mine are
        • /dev/dri/card1
        • /dev/dri/renderD128
  • Jellyfin container
    • Note what its ID is. For my example I will use XXX
    • Find the group ids for video and render
      • cat /etc/group
      • For me it was 44 and 104 as well
    • Make sure jellyfin user is part of the video and render groups
      • groups jellyfin
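      • If it's missing from either group, add it inside the container: usermod -aG video,render jellyfin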
  • Proxmox root node
    • Map the group ids
      • vi /etc/pve/lxc/XXX.conf
      • Add the following:
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 44
lxc.idmap: g 44 44 1
lxc.idmap: g 45 100045 59
lxc.idmap: g 104 104 1
lxc.idmap: g 105 100105 65431
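
Each idmap line maps a contiguous range (g <container_gid> <host_gid> <count>), and the ranges must tile the container's entire gid space with no gaps, which is why the counts add up to 65536 (44 + 1 + 59 + 1 + 65431).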


Device Passthrough

  • Map the device
    • Proxmox -> jellyfin -> Resources
    • Add -> Device Passthrough
      • Device: /dev/dri/card1
      • Advanced: check
      • gid: 44
      • Click "Add"
    • Add -> Device Passthrough
      • Device: /dev/dri/renderD128
      • Advanced: check
      • gid: 104
      • Click: "Add"


Finalize changes

  • Restart the Jellyfin container
  • Turn on hardware acceleration in the Jellyfin UI
    • Dashboard -> Playback -> Transcoding
    • Hardware Acceleration: "Intel QuickSync (QSV)"


Results

Unfortunately, my Haswell processor (Xeon E3-1275 v3) isn't supported by Jellyfin's Intel QuickSync (QSV) acceleration. It does work with the Video Acceleration API (VAAPI), but the quality was poor and it didn't completely remove the load from the processor. With that in mind, I kept hardware acceleration set to None and left video transcoding disabled for the users.






Migrating Proxmox Containers to a Different Server

Here are the steps that I used to migrate my Proxmox containers from my Dell R730XD to my Lenovo TS140.


New (old) Server Setup


Migrate LXC containers and ZPool drives

As I was going to move the drives containing my main ZPool anyway, I decided that the easiest way to migrate my containers was to back them up to those same drives. That way, after moving the drives over and importing the pool on the new server, I could simply restore the containers from the backups.


Backup Existing LXC containers

  1. Create a destination for Proxmox backups on the drives being moved over:
    • zfs create storage/encrypted/proxmox
    • mkdir /storage/encrypted/proxmox/backups
  2. Add this backup destination to Proxmox:
    • Datacenter -> Storage
    • Add -> Directory
    • Set "Content" to "Backups"
  3. Create backup of each LXC that you want to migrate
    • Choose the LXC
    • Click "Shutdown"
    • Select "Backup"
    • Click "Backup now"
    • Select your backup directory
    • Click "Backup"
    • Make note of any bind mount points that you will need to recreate
  4. Disable Auto-Start
    • Select "Options"
    • Select "Start at boot"
    • Click "Edit"
    • Uncheck and click "OK"
  5. Repeat 3 + 4 for each LXC that you are migrating
  6. Unmount and export the ZPool
    • zpool export storage
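
Steps 3 and 4 can also be done from the CLI if you have a lot of containers (a sketch; 101 stands in for each container's ID and "proxmox-backups" for whatever you named the directory storage):

pct shutdown 101
vzdump 101 --storage proxmox-backups --mode stop --compress zstd
pct set 101 --onboot 0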


Recreate Users and Groups

For file permissions to transfer seamlessly, it is best to have the same users and groups (with the same ids) on both servers.

  • Old Server
    • View current users
      • cat /etc/passwd
    • View current groups
      • cat /etc/group
    • Note the user/group name as well as its ID
    • View the mapping files
      • cat /etc/subuid
      • cat /etc/subgid
  • New Server
    • For each needed group run:
      • groupadd -g <gid_number> <group_name>
    • For each needed user run:
      • useradd -u <uid_number> -g <gid_number> <username>
    • Set the user's primary group (if it differs from the one given to useradd)
      • usermod -g <primary_group> <username>
    • Add user to group(s)
      • usermod -aG <group1>,<group2> <username>
    • Update mapping files
      • vi /etc/subuid
      • vi /etc/subgid
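
For example, to recreate a hypothetical "media" group and user that had gid/uid 1001 on the old server (substitute your actual names and ids):

groupadd -g 1001 media
useradd -u 1001 -g 1001 media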


Physical Changes

  1. Shut down both the old and new servers
  2. Move the HDDs over to the new server
  3. Turn on the new server


Restore LXC containers

  1. Import the ZPool
    • zpool import storage
  2. Add the backup destination to Proxmox
    • Datacenter -> Storage
    • Add -> Directory
  3. Restore the LXC
    1. Select the LXC backup that you want to restore
    2. Click "Restore"
    3. Edit the CT ID if desired
    4. Click "Restore"
  4. Repeat step 3 for each LXC
  5. Recreate any bind mount points that you noted during the backup
    • pct set <CT ID> -mp0 <host path>,mp=<container path>
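
The restore also works from the CLI (the CT ID, archive name, and target storage here are examples; directory storages keep backups in a dump/ subdirectory):

pct restore 101 /storage/encrypted/proxmox/backups/dump/<backup-archive>.tar.zst --storage local-zfs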


Results

The containers spun up without issue and I didn't have to change any client configs!





12 October 2025

Changing Proxmox Boot Devices

I wanted to use the Intel SSDs as the boot devices in my replacement server. This meant I first had to migrate this server's boot pool back to the 2.5" HDDs.


Prep Work

Ensure that autoexpand is off

  • zpool get autoexpand rpool

NAME   PROPERTY    VALUE   SOURCE
rpool  autoexpand  off     default


Adding the 2.5" HDDs

In all the examples below, /dev/sda is one of the existing boot drives, and /dev/sde and /dev/sdf are the new drives.


Install the hard drives

  • Power-off the system first if you don't have hot swap


Copy the partition tables

  • sfdisk -d /dev/sda | grep -v last-lba > ssd_part_table
  • sed -e '/^label-id:/d' -e 's/,\s*uuid=[-0-9A-F]*//g' ssd_part_table | sfdisk /dev/sde
  • sed -e '/^label-id:/d' -e 's/,\s*uuid=[-0-9A-F]*//g' ssd_part_table | sfdisk /dev/sdf
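
The sed strips the disk's label-id and the per-partition uuids from the dump so that sfdisk generates fresh ones on the new drives instead of cloning duplicates.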


Copy the BIOS data

  • dd if=/dev/sda1 of=/dev/sde1
  • dd if=/dev/sda1 of=/dev/sdf1


Copy the Boot

  • proxmox-boot-tool format /dev/sde2
  • proxmox-boot-tool format /dev/sdf2
  • proxmox-boot-tool init /dev/sde2
  • proxmox-boot-tool init /dev/sdf2
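
  • Verify with proxmox-boot-tool status, which should now list the new partitions alongside the existing ones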


Add to the ZFS pool

  • zpool attach rpool ata-INTEL_SSDSC2BX800G4_XXXXXX-part3 /dev/disk/by-id/scsi-XXXXX1-part3
  • zpool attach rpool ata-INTEL_SSDSC2BX800G4_XXXXXX-part3 /dev/disk/by-id/scsi-XXXXX2-part3
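
  • Wait for the resilvers to finish before removing the old drives; progress is shown by zpool status rpool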


Hardware Changes

  • Turn off the server
  • Remove the old boot drives
  • Ensure the server still boots


Remove old drives from zpool

  • zpool status rpool
  • zpool detach rpool <old drive id>
  • zpool detach rpool <old drive id>


Remove old drives from boot

  • proxmox-boot-tool status
  • proxmox-boot-tool clean
  • proxmox-boot-tool status



Lenovo TS140 Hardware Upgrades

I decided to use my old server (Lenovo TS140) to replace my present one (Dell R730XD). There were two factors in my decision: (1) it was more stable, and (2) it consumed less power (~50 W vs ~99 W). However, before I could do that, I wanted to make two key upgrades: (1) more than 4 cores/4 threads and (2) faster-than-gigabit networking.


Hardware Options

Processor:

  • E3-1225 v3 (Current Processor)
    • Links: Intel / Tech Power Up
    • 4 core 4 thread
    • Base: 3.2 GHz
    • Max Turbo: 3.6 GHz
    • IGP: Intel HD P4600
    • TDP: 84W
    • Single/Multi: 2014/5322
  • E3-1245 v3
    • Links: Intel / Tech Power Up
    • 4 core 8 thread
    • Base: 3.4 GHz
    • Max Turbo: 3.8 GHz
    • IGP: Intel HD P4600
    • TDP: 84W
    • Single/Multi: 2150/7044
  • E3-1275 v3
    • Links: Intel / Tech Power Up
    • 4 core 8 thread
    • Base: 3.5 GHz
    • Max Turbo: 3.9 GHz
    • IGP: Intel HD P4600
    • TDP: 84W (listed as 95W at places)
    • Single/Multi: 2198/7203


NIC:

  • Used Intel X710-DA2
    • roughly $30 from eBay
    • 10 Gbps SFP+
    • Requires a PCIe x8 slot
  • Realtek RTL8125B NIC
    • roughly $17-20 from Amazon
    • 2.5 Gbps
    • Requires a PCIe x1 slot (a PCIe 2.0 x1 link runs at 5 GT/s, roughly 4 Gbps usable after 8b/10b encoding overhead, which is still plenty for 2.5 GbE)


Decision

I went with the E3-1275 v3 as it was roughly the same price as the E3-1245 v3 and was slightly faster.

I chose the Realtek RTL8125B from GiGaPlus as it would keep the x16 slot free in my TS140 for future expandability, and I only had a single free SFP+ port left on my switch.




04 October 2025

Cleaning Up Filenames Created by HandBrake

I wanted a script to help me rename the files that HandBrake creates.


Issues

There are mainly two issues that I come across:

  • Spaces that I want to replace with underscores
  • Unprintable characters that I want to remove
    • I determined that these were ASCII characters 001-037 (in octal)
    • ls -b directory_with_octal_chars_in_filename


Solution

Script (rename.sh)

#!/usr/bin/bash
directory=${1}
if [ -z "${directory}" ] ; then
  echo "Requires a path"
  exit 1
fi

# replace spaces with underscores
# (note: the substitution applies to the whole path, so the target
# directory itself should not contain spaces in its name)
find "${directory}" -type f -name "* *" |
while IFS= read -r file; do
  mv "${file}" "${file// /_}"
done

# remove unprintable ASCII chars (octal 001-037)
find "${directory}" -type f -name "*"[$'\001'-$'\037']"*" |
while IFS= read -r file; do
  new_file=$(printf '%s\n' "${file}" | sed -e 's/[\o001-\o037]//g')
  mv "${file}" "${new_file}"
done


Permissions

chmod +x rename.sh


Usage

./rename.sh Directory_To_Rename_Files_In

