17 April 2024

r730xd Server Part 1: Initial Build

To support more than the 2 hard drives I could fit in the TS140, I needed a new system. Here were my requirements:

  • price < $500
  • reasonable noise level
  • more than 4 cores
  • ECC RAM (preferably at least 128GB)
  • 8 or more spots for hard drives

I was looking at some older SuperMicro and Dell systems. I read several posts about the power supplies being louder on the SuperMicros, so I decided to go with the Dell r730xd, which supports 12 x 3.5" drives in the front and 2 x 2.5" drives in the rear.


Hardware

  • r730xd used - $350 + tax
    • 2 x E5-2670 v3, 12 cores each
    • 128 GB ECC DDR4 (8 x 16GB)
    • PERC H730P Mini RAID controller
    • 1 x 2.5" caddy with 1.2TB 10K SAS drive
    • iDRAC 8 Enterprise license
  • 12 x 3.5" used caddies - $60 + tax
    • either SAS or SATA caddies work for holding SATA drives
  • 1 x 2.5" caddy with 1.2 TB 10K SAS drive - $24 + tax
    • this is a second drive for the boot mirror
  • total: $434 + tax


TL;DR: Pros/Cons

  • Pros
    • Many many cores
    • Loads of RAM with expandability
    • Thermal solution works quite well: even under full load with the fans on auto at 6-8% (2640-2880 rpm), the processors stay around 77°C
    • iDRAC is quite useful, especially the HTML5 virtual console (requires an Enterprise license)
  • Cons
    • No hardware reset for iDRAC
  • Somewhere in the middle
    • idle power usage of 94-96W (with a single 2.5" drive) is higher than I would like
      • Fans @ 4% (2280 rpm)
      • 18W for iDRAC even when powered down
    • installing hard drives
      • when installing an older drive (250GB WD), the fans spun up to 40% (7320 rpm); this may have been because the controller could not read its temperature or the drive was failing
      • installing a 3TB WD Red drive caused the fans to go to 6% (2680 rpm)
    • Fan noise
      • 45-50% (8880-9840 rpm) is noticeable with the basement door closed
      • 30% (6480 rpm) is noticeable with the basement door open
      • 15-20% (4080-4920 rpm) about as loud as a running fridge
      • 10% (3240 rpm) a low constant hum
      • 6% (2640 rpm) quieter than a seeking DVD drive


BIOS/iDRAC reset and setup

First I reset the BIOS by moving the jumper. Unfortunately, this did not reset the iDRAC, and there does not seem to be a way to do that in hardware; you will need a VGA monitor and keyboard. I followed this video: https://www.youtube.com/watch?v=R5pYfqDtzQw

Reset BIOS

  • F2 for System Setup
  • System BIOS
  • Default
  • Finish and Confirm
  • Reboot

BIOS Settings to change from default

  • Boot Settings -> Boot Mode -> UEFI
  • System Profile Settings -> System Profile -> Performance per Watt (OS)
  • Finish and Confirm
  • Reboot

Reset iDRAC

  • F2 for System Setup
  • iDRAC Settings
  • Reset iDRAC configurations to defaults
  • Finish and Confirm
  • Reboot

Then I was able to configure iDRAC from its default 192.168.0.120 to an unused IP on my network.

  • F2 for System Setup
  • iDRAC Settings
  • Network
  • Update Static IP Address (and Subnet/Gateway if applicable)
  • Back
  • Finish and Confirm
  • Reboot
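
If you ever need to change these settings again without rebooting into System Setup, racadm can do it from a shell. A rough sketch, assuming racadm is available (via an SSH session to the iDRAC, or Dell OpenManage on the host) and using placeholder addresses:

# Set a static IP, netmask, and gateway (placeholder values)
racadm setniccfg -s 192.168.1.120 255.255.255.0 192.168.1.1
# Confirm the new settings
racadm getniccfg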


Memory Testing

While memtest86 isn't the best memory tester available, it is free and doesn't require an OS to be installed. I used version 10.7; I downloaded the ISO from the link below and loaded it onto my Ventoy USB: https://www.memtest86.com/downloads/memtest86-iso.zip

It took about 24 hours to do all 4 passes of the test patterns.


Update iDRAC

Here is where I downloaded the iDRAC update from: https://www.dell.com/support/home/en-us/product-support/product/poweredge-r730xd/drivers. Download the Windows version, which is what gets uploaded through the Web UI. Then apply the updates from the iDRAC interface: Login to iDRAC -> iDRAC Settings -> Update and Rollback -> upload

At first I downloaded the win64 version of the latest available release (2.85.85.85), but the update would fail because it did not validate. Here is the update path that I went with: 2.61.60.60 -> 2.63.60.61 (win32) -> 2.75.100.75 (win32) -> 2.85.85.85 (win64)

After the first update the iDRAC appeared to be unresponsive. I let it sit for ~20 minutes and it was still unresponsive, although I could ping it. At that point I held the "i" button in for ~20s to restart the iDRAC, which brought it back to life.

Here is a video that I found helpful: https://www.youtube.com/watch?v=dea-PA1na0c


Update the rest of the firmware

I always update the BIOS separately.

Update BIOS/UEFI:

  • Login to iDRAC -> iDRAC Settings -> Update and Rollback -> HTTPS
  • HTTPS Address: downloads.dell.com
  • Check for Updates
  • Select the BIOS update and click "Install and Reboot"
  • The system will then reboot and install the new BIOS

Update Raid/Network:

  • Login to iDRAC -> iDRAC Settings -> Update and Rollback -> HTTPS
  • HTTPS Address: downloads.dell.com
  • Check for Updates
  • Select the RAID and network updates and click "Install and Reboot"
  • The system will then reboot and install the new firmware


Set RAID card to HBA mode

This will pass the drives directly to the OS, which will manage the software RAID.

Login to iDRAC -> Storage -> Controllers -> Setup -> Controller Mode -> Action -> HBA


Proxmox install

I downloaded Proxmox 8.1 from: https://www.proxmox.com/en/downloads

I was using a Ventoy USB for my Proxmox 8.1 install, but it was giving me an error. It turns out you need Ventoy >= 1.0.97 (https://github.com/ventoy/Ventoy/releases/tag/v1.0.97), so I updated my Ventoy USB and the install proceeded.

During the setup here are some of the values that I chose:

  • Target -> Options
    • If you only have 1 of your 2 boot drives
      • ZFS (RAID0)
      • Harddisk 0 - 1.09TB
      • Harddisk 1 - Do not use
    • If you have both, go ahead and set up the mirror
      • ZFS (RAID1)
      • Harddisk 0 - 1.09TB
      • Harddisk 1 - 1.09TB
  • Timezone: America/New_York
  • Password and Email
  • Management
    • Make sure to select the correct management interface (easier if network is hooked up before starting setup)
    • Hostname: r730xd.home.arpa
    • IP Address: pick a free IP
    • Gateway/DNS

After the setup, login to the web ui: https://<IP>:8006

  • Disable Proxmox Enterprise repos as they require a license
    • Click on the hostname (r730xd)
    • Remove the subscription update repository
      • Updates -> Repositories
      • Highlight "pve pve-enterprise" and click "Disable"
      • Highlight "ceph-quincy enterprise" and click "Disable"
    • Add the No Subscription update repository
  • Update your install
    • Updates
    • Click "Refresh"
    • Close the window when it is done
    • Click "Upgrade"
    • Type "y" and enter


Add the rpool mirror (if not configured at install)

  • Determine if using grub or systemd-boot
    • proxmox-boot-tool status
    • efibootmgr -v
  • Copy the partition table over (removing disk label-id and uuids so sfdisk generates new ones)
    • sfdisk -d /dev/sda > part_table
    • sed -e '/^label-id:/d' -e 's/,\s*uuid=[-0-9A-F]*//g' part_table | sfdisk /dev/sdb
  • Copy the BIOS boot partition
    • dd if=/dev/sda1 of=/dev/sdb1
  • Prepare the EFI partition
    • proxmox-boot-tool format /dev/sdb2
    • (don't dd /dev/sda2 over instead, as that would keep the old UUID)
  • Initialize the EFI partition
    • proxmox-boot-tool init /dev/sdb2
    • OR if using grub
    • proxmox-boot-tool init /dev/sdb2 grub
  • Get current zpool drive
    • zpool status
  • Add the mirror
    • zpool attach rpool scsi-XXXXXXXXX-part3 /dev/disk/by-id/scsi-YYYYYYYYY-part3
  • ZFS will then resilver the drive; to check whether it is done, run
    • zpool status
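
Putting it all together, here is a rough sketch of the systemd-boot variant as a single script (sda/sdb and the scsi-XXX/YYY ids are placeholders; double-check yours with lsblk and zpool status before running anything destructive):

#!/bin/sh
# Mirror the Proxmox rpool onto a second disk (systemd-boot variant)
# /dev/sda is the existing boot disk, /dev/sdb the new one -- placeholders!

# copy the partition table, stripping label-id and uuids so sfdisk makes new ones
sfdisk -d /dev/sda > part_table
sed -e '/^label-id:/d' -e 's/,\s*uuid=[-0-9A-F]*//g' part_table | sfdisk /dev/sdb

# copy the BIOS boot partition and prepare a fresh EFI partition
dd if=/dev/sda1 of=/dev/sdb1
proxmox-boot-tool format /dev/sdb2
proxmox-boot-tool init /dev/sdb2

# attach the new disk's ZFS partition as a mirror and watch the resilver
zpool attach rpool scsi-XXXXXXXXX-part3 /dev/disk/by-id/scsi-YYYYYYYYY-part3
zpool status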


intel_pstate

In an attempt to lower the idle power usage, I wanted to try a different CPU frequency scaling driver.

  • check whether it is intel_pstate or intel_cpufreq
    • apt install cpufrequtils
    • cpufreq-info | head -n 13
  • Determine if using grub or systemd-boot
    • proxmox-boot-tool status
    • efibootmgr -v
  • to enable intel_pstate for grub boot:
    • edit /etc/default/grub
    • add intel_pstate=active to GRUB_CMDLINE_LINUX_DEFAULT
      • example: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_pstate=active"
    • proxmox-boot-tool refresh
    • wait for grub to finish updating and then reboot
  • to enable intel_pstate for systemd-boot
    • edit /etc/kernel/cmdline
    • add intel_pstate=active to the end
    • proxmox-boot-tool refresh
    • wait for it to update and then reboot
  • ensure it is now intel_pstate
    • cpufreq-info | head -n 13
  • unfortunately this did not change idle power usage
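
As a quicker check than installing cpufrequtils, the active driver and governor can also be read straight from sysfs:

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor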


Fan Information

Here are the fan values from my r730xd:

  •  4% -> 2280 rpm -> 10 cfm
  •  6% -> 2640 rpm -> 12 cfm
  •  9% -> 3000 rpm -> 18 cfm
  • 12% -> 3600 rpm -> 21 cfm
  • 15% -> 4080 rpm -> 24 cfm
  • 20% -> 4920 rpm -> 30 cfm
  • 25% -> 5640 rpm -> 36 cfm
  • 30% -> 6480 rpm -> 41 cfm
  • 40% -> 7320 rpm -> 53 cfm
  • 50% -> 8160 rpm -> 64 cfm
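
If you want to poll these values yourself, IPMI over LAN is one option. A sketch, assuming IPMI over LAN is enabled in the iDRAC and using placeholder credentials:

ipmitool -I lanplus -H 192.168.1.120 -U root -P yourpassword sdr type fan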



04 April 2024

Checking if two video files are the same aside from metadata

I wanted to make sure that two video files were identical aside from metadata, so I wrote a script to help. The script hashes the audio and video data while ignoring the container metadata.

Caveats:

  • There is a chance for hash collisions, so double check the results manually before deleting
  • This script doesn't flag videos as duplicates if they are the same video but have different resolution, bitrate, audio data, etc.


Requirements

  • ffmpeg
  • md5sum


Shell Version

Initial shell script to compare 2 files:

#!/bin/sh
file1="$1"
file2="$2"

flags="-fflags +bitexact -flags:v +bitexact -flags:a +bitexact -c copy -f matroska"

file1hash=$(ffmpeg -i "$file1" $flags -c copy -f matroska -loglevel error - | md5sum | cut -f1 -d" ")
file2hash=$(ffmpeg -i "$file2" $flags -c copy -f matroska -loglevel error - | md5sum | cut -f1 -d" ")

echo "$file1hash $file1"
echo "$file2hash $file2"


Example Shell

$./diffvideo.sh file1.m4v file2.m4v
cb31XXXXXXXXXXXXXXXXXXXXXXXXXXXX file1.m4v
cb31XXXXXXXXXXXXXXXXXXXXXXXXXXXX file2.m4v


Python Version

Expanded python script to compare more files:

#!/usr/bin/env python3

import argparse
import shlex
import subprocess


def setup_cli():
    parser = argparse.ArgumentParser(
        prog='diffvideo.py',
        description='Group video files by a hash of their audio/video data, ignoring metadata',
    )
    parser.add_argument('filenames', nargs='*')
    return parser


def check_files(filenames):
    hashes = {'not_a_video': []}
    flags = '-fflags +bitexact -flags:v +bitexact -flags:a +bitexact'
    for file in filenames:
        # build the command as a list so filenames with spaces or quotes are safe
        cmd = ['ffmpeg', '-i', file] + shlex.split(flags) + ['-c', 'copy', '-f', 'matroska', '-']
        ff_proc = subprocess.run(cmd, capture_output=True)
        if ff_proc.returncode != 0:
            hashes['not_a_video'].append(file)
            continue
        hash_proc = subprocess.run('md5sum', capture_output=True, input=ff_proc.stdout)
        filehash = hash_proc.stdout.decode().split()[0]
        if filehash not in hashes:
            hashes[filehash] = []
        hashes[filehash].append(file)

    return hashes


def print_results(hashes):
    not_a_video = hashes.pop('not_a_video')
    singles = {}
    dupes = {}
    for key, value in hashes.items():
        if len(value) > 1:
            dupes[key] = value
        else:
            singles[key] = value

    if not_a_video:
        print('\nNot a video:')
    for each in not_a_video:
        print(f'    {each}')

    if dupes:
        print('\nDuplicates found:')
    for key, value in dupes.items():
        print(f'    {key}')
        for v in value:
            print(f'        {v}')

    if singles:
        print('\nNo duplicates for these files:')
    for key, value in singles.items():
        print(f'    {value[0]}')


if __name__ == "__main__":
    parser = setup_cli()
    args = parser.parse_args()
    hashes = check_files(args.filenames)
    print_results(hashes)


Example Python

$./diffvideo.py *

Not a video:
    test.txt
    file.docx

Duplicates found:
    cb31XXXXXXXXXXXXXXXXXXXXXXXXXXXX
        file1.m4v
        file2.m4v
    ab76XXXXXXXXXXXXXXXXXXXXXXXXXXXX
        file5.m4v
        file6.m4v

No duplicates for these files:
    file3.m4v
    file4.m4v



23 March 2024

Dockerize my LAMP webserver

As my main SSD is running low on estimated life remaining, I am attempting to containerize my projects so that they can be easily moved. The primary one is my Linux Apache MySQL PHP (LAMP) webserver that I use for lists and MythWeb.

For this project I will be using docker compose to put an nginx reverse proxy in front of Apache so that it can direct the traffic and handle SSL encryption and authentication. We will first set it up on some dev ports (9080/9443) so that we can test before replacing the existing servers.


Move docker data onto zfs

  • This will keep the images and logs on zfs instead of my root drive
  • It will also require any images to be downloaded again or rebuilt
  • Stop dockerd
    • sudo systemctl stop docker
    • sudo systemctl stop docker.socket
  • Move the docker data
    • sudo mkdir /storage/containers/dockerd
    • sudo rsync -avh --progress /var/lib/docker/ /storage/containers/dockerd
    • sudo mv /var/lib/docker /var/lib/docker.old
  • edit /etc/docker/daemon.json
{
    "data-root": "/storage/containers/dockerd",
    "storage-driver": "zfs"
}
  • Restart dockerd
    • sudo systemctl start docker.socket
    • sudo systemctl start docker
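
After restarting, confirm that dockerd picked up the new data root and storage driver:

sudo docker info | grep -E 'Docker Root Dir|Storage Driver'

Once everything checks out, /var/lib/docker.old can be deleted to reclaim the space.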


General Setup

  • Create a place to store all the files
    • this should be on your zfs dataset
    • sudo mkdir -p /storage/containers/webserver 
  • We will use this as the root directory for all of the below configs
  • Create the needed subdirs
    • cd /storage/containers/webserver
    • sudo mkdir -p letsencrypt/etc letsencrypt/data letsencrypt/logs
    • sudo mkdir -p nginx/www/html
    • sudo mkdir -p lists/build lists/mysql lists/www/html/lists


First setup nginx

  • Setup valid users for authentication
    • sudo htpasswd -c nginx/www/htpasswd username
  • Copy the existing letsencrypt certs
    • sudo mkdir -p letsencrypt/etc/live/this.example.com
    • sudo cp /etc/letsencrypt/live/this.example.com/* letsencrypt/etc/live/this.example.com/
  • create a docker-compose.yml with the following contents:
# Begin docker-compose.yml
version: '3.4'

services:
    nginx:
        container_name: 'nginx-proxy'
        hostname: 'nginx-proxy'
        image: nginx:latest
        ports:
            - "9080:80"
            - "9443:443"
        volumes:
            - ./nginx/prod.conf:/etc/nginx/conf.d/default.conf
            - ./nginx/www:/www
            - ./letsencrypt/etc:/etc/letsencrypt
            - ./letsencrypt/data:/data/letsencrypt
# End docker-compose.yml
  • create a nginx/prod.conf with the following contents:
# Begin nginx/prod.conf
server {
    listen      80;
    listen [::]:80;
    server_name this.example.com;

    location / {
        rewrite ^ https://$host:9443$request_uri? permanent;
    }

    # for certbot challenge
    location /.well-known/acme-challenge {
        allow all;
        root /data/letsencrypt;
    }
}

server {
    listen      443 ssl;
    listen [::]:443 ssl;
    http2 on;
    server_name this.example.com;

    ssl_certificate     /etc/letsencrypt/live/this.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/this.example.com/privkey.pem;

    auth_basic "Your Server Message";
    auth_basic_user_file /www/htpasswd;

    location / {
        root /www/html;
    }
}
# End nginx/prod.conf

  • create a nginx/www/html/index.html that will link to our actual contents, here is my example:
<html> 
<body>
    <p>
        <a href="lists/">Lists</a>
    </p>
    <p>
        <a href="mythweb/">MythWeb</a>
    </p>
</body>
</html>

  • Now test the server
    • sudo docker-compose up --build
  • Visit your site in a browser
  • Ctrl+C to stop the server
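
You can also sanity-check the proxy from another shell while it is up. A sketch, where "username" is whatever you added to the htpasswd file; -k skips certificate validation since we are hitting localhost rather than the real hostname:

# expect a 301 redirect to https on port 9443
curl -i http://localhost:9080/
# expect the index page after HTTP basic auth
curl -k -u username https://localhost:9443/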


Setup MythWeb

  • Add the mythweb section to docker-compose.yml so that it looks like:
# Begin docker-compose.yml
version: '3.4'

services:
    nginx:
        container_name: 'nginx-proxy'
        hostname: 'nginx-proxy'
        image: nginx:latest
        ports:
            - "9080:80"
            - "9443:443"
        volumes:
            - ./nginx/prod.conf:/etc/nginx/conf.d/default.conf
            - ./nginx/www:/www
            - ./letsencrypt/etc:/etc/letsencrypt
            - ./letsencrypt/data:/data/letsencrypt
    mythweb:
        container_name: 'myth-http'
        hostname: 'myth-http'
        image: dheaps/mythbackend:mythweb
        ports:
            - "7080:80"
        environment:
            - DATABASE_HOST=localhost
            - DATABASE_NAME=mythconverg
            - DATABASE_USER=mythtv
            - DATABASE_PASSWORD=YourSuperSecretPassword
            - TZ=America/New_York
        volumes:
            # This will have mysql connect over sockets instead of ports
            - /var/run/mysqld/mysql.sock:/var/run/mysqld/mysql.sock
# End docker-compose.yml

  • Add the mythweb sections to prod.conf so that it looks like this:
# Begin nginx/prod.conf
server {
    listen      80;
    listen [::]:80;
    server_name this.example.com;

    location / {
        rewrite ^ https://$host:9443$request_uri? permanent;
    }

    # for certbot challenge
    location /.well-known/acme-challenge {
        allow all;
        root /data/letsencrypt;
    }
}

server {
    listen      443 ssl;
    listen [::]:443 ssl;
    http2 on;
    server_name this.example.com;

    ssl_certificate     /etc/letsencrypt/live/this.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/this.example.com/privkey.pem;

    auth_basic "Your Server Message";
    auth_basic_user_file /www/htpasswd;

    location / {
        root /www/html;
    }

    location /mythweb {
        # Use this to preserve port number
        return 301 $scheme://$http_host/mythweb/;
    }
    location /mythweb/ {
        proxy_pass http://myth-http:80/mythweb/;
        proxy_buffering off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        # using $http_host so that the links will include port
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Forwarded-Port $server_port;
    }
}
# End nginx/prod.conf
  • Now test the server
    • sudo docker-compose up --build
  • Visit your site in a browser
  • Ctrl+C to stop the server


Setup MySQL and Apache

  • Note: we are using the MYSQL_HOST environment variable so that the PHP can easily switch which MySQL instance it connects to
  • Create a directory to hold related files
    • mkdir -p lists/build
    • mkdir -p lists/mysql
  • Copy your HTML/PHP files into lists/www/html/
    • sudo mkdir -p lists/www/html/lists
    • sudo cp /var/www/html/lists/* lists/www/html/lists/
  • Add the MySQL and Apache sections to docker-compose.yml
# Begin docker-compose.yml
version: '3.4'

services:
    nginx:
        container_name: 'nginx-proxy'
        hostname: 'nginx-proxy'
        image: nginx:latest
        ports:
            - "9080:80"
            - "9443:443"
        volumes:
            - ./nginx/prod.conf:/etc/nginx/conf.d/default.conf
            - ./nginx/www:/www
            - ./letsencrypt/etc:/etc/letsencrypt
            - ./letsencrypt/data:/data/letsencrypt
    mythweb:
        container_name: 'myth-http'
        hostname: 'myth-http'
        image: dheaps/mythbackend:mythweb
        ports:
            - "7080:80"
        environment:
            - DATABASE_HOST=localhost
            - DATABASE_NAME=mythconverg
            - DATABASE_USER=mythtv
            - DATABASE_PASSWORD=YourSuperSecretPassword
            - TZ=America/New_York
        volumes:
            # This will have mysql connect over sockets instead of ports
            - /var/run/mysqld/mysql.sock:/var/run/mysqld/mysql.sock
    lists-www:
        container_name: 'lists-www'
        hostname: 'lists-www'
        build: './lists/build'
        environment:
            - MYSQL_HOST=lists-mysql
        volumes:
            - ./lists/http.conf:/etc/apache2/httpd.conf
            - ./lists/www:/var/www
            # This is just where I keep my lists and isn't required
            - /home/user/Documents/lists:/home/user/Documents/lists
    lists-mysql:
        container_name: 'lists-mysql'
        hostname: 'lists-mysql'
        image: mysql:latest
        ports:
            - "4306:3306"
        environment:
            - MYSQL_ROOT_PASSWORD=AnotherSuperSecretPassword
        volumes:
            - ./lists/mysql:/var/lib/mysql
# End docker-compose.yml

  • Create lists/build/Dockerfile
# Begin lists/build/Dockerfile
FROM php:apache
RUN  apt-get update && docker-php-ext-install mysqli pdo pdo_mysql
# End lists/build/Dockerfile
  • Create lists/http.conf
# Begin lists/http.conf
<VirtualHost *:80>
    ServerName this.example.com
    ServerAdmin admin@this.example.com
    DocumentRoot /var/www/html
</VirtualHost>
# End lists/http.conf
  • Now test the server
    • sudo docker-compose up --build
  • While the test server is up, load data into your MySQL instance (see the sketch after this list)
    • Note: Don't use localhost or MySQL will ignore the port and use sockets
    • sudo mysql -p -h 127.0.0.1 --port=4306
  • Visit your site in a browser
  • Ctrl+C to stop the server
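
For the data-loading step, if the data lives in the existing host MySQL instance, a dump and restore works. A sketch, assuming the database is named lists (adjust to your schema); each -p will prompt for a password:

# dump from the host instance, create the database in the container, then load it
sudo mysqldump -p lists > lists.sql
sudo mysql -p -h 127.0.0.1 --port=4306 -e "CREATE DATABASE lists"
sudo mysql -p -h 127.0.0.1 --port=4306 lists < lists.sql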


Move to production

  • Edit the configs
    • in docker-compose.yml replace 9080 with 80 and 9443 with 443
    • in nginx/prod.conf replace 9443 with 443
  • Stop the normal apache and keep it from starting at boot
    • sudo systemctl stop apache2
    • sudo systemctl disable apache2
  • Stop the normal certbot
    • sudo systemctl disable certbot.timer
  • Have docker compose start at startup
    • /etc/systemd/system/docker-compose-webserver.service
# Begin /etc/systemd/system/docker-compose-webserver.service
# Only include mysql.service if dependent on it for mythweb
[Unit]
Description=Docker Compose Webserver Service
Requires=docker.service
Wants=mysql.service
After=docker.service mysql.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/storage/containers/webserver
ExecStart=/usr/bin/docker-compose up --build -d
ExecStop=/usr/bin/docker-compose down
TimeoutStartSec=0

[Install]
WantedBy=multi-user.target
# End /etc/systemd/system/docker-compose-webserver.service
    • Load and start docker-compose-webserver
      • sudo systemctl daemon-reload
      • sudo systemctl enable docker-compose-webserver
      • sudo systemctl start docker-compose-webserver


Setup the certificate renewal

  • Unfortunately, the site needs to be live before we can test/set up certificate renewal, so make sure you did the steps above
  • Remove the certs that we copied in, as certbot needs a blank folder or it will add -0001 to our hostname directory
    • sudo rm -rf ./letsencrypt/etc/*
  • Use staging (the test environment) to check the commands
sudo docker run -it --rm \
    -v ./letsencrypt/data:/data/letsencrypt \
    -v ./letsencrypt/etc:/etc/letsencrypt \
    -v ./letsencrypt/logs:/var/log/letsencrypt \
    certbot/certbot \
    certonly --webroot \
    --register-unsafely-without-email --agree-tos \
    --webroot-path=/data/letsencrypt \
    --staging \
    -d this.example.com
  • If the above worked then we can try a live renewal (which is rate limited)
sudo docker run -it --rm \
    -v ./letsencrypt/data:/data/letsencrypt \
    -v ./letsencrypt/etc:/etc/letsencrypt \
    -v ./letsencrypt/logs:/var/log/letsencrypt \
    certbot/certbot \
    certonly --webroot \
    --email youremail@domain.com --agree-tos --no-eff-email \
    --webroot-path=/data/letsencrypt \
    -d this.example.com
  • If the above worked then we schedule automatic renewal
  • Create /lib/systemd/system/certbot.timer if it does not exist
# Begin certbot.timer
[Unit]
Description=Run certbot twice daily

[Timer]
OnCalendar=*-*-* 00,12:00:00
RandomizedDelaySec=43200
Persistent=true

[Install]
WantedBy=timers.target
# End certbot.timer
  • Edit/create /lib/systemd/system/certbot.service
# Begin certbot.service
[Unit]
Description=Certbot
Documentation=file:///usr/share/doc/python-certbot-doc/html/index.html
Documentation=https://letsencrypt.readthedocs.io/en/latest

[Service]
Type=oneshot
# ExecStart=/usr/bin/certbot -q renew
WorkingDirectory=/storage/containers/webserver
ExecStart=/usr/bin/docker run --rm \
    -v ./letsencrypt/data:/data/letsencrypt \
    -v ./letsencrypt/etc:/etc/letsencrypt \
    -v ./letsencrypt/logs:/var/log/letsencrypt \
    certbot/certbot \
    renew --quiet --webroot \
    --email youremail@domain.com --agree-tos --no-eff-email \
    --webroot-path=/data/letsencrypt
PrivateTmp=true
# End certbot.service
  • Enable the service
    • sudo systemctl daemon-reload
    • sudo systemctl enable certbot.timer
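
To verify the schedule took effect and to exercise the renewal service once:

systemctl list-timers certbot.timer
sudo systemctl start certbot.service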


Debug

  • If you see errors like `Cannot create container for service`
    • view all containers:
      • docker ps -a
    • you can remove the offending container with:
      • docker rm <container-name>


Next Steps

  • Put mythbackend and its MySQL instance in docker




02 February 2024

New VNC client for ChromeOS

RealVNC discontinued their ChromeOS version, and it was giving me issues with disconnections, so I decided to look for a replacement.


The search

  • xtightvncviewer
    • Bad connection dialog
    • Does not handle ChromeOS scaling properly
    • No client window scaling without changing host resolution
  • ssvnc
    • Bad connection dialog
    • Does not handle ChromeOS scaling properly
    • No client window scaling without changing host resolution
    • Puts 2 icons on the dock
    • Supports ssh/ssl encryption
  • tigervnc-viewer
    • Acceptable connection dialog
    • Does not handle ChromeOS scaling properly
    • No client window scaling without changing host resolution
    • Supports TLS encryption
  • vinagre
    • Acceptable connection dialog
    • Handles ChromeOS scaling properly
    • Supports client window scaling without changing host resolution
    • Dock icon didn't load properly
    • Touchpad scrolling does not work
    • No longer maintained, superseded by Gnome Connections
  • Gnome Connections
    • Slick looking connections page
    • Handles ChromeOS scaling properly
    • Supports client window scaling without changing host resolution
    • Supports TLS encryption
    • Touchpad scrolling does not work
    • Does not remember/resize window when connecting

After my search I decided to go with TigerVNC viewer, but I will keep Gnome Connections installed since it may eventually overtake it. Below is how I installed each.

Installing TigerVNC viewer

  • Launch terminal
    • sudo apt install tigervnc-viewer
  • Configure it to scale properly
      • First determine your Chromebook's scaling
      • Settings -> Displays -> Display size
    • Test to make sure it is what you like, where .8 == 80% from above
      • /usr/bin/sommelier -X --scale=.8 /usr/bin/xtigervncviewer
    • Edit xtigervncviewer.desktop
      • cp /usr/share/applications/xtigervncviewer.desktop ${HOME}/.local/share/applications/
      • vi ${HOME}/.local/share/applications/xtigervncviewer.desktop
      • find "Exec"
      • and set it to the command that you tested
  • Launch the App
    • Search Key -> TigerVNC
  • Connect
    • 192.168.1.XXX:1 (display :1, i.e. TCP port 5901)

Installing Gnome Connections

  • Resize your linux storage size
    • Settings -> Advanced -> Linux development environment -> Disk size "Change"
    • I set it to 16 GB
  • Launch terminal
    • Make sure apt is up to date
      • sudo apt update
      • sudo apt upgrade
    • Install flatpak
      • sudo apt install flatpak
      • flatpak --user remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
      • Restart the linux container by right clicking on your terminal icon and selecting "Shut down Linux"
    • Install and start Gnome Connections
      • flatpak install flathub org.gnome.Connections
      • flatpak run org.gnome.Connections
    • Add your VNC server
      • 192.168.1.XXX:5901
      • Make sure to select VNC


26 January 2024

Miscellaneous Minecraft Matters

I wanted to host a Minecraft server for my daughter and me to play on. After some research, using a docker container surfaced as the easiest way to do this. I also wanted to use Steam Link to play, so I needed to add a shortcut to Minecraft in Steam.

Below are the steps that I used to accomplish this.


Minecraft Bedrock Docker Server

  • https://github.com/itzg/docker-minecraft-bedrock-server
  • find your players' XUIDs
    • I did this by starting the server, connecting, and looking at the server output
  • create a docker-compose.yml file where OPS has your players' XUIDs in a comma-separated list and ALLOW_LIST_USERS has the player names and XUIDs that you want to allow to log in
version: '3.4'

services:
    bds:
        image: itzg/minecraft-bedrock-server
        environment:
            EULA: "TRUE"
            GAMEMODE: creative
            DIFFICULTY: peaceful
            SERVER_NAME: "Our World"
            OPS: "1234,5678"
            ALLOW_CHEATS: "true"
            ALLOW_LIST: "true"
            ALLOW_LIST_USERS: "player1:1234,player with spaces:5678"
        ports:
            - "19132:19132/udp"
        volumes:
            - /storage/containers/minecraft/world1:/data
        stdin_open: true
        tty: true

  • start the container
    • docker-compose up
  • A permissions.json file will be created giving the specified players ops powers (example below)
  • Note: even though your server is local, the PlayStation/Xbox/Switch versions will not be able to connect without a PS Plus/Xbox Live/Nintendo Switch Online subscription
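
For reference, the generated permissions.json should look roughly like this (using the placeholder XUIDs from the compose file above):

[
    {"permission": "operator", "xuid": "1234"},
    {"permission": "operator", "xuid": "5678"}
]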

Adding a shortcut to Minecraft in Steam

  • Find out where Minecraft was stored
    • Paste the following into an explorer window:
      • %LocalAppData%\Packages\
    • Find the folder like:
      • Microsoft.MinecraftUWP_<seemingly_random_letters_and_numbers>
    • The seemingly random letters and numbers are the app id; we will need them later
  • In Steam go to your Library and click "ADD A GAME" and then "Add a Non-Steam Game..."
  • Navigate to C:\Windows and select explorer.exe
  • You will see a new entry in your library called explorer
  • Right click on it -> Properties
  • Choose an appropriate icon
  • Rename it
  • Click "SET LAUNCH OPTIONS"
  • type/paste in the following:
    • shell:appsFolder\<your-app-id>!App
  • Click "OK"
  • Click "CLOSE"
  • You should now be able to launch Minecraft from Steam



04 January 2024

Changing a zpool from ashift=9 to ashift=12

I wanted the additional write speed on my NAS drives that comes from aligning the ashift value with the physical sector size of my hard drives (ashift=9 is 512 bytes and ashift=12 is 4 KB). Unfortunately, you cannot change ashift on an existing zpool, so you have to back up the data, destroy the pool, recreate it, and then restore the data.
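
To check what your drives actually report before choosing a value, lsblk can print both the physical and logical sector sizes (a PHY-SEC of 4096 with a LOG-SEC of 512 is a typical 512e drive, which wants ashift=12):

lsblk -o NAME,PHY-SEC,LOG-SEC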


Prereq

  • pv (to monitor the progress/speed)
    • sudo apt-get install pv
  • encrypted zfs data with the "wrong" ashift value that you want to migrate

Process to move

  1. Stop any process that writes to your storage that you want to move
  2. Setup temporary storage location
    • sudo zpool create external-storage mirror /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-XXXXXXX /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-XXXXXXX
    • sudo zfs create -o encryption=aes-256-gcm -o keylocation=prompt -o keyformat=passphrase external-storage/encrypted
  3. Create a snapshot
    • sudo zfs snapshot storage/encrypted@migrate-20231229
    • sudo zfs list -t snapshot
  4. Copy snapshot over
    • sudo bash -c 'zfs send storage/encrypted@migrate-20231229 | pv | zfs recv external-storage/encrypted/backup'
  5. Ensure that all files have been backed up
  6. Unmount the datasets
    • sudo zfs unmount storage/encrypted
  7. Destroy the old zpool
    • sudo zpool destroy storage
  8. Create the new zpool with the correct ashift
    • sudo zpool create -o ashift=12 storage mirror /dev/disk/by-id/ata-WDC_WD60EFZX-68B3FN0_WD-XXXXXXX /dev/disk/by-id/ata-WDC_WD60EFZX-68B3FN0_WD-XXXXXXX
  9. Ensure it is set up with the correct ashift value
    • sudo zdb -C storage | grep ashift
  10. Create a temporary file to hold your passphrase; since zfs recv uses stdin to pull in the data, it cannot prompt for one
    • echo "super-secret" > /home/example/passphrase.txt
  11. Copy snapshot back
    • sudo bash -c 'zfs send external-storage/encrypted/backup@migrate-20231229 | pv | zfs recv -o encryption=aes-256-gcm -o keylocation=file:///home/example/passphrase.txt -o keyformat=passphrase storage/encrypted'
  12. Change from a file to prompt for password
    • sudo zfs change-key -o keylocation=prompt storage/encrypted
  13. Remove the temp passphrase
    • rm /home/example/passphrase.txt
  14. Check that all your files are back in place
  15. Now you can either destroy the backup pool, or export it and keep it as a backup
    • sudo zpool destroy external-storage
    • OR
    • sudo zpool export external-storage

Appendix

If you see an error like:

  • cannot receive new filesystem stream: zfs receive -F cannot be used to destroy an encrypted filesystem or overwrite an unencrypted one with an encrypted one

That means that you cannot copy onto the encrypted dataset itself. What I did to get around it was to copy to a child of the encrypted dataset instead.
