02 September 2022

Saving data off a scratched DVD

I had a DVD with files on it that I needed. Unfortunately, it was giving read errors when I tried to copy them off. Here is what I did to recover as much data as possible.

  • Install ddrescue and a gui for visualizing its progress
    • sudo apt-get install gddrescue ddrescueview
  • Run a first pass with no retries to get as much info as possible
    • ddrescue -dn -b2048 /dev/sr0 disk.img disk.mapfile
  • Retry failed blocks
    • ddrescue -d -r5 -b2048 /dev/sr0 disk.img disk.mapfile
  • Description of the options:
    • /dev/sr0: the dvd drive
    • disk.img: the file to create
    • disk.mapfile: the mapfile that keeps track of the status of the recovery
    • -d: Use direct disc access for input (aka, no caching)
    • -n: Skip the scraping phase
    • -b2048: block size of 2048; use for CD-ROMs and DVDs
      • defaults to 512
    • -r5: retry 5 passes
      • -1 for infinite passes
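
Once the passes finish, the image can be mounted read-only to copy the files out. A minimal sketch, assuming a mount point at /mnt/dvd and a destination folder of ~/recovered:

  • ddrescueview disk.mapfile  # optional: visualize which regions were recovered
  • sudo mkdir -p /mnt/dvd
  • sudo mount -o loop,ro disk.img /mnt/dvd
  • mkdir -p ~/recovered && cp -rv /mnt/dvd/. ~/recovered/  # files touching unrecovered sectors may still be corrupt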


28 June 2022

New 27" 1440p monitor VX2758-2KP-MHD

With monitor prices falling and the aging, small 20" 1680x1050 LCD I was using, I decided it was time for an upgrade.


The Search

The types and models I was looking at were

  • 24" 1080p 144Hz (6bit + 2 dithering)
    • AOC 24G2 ~179.99
    • Viewsonic XG2405 ~179.99
  • 24" 1200p 75Hz
    •  Asus Proart PA248QV ~199.99
  • 27" 1440p 144Hz (8bit + 2 dithering)
    • Viewsonic VX2758-2KP-MHD ~219.99
    • LG 27GL850-B ~346.99

The prices listed above are the lowest I have seen. I went with the Viewsonic VX2758-2KP-MHD because of the increased resolution and screen real estate over a 24" for a relatively small premium of about $40, especially since the 24" monitors were generally priced higher at the time.


Review

Pros
  • Monitor assembled easily
  • No dead pixels
  • Good color reproduction
  • Freesync support with up to 144Hz
Cons
  • OSD does not work without an active input signal
    • so you can't manually change inputs when the current one has no signal
    • somewhat remedied by the monitor auto-selecting the active input
  • OSD uses the older non-joystick style of button inputs
  • Monitor occasionally does not register displayport input and requires a power cord unplug to fix
  • Driver install does not work properly (see below)
Known downsides before purchase and other thoughts
  • No height adjustment, but I did not need that feature and it is not expected at this price point
  • 1ms MPRT/3ms GTG pixel response is a bit of an exaggeration and only happens at the highest overdrive setting, which comes with lots of inverse ghosting between 60-120Hz: https://www.youtube.com/watch?v=ukKev6cPZhY
  • Buttons are on the right side, so not optimal for dual side by side monitors as one monitor will always have its buttons blocked
 

Monitor Driver install



11 April 2022

Ubiquiti network setup

I wanted to follow the common security guidance of having 3 wireless networks/VLANs: Normal, IOT, and Guest

  • Normal would contain the TV, printer, computers, and google devices for casting to TV
  • IOT would contain the smart outlets, garage door sensor, and other smart devices
  • Guest would be just for visitors

Ubiquiti has a nice easy default for isolating a guest network, so I just used that.

However, applying a similar isolation policy to the IOT network prevented the Belkin smart switches from communicating, so I instead added a firewall rule to keep the IOT and Normal networks from talking to each other.


Add the networks

Settings -> Networks -> Add New Network

  • IOT
    • Network Name: IOT
    • Advanced
      • VLAN ID: 2
  • Guest
    • Network Name: Guest
    • Advanced
      • VLAN ID: 3
      • Device Isolation: True

Add the wireless network

Settings -> WiFi -> Add New WiFi Network
  • Add a 2.4 GHz and a 5 GHz wireless network for each of the new networks

Add the firewall rule

I followed this guide, but the screens have changed in newer versions: https://help.ui.com/hc/en-us/articles/115010254227-UniFi-USG-Firewall-How-to-Disable-InterVLAN-Routing

Settings -> Traffic & Security -> Global Threat Management -> Firewall -> Create New Rule

  • Type: LAN In
  • Description: Isolate IOT from LAN
  • Enabled: True
  • Rule Applied: Before Predefined Rules
  • Action: Drop
  • IPv4 Protocol: All
  • Source
    • Source Type: Network
    • Network: IOT
    • Network Type: IPv4 Subnet
  • Destination
    • Destination Type: Network
    • Network: LAN
    • Network Type: IPv4 Subnet
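
To sanity-check the rule, a quick test from a device on the IOT VLAN works; the addresses below are just examples, not my actual subnets:

  • ping 192.168.1.10  # example address on the Normal network; should time out once the rule is active
  • ping 8.8.8.8  # internet access should still work, since only LAN-bound traffic is dropped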


12 March 2022

Cleaning up old vmlinuz and initrd files in /boot

I noticed that updates were taking a long time on the kernel step. When I expanded the output to see what it was doing, it was attempting to build images for really old kernel versions (3.13).

Purge from apt-get

  • Figure out which package to remove
    • dpkg -S /boot/vmlinuz-3.13.0-36-generic
  • Remove the package
    • sudo apt-get purge linux-image-3.13.0-36-generic
  • If you have many to remove, you can use bash expansion like so:
    • sudo apt-get purge linux-image-3.13.0-{36,37,39}-generic
  • Regenerate initrd
    • sudo update-initramfs -u -k all
  • Update grub
    • sudo update-grub
  • If initramfs is still generating stuff for old kernel, you can remove like so
    • sudo update-initramfs -d -k 3.13.0-36-generic
    • Note: bash expansion does not work for this command
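
To see which old kernel packages are still installed before purging, dpkg can list them. A quick sketch:

  • dpkg --list 'linux-image-*' | grep ^ii  # ^ii marks currently installed packages
  • ls /boot/vmlinuz-*  # cross-check against what is actually sitting in /boot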


Replacing drives in ZFS

As my two 3TB WD Red drives (purchased before the SMR controversy) in RAID 1 were starting to fill up and the price of bigger drives came down, it was time for an upgrade. I purchased two 6TB WD Red Plus drives to replace them (~$105 each before tax). I went with this particular drive and size because of the lower speed (5640 rpm), which keeps the drives quiet and cool. Most NAS drives are going to the higher rpm of 7200.

Checking the drives

I used an external enclosure to test each of the drives before adding them to the ZFS pool.
  • Determine the drive
    • For these examples I used /dev/sdX, but you need to use the device letter applicable to you
    • ls /dev/sd*
  • Check initial SMART attributes
    • sudo smartctl -A /dev/sdX
  • SMART short test
    • sudo smartctl --test=short /dev/sdX
  • Check SMART test status
    • sudo smartctl -c /dev/sdX
  • Check SMART attributes
    • sudo smartctl -A /dev/sdX
  • Write Zeros to the entire disk
    • Warning: This will erase the entire disk and you will lose anything stored on it
    • sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress
  • Check SMART attributes
    • sudo smartctl -A /dev/sdX
  • SMART extended test
    • I had problems with this test being stuck at 90% remaining due to the drive stopping spinning
    • The solution is to make sure the drive is spinning, then kick off the test along with a command to keep the drive active
    • sudo dd if=/dev/sdX of=/dev/null count=1  # Read one 512 byte block from drive
    • sudo smartctl --test=long /dev/sdX  # Kick off the test
    • sudo watch -d -n 60 smartctl -a /dev/sdX  # Make sure the drive doesn't spin down. Will also show test status
  • Power Off the drive
    • sudo udisksctl power-off -b /dev/sdX
  • Disconnect the drive
  • Repeat for the next drive
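
The attributes worth comparing between runs are the reallocated and pending sector counts. The exact smartctl -A output varies by drive model, but the relevant lines look roughly like this (values illustrative; anything non-zero on a brand new drive would be grounds for a return):

  •   5 Reallocated_Sector_Ct   ...   0
  • 197 Current_Pending_Sector  ...   0
  • 198 Offline_Uncorrectable   ...   0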

Adding the drives to the pool

At this point I hooked both drives up to the SATA connectors inside the server, but had to disconnect the front panel USB to access the open SATA port and disconnect the DVD drive. Now it was time to add them to the pool to get data replicated to them.
  • Get current status
    • sudo zpool status
  • Get the drive ids
    • ls -l /dev/disk/by-id/
  • Attach them to a current drive to put them in the same mirror
    • Run this command with the values substituted for both drives:
      • sudo zpool attach [poolname] [original drive to be mirrored] [new drive]
    • If you get an error like: devices have different sector alignment
      • then add: -o ashift=9
      • There are performance penalties for running a 4K drive as a 512B one, but supposedly there are storage size advantages
    • Here is what I ran (serials 1 and 2 are the existing drives while 3 and 4 are the new ones):
      • sudo zpool attach storage /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-SERIAL#1 /dev/disk/by-id/ata-WDC_WD60EFZX-68B3FN0_WD-SERIAL#3 -o ashift=9
      • sudo zpool attach storage /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-SERIAL#2 /dev/disk/by-id/ata-WDC_WD60EFZX-68B3FN0_WD-SERIAL#4 -o ashift=9
  • Check on the status:
    • sudo zpool status
  • Wait for the resilvering to complete
    • Estimated time was ~33 hours
    • Actual runtime was ~18 hours copying around 2TB of data
  • Optional scrub to verify all data
    • sudo zpool scrub [POOLNAME]
  • Wait for it to finish
    • Estimated time was ~13 hours
    • Actual time was ~10 hours
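
To keep an eye on the resilver and the scrub without retyping the status command, something like the following works (the 300 second interval is arbitrary); zpool status shows a percent done and an estimated completion time while either is running:

  • sudo watch -n 300 zpool status storage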

Remove the old drives

  • Remove the old disks from the pool:
    • sudo zpool detach [POOLNAME] [DISKNAME]
  • What I ran:
    • sudo zpool detach storage /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-SERIAL#1
    • sudo zpool detach storage /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-SERIAL#2
  • Check on the status
    • sudo zpool status
  • Shutdown the server and remove the drives

Expand the Quotas

  • Check current usage and quotas
    • df -k /storage/mythtv /storage/.encrypted
    • sudo zfs get quota storage/mythtv
    • sudo zfs get quota storage/.encrypted
  • Set new quotas; total storage available was 5.45T
    • sudo zfs set quota=1.5T storage/mythtv
    • sudo zfs set quota=3.9T storage/.encrypted
  • While editing my zfs datasets, I decided to turn off compression since I wasn't seeing any gains
    • sudo zfs set compression=off storage/.encrypted
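
To double-check the new capacity before and after setting the quotas, the pool and dataset views are both useful; a quick sketch using my pool name:

  • sudo zpool list storage  # raw pool size, allocated, and free space
  • sudo zfs list -r storage  # per-dataset used and available space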


23 December 2021

Copying Linux HDD to a new SSD

I wanted to upgrade my old 250GB WD Blue HDD to a newer 512GB Crucial SSD, but was having problems getting the data transferred over. I tried Clonezilla to no avail, probably because the HDD had a sector size of 512 bytes while the SSD uses 4k sectors. In the end, I resorted to copying everything over manually. Here are the steps that I used. For these steps /dev/sdX is the existing drive and /dev/sdZ is the new one, which moves up one letter (to sdY) after the existing drive is removed.

Prerequisites

  • Have a linux live USB to be able to boot with
  • Install grub-efi-amd64:
    • sudo apt-get install grub-efi-amd64


Linux Live USB Steps

Boot off of your linux live USB so that the data on the drive you are copying is static.


Existing Partition Formats

Here is what my existing drive looks like, and I was going to keep the same structure.

  • sdX1 - EFI - 512MiB - flags hidden
  • sdX2 - root - 95.37GiB
  • sdX3 - home - 129.48GiB
  • sdX4 - swap - 7.54GiB


Create new partitions using gparted

Using gparted I created the following partitions. Description is not a field in gparted, but I listed it below for clarification.
  • sdZ1
    • description: EFI
    • Free space preceding: 1MiB (This is the default that came up for me and from what I read should be used for aligning the physical sectors)
    • size: 512MiB
    • type: fat32
    • flags: hidden, esp, boot
  • sdZ2
    • description: root
    • size: 153,600MiB
    • type: ext4
  • sdZ3
    • description: home
    • size: 257,291MiB
    • type: ext4
  • sdZ4
    • description: swap
    • size: 65,536MiB (I went with 64GiB since the server has 20GiB of RAM, I am thinking of upgrading to 32GiB, and this would put me at 2x that)
    • type: linux-swap
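
Before copying anything over, it is worth confirming the new partitions actually ended up aligned. parted can check this (repeat for each partition number); it should report that the partition is aligned:

  • sudo parted /dev/sdZ align-check optimal 1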


Mount

Mount all the drives to get ready to copy them.
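
The mount points need to exist first; if they don't, one line with bash brace expansion creates them all:

  • sudo mkdir -p /mnt/{hdd1,hdd2,hdd3,ssd1,ssd2,ssd3}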

  • sudo mount /dev/sdX1 /mnt/hdd1
  • sudo mount /dev/sdX2 /mnt/hdd2
  • sudo mount /dev/sdX3 /mnt/hdd3
  • sudo mount /dev/sdZ1 /mnt/ssd1
  • sudo mount /dev/sdZ2 /mnt/ssd2
  • sudo mount /dev/sdZ3 /mnt/ssd3


Copy the data

Description of rsync options:
  • a - archive mode; equals -rlptgoD (no -H,-A,-X)
  • A - preserve ACLs (implies -p)
  • H - preserve hard links
  • X - preserve extended attributes
  • x - don't cross filesystem boundaries (this is very important)
  • v - increase verbosity
Perform the actual copies, depending on file size this can take a while.

  • cd /mnt/hdd1
  • sudo rsync -aAHXxv --stats --progress . /mnt/ssd1
  • cd /mnt/hdd2
  • sudo rsync -aAHXxv --stats --progress . --exclude={"dev/*","proc/*","sys/*","tmp/*","run/*","mnt/*","media/*","lost+found"} /mnt/ssd2
  • cd /mnt/hdd3
  • sudo rsync -aAHXxv --stats --progress . /mnt/ssd3 --exclude={"lost+found"}
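
As an optional sanity check afterwards, the same rsync can be re-run as a dry run with checksums; if the copy is complete it should report zero regular files transferred (this is slow since it reads everything back). For example, for the home partition:

  • cd /mnt/hdd3
  • sudo rsync -aAHXxn --checksum --stats . /mnt/ssd3 --exclude={"lost+found"}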


Edit the UUIDs copied

Determine the new UUIDs to use

  • sudo blkid /dev/sdZ2  # for /
  • sudo blkid /dev/sdZ3  # for /home
  • sudo blkid /dev/sdZ4  # for swap
Replace the UUIDs (NOT PARTUUIDs) in fstab for /, /home, and swap
  • sudo vi /mnt/ssd2/etc/fstab

Replace the UUID for swap in initramfs resume

  • sudo vi /mnt/ssd2/etc/initramfs-tools/conf.d/resume
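
For reference, the edited entries end up in these formats (the UUIDs here are placeholders; use the values from blkid and keep whatever other options were already on each line):

  • fstab:  UUID=aaaaaaaa-1111-2222-3333-bbbbbbbbbbbb   /   ext4   defaults   0   1
  • resume: RESUME=UUID=cccccccc-4444-5555-6666-dddddddddddd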


Change booting to use new drive

GRUB

  • boot using old install
    • sudo update-grub
    • check /boot/grub/grub.cfg to ensure it is using the correct UUID for sdZ2
  • boot using new install
    • sudo mkdir /boot/efi
    • sudo mount /dev/sdZ1 /boot/efi
    • sudo grub-install --target=x86_64-efi /dev/sdZ
    • sudo update-grub
  • shutdown
  • remove old hd
  • boot using new install
    • sudo update-grub


Bonus Section

Because I did not get grub installed on the new drive before removing the old one, I got stuck at the grub prompt, so the steps that I used to boot were:

  • ls
  • set root=(hd2,2)
  • linux /boot/vmlinuz<tab complete to most recent version> root=/dev/sdY2
  • initrd /boot/init<tab complete to same version as above>
  • boot

TRIM

Ensure that trim is enabled and works:

  • Check that it is enabled to run weekly
    • systemctl status fstrim.timer
  • Check that trimming actually works
    • sudo fstrim -v /
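
If the timer ever shows up as disabled, turning it on is a one-liner (assuming a systemd new enough to support --now):

  • sudo systemctl enable --now fstrim.timer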

Update 2022-07-26: NOATIME

As the SSD was seeing more writes than I would like, I realized that it was recording file access times, which I really didn't need. To fix this, add noatime to the root entry in /etc/fstab and then reboot:
  • UUID=XXXX   /   ext4   defaults,noatime   0   1
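
After rebooting, the active mount options can be verified with findmnt; noatime should show up in the output:

  • findmnt -no OPTIONS /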


03 December 2021

Xubuntu Server Upgrade 16.04 to 18.04 to 20.04

As I have been busy the past few months, I failed to notice that 16.04 went past its support window and hadn't been getting updates. Here are the steps that I went through to get current and fix the issues that I encountered.

Perform the Upgrade

I upgraded using the GUI to walk me through the steps.
  • upgrade to 18.04
    • sudo do-release-upgrade -f DistUpgradeViewGtk3
  • reboot
  • upgrade to 20.04
    • sudo do-release-upgrade -f DistUpgradeViewGtk3
  • When prompted for the debian-sys-maint password, find it here:
    • sudo cat /etc/mysql/debian.cnf


Getting Unifi Web working again

127.0.0.1:8443 was giving a 404 error

  • Check the logs
    • tail -n 50 /usr/lib/unifi/logs/server.log
      • [2021-10-24TXX:XX:XX,XXX] <db-server> INFO  db     - DbServer stopped
    • tail -n 50 /usr/lib/unifi/logs/mongod.log
      • 2021-10-24TXX:XX:XX.XXX-XXXX F CONTROL  [initandlisten] ** IMPORTANT: UPGRADE PROBLEM: The data files need to be fully upgraded to version 3.4 before attempting an upgrade to 3.6; see http://dochub.mongodb.org/core/3.6-upgrade-fcv for more details.

So I needed to upgrade the mongo data files, but that can't be done from version 3.6 itself. Here are the actual steps that I used, pieced together from a few posts:

  • sudo apt-get install docker.io
  • sudo docker run --name mongo-migrate -v /var/lib/unifi/db:/data/db -p 127.0.0.1:27888:27017 -d mongo:3.4
  • mongo localhost:27888
    • db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
    • db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } )
    • exit
  • sudo docker stop mongo-migrate
  • sudo chown -R unifi:unifi /var/lib/unifi/db
  • sudo systemctl restart unifi
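
To confirm the controller actually came back up after the restart, checking the service and hitting the web port again is enough; it should no longer return a 404:

  • sudo systemctl status unifi
  • curl -k -I https://127.0.0.1:8443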


Fixing MythTV

I was seeing the following Fatal Error when accessing mythweb:

!!NoTrans: You are missing a php extension for mysql interaction. Please install php-mysqli or similar!!

I solved this error with the following steps:

  • sudo apt-get install php7.4
  • sudo a2dismod php7.0
  • sudo a2enmod php7.4
  • sudo systemctl restart apache2

It then became this error:

!!NoTrans: MasterServerIP or MasterServerPort not found! You may need to check your mythweb.conf file or re-run mythtv-setup!!

I determined this to be an error with mysql:

  • tail -n 100 /var/log/mysql/error.log
    • 2021-10-24TXX:XX:XX.XXXXXXZ 0 [ERROR] [MY-000067] [Server] unknown variable 'query_cache_limit=1M'.
    • 2021-10-24TXX:XX:XX.XXXXXXZ 0 [ERROR] [MY-000067] [Server] unknown variable 'query_cache_size=16M'.
  • sudo vi /etc/mysql/my.cnf
    • commented out the 2 lines regarding query_cache
  • sudo systemctl restart mysql

This got mysql running, but mythweb still had the same error. I tried a reboot of the server, but that didn't seem to help. It turns out that I needed to change the bind-address:

  • sudo vi /etc/mysql/my.cnf
    • bind-address            = 127.0.0.1,192.168.X.X
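
To confirm mysql is actually listening on both addresses after restarting it:

  • sudo systemctl restart mysql
  • sudo ss -tlnp | grep 3306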


VNC

I needed to fix my vnc setup and came across this:

  • vnc4server is no longer in the repo
  • tightvncserver works with the exception of window buttons. This is due to it not supporting Render: https://bugs.launchpad.net/ubuntu/+source/xfwm4/+bug/1860921
  • Solution: Use tigervncserver

Here are the steps that I used:

  • sudo apt-get install tigervnc-standalone-server
  • touch ${HOME}/.Xresources
  • tigervncserver :1 -geometry 1366x768 -depth 24 -localhost no
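
For completeness, the running session can be listed and shut down later with the same tool:

  • tigervncserver -list
  • tigervncserver -kill :1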


Let's Encrypt

Update 2021-12-14: Apparently, the upgrade broke my Let's Encrypt certificate, but fixing it was easy enough; I just needed to install 2 packages: sudo apt-get install certbot python3-certbot-apache