12 March 2022

Cleaning up old vmlinuz and initrd files in /boot

 I noticed that updates were taking a long time on the kernel step. When I expanded the output to see what it was doing, it was attempting to build initramfs images for really old kernel versions (3.13).
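
To see which kernel image packages are still installed, and which kernel is currently running so you don't purge that one, a quick check does the job:

    uname -r                     # The running kernel; keep this one
    dpkg --list 'linux-image-*'  # Every installed kernel image package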

Purging old kernels with apt-get

  • Figure out which package to remove
    • dpkg -S /boot/vmlinuz-3.13.0-36-generic
  • Remove the package
    • sudo apt-get purge linux-image-3.13.0-36-generic
  • If you have many to remove, you can use bash brace expansion like so:
    • sudo apt-get purge linux-image-3.13.0-{36,37,39}-generic
  • Regenerate initrd
    • sudo update-initramfs -u -k all
  • Update grub
    • sudo update-grub
  • If initramfs is still generating images for old kernels, you can remove them like so
    • sudo update-initramfs -d -k 3.13.0-36-generic
    • Note: brace expansion does not work for this command, but a shell loop does (see the sketch after this list)
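
Putting the purge steps together, here is a minimal sketch of the whole cleanup, assuming the 3.13.0-{36,37,39} kernels are the ones to go:

    # Purge the old kernel packages (brace expansion works here)
    sudo apt-get purge linux-image-3.13.0-{36,37,39}-generic
    # Regenerate initrd images for the remaining kernels
    sudo update-initramfs -u -k all
    # Rebuild the grub menu
    sudo update-grub
    # Drop leftover initrd images; -d takes one version at a time,
    # so loop instead of brace-expanding
    for v in 36 37 39; do
        sudo update-initramfs -d -k "3.13.0-${v}-generic"
    done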

Replacing drives in ZFS

 As my two 3TB WD Red drives (purchased before the SMR controversy) in RAID 1 were starting to fill up and the price of bigger drives had come down, it was time for an upgrade. I purchased two 6TB WD Red Plus drives to replace them (~$105 each before tax). I went with this particular drive and size because of its lower rotational speed (5640 RPM), which keeps the drives quiet and cool; most NAS drives are moving to the higher 7200 RPM.

Checking the drives

I used an external enclosure to test each of the drives before adding them to the ZFS pool.
  • Determine the drive
    • For these examples I used /dev/sdX, but you need to use the device letter applicable to you
    • ls /dev/sd*
  • Check initial SMART attributes
    • sudo smartctl -A /dev/sdX
  • SMART short test
    • sudo smartctl --test=short /dev/sdX
  • Check SMART test status
    • sudo smartctl -c /dev/sdX
  • Check SMART attributes
    • sudo smartctl -A /dev/sdX
  • Write Zeros to the entire disk
    • Warning: This will erase the entire disk and you will lose anything stored on it
    • sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress
  • Check SMART attributes
    • sudo smartctl -A /dev/sdX
  • SMART extended test
    • I had problems with this test being stuck at 90% remaining because the drive had stopped spinning
    • The solution is to make sure the drive is spinning, kick off the test, and then run a command to keep the drive active (the sequence is wrapped into a script after this list)
    • sudo dd if=/dev/sdX of=/dev/null count=1  # Read one 512 byte block from drive
    • sudo smartctl --test=long /dev/sdX  # Kick off the test
    • sudo watch -d -n 60 smartctl -a /dev/sdX  # Make sure the drive doesn't spin down. Will also show test status
  • Power Off the drive
    • sudo udisksctl power-off -b /dev/sdX
  • Disconnect the drive
  • Repeat for the next drive
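
Since the extended-test keep-alive dance is easy to fumble, here is the same sequence wrapped into a small script (check-drive.sh is just a hypothetical name):

    #!/bin/bash
    # Usage: sudo ./check-drive.sh /dev/sdX
    drive="$1"
    dd if="$drive" of=/dev/null count=1    # Read one block so the drive spins up
    smartctl --test=long "$drive"          # Kick off the extended test
    watch -d -n 60 smartctl -a "$drive"    # Poll every 60s so it never spins down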

Adding the drives to the pool

 At this point I hooked both drives up to the SATA connectors inside the server, though I had to disconnect the front panel USB to reach the open SATA port and unplug the DVD drive. Now it was time to add them to the pool to get data replicated to them.
  • Get current status
    • sudo zpool status
  • Get the drive ids
    • ls -l /dev/disk/by-id/
  • Attach them to a current drive to put them in the same mirror
    • Run this command with the values substituted for both drives:
      • sudo zpool attach [poolname] [original drive to be mirrored] [new drive]
    • If you get an error like: devices have different sector alignment
      • then add: -o ashift=9
      • There are performance penalties for running a 4K drive as a 512B one, but supposedly there are storage size advantages (see the sector-size check after this list)
      • Update 2023-12-10: Apparently it is quite a large performance difference: 35-45MB/s vs 70-72MB/s
    • Here is what I ran (serial 1 and 2 are the existing drives while 3 and 4 are the new ones):
      • sudo zpool attach storage /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-SERIAL#1 /dev/disk/by-id/ata-WDC_WD60EFZX-68B3FN0_WD-SERIAL#3 -o ashift=9
      • sudo zpool attach storage /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-SERIAL#2 /dev/disk/by-id/ata-WDC_WD60EFZX-68B3FN0_WD-SERIAL#4 -o ashift=9
  • Check on the status:
    • sudo zpool status
  • Wait for the resilvering to complete
    • Estimated time was ~33 hours
    • Actual runtime was ~18 hours copying around 2TB of data
  • Optional scrub to verify all data
    • sudo zpool scrub [POOLNAME]
  • Wait for it to finish
    • Estimated time was ~13 hours
    • Actual time was ~10 hours
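
A note on the ashift question: before forcing ashift=9, it is worth confirming what sector sizes the new drives actually report. A quick check looks like this:

    # Sector sizes as the drive reports them (logical vs physical)
    sudo smartctl -i /dev/sdX | grep -i 'sector size'
    # Same information from the block layer: logical size, then physical
    sudo blockdev --getss --getpbsz /dev/sdX
    # What the existing pool uses; ashift is log2 of the sector size,
    # so 9 means 512B and 12 means 4K
    sudo zpool get ashift storage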

Remove the old drives

  • Remove the old disks from the pool:
    • sudo zpool detach [POOLNAME] [DISKNAME]
  • What I ran:
    • sudo zpool detach storage /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-SERIAL#1
    • sudo zpool detach storage /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-SERIAL#2
  • Check on the status
    • sudo zpool status
  • Shutdown the server and remove the drives
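
One thing worth checking at this point: the pool only grows to the new 6TB size once ZFS is allowed to expand onto the extra space. If zpool list still shows the old capacity after detaching the small drives, the usual fix is the autoexpand property or an explicit online -e per disk:

    sudo zpool list storage   # Did the capacity grow?
    # If not, enable autoexpand and expand each new disk in place
    sudo zpool set autoexpand=on storage
    sudo zpool online -e storage /dev/disk/by-id/ata-WDC_WD60EFZX-68B3FN0_WD-SERIAL#3
    sudo zpool online -e storage /dev/disk/by-id/ata-WDC_WD60EFZX-68B3FN0_WD-SERIAL#4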

Expand the Quotas

  • Check current usage and quotas
    • df -k /storage/mythtv /storage/.encrypted
    • sudo zfs get quota storage/mythtv
    • sudo zfs get quota storage/.encrypted
  • Set new quotas; total storage available was 5.45T
    • sudo zfs set quota=1.5T storage/mythtv
    • sudo zfs set quota=3.9T storage/.encrypted
  • While editing my ZFS datasets, I decided to turn off compression since I wasn't seeing any gains (see the compressratio check after this list)
    • sudo zfs set compression=off storage/.encrypted
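
To sanity-check that compression really isn't helping before switching it off, the compressratio property reports the actual savings (1.00x means none). Note that turning compression off only affects newly written data; existing blocks stay compressed until they are rewritten:

    sudo zfs get compressratio storage/.encrypted   # 1.00x means no savings
    sudo zfs get compressratio storage              # Whole pool, for comparison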
