23 December 2021

Copying Linux HDD to a new SSD

I wanted to upgrade my old 250GB WD Blue HDD to a newer 512GB Crucial SSD, but was having problems getting the data transferred over. I tried Clonezilla to no avail, probably because the HDD has a sector size of 512 bytes while the SSD uses 4K sectors. In the end, I resorted to copying everything over manually. Here are the steps that I used. In these steps, /dev/sdX is the existing drive and /dev/sdZ is the new one, which becomes one letter lower (sdY) after the existing drive is removed.


  • Have a Linux live USB to boot from
  • Install grub-efi-amd64:
    • sudo apt-get install grub-efi-amd64

Linux Live USB Steps

Boot off of your Linux live USB so that the data on the drive you are copying stays static.

Existing Partition Formats

Here is what my existing drive looks like, and I was going to keep the same structure.

  • sdX1 - EFI - 512MiB - flags hidden
  • sdX2 - root - 95.37GiB
  • sdX3 - home - 129.48GiB
  • sdX4 - swap - 7.54GiB

Create new partitions using gparted

Using gparted I created the following partitions. Description is not a field in gparted, but I listed it below for clarification.
  • sdZ1
    • description: EFI
    • Free space preceding: 1MiB (this is the default that came up for me and, from what I read, should be used to align the physical sectors)
    • size: 512MiB
    • type: fat32
    • flags: hidden, esp, boot
  • sdZ2
    • description: root
    • size: 153,600MiB
    • type: ext4
  • sdZ3
    • description: home
    • size: 257,291MiB
    • type: ext4
  • sdZ4
    • description: swap
    • size: 65,536MiB (I went with 64GiB as the server has 20GiB of RAM; I am thinking of upgrading to 32GiB, and this would put me at 2x of that)
    • type: linux-swap
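
Side note: 1GiB is 1024MiB, so the round sizes above convert directly (home is not a round number because it was simply sized to fill the remaining space). A quick shell check:

```shell
# 1 GiB = 1024 MiB, so the round partition sizes convert directly
echo $((153600 / 1024))  # root: 150 GiB
echo $((65536 / 1024))   # swap: 64 GiB
```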


Mount all the drives to get ready to copy them

  • sudo mount /dev/sdX1 /mnt/hdd1
  • sudo mount /dev/sdX2 /mnt/hdd2
  • sudo mount /dev/sdX3 /mnt/hdd3
  • sudo mount /dev/sdZ1 /mnt/ssd1
  • sudo mount /dev/sdZ2 /mnt/ssd2
  • sudo mount /dev/sdZ3 /mnt/ssd3
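
Note that the mount points have to exist before these mounts will succeed. Brace expansion creates them all in one go; demonstrated here under /tmp, since the real mkdir under /mnt needs root:

```shell
# Demo under /tmp; on the live USB the real command would be:
#   sudo mkdir -p /mnt/{hdd1,hdd2,hdd3,ssd1,ssd2,ssd3}
mkdir -p /tmp/mntdemo/{hdd1,hdd2,hdd3,ssd1,ssd2,ssd3}
ls /tmp/mntdemo
```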

Copy the data

Description of rsync options:
  • a - archive mode; equals -rlptgoD (no -H,-A,-X)
  • A - preserve ACLs (implies -p)
  • H - preserve hard links
  • X - preserve extended attributes
  • x - don't cross filesystem boundaries (this is very important)
  • v - increase verbosity
Perform the actual copies; depending on how much data there is, this can take a while.

  • cd /mnt/hdd1
  • sudo rsync -aAHXxv --stats --progress . /mnt/ssd1
  • cd /mnt/hdd2
  • sudo rsync -aAHXxv --stats --progress . --exclude={"dev/*","proc/*","sys/*","tmp/*","run/*","mnt/*","media/*","lost+found"} /mnt/ssd2
  • cd /mnt/hdd3
  • sudo rsync -aAHXxv --stats --progress . /mnt/ssd3 --exclude={"lost+found"}
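
After each copy finishes, it is worth a quick sanity check that the trees match. diff -r walks both directories and reports anything missing or different; sketched here on throwaway directories, but on the real system you would compare e.g. /mnt/hdd3 against /mnt/ssd3:

```shell
# Throwaway directories standing in for /mnt/hddN and /mnt/ssdN
mkdir -p /tmp/copydemo/src /tmp/copydemo/dst
echo "hello" > /tmp/copydemo/src/file.txt
cp -a /tmp/copydemo/src/file.txt /tmp/copydemo/dst/
# diff -r recurses both trees; a silent exit status of 0 means they match
diff -r /tmp/copydemo/src /tmp/copydemo/dst && echo "trees match"
```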

Edit the UUIDs copied

Determine the new UUIDs to use

  • sudo blkid /dev/sdZ2  # for /
  • sudo blkid /dev/sdZ3  # for /home
  • sudo blkid /dev/sdZ4  # for swap
Replace the UUIDs (NOT the PARTUUIDs) in fstab for /, /home, and swap:
  • sudo vi /mnt/ssd2/etc/fstab

Replace the UUID for swap in initramfs resume

  • sudo vi /mnt/ssd2/etc/initramfs-tools/conf.d/resume
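
Both of these edits are a plain find-and-replace of the old UUID with the new one. A sketch with placeholder UUIDs (the real values come from the blkid output above), run on a throwaway copy rather than the real files:

```shell
# Placeholder UUIDs for illustration; substitute the real blkid values
OLD_UUID="1111-old"
NEW_UUID="2222-new"
# Throwaway stand-in for /mnt/ssd2/etc/fstab
printf 'UUID=%s / ext4 defaults 0 1\n' "$OLD_UUID" > /tmp/fstab.demo
# Swap every occurrence of the old UUID for the new one
sed -i "s/$OLD_UUID/$NEW_UUID/" /tmp/fstab.demo
cat /tmp/fstab.demo
```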

Change booting to use new drive


  • boot using old install
    • sudo update-grub
    • check /boot/grub/grub.cfg to ensure it is using the correct UUID for sdZ2
  • boot using new install
    • sudo mkdir /boot/efi
    • sudo mount /dev/sdZ1 /boot/efi
    • sudo grub-install --target=x86_64-efi /dev/sdZ
    • sudo update-grub
  • shutdown
  • remove the old HDD
  • boot using new install
    • sudo update-grub

Bonus Section

Because I did not get grub installed on the new drive before removing the old one, I got stuck at the grub prompt. The steps that I used to boot from there were:

  • ls
  • set root=(hd2,2)
  • linux /boot/vmlinuz<tab complete to most recent version> root=/dev/sdY2
  • initrd /boot/init<tab complete to same version as above>
  • boot


Ensure that trim is enabled and works:

  • Check that it is enabled to run weekly
    • systemctl status fstrim.timer
  • Check that trimming actually works
    • sudo fstrim -v /

Update 2022-07-26: NOATIME

As the SSD was seeing more writes than I would like, I realized that it was writing the access times, which I really didn't need. To fix this, add noatime to this line in /etc/fstab and then reboot:
  • UUID=XXXX   /   ext4   defaults,noatime   0   1
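
Rather than retyping the line, sed can append the option to the existing mount options; sketched here on a throwaway copy instead of the real /etc/fstab:

```shell
# Throwaway stand-in for /etc/fstab
printf 'UUID=XXXX / ext4 defaults 0 1\n' > /tmp/fstab.noatime
# Append noatime to the root entry's mount options
sed -i 's/defaults/defaults,noatime/' /tmp/fstab.noatime
cat /tmp/fstab.noatime
```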

Update 2023-02-03: Swappiness

Updated the swappiness setting to hopefully reduce SSD writes:
  • in /etc/sysctl.conf add:
    • vm.swappiness = 10
  • reboot
  • check the setting:
    • cat /proc/sys/vm/swappiness



03 December 2021

Xubuntu Server Upgrade 16.04 to 18.04 to 20.04

As I have been busy the past few months, I failed to notice that 16.04 went past its support window and hasn't been getting updates. Here are the steps that I went through to get current and fix the issues that I encountered.

Perform the Upgrade

I upgraded using the GUI to walk me through the steps:
  • upgrade to 18.04
    • sudo do-release-upgrade -f DistUpgradeViewGtk3
  • reboot
  • upgrade to 20.04
    • sudo do-release-upgrade -f DistUpgradeViewGtk3
  • When prompted for the debian-sys-maint password, find it here:
    • sudo cat /etc/mysql/debian.cnf

Getting the UniFi web UI working again, as it was giving a 404 error

  • Check the logs
    • tail -n 50 /usr/lib/unifi/logs/server.log
      • [2021-10-24TXX:XX:XX,XXX] <db-server> INFO  db     - DbServer stopped
    • tail -n 50 /usr/lib/unifi/logs/mongod.log
      • 2021-10-24TXX:XX:XX.XXX-XXXX F CONTROL  [initandlisten] ** IMPORTANT: UPGRADE PROBLEM: The data files need to be fully upgraded to version 3.4 before attempting an upgrade to 3.6; see http://dochub.mongodb.org/core/3.6-upgrade-fcv for more details.

So I needed to upgrade the mongo data files, but couldn't do that from version 3.6, so I pieced together a fix from a few posts that I found online.

Here are the actual steps that I used:

  • sudo apt-get install docker.io
  • sudo docker run --name mongo-migrate -v /var/lib/unifi/db:/data/db -p 27888:27017 -d mongo:3.4
  • mongo localhost:27888
    • db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
    • db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } )
    • exit
  • sudo docker stop mongo-migrate
  • sudo chown -R unifi:unifi /var/lib/unifi/db
  • sudo systemctl restart unifi

Fixing MythTV

I was seeing the following Fatal Error when accessing mythweb:

!!NoTrans: You are missing a php extension for mysql interaction. Please install php-mysqli or similar!!

I solved this error with the following steps:

  • sudo apt-get install php7.4
  • sudo a2dismod php7.0
  • sudo a2enmod php7.4
  • sudo systemctl restart apache2

It then became this error:

!!NoTrans: MasterServerIP or MasterServerPort not found! You may need to check your mythweb.conf file or re-run mythtv-setup!!

I determined this to be an error with mysql:

  • tail -n 100 /var/log/mysql/error.log
    • 2021-10-24TXX:XX:XX.XXXXXXZ 0 [ERROR] [MY-000067] [Server] unknown variable 'query_cache_limit=1M'.
    • 2021-10-24TXX:XX:XX.XXXXXXZ 0 [ERROR] [MY-000067] [Server] unknown variable 'query_cache_size=16M'.
  • sudo vi /etc/mysql/my.cnf
    • commented out the 2 lines regarding query_cache
  • sudo systemctl restart mysql
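
Commenting the lines out can also be scripted; the query cache was removed entirely in MySQL 8.0, which is why the server refuses to start with those variables set. A sketch on a throwaway copy of my.cnf:

```shell
# Throwaway stand-in for /etc/mysql/my.cnf
printf 'query_cache_limit=1M\nquery_cache_size=16M\nbind-address=127.0.0.1\n' > /tmp/my.cnf.demo
# Prefix the query_cache lines with '#'; '&' re-inserts the matched text
sed -i 's/^query_cache_/# &/' /tmp/my.cnf.demo
cat /tmp/my.cnf.demo
```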

This got mysql running, but mythweb still had the same error. I tried rebooting the server, but that didn't seem to help. It turns out that I needed to change the bind-address:

  • sudo vi /etc/mysql/my.cnf
    • bind-address            = 192.168.X.X


I also needed to fix my VNC setup and came across the following:

  • vnc4server is no longer in the repo
  • tightvncserver works with the exception of window buttons. This is due to it not supporting Render: https://bugs.launchpad.net/ubuntu/+source/xfwm4/+bug/1860921
  • Solution: Use tigervncserver

Here are the steps that I used:

  • sudo apt-get install tigervnc-standalone-server
  • touch ${HOME}/.Xresources
  • tigervncserver :1 -geometry 1366x768 -depth 24 -localhost no

Let's Encrypt

Update 2021-12-14: Apparently, the upgrade broke my Let's Encrypt certificate, but fixing it was easy enough; I just needed to install two packages: sudo apt-get install certbot python3-certbot-apache