23 December 2021

Copying Linux HDD to a new SSD

I wanted to upgrade my old 250GB WD Blue HDD to a newer 512GB Crucial SSD, but I was having problems getting the data transferred over. I tried Clonezilla to no avail, probably because the HDD has a sector size of 512 bytes while the SSD uses 4K sectors. In the end, I resorted to copying everything over manually. Here are the steps that I used. For these steps, /dev/sdX is the existing drive and /dev/sdZ is the new one, which becomes one letter lower (sdY) after the existing drive is removed.
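
To confirm the sector-size mismatch, blockdev can report each drive's logical and then physical sector size (sdX/sdZ as above):
  • sudo blockdev --getss --getpbsz /dev/sdX
  • sudo blockdev --getss --getpbsz /dev/sdZ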

Prerequisites

  • Have a Linux live USB to boot from
  • Install grub-efi-amd64:
    • sudo apt-get install grub-efi-amd64


Linux Live USB Steps

Boot off of your Linux live USB so that the data on the drive you are copying is static.


Existing Partition Formats

Here is what my existing drive looked like, and I kept the same structure.

  • sdX1 - EFI - 512MiB - flags hidden
  • sdX2 - root - 95.37GiB
  • sdX3 - home - 129.48GiB
  • sdX4 - swap - 7.54GiB


Create new partitions using gparted

Using gparted I created the following partitions. Description is not a field in gparted, but I listed it below for clarification.
  • sdZ1
    • description: EFI
    • Free space preceding: 1MiB (This is the default that came up for me and from what I read should be used for aligning the physical sectors)
    • size: 512MiB
    • type: fat32
    • flags: hidden, esp, boot
  • sdZ2
    • description: root
    • size: 153,600MiB
    • type: ext4
  • sdZ3
    • description: home
    • size: 257,291MiB
    • type: ext4
  • sdZ4
    • description: swap
    • size: 65,536MiB (I went with 64GiB since the server has 20GiB of RAM and I am thinking of upgrading to 32GiB, which would put me at 2x of that)
    • type: linux-swap
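
For reference, here is a roughly equivalent command-line version using parted (a sketch with the same sizes; gparted's 1MiB leading offset keeps the partitions aligned):
  • sudo parted /dev/sdZ mklabel gpt
  • sudo parted -a optimal /dev/sdZ mkpart EFI fat32 1MiB 513MiB
  • sudo parted /dev/sdZ set 1 esp on
  • sudo parted -a optimal /dev/sdZ mkpart root ext4 513MiB 154113MiB
  • sudo parted -a optimal /dev/sdZ mkpart home ext4 154113MiB 411404MiB
  • sudo parted -a optimal /dev/sdZ mkpart swap linux-swap 411404MiB 476940MiB
  • sudo mkfs.fat -F 32 /dev/sdZ1
  • sudo mkfs.ext4 /dev/sdZ2
  • sudo mkfs.ext4 /dev/sdZ3
  • sudo mkswap /dev/sdZ4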


Mount

Mount all of the partitions to get ready to copy them.
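
On a live session the mount points likely do not exist yet; create them first (same paths as below):
  • sudo mkdir -p /mnt/hdd1 /mnt/hdd2 /mnt/hdd3 /mnt/ssd1 /mnt/ssd2 /mnt/ssd3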

  • sudo mount /dev/sdX1 /mnt/hdd1
  • sudo mount /dev/sdX2 /mnt/hdd2
  • sudo mount /dev/sdX3 /mnt/hdd3
  • sudo mount /dev/sdZ1 /mnt/ssd1
  • sudo mount /dev/sdZ2 /mnt/ssd2
  • sudo mount /dev/sdZ3 /mnt/ssd3


Copy the data

Description of rsync options:
  • a - archive mode; equals -rlptgoD (no -H,-A,-X)
  • A - preserve ACLs (implies -p)
  • H - preserve hard links
  • X - preserve extended attributes
  • x - don't cross filesystem boundaries (this is very important)
  • v - increase verbosity
Perform the actual copies; depending on the amount of data this can take a while (a dry-run check is shown after this list).

  • cd /mnt/hdd1
  • sudo rsync -aAHXxv --stats --progress . /mnt/ssd1
  • cd /mnt/hdd2
  • sudo rsync -aAHXxv --stats --progress . --exclude={"dev/*","proc/*","sys/*","tmp/*","run/*","mnt/*","media/*","lost+found"} /mnt/ssd2
  • cd /mnt/hdd3
  • sudo rsync -aAHXxv --stats --progress . /mnt/ssd3 --exclude={"lost+found"}
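
To sanity-check a copy afterwards, re-running the same rsync with -n (dry run) added should list no files to transfer; for example, for /home:
  • cd /mnt/hdd3
  • sudo rsync -aAHXxvn --stats . /mnt/ssd3 --exclude={"lost+found"}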


Edit the UUIDs copied

Determine the new UUIDs to use

  • sudo blkid /dev/sdZ2  # for /
  • sudo blkid /dev/sdZ3  # for /home
  • sudo blkid /dev/sdZ4  # for swap
Replace the UUIDs in fstab (NOT PARTUUID) for /, /home, and swap
  • sudo vi /mnt/ssd2/etc/fstab

Replace the UUID for swap in initramfs resume

  • sudo vi /mnt/ssd2/etc/initramfs-tools/conf.d/resume
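
Instead of editing by hand, sed can make both swaps (a sketch; OLD and NEW are placeholders for the UUIDs reported by blkid):
  • OLD=<old uuid>; NEW=<new uuid>
  • sudo sed -i "s/$OLD/$NEW/g" /mnt/ssd2/etc/fstab
  • sudo sed -i "s/$OLD/$NEW/g" /mnt/ssd2/etc/initramfs-tools/conf.d/resume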


Change booting to use new drive

GRUB

  • boot using old install
    • sudo update-grub
    • check /boot/grub/grub.cfg to ensure it is using the correct UUID for sdZ2
  • boot using new install
    • sudo mkdir /boot/efi
    • sudo mount /dev/sdZ1 /boot/efi
    • sudo grub-install --target=x86_64-efi /dev/sdZ
    • sudo update-grub
  • shutdown
  • remove the old HDD
  • boot using new install
    • sudo update-grub


Bonus Section

Because I did not get GRUB installed on the new drive before removing the old one, I got stuck at the GRUB prompt. These are the steps that I used to boot from there:

  • ls
  • set root=(hd2,2)
  • linux /boot/vmlinuz<tab complete to most recent version> root=/dev/sdY2
  • initrd /boot/init<tab complete to same version as above>
  • boot
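
Once booted this way, installing GRUB properly keeps the prompt from coming back (the new drive is now sdY, as noted at the top):
  • sudo mount /dev/sdY1 /boot/efi
  • sudo grub-install --target=x86_64-efi
  • sudo update-grub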

TRIM

Ensure that trim is enabled and works:

  • Check that it is enabled to run weekly
    • systemctl status fstrim.timer
  • Check that trimming actually works
    • sudo fstrim -v /
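
If the timer shows as inactive, it can be enabled and started in one go:
  • sudo systemctl enable --now fstrim.timer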

Update 2022-07-26: NOATIME

As the SSD was seeing more writes than I would like, I realized that it was recording file access times, which I really didn't need. To fix this, add noatime to the root line in /etc/fstab and then reboot:
  • UUID=XXXX   /   ext4   defaults,noatime   0   1
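
To apply it without waiting for the reboot, the filesystem can also be remounted live:
  • sudo mount -o remount,noatime /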

Update 2023-02-03: Swappiness

Updated the swappiness setting to hopefully reduce SSD writes:
  • in /etc/sysctl.conf add:
    • vm.swappiness = 10
  • reboot
  • check the setting:
    • cat /proc/sys/vm/swappiness
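
The new value can also be applied immediately, without the reboot (the sysctl.conf entry still makes it permanent):
  • sudo sysctl vm.swappiness=10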


03 December 2021

Xubuntu Server Upgrade 16.04 to 18.04 to 20.04

As I have been busy the past few months, I failed to notice that 16.04 went past its support window and hasn't been getting updates. Here are the steps that I went through to get current and fix the issues that I encountered.

Perform the Upgrade

I upgraded using the GUI to walk me through the steps:
  • upgrade to 18.04
    • sudo do-release-upgrade -f DistUpgradeViewGtk3
  • reboot
  • upgrade to 20.04
    • sudo do-release-upgrade -f DistUpgradeViewGtk3
  • When prompted for debian-sys-maint password find it here:
    • sudo cat /etc/mysql/debian.cnf
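
Each hop can be confirmed before starting the next one:
  • lsb_release -a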


Getting Unifi Web working again

127.0.0.1:8443 was giving a 404 error

  • Check the logs
    • tail -n 50 /usr/lib/unifi/logs/server.log
      • [2021-10-24TXX:XX:XX,XXX] <db-server> INFO  db     - DbServer stopped
    • tail -n 50 /usr/lib/unifi/logs/mongod.log
      • 2021-10-24TXX:XX:XX.XXX-XXXX F CONTROL  [initandlisten] ** IMPORTANT: UPGRADE PROBLEM: The data files need to be fully upgraded to version 3.4 before attempting an upgrade to 3.6; see http://dochub.mongodb.org/core/3.6-upgrade-fcv for more details.

So I needed to upgrade the mongo data files, but that can't be done from version 3.6 itself, so I pieced the fix together from a combination of posts.

Here are the actual steps that I used:

  • sudo apt-get install docker.io
  • sudo docker run --name mongo-migrate -v /var/lib/unifi/db:/data/db -p 127.0.0.1:27888:27017 -d mongo:3.4
  • mongo localhost:27888
    • db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
    • db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } )
    • exit
  • sudo docker stop mongo-migrate
  • sudo chown -R unifi:unifi /var/lib/unifi/db
  • sudo systemctl restart unifi
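
To confirm the controller came back up, re-check the same log as above; the DbServer stopped message should no longer be the latest entry, and 127.0.0.1:8443 should load:
  • tail -n 50 /usr/lib/unifi/logs/server.log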


Fixing MythTV

I was seeing the following Fatal Error when accessing mythweb:

!!NoTrans: You are missing a php extension for mysql interaction. Please install php-mysqli or similar!!

I solved this error with the following steps:

  • sudo apt-get install php7.4
  • sudo a2dismod php7.0
  • sudo a2enmod php7.4
  • sudo systemctl restart apache2

It then became this error:

!!NoTrans: MasterServerIP or MasterServerPort not found! You may need to check your mythweb.conf file or re-run mythtv-setup!!

I determined this to be an error with mysql:

  • tail -n 100 /var/log/mysql/error.log
    • 2021-10-24TXX:XX:XX.XXXXXXZ 0 [ERROR] [MY-000067] [Server] unknown variable 'query_cache_limit=1M'.
    • 2021-10-24TXX:XX:XX.XXXXXXZ 0 [ERROR] [MY-000067] [Server] unknown variable 'query_cache_size=16M'.
  • sudo vi /etc/mysql/my.cnf
    • commented out the 2 lines regarding query_cache
  • sudo systemctl restart mysql

This got mysql running, but mythweb still had the same error. I tried rebooting the server, but that didn't seem to help. It turns out that I needed to change the bind_address:

  • sudo vi /etc/mysql/my.cnf
    • bind-address            = 127.0.0.1,192.168.X.X
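
To verify mysql is actually listening on both addresses after restarting it:
  • sudo ss -tlnp | grep 3306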


VNC

I needed to fix my vnc setup and came across this:

  • vnc4server is no longer in the repo
  • tightvncserver works with the exception of window buttons. This is due to it not supporting Render: https://bugs.launchpad.net/ubuntu/+source/xfwm4/+bug/1860921
  • Solution: Use tigervncserver

Here are the steps that I used:

  • sudo apt-get install tigervnc-standalone-server
  • touch ${HOME}/.Xresources
  • tigervncserver :1 -geometry 1366x768 -depth 24 -localhost no
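
From another machine, connecting is then just a matter of pointing a VNC viewer at display :1 (vncviewer comes from the tigervnc-viewer package; the IP is an example):
  • vncviewer 192.168.1.XX:1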


Let's Encrypt

Update 2021-12-14: Apparently the upgrade broke my Let's Encrypt certificate, but fixing it was easy enough; I just needed to install two packages: sudo apt-get install certbot python3-certbot-apache
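
To confirm the renewal machinery works end to end, certbot has a dry-run mode:
  • sudo certbot renew --dry-run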


13 August 2021

Raspberry Pi 4 Print Server: Part 3

Time for the final setup steps

General Setup

Changing the hostname:

  • sudo hostnamectl set-hostname <new hostname>
Broadcasting the hostname:
  • sudo apt install samba
  • sudo apt install libnss-winbind
  • install wsdd
    • git clone
    • group: nogroup
    • /usr/local/bin/wsdd
    • cp etc/systemd/wsdd.service /etc/systemd/system/
    • sudo systemctl daemon-reload
    • sudo systemctl enable wsdd
    • sudo systemctl status wsdd
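
Spelled out, those terse wsdd notes correspond to roughly the following (a sketch assuming the christgau/wsdd project, which is where the etc/systemd/wsdd.service path comes from):
  • git clone https://github.com/christgau/wsdd.git && cd wsdd
  • sudo cp src/wsdd.py /usr/local/bin/wsdd
  • edit etc/systemd/wsdd.service so the service group is nogroup, then run the cp, daemon-reload, and enable steps above
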
Controlling the fan and Power button:


Printer Setup

Installing CUPS and Printer Drivers for SCX-4623f

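If CUPS is not already installed, the base package is simply cups (the printer itself gets added through the web UI below):
  • sudo apt install cups
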
Unfortunately, CUPS requires a user/pass combination and the default is root, so I set a password for root:
  • Run this on the Raspberry Pi:
    • sudo su -
    • passwd
Forward the CUPS UI to your laptop
  • Run this on your laptop
    • ssh -L 6631:localhost:631 ubuntu@192.168.1.XX
You can now open a browser on your laptop and go to http://localhost:6631 to add the printer and check "Share printers connected to this system" on the "Administration" tab. For the user, use "root" with the password that you set above. Optionally, you can check "Allow remote administration" to skip the ssh command in the future and go directly to http://192.168.1.XX:631
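
To double-check from the Pi itself that CUPS now has the printer and a default set (lpstat ships with CUPS):
  • lpstat -p -d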

You should now be able to see the printer on your network!

Scanner Setup

Raspberry Pi Setup

  • Install sane
    • sudo apt install sane
  • Check for the scanner
    • sudo sane-find-scanner
    • sudo scanimage -L
    • cd /tmp
    • sudo -u saned scanimage -x 100 -y 100 -d xerox_mfp:libusb:XXX:XXX --format=png -p > image.png
    • if you hear it scanning then it works; proceed. If not, add the saned user to the lp group and try again:
      • sudo usermod -a -G lp saned
  • Have saned listen to local network
    • edit /etc/sane.d/net.conf
      • add the local network (eg. 192.168.1.0/24)
  • Configure saned socket to load (you do not need the saned service itself)
    • sudo systemctl start saned.socket
    • sudo systemctl enable saned.socket

Laptop Setup

  • install xsane
    • sudo apt install xsane
  • edit /etc/sane.d/net.conf
    • add the ip address of the Raspberry Pi (eg 192.168.1.XX)
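Before launching xsane, scanimage -L on the laptop is a quick way to confirm the network scanner shows up:
  • scanimage -L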
Start xsane and happy scanning!

10 August 2021

Raspberry Pi 4 Print Server: Part 2

These are the steps that I went through to update the firmware and get Ubuntu Server 20.04 LTS installed on my Raspberry Pi 4 2GB from my Chromebook.

Prerequisites:

Firmware Update Steps:
  1. Download the latest firmware from: https://github.com/raspberrypi/rpi-eeprom/releases
  2. Insert your SD Card into your Chromebook
  3. Copy the Contents of the Zip file onto the SD Card
  4. Use Files to eject your SD Card and then remove it from the Chromebook
  5. Boot your Raspberry Pi using the SD Card
Ubuntu Steps:

  1. Download 64-bit Ubuntu Server for Raspberry Pi from here: https://ubuntu.com/download/raspberry-pi
  2. In Files: Move it from Downloads to your Linux Files
  3. On the Linux Command Prompt run:
    • xz -d ubuntu-20.04.1-preinstalled-server-arm64+raspi.img.xz
  4. In Files: Right click the IMG file and click Zip selection
  5. Insert your SD Card or USB Drive you want to use
  6. Write the image to your removable media:
    1. Open Chromebook Recovery Utility
    2. Click the Setting Wheel next to the close window button
    3. Click "Use local image"
    4. Use files to select the zip file you just created
    5. Wait for the progress bar to complete TWICE (once for unpacking, once for writing)
    6. Exit Chromebook Recovery Utility
  7. The Ubuntu 20.04 preinstalled server image uses cloud-init on first boot
  8. Setup your WiFi
    1. Open /system-boot/network-config in a text editor
    2. Uncomment (remove the # from the front of the line) the wifis section and update it with your wireless network and password
    3. wifis:
        wlan0:
          dhcp4: true
          optional: true
          access-points:
            "YourWifiNameHere":
              password: "YourPasswordHere"

    4. Or for a fixed IP
    5. wifis:
        wlan0:
          addresses: [192.168.1.XX/24]
          gateway4: 192.168.1.1
          nameservers:
            addresses: [192.168.1.1]
          dhcp4: false
          optional: true
          access-points:
            "YourWifiNameHere":
              password: "YourPasswordHere"

    6. Save the File
  9. If you are using a SD Card, you are finished and can:
    1. Eject the removable media using the Files app
    2. Remove it from the laptop
If you are using a USB Flash Drive, follow these additional steps:
  1. Decompress vmlinux
    1. Share the system-boot partition with Linux
      • In the files app, right click it and click "Share with Linux"
    2. Open the terminal
      1. cd /mnt/chromeos/removable/system-boot
      2. zcat vmlinuz > vmlinux
  2. Update /system-boot/config.txt to have the pi4 section look like:
    • [pi4]
      max_framebuffers=2
      dtoverlay=vc4-fkms-v3d
      boot_delay
      kernel=vmlinux
      initramfs initrd.img followkernel
  3. Install a script that will do this after installing updates (currently this cannot be done from the Chromebook because the writable partition is mounted read-only; oh the irony)
    1. Create a file on /system-boot (/boot/firmware if logged into the pi) called auto_decompress_kernel and paste the code that is at the end of this post
    2. Create a file in /writable/etc/apt/apt.conf.d called 999_decompress_rpi_kernel and paste the following single line into it:
      • DPkg::Post-Invoke {"/bin/bash /boot/auto_decompress_kernel"; };
    3. Share the writable partition with Linux
      • In the files app, right click it and click "Share with Linux"
    4. Open the terminal
      1. cd /mnt/chromeos/removable/writable/etc/apt/apt.conf.d
      2. chmod +x 999_decompress_rpi_kernel
You can now install the SD card or USB drive into the Raspberry Pi and start it up. You will need to log in and change the password, then reboot (sudo reboot now) before it will connect to WiFi.


auto_decompress_kernel
#!/bin/bash

#Set Variables
BTPATH=/boot/firmware
CKPATH=$BTPATH/vmlinuz
DKPATH=$BTPATH/vmlinux

#Check if decompression needs to be done.
if [ -e $BTPATH/check.md5 ]; then
    if md5sum --status --ignore-missing -c $BTPATH/check.md5; then
    echo -e "\e[32mFiles have not changed, decompression not needed\e[0m"
    exit 0
    else echo -e "\e[31mHash failed, kernel will be decompressed\e[0m"
    fi
fi

#Backup the old decompressed kernel
mv $DKPATH $DKPATH.bak

if [ ! $? == 0 ]; then
    echo -e "\e[31mDECOMPRESSED KERNEL BACKUP FAILED!\e[0m"
    exit 1
else    echo -e "\e[32mDecompressed kernel backup was successful\e[0m"
fi

#Decompress the new kernel
echo "Decompressing kernel: "$CKPATH".............."

zcat $CKPATH > $DKPATH

if [ ! $? == 0 ]; then
    echo -e "\e[31mKERNEL FAILED TO DECOMPRESS!\e[0m"
    exit 1
else
    echo -e "\e[32mKernel decompressed successfully\e[0m"
fi

#Hash the new kernel for checking
md5sum $CKPATH $DKPATH > $BTPATH/check.md5

if [ ! $? == 0 ]; then
    echo -e "\e[31mMD5 GENERATION FAILED!\e[0m"
    else echo -e "\e[32mMD5 generated successfully\e[0m"
fi

#Exit
exit 0
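
The script can also be run by hand after an update to confirm it works (assuming it lives at the path from step 1):
  • sudo bash /boot/firmware/auto_decompress_kernel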


Raspberry Pi Print Server: Part 1

I still have our perfectly functional Samsung SCX-4623f laser printer and scanner, but I wanted to be able to place it away from my server. I looked at a few different options and decided that a Raspberry Pi was my best option. Below are the options that I considered.

The Choices

IOGear GPSU21 or other print server

  • Price: $40-50
  • Does not list SCX-4623f as compatible
  • Requires Ethernet connection so not as flexible
  • Printer Only, No Scanner function

Raspberry Pi

  • Price: Varies based on options
  • Requires more time to setup
  • A learning experience/project
  • Wouldn't need to find new owner for existing printer

Repurposing an old laptop

  • Price: Free*
  • Not power or space efficient

Hooking a laptop up every time we want to print

  • Inconvenient
  • Can't print from phones

New Printer/Scanner with built-in wireless

  • Price: ~$200

Parts


Here are the parts that I went with:
  • Raspberry Pi 4 2GB ($30-35 at Microcenter)
    • Another option was the Raspberry Pi 3 B+ ($25-30 at Microcenter), but I wanted the faster processor and more memory
  • Raspberry Pi Power Supply ($7-10 at Microcenter)
  • GeeekPi Raspberry Pi 4 Aluminum Heatsinks 20PCS ($8 Amazon)
    • This is 5 sets of 4 different sized heatsinks, so I now have plenty of spares
  • Argon One v2 ($20-25 Amazon)
  • Samsung Fit 64GB USB ($11-12 Amazon)
Total price before tax: $82

I went with the Raspberry Pi 4 with 2GB of RAM because it was not much more than either the 1GB version or the Raspberry Pi 3B.

You can get cheaper cases, but I liked the Argon the best. I considered many full-coverage heatsink cases, but those left the GPIO pins and the sides of the board exposed, and I didn't want to risk those getting shorted or something getting in. There were other full aluminum housings, but I couldn't find any reviews about whether they affected the WiFi range. So it was basically between the Argon One and the Flirc. I chose the Argon over the Flirc because it redirects all the connections to the back and has a fan with a controller.

Even though the Argon One has heatsinks for the CPU and RAM, it did not have any for the Ethernet, USB, or power chip. Since the Argon One applies power through the GPIO headers and not the USB-C port on the Pi, I figured the power chip would not need cooling. However, since I am unsure how much of a load scanning might put on the USB chip, I wanted a heatsink for that. Several other brands of heatsinks that I looked at had questionable reviews about how strong the thermal adhesive was.

I went with a USB drive over an SD card because of the faster speed and increased reliability, but looking back I think I would change that. The USB boot features of the Raspberry Pi and Ubuntu 20.04 aren't quite there.

Assembly

First, take the top off the Argon and note the exposed underside of the GPIO pins along the back (top of pic)

I used electrical tape to cover the underside of the GPIO pins so that my additional heatsinks would not accidentally short them

Install the Argon daughter board onto the Pi

I then placed the heatsinks on the ethernet and USB controller

Place the thermal pads on the Argon lid and then place the Pi and daughter board into the top of the Argon case

Use the short screws to keep them in place

Install the long screws and the rubber feet


16 January 2021

Brother MFC-7360N and Windows 10 64bit

A friend of mine recently got a new Windows 10 laptop and was trying to install the drivers for their Brother MFC-7360N from Brother's website (https://support.brother.com/g/b/downloadlist.aspx?c=us&lang=en&prod=mfc7360n_all&os=10013), but the setup would always get stuck saying to connect the printer with USB when they already had.

First step was to check Device Manager, which had a USB Composite Device with a yellow triangle that said something about an "operation failed". I immediately thought this was the problem and had them uninstall it. However, it always came back with the same yellow triangle no matter what we did.

I then googled "install brother printer on windows 10 usb composite device" and came across this post (https://appuals.com/brother-printer-usb-composite-device-error-code-10/), which said that the printer required a firmware update to work with Windows 10 via USB. The other option would be to strictly use it as a network printer.

We then used their old laptop with Windows 7 on it to install the updated firmware from the same Brother site.

After the printer was updated, installing the driver in Windows 10 worked like normal!