03 April 2026

Minecraft for Chromebook Sign-in Issues

Minecraft for Chromebook would constantly give me the following error when trying to sign in with my Microsoft account and password:

Please retry with a different device or other authentication method


The Issue

It turns out that Minecraft for Chromebook doesn't support signing in with a password when two-factor authentication is enabled.


The Fix

To log in, I had to temporarily disable two-factor authentication.

I was then able to sign in to Minecraft. However, instead of prompting for a password, it texted me a code.

I then re-enabled two-factor authentication, at which point Microsoft tried to get me to use their authenticator app.


Appendix

Sources



01 April 2026

Lenovo Chromebook Plus 14" OLED Review

My 2020 HP Chromebook X360 14c was annoying me with the following behaviors:

  • the trackpad would sometimes get stuck in a clicked state
  • the fan would spin up at random times and be quite noisy
  • Chrome would lag on simple tasks like opening an email in Gmail

This meant I was in the market for a new laptop.


Use Case

I mainly use the laptop for browsing the web, light gaming (eg. Minecraft), and VNCing into my server. I will also occasionally stream games from my desktop.


Criteria

Current Laptop specs

  • HP Chromebook 14C-CA0053DX
  • i3-10110U (2 cores, 4 threads)
  • 8 GiB of memory
  • 14" 16:9 aspect ratio screen

Requirements

  • USB-C on either side for easy charging
  • Backlit keyboard
  • 14" 16:10 aspect ratio screen
  • at least 16 GiB of RAM

Preferred

  • No fan noise
  • 2-in-1 (for versatility)
  • Touchscreen
  • Long battery life


Top Choices

I chose the first one as it felt like the best fit for my requirements.


First Impressions

Packaging was superb and felt premium

  • Came wrapped in brown shrink wrap
  • The outside box was a standard brown box with the laptop specs on the side
  • Inside contained the charger in a brown box and a second shrink wrapped box containing the laptop suspended in the middle by cardboard packing spacers
  • The inside box contained the laptop inside a cloth feeling bag
  • Opening the laptop there was a pamphlet and cloth screen protector inside

Looks

  • The seashell color is a tasteful mix of gold and silver
  • Minimal logos on the cover
    • A centered reflective "Lenovo" logo
    • A printed "chromebook plus" on the top left

In hand feel

  • Laptop is lighter (2.75 vs 3.64 pounds) and thinner (0.62 vs 0.7 inches) than my previous, likely due to not being a 2-in-1
  • It is slightly deeper (8.63 vs 8.11 inches) and slightly less wide (12.37 vs 12.6 inches) than my previous
  • Top is metal and feels very rigid
  • Bottom while plastic also feels rigid and the waves on the bottom help make the laptop feel like it has more grip when carrying closed
  • Keyboard has a little bit of deck flex, but I don't notice it during normal usage
  • Originally the hinge was too tight to open the laptop with one-hand, but it is slowly getting better


Setup and Initial thoughts

Setup was fairly smooth

  • I had to plug the laptop in before it would turn on
  • Connected to WiFi
  • Updated the laptop
  • Logged in to Google account
  • Answered the simple setup questions

General Usage

  • Resumes from sleep before the screen is all the way open
  • Fingerprint reader works fast
  • Keyboard feels great to type on
  • Screen is nice and bright for indoor usage and I don't see any screen door effect unless I put my face inches from the screen
    • Side Note: I hate that they call 1920x1200 a 2K screen
  • Battery life appears to be excellent; right now it is at 57% and predicting 7-9 hours remaining (with screen brightness at about 40%)
  • Minecraft plays well, but I can't get an FPS number


Other

Testing

  • ChromeOS built-in CPU test
    • CPU throttles quickly, but remains at 2.3 GHz even at nearly 100 °C
    • During the test the UI does become laggy
    • After the test is complete the CPU temp drops quickly
  • Geekbench 6
    • Geekbench 6.6.0 for Android AArch64
    • For some reason the fastest core was disabled on my first set of runs
      • Single-Core scores: 1023, 1027, 1109, 1155
      • Multi-Core scores: 6536, 6482, 6500, 6467
    • After rebooting, the cores were properly detected
      • Single-Core scores: 2541, 2540, 2527
      • Multi-Core scores: 7789, 7890, 7846
    • For reference, the Core i3-10110U
      • Single-Core scores: 1151, 1093, 1201
      • Multi-Core scores: 2011, 2049, 2071


Appendix

Sources



16 January 2026

Home Assistant add Matter support and devices

As I run Home Assistant in an LXC using Portainer, I couldn't just install python-matter-server through Add-ons. Here are the steps that I used to get it installed.


Create python-matter-server in Portainer

  • Create a Volume
    • Portainer -> Volumes -> Add volume
    • matter_config
  • Create a Container
    • Image
      • ghcr.io/home-assistant-libs/python-matter-server:stable
    • Advanced container settings
      • Commands & logging
        • We want to override the listen address to allow only local connections from Home Assistant
        • Command: --storage-path /data --paa-root-cert-dir /data/credentials --listen-address=127.0.0.1
      • Volumes
        • /data -> matter_config
      • Network
        • Network -> host
      • Restart policy
        • Restart Policy -> Unless stopped
    • Click "Deploy the container"
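For reference, the same container can be created from the Docker CLI instead of the Portainer UI. This is a sketch assuming the volume name and image tag above; the container name "matter-server" is my own choice:

```shell
# Assumed names; the image tag matches the one used in Portainer above
IMAGE="ghcr.io/home-assistant-libs/python-matter-server:stable"
VOLUME="matter_config"

# Create the named volume for Matter state (Portainer -> Volumes -> Add volume)
docker volume create "$VOLUME"

# Host networking so Home Assistant on the same machine can reach it;
# --listen-address keeps the server bound to loopback only
docker run -d \
  --name matter-server \
  --network host \
  --restart unless-stopped \
  -v "$VOLUME":/data \
  "$IMAGE" \
  --storage-path /data \
  --paa-root-cert-dir /data/credentials \
  --listen-address=127.0.0.1
```

The arguments after the image name override the container command, which is what the "Commands & logging" setting in Portainer does.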


Add Matter integration to Home Assistant

  • Settings -> Devices & services
    • Add integration
      • Matter
      • Since they are on the same host, use the default address


Adding Matter Devices

  • Install the Home Assistant App on your phone
  • Login
  • Make sure your phone is on the WiFi network that you want the Matter device on (eg. your IoT network)
  • Follow these steps in the App:
    • Hamburger menu
    • Devices & services
    • Matter
    • Add Device
    • No, It's new
    • Scan the QR code on the device
    • Follow the steps

Appendix

Sources




04 January 2026

Portainer and Docker 29

Portainer was not able to access the local environment, failing with the error: "Failed loading environment The environment named local is unreachable". It turned out that Portainer was not yet compatible with Docker 29. Here are the steps that I used to downgrade Docker to a compatible version.


Downgrade Docker

  • Check the version installed
    • apt list --installed docker-ce
  • Check for available version
    • apt-cache policy docker-ce | head -n 30
  • Downgrade
    • DOCKER_VERSION="5:28.5.2-1~debian.12~bookworm"
    • apt-get install docker-ce-cli="$DOCKER_VERSION" docker-ce="$DOCKER_VERSION" docker-ce-rootless-extras="$DOCKER_VERSION"
  • Restart
    • shutdown -r now
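The steps above can be combined into one script. The apt-mark hold at the end is my own addition: it stops a routine upgrade from pulling Docker 29 back in (run apt-mark unhold once Portainer supports it):

```shell
# Version string taken from `apt-cache policy docker-ce`; adjust for your distro
DOCKER_VERSION="5:28.5.2-1~debian.12~bookworm"

# Downgrade the Docker packages in one transaction
apt-get install -y --allow-downgrades \
  docker-ce-cli="$DOCKER_VERSION" \
  docker-ce="$DOCKER_VERSION" \
  docker-ce-rootless-extras="$DOCKER_VERSION"

# Prevent future apt upgrades from reinstalling Docker 29
apt-mark hold docker-ce docker-ce-cli docker-ce-rootless-extras
```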


Appendix

Sources


02 January 2026

Proxmox SATA DVD drive passthrough

I had a DVD movie that was only readable by my SATA DVD drive, which meant I needed to pass the drive through to my VM. However, it appears that this is not currently possible without an iSCSI workaround. Since I only needed to do this for a single DVD, I didn't want anything that complicated. Instead, I used an LXC to create an ISO that I could then attach to the VM.


LXC DVD drive passthrough

  1. Create a new LXC (I used the debian version and upped memory to 2 GiB and storage to 8 GiB)
    • bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/debian.sh)"
  2. Passthrough the DVD drive
    1. Container -> Resources
    2. Add -> Device Passthrough
    3. Device Path: /dev/sr0 (or whatever number your DVD drive is)
    4. Click Add
  3. Install dvdbackup, libdvdcss2 (built via libdvd-pkg), and ddrescue
    1. apt-get install dvdbackup libdvd-pkg gddrescue
    2. dpkg-reconfigure libdvd-pkg
  4. Create the ISO
    1. dvdbackup -i /dev/sr0 -I  # unlock the DVD
    2. ddrescue -b2048 -d -r500 /dev/sr0 disc.iso disc.mapfile  # Create the ISO
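Inside the LXC, the two ripping commands in step 4 can be wrapped in a small script. The device path and output names are the ones used above; adjust if your drive is not /dev/sr0:

```shell
# Rip a (CSS-protected) DVD to an ISO inside the LXC
DRIVE="/dev/sr0"
ISO="disc.iso"
MAPFILE="disc.mapfile"

# Unlock the DVD so the raw sectors are readable (needs libdvdcss2)
dvdbackup -i "$DRIVE" -I

# Copy the disc sector by sector; -b2048 matches the DVD sector size,
# -d uses direct disc access, -r500 retries bad sectors up to 500 times.
# The mapfile lets ddrescue resume if it is interrupted.
ddrescue -b2048 -d -r500 "$DRIVE" "$ISO" "$MAPFILE"
```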


Appendix

Sources



07 November 2025

Jellyfin LXC iGPU passthrough

As the server my Jellyfin LXC is running on now has an iGPU, I wanted to see if I could use it to enable hardware acceleration for transcoding.


Map the video and render groups

  • Proxmox root node
    • Find the group ids for video and render
      • cat /etc/group
      • For me it was 44 and 104
    • Allow the mapping of those 2 gids
      • vi /etc/subgid
      • Add
        • root:44:1
        • root:104:1
    • Determine the name of the card and render device
      • ls -al /dev/dri
      • For example mine are
        • /dev/dri/card1
        • /dev/dri/renderD128
  • Jellyfin container
    • Note what its ID is. For my example I will use XXX
    • Find the group ids for video and render
      • cat /etc/group
      • For me it was 44 and 104 as well
    • Make sure jellyfin user is part of the video and render groups
      • groups jellyfin
  • Proxmox root node
    • Map the group ids
      • vi /etc/pve/lxc/XXX.conf
      • Add the following:
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 44
lxc.idmap: g 44 44 1
lxc.idmap: g 45 100045 59
lxc.idmap: g 104 104 1
lxc.idmap: g 105 100105 65431


Device Passthrough

  • Map the device
    • Proxmox -> jellyfin -> Resources
    • Add -> Device Passthrough
      • Device: /dev/dri/card1
      • Advanced: check
      • gid: 44
      • Click "Add"
    • Add -> Device Passthrough
      • Device: /dev/dri/renderD128
      • Advanced: check
      • gid: 104
      • Click: "Add"
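If you prefer editing the config file directly, the two GUI passthrough entries above should end up as dev entries in /etc/pve/lxc/XXX.conf. A sketch based on my device names; your card/render numbers may differ:

```
dev0: /dev/dri/card1,gid=44
dev1: /dev/dri/renderD128,gid=104
```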


Finalize changes

  • Restart the Jellyfin container
  • Turn on hardware acceleration in the Jellyfin UI
    • Dashboard -> Playback -> Transcoding
    • Hardware Acceleration: "Intel Quicksync (QSV)"


Results

Unfortunately, my Haswell processor (Xeon E3-1275 v3) isn't supported by Jellyfin's Intel QuickSync (QSV) acceleration. It does support Video Acceleration API (VAAPI), but the quality was horrible and it doesn't completely remove the load from the processor. With this in mind, I kept hardware acceleration set to None and left video transcoding disabled for the users.


Appendix

Sources




Migrating Proxmox Containers to a Different Server

Here are the steps that I used to migrate my Proxmox containers from my Dell r730xd to my Lenovo TS140.


New (old) Server Setup


Migrate LXC containers and ZPool drives

As I was going to move my drives containing my main ZPool, I decided that the easiest way to migrate my containers was to back them up to the drives themselves. This way after moving the drives over and importing the array, I can simply restore them from the backups on the new server.


Backup Existing LXC containers

  1. Create a destination for proxmox backups on the drives to move over:
    • zfs create storage/encrypted/proxmox
    • mkdir /storage/encrypted/proxmox/backups
  2. Add this backup destination to Proxmox:
    • Datacenter -> Storage
    • Add -> Directory
    • Set "Content" to "Backups"
  3. Create backup of each LXC that you want to migrate
    • Choose the LXC
    • Click "Shutdown"
    • Select "Backup"
    • Click "Backup now"
    • Select your backup directory
    • Click "Backup"
    • Make note of any bind mount points that you will need to recreate
  4. Disable Auto-Start
    • Select "Options"
    • Select "Start at boot"
    • Click "Edit"
    • Uncheck and click "OK"
  5. Repeat 3 + 4 for each LXC that you are migrating
  6. Unmount and export the ZPool
    • zpool export storage
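Steps 3-5 can also be done from the shell with pct and vzdump. A sketch; the container ID and storage name below are placeholders for your own values:

```shell
CTID=101                 # placeholder: your container ID
STORAGE=proxmox-backups  # placeholder: the Directory storage name you added

# Shut the container down, back it up to the directory storage,
# and disable autostart (same as unchecking "Start at boot")
pct shutdown "$CTID"
vzdump "$CTID" --storage "$STORAGE" --mode stop --compress zstd
pct set "$CTID" --onboot 0
```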


Recreate Users and Groups

For file permissions to transfer seamlessly, it is best to have the same users and groups on both servers.

  • Old Server
    • View current users
      • cat /etc/passwd
    • View current groups
      • cat /etc/group
    • Note the user/group name as well as its ID
    • View the mapping files
      • cat /etc/subuid
      • cat /etc/subgid
  • New Server
    • For each needed group run:
      • groupadd -g <gid_number> <group_name>
    • For each needed user run:
      • useradd -u <uid_number> -g <gid_number> <username>
    • Set users primary group
      • usermod -g <primary_group> <username>
    • Add user to group(s)
      • usermod -aG <group1>,<group2> <username>
    • Update mapping files
      • vi /etc/subuid
      • vi /etc/subgid
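On the new server, the user and group recreation can be scripted. A sketch with hypothetical names and IDs; substitute the values you noted from /etc/passwd and /etc/group on the old server:

```shell
# Hypothetical example values taken from the old server
GROUP_NAME="media";    TARGET_GID=1001
USER_NAME="jellyfin";  TARGET_UID=1001

# Recreate the group with the same numeric ID
groupadd -g "$TARGET_GID" "$GROUP_NAME"

# Recreate the user with the same UID and primary group
useradd -u "$TARGET_UID" -g "$TARGET_GID" "$USER_NAME"

# Add the user to any supplementary groups it had on the old server
usermod -aG "$GROUP_NAME" "$USER_NAME"
```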


Physical Changes

  1. Shutdown both the old and new server
  2. Move the HDDs over to the new server
  3. Turn on the new server


Restore LXC containers

  1. Import the ZPool
    • zpool import storage
  2. Add the backup destination to Proxmox
    • Datacenter -> Storage
    • Add -> Directory
  3. Restore the LXC
    1. Select the LXC backup that you want to restore
    2. Click "Restore"
    3. Edit the CT ID if desired
    4. Click "Restore"
  4. Repeat step 3 for each LXC
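The restore can also be done from the shell with pct restore. A sketch; the CT ID, backup filename, and rootfs storage are placeholders for your own values:

```shell
# Placeholders: adjust the CT ID, backup archive path, and rootfs storage
CTID=101
BACKUP="/storage/encrypted/proxmox/backups/dump/vzdump-lxc-101.tar.zst"

# Recreate the container from the backup archive
pct restore "$CTID" "$BACKUP" --storage local-lvm

# Re-enable autostart if it was disabled before the migration
pct set "$CTID" --onboot 1
```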


Results

The containers spun up without issue and I didn't have to change any client configs!


Appendix

Sources