10 June 2024

r730xd Server Part 3: Software

I will be using the Proxmox Helper Scripts (https://tteck.github.io/Proxmox/ or https://Helper-Scripts.com) to configure the different LXCs and VMs that I want.


Disable nag screen

As I was tired of having to confirm that I didn't have a subscription, I ripped these commands from Proxmox VE Tools -> Proxmox VE Post Install (https://raw.githubusercontent.com/tteck/Proxmox/main/misc/post-pve-install.sh) and ran them in the Proxmox shell:

  • echo "DPkg::Post-Invoke { \"dpkg -V proxmox-widget-toolkit | grep -q '/proxmoxlib\.js$'; if [ \$? -eq 1 ]; then { echo 'Removing subscription nag from UI...'; sed -i '/.*data\.status.*{/{s/\!//;s/active/NoMoreNagging/}' /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js; }; fi\"; };" >/etc/apt/apt.conf.d/no-nag-script
  • apt --reinstall install proxmox-widget-toolkit


Setup Users

My philosophy was to put all actual users in the 1000s and system users in the 2000s:
  • user1
    • useradd user1
    • usermod -g users user1
    • adduser user1 user1
    • id user1
  • user2
    • useradd user2
    • usermod -g users user2
    • adduser user2 user2
    • id user2
  • mythtv
    • groupadd -g 2001 mythtv
    • useradd -u 2001 -g 2001 mythtv
  • nginx
    • groupadd -g 2002 nginx
    • useradd -u 2002 -g 2002 nginx
  • edit /etc/subuid and add
    • root:1000:1000
    • root:2000:1000
  • edit /etc/subgid and add
    • root:100:1
    • root:1000:1000
    • root:2000:1000
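The subuid/subgid edits above can also be scripted. A sketch, shown against temp files so it is safe to dry-run; on the real PVE host you would point SUBUID and SUBGID at /etc/subuid and /etc/subgid and run as root:

```shell
# Sketch of the /etc/subuid and /etc/subgid edits above. Demonstrated on
# temp files so it is safe to dry-run; on the PVE host you would set
# SUBUID=/etc/subuid and SUBGID=/etc/subgid and run as root.
SUBUID=$(mktemp)
SUBGID=$(mktemp)

# Append a line only if it is not already present, so re-runs are harmless.
add_once() { grep -qxF "$2" "$1" || echo "$2" >> "$1"; }

add_once "$SUBUID" "root:1000:1000"
add_once "$SUBUID" "root:2000:1000"
add_once "$SUBGID" "root:100:1"
add_once "$SUBGID" "root:1000:1000"
add_once "$SUBGID" "root:2000:1000"
```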


Import ZFS

  • zpool import storage
  • I decided not to map the drive in Proxmox, but if you wanted to, you would do that here
    • proxmox -> Datacenter -> Storage -> Add -> ZFS


Fix Directory/File Permissions

  • /storage/mythtv
    • cd /storage/mythtv
    • ls -al
    • find ./ -user <current owner> -print0 | xargs -0 chown -h mythtv
    • find ./ -group <current group> -print0 | xargs -0 chgrp -h mythtv
  • /storage/containers/mythtv
    • cd /storage/containers/mythtv
    • ls -al
    • find ./ -user <current owner> -print0 | xargs -0 chown -h mythtv
    • find ./ -group <current group> -print0 | xargs -0 chgrp -h mythtv
  • /storage/containers/webserver
    • cd /storage/containers
    • chown -R nginx webserver
    • chgrp -R nginx webserver
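The find/chown pairs above all follow one pattern, so they can be wrapped in a helper. A sketch (the `reown` name and argument order are my own, not from any tool):

```shell
# Sketch: the recurring find/chown/chgrp pattern above as one helper. The
# old owner and group may be names or numeric ids (handy when the old
# owner no longer exists on this host). -h changes symlinks themselves
# rather than their targets; xargs -r skips the command on an empty match.
reown() {  # reown <old_user> <old_group> <new_user> <new_group> <dir>
  find "$5" -user "$1" -print0 | xargs -0 -r chown -h "$3"
  find "$5" -group "$2" -print0 | xargs -0 -r chgrp -h "$4"
}
# e.g. reown 1001 1001 mythtv mythtv /storage/mythtv
```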


Create a Docker LXC

Now I needed a Docker LXC to run my webserver and MythTV

  • bash -c "$(wget -qLO - https://github.com/tteck/Proxmox/raw/main/ct/docker.sh)"
  • I then edited the config to set a static IP and change the hostname
    • docker -> Network -> net0 -> Edit
    • docker -> DNS -> Hostname -> Edit
  • Add users
    • nas
      • groupadd -g 2000 nas
      • useradd -u 2000 -g 2000 nas
    • mythtv
      • groupadd -g 2001 mythtv
      • useradd -u 2001 -g 2001 mythtv
    • nginx
      • groupadd -g 2002 nginx
      • useradd -u 2002 -g 2002 nginx
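Since the same groupadd/useradd pair repeats for every fixed-id system user, it can be generated in one loop. A sketch that prints the commands as a dry run; pipe the output to `sh` (as root, inside the container) to apply:

```shell
# Sketch: emit the groupadd/useradd commands for each fixed-id system user.
# Printed as a dry run so nothing is changed until the output is piped to sh.
for spec in nas:2000 mythtv:2001 nginx:2002; do
  name=${spec%%:*}
  uid=${spec##*:}
  echo "groupadd -g $uid $name"
  echo "useradd -u $uid -g $uid $name"
done
```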


MythTV

  • Edit /storage/containers/mythtv/docker-compose.yml
    • Change the User Ids and Groups Ids to 2001
  • Test
    • docker compose up -d
  • If the API port looks like it changed from 6544 to 6744, then you need to fix the IPs in the database
  • Check pin and update backend ip
    • apt-get install default-mysql-client
    • mysql -p -h 127.0.0.1 -P 3306 mythconverg
      • select * from settings where value like '%pin%';
      • update settings set data = '192.168.1.31' where data = '192.168.1.11' ;
      • quit;
  • Restart MythTV and Test
    • docker compose down && docker compose up -d
  • This time I put a copy of docker-mythtv.service in /storage/containers/mythtv so I can easily copy it to /etc/systemd/system/ in the future
    • make sure to change `docker-compose` to `docker compose`
  • docker compose down
  • systemctl enable docker-mythtv
  • systemctl start docker-mythtv
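The docker-mythtv.service file itself isn't reproduced above; a minimal unit for a compose stack looks roughly like this sketch (the WorkingDirectory and name follow the conventions above; the rest is a typical example, not the exact file):

```ini
# Sketch of a docker compose systemd unit; directory and name follow the
# conventions above, the rest is a typical example rather than the exact file.
[Unit]
Description=MythTV docker compose stack
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/storage/containers/mythtv
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
```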


Webserver

  • Test
    • docker compose up -d
    • docker compose down
  • This time I put a copy of docker-webserver.service, certbot.service, and certbot.timer in /storage/containers/webserver/systemd so I can easily copy them to /etc/systemd/system/ in the future
    • make sure to change `docker-compose` to `docker compose`
  • cp certbot.service certbot.timer docker-webserver.service /etc/systemd/system/
  • systemctl enable docker-webserver
  • systemctl start docker-webserver
  • systemctl enable certbot.timer
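For reference, a certbot.timer to go with the service above might look like this sketch (the schedule is an assumption, not the exact file):

```ini
# Sketch of a certbot.timer; the schedule here is an assumption.
[Unit]
Description=Run certbot renewal

[Timer]
OnCalendar=daily
RandomizedDelaySec=1h
Persistent=true

[Install]
WantedBy=timers.target
```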


Create a Debian LXC for File Sharing

  • bash -c "$(wget -qLO - https://github.com/tteck/Proxmox/raw/main/ct/debian.sh)"
  • Add users/groups
    • <container> -> Console
    • user1
      • useradd user1
      • usermod -g users user1
      • adduser user1 user1
      • id user1
    • user2
      • useradd user2
      • usermod -g users user2
      • adduser user2 user2
      • id user2
    • nas
      • groupadd -g 2000 nas
      • useradd nas -u 2000 -g 2000 -m -s /bin/bash
      • adduser nas sudo
      • passwd nas
  • shutdown the container
  • Set a static IP and change the hostname
    • <container> -> Network -> net0 -> Edit
    • <container> -> DNS -> Hostname -> Edit
  • Add more compute/memory (2 cores/1024MB)
    • <container> -> Resources -> Cores -> Edit
    • <container> -> Resources -> Memory -> Edit
  • Add our storage
    • edit /etc/pve/lxc/<container id>.conf
    • add a line for each of your datasets like the below examples:
      • mp0: /storage/folder1/dataset1,mp=/storage/folder1/dataset1
      • mp1: /storage/folder1/dataset2,mp=/storage/folder1/dataset2
      • mp2: /storage/dataset3,mp=/storage/dataset3
  • Map the users
    • Notes
      • /etc/subuid and /etc/subgid need to specify the user starting the lxc container (root)
      • /etc/pve/lxc/<container id>.conf needs to map all ids and not just the ones you want to remap
    • edit /etc/pve/lxc/<container id>.conf add these lines
      • lxc.idmap: u 0 100000 1000
      • lxc.idmap: u 1000 1000 1000
      • lxc.idmap: u 2000 102000 63535
      • lxc.idmap: g 0 100000 100
      • lxc.idmap: g 100 100 1
      • lxc.idmap: g 101 100101 899
      • lxc.idmap: g 1000 1000 1000
      • lxc.idmap: g 2000 102000 63535
  • Change the drive permissions
    • I couldn't get the container to boot when remapping root, so skip this step (kept for reference)
    • cd /rpool/data/subvol-<container id>-disk-0
    • find ./ -user 100000 -print0 | xargs -0 chown -h 2000
    • find ./ -group 100000 -print0 | xargs -0 chgrp -h 2000
  • Start the container
  • Install cockpit
    • apt install cockpit --no-install-recommends
    • wget https://github.com/45Drives/cockpit-file-sharing/releases/download/v3.3.7/cockpit-file-sharing_3.3.7-1focal_all.deb
    • wget https://github.com/45Drives/cockpit-navigator/releases/download/v0.5.10/cockpit-navigator_0.5.10-1focal_all.deb
    • wget https://github.com/45Drives/cockpit-identities/releases/download/v0.1.12/cockpit-identities_0.1.12-1focal_all.deb
    • apt install ./*.deb
    • rm *.deb
  • Configure cockpit
    • https://192.168.X.X:9090
    • use nas to login
    • enable administrative access
    • Identities
      • Set a Samba password for each of the users
    • File Sharing
      • Click "Fix Now"
      • Global Settings
        • Toggle Global MacOS Shares
        • Add `allow insecure wide links = yes` to Advanced
        • Apply
      • Add your shares
      • I used the following in Advanced to ensure all users can access each other's files
        • create mask = 0664
        • force create mode = 0664
        • directory mask = 0775
        • force directory mode = 0775
      • I used the following to be able to follow symlinks
        • follow symlinks = yes
        • wide links = yes
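The lxc.idmap lines above are easy to get wrong, and a gap or overlap in the ranges shows up as the lxc.hook.pre-start failure in the appendix. A quick sketch to sanity-check that the uid map is contiguous:

```shell
# Sketch: sanity-check that the lxc.idmap uid lines are contiguous; a gap
# or overlap in these ranges keeps the container from starting. IDMAP holds
# the u lines from the config above.
IDMAP='lxc.idmap: u 0 100000 1000
lxc.idmap: u 1000 1000 1000
lxc.idmap: u 2000 102000 63535'
echo "$IDMAP" | awk '
  BEGIN { want = 0 }
  $2 == "u" {
    # $3 = first container id, $5 = range length; each line must start
    # exactly where the previous one ended.
    if ($3 != want) { print "gap/overlap at container id " $3; bad = 1 }
    want = $3 + $5
  }
  END { print "container uids covered: 0-" (want - 1); exit bad }'
```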


Create a Ubuntu VM for VNC and Handbrake

I have read that x264 sees reduced performance with over 6 threads, so I gave the VM 12 virtual cores since I want to be able to process 2 discs at a time.

  • bash -c "$(wget -qLO - https://github.com/tteck/Proxmox/raw/main/vm/ubuntu2404-vm.sh)"
  • Edit Cloud-Init
    • ubuntu -> Cloud-Init
    • set user/pass
    • set static IP
  • Disable start at boot
    • ubuntu -> Options -> Start at boot -> Edit
    • uncheck and the hit OK
  • Change resources
    • 12 Cores
    • 24 GB of memory (2 GB per core)
    • added 16 GB of storage
  • Console
    • start VM
    • login
  • Enable ssh with password
    • /etc/ssh/sshd_config.d/10_users.conf
      • Match User username
        • PasswordAuthentication yes
    • sudo systemctl restart ssh
  • VNC server
    • sudo apt install tigervnc-standalone-server
  • A window manager and terminal to use inside VNC
    • sudo apt install xfce4 xfce4-terminal
  • Keep proxmox console as text
    • sudo systemctl set-default multi-user.target
  • Start VNC
    • tigervncserver :1 -geometry 1600x900 -depth 24 -localhost no -SecurityTypes VncAuth,TLSVnc -xstartup /usr/bin/startxfce4
  • Add handbrake
    • sudo apt install handbrake libdvd-pkg
    • sudo dpkg-reconfigure libdvd-pkg
  • Set timezone (added 2024-06-20)
    • timedatectl list-timezones
    • sudo timedatectl set-timezone America/New_York
  • Setup samba shares
    • sudo apt install cifs-utils
    • create a file to save samba credentials in (eg smbcredentials)
      • username=username
      • password=password
    • mount the shares
      • sudo mount -t cifs //192.168.1.XX/nas /storage/encrypted/nas -o credentials=/home/<username>/smbcredentials,uid=<username>,gid=users
  • Add qemu agent (added 2024-07-29)
    • sudo apt install qemu-guest-agent
    • sudo systemctl start qemu-guest-agent
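To make the CIFS mount above survive reboots, it can go in /etc/fstab. A sketch matching the mount command, with the same placeholders left in place:

```
# /etc/fstab sketch matching the mount command above (same placeholders):
//192.168.1.XX/nas  /storage/encrypted/nas  cifs  credentials=/home/<username>/smbcredentials,uid=<username>,gid=users,_netdev  0  0
```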


Appendix

Error
  • Failed to run lxc.hook.pre-start for container
    • are your ZFS pools mounted?
  • make sure cifs-utils is installed
    • sudo apt install cifs-utils
  • samba files are all owned by root
    • make sure to add the uid and gid to the mount command
Sources

05 June 2024

r730xd Server Part 2: Lowering Idle Power

As the plan is to have the server on 24/7, I wanted to lower the idle power usage. All power numbers were manually captured from a smart outlet.


Baseline

1 x e3-1225v3, 4 x 8GB, 1 SSD, 2 x WD Red Plus 6TB, 1 x JMB585

  • in Ubuntu no load
    • ~35W

2 x e5-2670v3, 8 x 16GB, 1 x 10K 2.5" SAS

  • in Proxmox no load
    • ~95W (93.6 to 100)
    • 4% fans
  • stress -c 48
    • 290W
    • 8% fans
    • 70C + 77C (inlet 16C, exhaust 39C)


Removing a processor

This will lower the PCIe connectivity, but should save power. To be able to remove a processor, I needed to buy a CPU socket cover and an airflow guide

  • Socket 2011-3 cover: $6.99 + tax
  • Dell CPU Blank airflow guide 21PJD: $11.02 + tax
    • Note: this will not allow you to install blanks into the memory slots so dust could be an issue
  • Updated total: $452 + tax

1 x e5-2670v3, 4 x 16GB, 1 x 10K 2.5" SAS

  • in Proxmox no load
    • ~81W (80.5 to 85.4)
    • 4% fans
  • stress -c 24
    • 166W
    • 8% fans
    • 67C (inlet 15C, exhaust 27C)

1 x e5-2670v3, 8 x 16GB, 1 x 10K 2.5" SAS

  • in Proxmox no load
    • ~82W (81.4 to 85.2)
    • 4% fans
  • stress -c 24
    • 174W
    • 8% fans
    • 68C (inlet 15C, exhaust 27C)

This means the extra 4 sticks of memory used only about 1W at idle and ~8W at full load, and that running a single processor will save me about 13W at idle.


Adding a second drive

Adding the second 1.2TB SAS drive increased my idle power usage by ~4.5W

  • in Proxmox no load
    • ~86.5W
    • 4% fans


Making the jump from a Xeon v3 (22nm) to v4 (14nm)

For this upgrade I bought used processors

  • 2 x 2680 v4: $21.58 + tax
  • Updated total: $474 + tax

I moved from a 2670 v3 to a 2680 v4 in hopes that it would shave off a couple of watts at idle; unfortunately, it uses ~1.5W more. So it seems that each core uses about 1W at idle.

  • in Proxmox no load
    • ~88W (87.4 to 89.1)
    • 4% fans
  • stress -c 28
    • 205-206W
    • 80C (inlet 17C, exhaust 30C)

While it does not improve the idle power usage, it is a large boost in both single-core (13%) and multi-core (23%) performance: https://www.cpubenchmark.net/compare/2779vs2337/Intel-Xeon-E5-2680-v4-vs-Intel-Xeon-E5-2670-v3


Setting governor to powersave

I tried changing the governor to save idle power, but did not notice any change.
  • apt-get install linux-cpupower
  • cpupower frequency-set -g powersave


Appendix

Sources