Homelab Proxmox + VyOS + Debian server configuration - WIP
Contents
Here I document my home server config. I’m integrating the router into it by using a second USB ethernet adapter as WAN port. Running Proxmox, I can run a Debian installation for my usual services and a VyOS router image for routing.
Goal
I’m looking for a small and energy efficient server with some storage capability.
- Small form factor (<5 liter)
- Low power consumption home server + router configuration
- Nice to have: virtualized router and home server (to satisfy above)
- Server requirements
  - A few TB storage for home use (backup, pictures, etc.)
  - Stable and secure Linux OS preferred
  - Run media server
  - Run home assistant
- Router requirements (from home networking setup)
  - Should support 100 Mbit WAN (NAT/firewalling requirement)
  - Separate networks for internal, guest, and buggy IoT devices (VLAN-aware ethernet and WiFi)
  - Ability to prevent bufferbloat (needs decent QoS)
  - Gigabit LAN
  - Low power (ideally <10W for the full setup)
  - LAN-wide adblocking (DNS-based pi-hole or related)
  - Home VPN server (IKEv2 or WireGuard, >50 Mbps)
Hardware
I’ve settled on using a NUC with an extra 2.5" bay, which suits my needs:
- NUC8i3BEH
- 2TB Samsung 970 EVO Plus M.2 80mm PCIE
- 4TB Samsung 860 2.5" SATA
- 16GB DDR4 RAM
- Conbee Zigbee USB dongle
- USB Ethernet dongle for split WAN/LAN network
- USB to smartmeter cable (FTDI)
- USB to heatmeter cable (FTDI)
- Optional: USB port to power ESP8266 water meter board
- Optional: Bluetooth USB dongle (for more range)
Target services & architecture
I considered the following virtualization software:
- Proxmox or XCP-NG + XO
- Proxmox is easier, uses less power. Only need to solve LVM on multiple disks
For routing, I considered the following platforms:
- OpenWRT or VyOS or pfSense or OPNsense
- OpenWRT has difficulties upgrading?
- VyOS seems nice, is Linux-based, and I have experience with it, but the rolling release either means upgrading a lot or accepting variable stability
- Don’t know OPNsense/pfSense
In the end I settled for Proxmox + VyOS:
- Proxmox (8GB storage + 2GB RAM)
  - sharing common bulk storage to guests via mount points (OK)
- VyOS VM (8GB storage + 2GB RAM)
  - dns adblock list (OK)
  - wireguard (OK)
  - regular router config (OK)
- Debian Stable LXC ‘proteus’ (256GB thinvol + 12GB RAM)
  - Nginx (for website & reverse proxy) (OK)
  - Letsencrypt (for SSL) (OK)
  - Docker
  - Nextcloud –> upgrade from snap package (for file sharing, could be nginx webdav?) (OK - migrate)
  - bpatrik/pigallery2 (for personal photo sharing) (OK - migrate)
  - Home Assistant –> upgrade to Docker under Supervised mode instead of current python virtual env (for monitoring) (OK - migrate)
  - Collectd (for data generation/collection) (later)
  - Home automation worker scripts (for data generation/collection)
    - many
  - Influxdb (for data storage) (OK - migrate)
  - Grafana (for data visualization) (OK - migrate)
  - Transmission (downloading torrents) (later)
  - Mosquitto (glueing home automation) (OK - migrate)
  - Plex/Jellyfin (HTPC) (later)
  - smbd (for Time Machine backups) (later)
- Debian Stable LXC ‘unifi’ (8GB thinvol + 2GB RAM)
  - unifi-controller installed natively
Software distribution
I went with Ubuntu Server 20.04 LTS before, but it had frequent updates (seemingly daily, including non-security ones). For my next server I’m going with Debian Stable instead.
Installation & configuration
Proxmox
Installation
Install Proxmox as described on their wiki, reserving a small part for the OS and most disk space for VMs:
- hdsize full
- swapsize 4GB
- maxroot 8GB
- minfree 0.5GB
Post-install fixes, sourced from Proxmox Helper Scripts. Never run scripts from the internet.
# From https://raw.githubusercontent.com/tteck/Proxmox/main/misc/post-pve-install.sh
# Disable Enterprise Repository
sed -i "s/^deb/#deb/g" /etc/apt/sources.list.d/pve-enterprise.list
# Enable No-Subscription Repository
cat << 'EOF' >>/etc/apt/sources.list
deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription
EOF
# Disable Subscription Nag
echo "DPkg::Post-Invoke { \"dpkg -V proxmox-widget-toolkit | grep -q '/proxmoxlib\.js$'; if [ \$? -eq 1 ]; then { echo 'Removing subscription nag from UI...'; sed -i '/data.status/{s/\!//;s/active/NoMoreNagging/}' /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js; }; fi\"; };" >/etc/apt/apt.conf.d/no-nag-script
apt --reinstall install proxmox-widget-toolkit &>/dev/null
# Upgrade proxmox now
apt-get update
apt-get dist-upgrade
Add regular user with sudo power:
adduser tim
usermod -aG sudo tim
mkdir -p ~tim/.ssh/
touch ~tim/.ssh/authorized_keys
chown -R tim:tim ~tim/.ssh
chmod og-rwx ~tim/.ssh/authorized_keys
cat << 'EOF' >>~tim/.ssh/authorized_keys
ssh-rsa AAAAB...
EOF
apt install sudo
Now forbid root login for SSH and forbid password authentication (use public key only):
sed -i "s/^.PermitRootLogin yes/PermitRootLogin no/g" /etc/ssh/sshd_config
grep PermitRootLogin /etc/ssh/sshd_config
sed -i "s/^.PasswordAuthentication yes/PasswordAuthentication no/g" /etc/ssh/sshd_config
grep PasswordAuthentication /etc/ssh/sshd_config
sshd -t
systemctl restart ssh
Optional: add (lower privileged) user to Proxmox VE:
pveum user add tim@pve -firstname "Tim"
pveum passwd tim@pve
pveum acl modify / -user tim@pve -role PVEVMAdmin
Enable colors in shell & vi:
sed -i "s/^# alias /alias /" ~/.bashrc
cat << 'EOF' >>~/.bashrc
alias grep='grep --color=auto'
alias fgrep='fgrep --color=auto'
alias egrep='egrep --color=auto'
EOF
apt install vim
Set the ondemand CPU governor for power saving. Add intel_pstate=disable to the boot parameters (2019):
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="quiet/GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_pstate=disable/' /etc/default/grub
# vi /etc/default/grub
# GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_pstate=disable"
update-grub && reboot
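After the reboot, a quick sanity check that the pstate driver is indeed disabled (acpi-cpufreq should now be the active scaling driver instead of intel_pstate):
cat /proc/cmdline
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors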
Set the scaling governor to ondemand:
apt install cpufrequtils
echo ondemand | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
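Note that the echo above does not survive a reboot. A minimal way to persist it, assuming the cpufrequtils package installed above, is via its defaults file:
# Governor applied by cpufrequtils at boot
cat << 'EOF' >/etc/default/cpufrequtils
GOVERNOR="ondemand"
EOF
systemctl restart cpufrequtils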
Alternatively try enabling speedstep in BIOS (2018) (not needed for my setup).
Measure power consumption:
- only network (no HDMI, keyboard, USB) : ~2.5W+-0.3W, spikes from 1.6-3.5W (ondemand governor, chi-by-eye PM231E)
- only network + ASUS USB NIC (w/o cable): ~2.8W+-0.3W (ondemand governor, chi-by-eye PM231E)
- only network + Achate ASIX USB3.0-C NIC (w/o cable): ~6.0W+-0.3W (ondemand governor, chi-by-eye PM231E)
Networking
Set up VLAN-aware networking on the management interface. Resulting /etc/network/interfaces config:
auto lo
iface lo inet loopback
iface eno1 inet manual
auto enx7c10c9194780
iface enx7c10c9194780 inet manual
auto vmbr0
iface vmbr0 inet manual
bridge-ports eno1
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 10-30
#LAN port
auto vmbr1
iface vmbr1 inet manual
bridge-ports enx000ec6955446
#bridge_maxwait 40
bridge-stp off
bridge-fd 0
#WAN
auto vmbr0.10
iface vmbr0.10 inet static
address 172.17.10.4/24
gateway 172.17.10.1
#Mgmt interface
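To verify the VLAN-aware bridge after applying the config (ifreload -a or a reboot), something like the following should show vmbr0 carrying VIDs 10-30 on eno1 and the management address on vmbr0.10:
ip -br addr show vmbr0.10
bridge vlan show dev eno1
bridge link show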
Storage
Expand Proxmox LVM over two disks. Sources:
- https://serverfault.com/questions/423544/how-would-i-add-a-second-physical-hard-drive-to-proxmox
- https://kenmoini.com/post/2018/10/quick-n-dirty-adding-disks-to-proxmox/
- https://forum.proxmox.com/threads/proxmox-v5-extend-pve-data.37571/
- https://bugzilla.proxmox.com/show_bug.cgi?id=1241
lsblk
# Create full-disk LVM partition
cfdisk /dev/sda
# Wipe disk of previous LVM config
pvremove -y -ff /dev/sda*
# Create new PV
pvcreate /dev/disk/by-id/ata-Samsung_SSD_860_EVO_4TB_S45JNB0M500432F-part1
vgextend pve /dev/disk/by-id/ata-Samsung_SSD_860_EVO_4TB_S45JNB0M500432F-part1
# Remove 'local-lvm' storage via GUI or pvesm
pvesm remove local-lvm
# Remove data LVM pool, recreate new one (could also extend probably)
lvremove pve/data
# Optional, to extend logical volume over full volume group
# lvextend --poolmetadatasize +30G pve/data # too big
# lvextend -l +100%FREE pve/data
Set up storage via LVM thin pools / thin volumes
- Data requirements/architecture: I have two disks of different speeds. I want the VMs to run on the fast disk and bulk data to live on the slow disk. To ensure this I create two thinpools: the first, for VMs, resides on the fast disk; the second spans the remainder of the fast disk plus the full slow disk. This wastes a bit of space between the two thinpools, but ensures the VMs run on the fast disk.
  - LVM (0.012TB)
    - Proxmox: 8 GB (root) + 4 GB (swap)
  - LVM thin 'thinpool_vms' (0.5TB)
    - VyOS: 0.008TB
    - Debian: 0.25TB
    - Other/future/debian growth
  - LVM thin 'thinpool_data' (remainder = 5.5TB for 5.5TB data)
    - Bulk data: 3TB
    - Backups VMS: 0.25TB
    - Backups MBP: 1TB
    - Backups MBA: 0.25TB
    - Backups remote: 1.25TB
- Create data setup in LVM:
# Create thinpool on fast disk
lvcreate --thin -L 0.5TB pve/thinpool_vms
# Create thinpool on remainder of fast disk + slow disk
lvcreate --thin -l 100%FREE pve/thinpool_data
# Ensure data split across disks was successful
pvdisplay -m /dev/sda1
pvdisplay -m /dev/nvme0n1p3
lvdisplay -m /dev/pve/thinpool_vms
lvdisplay -m /dev/pve/thinpool_data
# Create LVs for future use on thinpool_data
lvcreate --thinpool pve/thinpool_data --name lv_bulk --virtualsize 3.0T
lvcreate --thinpool pve/thinpool_data --name lv_backup_vms --virtualsize 0.25T
lvcreate --thinpool pve/thinpool_data --name lv_backup_mbp --virtualsize 1T
lvcreate --thinpool pve/thinpool_data --name lv_backup_mba --virtualsize 0.25T
lvcreate --thinpool pve/thinpool_data --name lv_backup_tex --virtualsize 1.25T
- Create filesystems:
mkfs.ext4 /dev/mapper/pve-lv_bulk
mkfs.ext4 /dev/mapper/pve-lv_backup_vms
mkfs.ext4 /dev/mapper/pve-lv_backup_mbp
mkfs.ext4 /dev/mapper/pve-lv_backup_mba
mkfs.ext4 /dev/mapper/pve-lv_backup_tex
- Mount in Proxmox
mkdir /mnt/bulk
mkdir -p /mnt/backup/{vms,mba,mbp,tex}
mount /dev/mapper/pve-lv_bulk /mnt/bulk/
mount /dev/mapper/pve-lv_backup_vms /mnt/backup/vms
mount /dev/mapper/pve-lv_backup_mbp /mnt/backup/mbp
mount /dev/mapper/pve-lv_backup_mba /mnt/backup/mba
mount /dev/mapper/pve-lv_backup_tex /mnt/backup/tex
chmod og-rx /mnt/backup/{vms,mba,mbp,tex}
chmod og-rx /mnt/bulk/
- Automount
cat << 'EOF' >>/etc/fstab
/dev/mapper/pve-lv_bulk /mnt/bulk ext4 defaults 0 2
/dev/mapper/pve-lv_backup_vms /mnt/backup/vms ext4 defaults 0 2
/dev/mapper/pve-lv_backup_mbp /mnt/backup/mbp ext4 defaults 0 2
/dev/mapper/pve-lv_backup_mba /mnt/backup/mba ext4 defaults 0 2
/dev/mapper/pve-lv_backup_tex /mnt/backup/tex ext4 defaults 0 2
EOF
- Add directory to PVE storage manager
pvesm add dir backup --path /mnt/backup/vms --content vztmpl,iso,backup
- Add thin pool to PVE storage manager
pvesm scan lvmthin pve
pvesm add lvmthin thinpool_vms --vgname pve --thinpool thinpool_vms
- Push back backups from elsewhere & optionally resize disks/partitions
e2fsck -fy /dev/pve/vm-200-disk-0
resize2fs /dev/pve/vm-200-disk-0 300G
lvreduce -L 300G /dev/pve/vm-200-disk-0
# Edit LXC config in /etc/pve/lxc
#rootfs: thinpool_vms:vm-200-disk-0,size=300G
Samba server
Optional: set up SMB on Proxmox for file sharing.
Install, disable the unnecessary NetBIOS daemon (nmbd), and stop Samba itself during configuration.
apt install samba
systemctl stop nmbd.service
systemctl disable nmbd.service
systemctl stop smbd.service
# systemctl disable smbd.service
Configure /etc/samba/smb.conf:
[global]
server string = pve.vanwerkhoven.org
server role = standalone server
interfaces = lo vmbr0.10
bind interfaces only = yes
disable netbios = yes
smb ports = 445
log file = /var/log/samba/smb.log
max log size = 10000
# log level = 3 passdb:5 auth:5
Add users
adduser --home /mnt/bulk --no-create-home --shell /usr/sbin/nologin --disabled-password --uid 1010 bulkdata
adduser --home /mnt/backup/mbp --no-create-home --shell /usr/sbin/nologin --disabled-password --uid 1011 backupmbp
adduser --home /mnt/backup/mba --no-create-home --shell /usr/sbin/nologin --disabled-password --uid 1012 backupmba
adduser --home /mnt/backup/tex --no-create-home --shell /usr/sbin/nologin --disabled-password --uid 1013 backuptex
chown backupmbp:backupmbp /mnt/backup/mbp
chown backupmba:backupmba /mnt/backup/mba
chown backuptex:backuptex /mnt/backup/tex
chown bulkdata:bulkdata /mnt/bulk
chmod 2770 /mnt/backup/{mba,mbp}
chmod 2770 /mnt/bulk
openssl rand -base64 20
smbpasswd -a backupmbp
smbpasswd -a backupmba
smbpasswd -a backuptex
smbpasswd -e backupmba
smbpasswd -e backupmbp
smbpasswd -e backuptex
Set up shares
[bulk]
path = /mnt/bulk
browseable = yes
read only = no
writable = yes
force create mode = 0660
force directory mode = 2770
valid users = sambarw
[backupmbp]
comment = Time Machine mbp
path = /mnt/backup/mbp
browseable = yes
writeable = yes
create mask = 0600
directory mask = 0700
spotlight = yes
vfs objects = catia fruit streams_xattr
fruit:aapl = yes
fruit:time machine = yes
valid users = backupmbp
[backupmba]
comment = Time Machine MBA
path = /mnt/backup/mba
browseable = yes
writeable = yes
create mask = 0600
directory mask = 0700
spotlight = yes
vfs objects = catia fruit streams_xattr
fruit:aapl = yes
fruit:time machine = yes
valid users = backupmba
Restart Samba
systemctl restart smbd.service
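Optionally sanity-check the config and the exported shares (smbclient is an assumption here, it was not installed above):
testparm -s                           # parse smb.conf and dump the effective configuration
apt install smbclient
smbclient -L localhost -U backupmbp   # list shares, authenticating as one of the backup users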
Optional tweaks & bugfixes
Optional: assign USB dongle interface a nice name. N.B. this breaks proxmox recognizing the adapter as network interface in the GUI, disabling some configuration options.
echo 'SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="7c:10:c9:19:47:80", NAME="usb0"' | tee -a /etc/udev/rules.d/70-persistent-net.rules
echo "auto usb0
iface usb0 inet dhcp" | tee -a /etc/network/interfaces
udevadm control --reload-rules && udevadm trigger
Optional: fix boot delay after a bluetooth driver error. Looks like it’s actually caused by a DHCP timeout on the unconnected ethernet port, so leave as is.
Add intel-ibt-17* bluetooth firmware for the NUC –> does not work, conflicts with the Proxmox kernel. Wait until adopted in the main PVE kernel. The attempt:
- Add non-free debian packages in /etc/apt/sources.list or related
- Install firmware
apt install firmware-iwlwifi
Bugfix: the bridge is brought up before the physical port is up, giving “error: vmbr1: bridge port enx000ec6955446 does not exist”. Options:
- Increase bridge_maxwait to 40s
- Alternative: increase bridge_waitport?
- Try something else
VyOS
Installation as KVM
Get the image, install it, and add a serial socket for xterm.js support (enables copy-pasting). Also start on boot.
ls /var/lib/vz/template/iso/
qm create 200 --name vyos --memory 2048 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1 --ide2 media=cdrom,file=local:iso/vyos-1.4-rolling-202211010829-amd64.iso --virtio0 data:8
qm set 200 --net1 virtio,bridge=vmbr1
qm set 200 -serial0 socket
qm set 200 --onboot 1
Open terminal via Spice/xterm.js, install image, remove image, and reboot
qm start 200
# in guest: `install image`
qm set 200 --ide2 none
qm reboot 200
Enable QEMU guest agent in Proxmox (VyOS has this since 2018). Source
qm set 200 --agent 1
qm agent 200 ping
Configure VyOS
Set global settings
set system host-name vyos
set system domain-name lan.vanwerkhoven.org
Configure eth1 (=vmbr1=WAN) as a DHCP client:
#TODO replace with dhcp query @ right VLAN id later
set interfaces ethernet eth1 vif 1
set interfaces ethernet eth1 vif 1 description WAN
set interfaces ethernet eth1 vif 1 address dhcp
Set up networking
Set up VLANs on the LAN network, see the sources below:
- Source: https://forum.vyos.io/t/bridge-with-vlans/7459
- Source: https://blog.kroy.io/2020/05/04/vyos-from-scratch-edition-1/#Configuring_the_LAN_and_Remote_access
- Source: https://github.com/lamw/PowerCLI-Example-Scripts/blob/master/Modules/VyOS/vyos.template
- Source: https://docs.vyos.io/en/latest/configuration/interfaces/bridge.html#using-vlan-aware-bridge
- Source: https://engineerworkshop.com/blog/configuring-vlans-on-proxmox-an-introductory-guide/
set interfaces ethernet eth0 description LAN
set interfaces bridge br100 enable-vlan
# set interfaces bridge br100 member interface eth0 allowed-vlan 2-4092
set interfaces bridge br100 member interface eth0 allowed-vlan 10
set interfaces bridge br100 member interface eth0 allowed-vlan 20
set interfaces bridge br100 member interface eth0 allowed-vlan 30
set interfaces bridge br100 member interface eth0 allowed-vlan 40
set interfaces bridge br100 vif 10 address 172.17.10.1/24
set interfaces bridge br100 vif 10 description 'VLAN10-Mgmt'
set interfaces bridge br100 vif 20 address 172.17.20.1/24
set interfaces bridge br100 vif 20 description 'VLAN20-Trusted'
set interfaces bridge br100 vif 30 address 172.17.30.1/24
set interfaces bridge br100 vif 30 description 'VLAN30-Guest'
set interfaces bridge br100 vif 40 address 172.17.40.1/24
set interfaces bridge br100 vif 40 description 'VLAN40-IoT'
set interfaces bridge br100 stp
Enable SSH on only management interface without password auth.
set service ssh port '22'
set service ssh listen-address 172.17.10.1
set service ssh disable-password-authentication
set system login user vyos authentication public-keys tim@neptune type ssh-rsa
set system login user vyos authentication public-keys tim@neptune key AAAA...
Harden SSH, only allow strong ciphers, don’t use md5/sha1, don’t use contested nistp256:
set service ssh ciphers aes128-cbc
set service ssh ciphers aes128-ctr
set service ssh ciphers aes128-gcm@openssh.com
set service ssh ciphers aes192-cbc
set service ssh ciphers aes192-ctr
set service ssh ciphers aes256-cbc
set service ssh ciphers aes256-ctr
set service ssh ciphers aes256-gcm@openssh.com
set service ssh ciphers chacha20-poly1305@openssh.com
set service ssh mac hmac-sha2-256
set service ssh mac hmac-sha2-256-etm@openssh.com
set service ssh mac hmac-sha2-512
set service ssh mac hmac-sha2-512-etm@openssh.com
set service ssh key-exchange curve25519-sha256
set service ssh key-exchange curve25519-sha256@libssh.org
set service ssh key-exchange diffie-hellman-group-exchange-sha256
set service ssh key-exchange diffie-hellman-group14-sha256
set service ssh key-exchange diffie-hellman-group16-sha512
set service ssh key-exchange diffie-hellman-group18-sha512
Set timezone & ntp server from global pool:
set system time-zone Europe/Amsterdam
delete system ntp
set system ntp server 0.nl.pool.ntp.org
set system ntp server 1.nl.pool.ntp.org
Enable DNS forwarding using the name servers received via DHCP, also for the local machine. Set cache to 100k entries, 10x the dnsmasq default. More local caching should give more speed and more privacy (less public querying).
# TODO: fix when going live to WAN interface
# Specifically use name servers received for the interface that is using DHCP client to get an IP
set service dns forwarding dhcp eth1.1
set service dns forwarding allow-from 172.17.0.0/16
set service dns forwarding domain lan.vanwerkhoven.org server 172.17.10.1
set service dns forwarding listen-address 172.17.10.1
set service dns forwarding listen-address 172.17.20.1
set service dns forwarding listen-address 172.17.30.1
set service dns forwarding listen-address 172.17.40.1
set service dns forwarding cache-size 100000
# set system name-server 172.17.10.1 # use static
# set system name-servers eth1.1 # use from dhcp -- not working?
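A quick check of forwarding and caching from a host on the management VLAN (assumes dig from the dnsutils package is available; the repeated query should come back much faster from cache):
dig @172.17.10.1 debian.org | grep 'Query time'
dig @172.17.10.1 debian.org | grep 'Query time'   # second query should be served from cache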
Configure DHCP server ranges per VLAN: .100-.254 is the dynamic range, .1-.99 is reserved for static hosts.
delete service dhcp-server shared-network-name vlan10
set service dhcp-server shared-network-name vlan10 subnet 172.17.10.0/24 range vlan10range start 172.17.10.100
set service dhcp-server shared-network-name vlan10 subnet 172.17.10.0/24 range vlan10range stop 172.17.10.254
set service dhcp-server shared-network-name vlan10 subnet 172.17.10.0/24 default-router 172.17.10.1
set service dhcp-server shared-network-name vlan10 subnet 172.17.10.0/24 domain-name lan.vanwerkhoven.org
set service dhcp-server shared-network-name vlan10 subnet 172.17.10.0/24 name-server 172.17.10.1
delete service dhcp-server shared-network-name vlan20
set service dhcp-server shared-network-name vlan20 authoritative
set service dhcp-server shared-network-name vlan20 subnet 172.17.20.0/24 range vlan20range start 172.17.20.100
set service dhcp-server shared-network-name vlan20 subnet 172.17.20.0/24 range vlan20range stop 172.17.20.254
set service dhcp-server shared-network-name vlan20 subnet 172.17.20.0/24 default-router 172.17.20.1
set service dhcp-server shared-network-name vlan20 subnet 172.17.20.0/24 domain-name lan.vanwerkhoven.org
set service dhcp-server shared-network-name vlan20 subnet 172.17.20.0/24 name-server 172.17.20.1
delete service dhcp-server shared-network-name vlan30
set service dhcp-server shared-network-name vlan30 subnet 172.17.30.0/24 range vlan30range start 172.17.30.100
set service dhcp-server shared-network-name vlan30 subnet 172.17.30.0/24 range vlan30range stop 172.17.30.254
set service dhcp-server shared-network-name vlan30 subnet 172.17.30.0/24 default-router 172.17.30.1
set service dhcp-server shared-network-name vlan30 subnet 172.17.30.0/24 domain-name lan.vanwerkhoven.org
set service dhcp-server shared-network-name vlan30 subnet 172.17.30.0/24 name-server 172.17.30.1
delete service dhcp-server shared-network-name vlan40
set service dhcp-server shared-network-name vlan40 subnet 172.17.40.0/24 range vlan40range start 172.17.40.100
set service dhcp-server shared-network-name vlan40 subnet 172.17.40.0/24 range vlan40range stop 172.17.40.254
set service dhcp-server shared-network-name vlan40 subnet 172.17.40.0/24 default-router 172.17.40.1
set service dhcp-server shared-network-name vlan40 subnet 172.17.40.0/24 domain-name lan.vanwerkhoven.org
set service dhcp-server shared-network-name vlan40 subnet 172.17.40.0/24 name-server 172.17.40.1
Set up masquerading for outbound traffic. Ensure high rule number so it’s processed after firewalling.
#TODO: fix WAN VLAN
set nat source rule 5010 outbound-interface 'eth1.1'
set nat source rule 5010 source address '172.17.0.0/16'
set nat source rule 5010 translation address masquerade
set nat source rule 5010 protocol all
set nat source rule 5010 description 'Masquerade for WAN'
Set up static IPs / host names. It is still not possible to set the hostname of the router itself.
@TODO add AppleTV, switch, APs
set system static-host-mapping host-name vyos.lan.vanwerkhoven.org inet 172.17.10.1 # not sure if this works, already set to 127.0.0.1
set system static-host-mapping host-name pve.lan.vanwerkhoven.org inet 172.17.10.4
set system static-host-mapping host-name proteus.lan.vanwerkhoven.org inet 172.17.10.2
set system static-host-mapping host-name nextcloud.lan.vanwerkhoven.org inet 172.17.10.2
set system static-host-mapping host-name homeassistant.lan.vanwerkhoven.org inet 172.17.10.2
set system static-host-mapping host-name grafana.lan.vanwerkhoven.org inet 172.17.10.2
set system static-host-mapping host-name unifi.lan.vanwerkhoven.org inet 172.17.10.5
# Split DNS for specific hosts (mostly https & ssh) N.B. Ensure you restart DNS or set dns cache-size to 0 to ensure the cache is cleared!
set system static-host-mapping host-name ssh.vanwerkhoven.org inet 172.17.10.4
set system static-host-mapping host-name home.vanwerkhoven.org inet 172.17.10.2
set system static-host-mapping host-name www.vanwerkhoven.org inet 172.17.10.2
set system static-host-mapping host-name nextcloud.vanwerkhoven.org inet 172.17.10.2
set system static-host-mapping host-name photos.vanwerkhoven.org inet 172.17.10.2
set service dhcp-server shared-network-name vlan10 subnet 172.17.10.0/24 static-mapping gs108e ip-address 172.17.10.3
set service dhcp-server shared-network-name vlan10 subnet 172.17.10.0/24 static-mapping gs108e mac-address 78:D2:94:2F:81:F8
set system static-host-mapping host-name gs108e.lan.vanwerkhoven.org inet 172.17.10.3
set service dhcp-server shared-network-name vlan10 subnet 172.17.10.0/24 static-mapping UAP-LR1-office ip-address 172.17.10.10
set service dhcp-server shared-network-name vlan10 subnet 172.17.10.0/24 static-mapping UAP-LR1-office mac-address 18:E8:29:93:E1:66
set system static-host-mapping host-name UAP-LR1-Office.lan.vanwerkhoven.org inet 172.17.10.10
set service dhcp-server shared-network-name vlan10 subnet 172.17.10.0/24 static-mapping UAP-LR2-Living ip-address 172.17.10.11
set service dhcp-server shared-network-name vlan10 subnet 172.17.10.0/24 static-mapping UAP-LR2-Living mac-address 18:E8:29:E6:00:2E
set system static-host-mapping host-name UAP-LR2-Living.lan.vanwerkhoven.org inet 172.17.10.11
set system static-host-mapping host-name appletv-living.lan.vanwerkhoven.org inet 172.17.20.20
set service dhcp-server shared-network-name vlan20 subnet 172.17.20.0/24 static-mapping appletv-living ip-address 172.17.20.20
set service dhcp-server shared-network-name vlan20 subnet 172.17.20.0/24 static-mapping appletv-living mac-address D0:03:4B:26:85:0C
Set up port forwarding:
- 10022 to 172.17.10.4:22
- 443 to 172.17.10.2:443
- 80 to 172.17.10.2:80
- 1883 to 172.17.10.2:8883
set nat destination rule 100 description 'Port Forward: SSH to 172.17.10.4'
set nat destination rule 100 destination port '22'
set nat destination rule 100 inbound-interface 'eth1.1'
set nat destination rule 100 protocol 'tcp'
set nat destination rule 100 translation address '172.17.10.4'
set nat destination rule 102 description 'Port Forward: HTTP to 172.17.10.2'
set nat destination rule 102 destination port '80'
set nat destination rule 102 inbound-interface 'eth1.1'
set nat destination rule 102 protocol 'tcp'
set nat destination rule 102 translation address '172.17.10.2'
set nat destination rule 104 description 'Port Forward: HTTPS to 172.17.10.2'
set nat destination rule 104 destination port '443'
set nat destination rule 104 inbound-interface 'eth1.1'
set nat destination rule 104 protocol 'tcp'
set nat destination rule 104 translation address '172.17.10.2'
set nat destination rule 106 description 'Port Forward: MQTT to 172.17.10.2'
set nat destination rule 106 destination port '1883'
set nat destination rule 106 inbound-interface 'eth1.1'
set nat destination rule 106 protocol 'tcp'
set nat destination rule 106 translation address '172.17.10.2'
set nat destination rule 106 translation port '8883'
Hairpin NAT is not implemented well for dynamic IPs (see here and here), so we use split DNS for local resolving instead.
For reference, a hairpin NAT setup looks like:
set nat destination rule 100 description 'Port Forward SSH'
set nat destination rule 100 destination port '22'
set nat destination rule 100 inbound-interface '<WAN interface>' # WAN interface
set nat destination rule 100 protocol 'tcp'
set nat destination rule 100 translation address '<LAN IP>' # LAN IP
set nat destination rule 101 description 'Port Forward: SSH (NAT Reflection: INSIDE)'
set nat destination rule 101 destination port '22'
set nat destination rule 101 destination address '<WAN IP>' # WAN IP --> required but not in official docs
set nat destination rule 101 inbound-interface '<LAN interface>' # LAN interface
set nat destination rule 101 protocol 'tcp'
set nat destination rule 101 translation address '<LAN IP>' # LAN IP
set nat source rule 100 description 'Port Forward: all to <LAN RANGE>/24 (NAT Reflection: INSIDE)'
set nat source rule 100 destination address '<LAN RANGE>/24'
set nat source rule 100 source address '<LAN RANGE>/24'
set nat source rule 100 outbound-interface '<LAN interface>' # LAN interface
set nat source rule 100 protocol 'tcp'
set nat source rule 100 translation address 'masquerade'
Set up firewall
Set up zone-based firewall using the following zones:
- WAN: Internet
- Local: router itself, access to everything (VPN, DNS, DHCP, etc.)
- Infra: trusted VLAN with infrastructure, access to everything (switch, server, pve, access points, hue)
- Trusted: trusted clients, limited access to infra (e.g. home laptops, appletv, phones, ipad)
- Guest: untrusted clients, only access to WAN (e.g. work laptops, work phones, guest phones, thermostat)
- IoT: untrusted clients, only access to server in Infra, no WAN access (esp clients)
Rules:
- FW_ACCEPT: drop invalid, accept rest
- FW_DROP: drop all
- FW_2LOCAL: allow DNS, DHCP, SSH (to router)
- FW_TRUST2INFRA: allow trusted clients to reach: server (SSH, HTTP, HTTPS, Home Assistant (via proxy?), Grafana (via proxy?)), pve (SSH), unifi (web only?)
- FW_IOT2INFRA: allow IOT to reach server (MQTT(S)/Home Assistant API/HTTP(S))
- FW_GUEST2TRUST: allow guest clients to reach: appleTV (all ports, mdns)
- FW_WAN2ALL: allow established & related, drop rest
- FW_WAN2INFRA: allow established & related, allow port forwards, drop rest. For port forwards we only have to specify the port, as the IP is implied by the port forwarding rule set up earlier (e.g. allowing port 80 doesn’t open http on all hosts because the forward only allows to 1 specific host)
- FW_WAN2LOCAL: allow established & related, allow wireguard (maybe IKEv2 later), drop rest.
from \ to | to Local | to Infra | to Trusted | to Guest | to IoT | to WAN
---|---|---|---|---|---|---
Local | | FW_ACCEPT | FW_ACCEPT | FW_ACCEPT | FW_ACCEPT | FW_ACCEPT
Infra | FW_ACCEPT | | FW_ACCEPT | FW_ACCEPT | FW_ACCEPT | FW_ACCEPT
Trust. | FW_2LOCAL | FW_TRUST2INFRA | | FW_ACCEPT | FW_ACCEPT | FW_ACCEPT
Guest | FW_2LOCAL | FW_DROP | FW_GUEST2TRUST | | FW_DROP | FW_ACCEPT
IoT | FW_2LOCAL | FW_IOT2INFRA | FW_DROP | FW_DROP | | FW_DROP
WAN | FW_WAN2LOCAL | FW_WAN2INFRA | FW_WAN2ALL | FW_WAN2ALL | FW_DROP |
FW_ACCEPT
set firewall name FW_ACCEPT default-action accept
set firewall name FW_ACCEPT rule 200 action drop
set firewall name FW_ACCEPT rule 200 description 'drop invalid'
set firewall name FW_ACCEPT rule 200 state invalid enable
FW_DROP
set firewall name FW_DROP default-action drop
FW_WAN2ALL
set firewall name FW_WAN2ALL default-action drop
set firewall name FW_WAN2ALL rule 200 action accept
set firewall name FW_WAN2ALL rule 200 description 'accept established/related'
set firewall name FW_WAN2ALL rule 200 state established enable
set firewall name FW_WAN2ALL rule 200 state related enable
FW_WAN2LOCAL
set firewall name FW_WAN2LOCAL default-action drop
set firewall name FW_WAN2LOCAL rule 200 action accept
set firewall name FW_WAN2LOCAL rule 200 description 'accept established/related'
set firewall name FW_WAN2LOCAL rule 200 state established enable
set firewall name FW_WAN2LOCAL rule 200 state related enable
set firewall name FW_WAN2LOCAL rule 210 action accept
set firewall name FW_WAN2LOCAL rule 210 description 'wireguard'
set firewall name FW_WAN2LOCAL rule 210 destination port 51820
set firewall name FW_WAN2LOCAL rule 210 protocol udp
FW_WAN2INFRA
set firewall name FW_WAN2INFRA default-action drop
set firewall name FW_WAN2INFRA rule 200 action accept
set firewall name FW_WAN2INFRA rule 200 description 'accept established/related'
set firewall name FW_WAN2INFRA rule 200 state established enable
set firewall name FW_WAN2INFRA rule 200 state related enable
set firewall name FW_WAN2INFRA rule 210 action accept
set firewall name FW_WAN2INFRA rule 210 description 'accept port forwards'
set firewall name FW_WAN2INFRA rule 210 log enable
set firewall name FW_WAN2INFRA rule 210 protocol tcp
set firewall name FW_WAN2INFRA rule 210 destination port 22,80,443,1883
set firewall name FW_WAN2INFRA rule 210 destination address 172.17.10.4
set firewall name FW_WAN2INFRA rule 210 state new 'enable'
FW_2LOCAL
set firewall name FW_2LOCAL default-action drop
set firewall name FW_2LOCAL rule 200 action accept
set firewall name FW_2LOCAL rule 200 description 'accept established/related'
set firewall name FW_2LOCAL rule 200 log disable
set firewall name FW_2LOCAL rule 200 state established enable
set firewall name FW_2LOCAL rule 200 state related enable
set firewall name FW_2LOCAL rule 210 action accept
set firewall name FW_2LOCAL rule 210 description 'accept dhcp'
set firewall name FW_2LOCAL rule 210 log disable
set firewall name FW_2LOCAL rule 210 protocol udp
set firewall name FW_2LOCAL rule 210 destination port 67-68
set firewall name FW_2LOCAL rule 220 action accept
set firewall name FW_2LOCAL rule 220 description 'accept dns'
set firewall name FW_2LOCAL rule 220 log disable
set firewall name FW_2LOCAL rule 220 protocol udp
set firewall name FW_2LOCAL rule 220 destination port 53
set firewall name FW_2LOCAL rule 230 action accept
set firewall name FW_2LOCAL rule 230 description 'accept ssh'
set firewall name FW_2LOCAL rule 230 log disable
set firewall name FW_2LOCAL rule 230 protocol tcp
set firewall name FW_2LOCAL rule 230 destination port 22
FW_TRUST2INFRA
set firewall name FW_TRUST2INFRA default-action drop
set firewall name FW_TRUST2INFRA rule 200 action accept
set firewall name FW_TRUST2INFRA rule 200 description 'accept established/related'
set firewall name FW_TRUST2INFRA rule 200 log disable
set firewall name FW_TRUST2INFRA rule 200 state established enable
set firewall name FW_TRUST2INFRA rule 200 state related enable
set firewall name FW_TRUST2INFRA rule 210 action accept
set firewall name FW_TRUST2INFRA rule 210 description 'accept mqtt(s)/http(s) to proteus'
set firewall name FW_TRUST2INFRA rule 210 destination address 172.17.10.2
set firewall name FW_TRUST2INFRA rule 210 protocol tcp
set firewall name FW_TRUST2INFRA rule 210 destination port 8883,1883,80,443,8123,22
set firewall name FW_TRUST2INFRA rule 220 action accept
set firewall name FW_TRUST2INFRA rule 220 description 'accept ssh to pve'
set firewall name FW_TRUST2INFRA rule 220 destination address 172.17.10.4
set firewall name FW_TRUST2INFRA rule 220 protocol tcp
set firewall name FW_TRUST2INFRA rule 220 destination port 22
set firewall name FW_TRUST2INFRA rule 230 action accept
set firewall name FW_TRUST2INFRA rule 230 description 'accept ssh to unifi controller'
set firewall name FW_TRUST2INFRA rule 230 destination address 172.17.10.5
set firewall name FW_TRUST2INFRA rule 230 protocol tcp
set firewall name FW_TRUST2INFRA rule 230 destination port 22,443
FW_IOT2INFRA
set firewall name FW_IOT2INFRA default-action drop
set firewall name FW_IOT2INFRA rule 200 action accept
set firewall name FW_IOT2INFRA rule 200 description 'accept established/related'
set firewall name FW_IOT2INFRA rule 200 log disable
set firewall name FW_IOT2INFRA rule 200 state established enable
set firewall name FW_IOT2INFRA rule 200 state related enable
set firewall name FW_IOT2INFRA rule 210 action accept
set firewall name FW_IOT2INFRA rule 210 description 'accept mqtt(s)/HA API to proteus'
set firewall name FW_IOT2INFRA rule 210 destination address 172.17.10.2
set firewall name FW_IOT2INFRA rule 210 protocol tcp
set firewall name FW_IOT2INFRA rule 210 destination port 8883,1883,6053
FW_GUEST2TRUST
set firewall name FW_GUEST2TRUST default-action drop
set firewall name FW_GUEST2TRUST rule 200 action accept
set firewall name FW_GUEST2TRUST rule 200 description 'accept established/related'
set firewall name FW_GUEST2TRUST rule 200 log disable
set firewall name FW_GUEST2TRUST rule 200 state established enable
set firewall name FW_GUEST2TRUST rule 200 state related enable
set firewall name FW_GUEST2TRUST rule 210 action accept
set firewall name FW_GUEST2TRUST rule 210 description 'accept access to AppleTV'
set firewall name FW_GUEST2TRUST rule 210 destination address 172.17.20.20
set firewall name FW_GUEST2TRUST rule 210 protocol tcp_udp
Apply firewall zones to interfaces
from \ to | to Local | to Infra | to Trusted | to Guest | to IoT | to WAN
---|---|---|---|---|---|---
Local | | FW_ACCEPT | FW_ACCEPT | FW_ACCEPT | FW_ACCEPT | FW_ACCEPT
Infra | FW_ACCEPT | | FW_ACCEPT | FW_ACCEPT | FW_ACCEPT | FW_ACCEPT
Trust. | FW_2LOCAL | FW_TRUST2INFRA | | FW_ACCEPT | FW_ACCEPT | FW_ACCEPT
Guest | FW_2LOCAL | FW_DROP | FW_GUEST2TRUST | | FW_DROP | FW_ACCEPT
IoT | FW_2LOCAL | FW_IOT2INFRA | FW_DROP | FW_DROP | | FW_DROP
WAN | FW_WAN2LOCAL | FW_WAN2INFRA | FW_WAN2ALL | FW_WAN2ALL | FW_DROP |
@TODO fix VLAN before live
set firewall zone LOCAL local-zone
set firewall zone LOCAL default-action drop
set firewall zone LOCAL from INFRA firewall name FW_ACCEPT
set firewall zone LOCAL from TRUSTED firewall name FW_2LOCAL
set firewall zone LOCAL from GUEST firewall name FW_2LOCAL
set firewall zone LOCAL from IOT firewall name FW_2LOCAL
set firewall zone LOCAL from WAN firewall name FW_WAN2LOCAL
set firewall zone INFRA interface br100.10
set firewall zone INFRA default-action drop
set firewall zone INFRA from LOCAL firewall name FW_ACCEPT
set firewall zone INFRA from TRUSTED firewall name FW_TRUST2INFRA
set firewall zone INFRA from GUEST firewall name FW_DROP
set firewall zone INFRA from IOT firewall name FW_IOT2INFRA
set firewall zone INFRA from WAN firewall name FW_WAN2INFRA
set firewall zone TRUSTED interface br100.20
set firewall zone TRUSTED interface wg0
set firewall zone TRUSTED default-action drop
set firewall zone TRUSTED from LOCAL firewall name FW_ACCEPT
set firewall zone TRUSTED from INFRA firewall name FW_ACCEPT
set firewall zone TRUSTED from GUEST firewall name FW_GUEST2TRUST
set firewall zone TRUSTED from IOT firewall name FW_DROP
set firewall zone TRUSTED from WAN firewall name FW_WAN2ALL
set firewall zone GUEST interface br100.30
set firewall zone GUEST default-action drop
set firewall zone GUEST from LOCAL firewall name FW_ACCEPT
set firewall zone GUEST from INFRA firewall name FW_DROP
set firewall zone GUEST from TRUSTED firewall name FW_ACCEPT
set firewall zone GUEST from IOT firewall name FW_DROP
set firewall zone GUEST from WAN firewall name FW_WAN2ALL
set firewall zone IOT interface br100.40
set firewall zone IOT default-action drop
set firewall zone IOT from LOCAL firewall name FW_ACCEPT
set firewall zone IOT from INFRA firewall name FW_ACCEPT
set firewall zone IOT from TRUSTED firewall name FW_DROP
set firewall zone IOT from GUEST firewall name FW_DROP
set firewall zone IOT from WAN firewall name FW_DROP
set firewall zone WAN interface eth1.1
set firewall zone WAN default-action drop
set firewall zone WAN from LOCAL firewall name FW_ACCEPT
set firewall zone WAN from INFRA firewall name FW_ACCEPT
set firewall zone WAN from TRUSTED firewall name FW_ACCEPT
set firewall zone WAN from GUEST firewall name FW_ACCEPT
set firewall zone WAN from IOT firewall name FW_DROP
Restarting the firewall is done automatically on commit. If you don’t notice the changes, you probably made a mistake :p
Allow mDNS reflector (for AppleTV)
set service mdns repeater interface br100.20
set service mdns repeater interface br100.30
Set up QoS & MSS
Set up QoS. Source: https://gist.github.com/jbrodriguez/cc0b1d9f72f66e555ad7
set traffic-policy shaper WAN_QUEUE bandwidth '100Mbit'
# Default traffic
set traffic-policy shaper WAN_QUEUE default bandwidth '95%'
set traffic-policy shaper WAN_QUEUE default priority '3'
set traffic-policy shaper WAN_QUEUE default queue-type 'fq-codel'
set traffic-policy shaper WAN_QUEUE description "WAN QoS shaper"
# megasuper priority dns and icmp
set traffic-policy shaper WAN_QUEUE class 10 bandwidth '10%'
set traffic-policy shaper WAN_QUEUE class 10 priority '5'
set traffic-policy shaper WAN_QUEUE class 10 queue-type 'fq-codel'
set traffic-policy shaper WAN_QUEUE class 10 match icmp ip protocol icmp
set traffic-policy shaper WAN_QUEUE class 10 match dns ip source port 53
# TODO: fix WAN VLAN when deploying
set interfaces ethernet eth1.1 traffic-policy out WAN_QUEUE
Set MSS-clamping to ensure optimal link utilization. Diagnose max MTU using ping, then set MSS value = MTU - 20 (IP header) - 20 (TCP header):
set firewall options interface eth1.1 adjust-mss 1460
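For the MTU diagnosis mentioned above, a quick probe from a Linux client (-M do sets the Don't Fragment bit); a 1472-byte payload passing cleanly corresponds to a 1500-byte MTU, hence the 1460 MSS:
# 1472 bytes payload + 8 bytes ICMP header + 20 bytes IP header = 1500 bytes MTU
ping -M do -s 1472 -c 3 1.1.1.1
# if this fails with "message too long", lower -s until it passes: MTU = payload + 28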
Configure Ad-blocking
Set up DNS-based ad blocking on VyOS
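VyOS has no dedicated ad-block feature as far as I know, so the plan is to feed a hosts-format blocklist into static-host-mapping entries pointing at 0.0.0.0. A rough, untested sketch (assumes the list is already downloaded to /tmp/hosts.txt and kept small, since commits with many thousands of entries become very slow):
#!/bin/vbash
# Sketch: convert a hosts-format blocklist into VyOS static-host-mapping entries
source /opt/vyatta/etc/functions/script-template
configure
# take the '0.0.0.0 <hostname>' lines and blackhole each hostname locally
for host in $(awk '/^0\.0\.0\.0 /{print $2}' /tmp/hosts.txt | head -n 1000); do
    set system static-host-mapping host-name "$host" inet 0.0.0.0
done
commit
save
exit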
Configure VPN
Set up Wireguard VPN, see the official VyOS docs and this tutorial including firewalling.
configure
run generate pki wireguard key-pair install interface wg0
# Public key: uGc4JMJ4IJc0aoIY/ITOrFGWjmn+RxnqRQMecOS4uB8=
set interfaces wireguard wg0 address 172.17.40.1/24
set interfaces wireguard wg0 description Roadwarrior
set interfaces wireguard wg0 port 51820
# Add first peer with local IP '172.17.40.100/32'
run generate pki wireguard preshared-key install interface wg0 peer tim
set interfaces wireguard wg0 peer tim persistent-keepalive 15
set interfaces wireguard wg0 peer tim allowed-ips 172.17.40.100/32
run generate pki wireguard key-pair
set interfaces wireguard wg0 peer tim public-key IBiGrXgZRWdDoUmWwSgUCbH4mTcfNUDtJdl461ACySE=
commit; save; exit
Now generate client configs and install these on your clients
generate wireguard client-config tim interface wg0 server home.vanwerkhoven.org address 172.17.40.10/24
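For reference, the generated client config is a standard wg-quick file along these lines (the AllowedIPs and DNS values are my assumptions for routing the whole LAN over the tunnel; keys elided):
[Interface]
PrivateKey = <client private key>
Address = 172.17.40.10/24
DNS = 172.17.10.1

[Peer]
# router's public key from the key-pair generated above
PublicKey = uGc4JMJ4IJc0aoIY/ITOrFGWjmn+RxnqRQMecOS4uB8=
AllowedIPs = 172.17.0.0/16
Endpoint = home.vanwerkhoven.org:51820
PersistentKeepalive = 15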
Debian
Proxmox supports two guest architectures:
- LXC:
- Pro: light container, possibility for hardware acceleration(?), faster
- Con: more complicated for Docker, less secure/isolation from host(?)
- VM:
- Pro: fully separated/more secure, Docker works out of the box
- Con: low disk speed for random I/O (and maybe others)
I finally went for LXC because of disk speed:
fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --numjobs=1 --size=2g --iodepth=1 --runtime=30 --time_based --end_fsync=1
pve host:
WRITE: bw=220MiB/s (231MB/s), 220MiB/s-220MiB/s (231MB/s-231MB/s), io=6690MiB (7015MB), run=30431-30431msec
debian VM guest @ virtio:
WRITE: bw=67.5MiB/s (70.7MB/s), 67.5MiB/s-67.5MiB/s (70.7MB/s-70.7MB/s), io=2048MiB (2147MB), run=30363-30363msec
debian VM guest @ scsi:
WRITE: bw=31.1MiB/s (32.6MB/s), 31.1MiB/s-31.1MiB/s (32.6MB/s-32.6MB/s), io=1493MiB (1566MB), run=48003-48003msec
debian LXC guest @ virtio
WRITE: bw=192MiB/s (202MB/s), 192MiB/s-192MiB/s (202MB/s-202MB/s), io=5876MiB (6161MB), run=30533-30533msec
old proteus:
WRITE: bw=172MiB/s (180MB/s), 172MiB/s-172MiB/s (180MB/s-180MB/s), io=5338MiB (5598MB), run=31078-31078msec
fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=64k --size=128m --numjobs=16 --iodepth=16 --runtime=30 --time_based --end_fsync=1
pve host:
WRITE: bw=2429MiB/s (2547MB/s), 140MiB/s-164MiB/s (147MB/s-172MB/s), io=72.7GiB (78.0GB), run=30127-30641msec
debian VM guest @ virtio:
WRITE: bw=1856MiB/s (1946MB/s), 108MiB/s-123MiB/s (114MB/s-129MB/s), io=55.7GiB (59.8GB), run=30133-30712msec
debian LXC guest @ virtio
WRITE: bw=2045MiB/s (2145MB/s), 117MiB/s-141MiB/s (123MB/s-148MB/s), io=61.9GiB (66.4GB), run=30585-30979msec
old proteus:
WRITE: bw=286MiB/s (300MB/s), 15.8MiB/s-20.8MiB/s (16.6MB/s-21.8MB/s), io=9648MiB (10.1GB), run=30656-33702msec
Install & configure Debian server as LXC
Get images using Proxmox’ Proxmox VE Appliance Manager:
pveam update
pveam available
pveam download local debian-11-standard_11.6-1_amd64.tar.zst
pveam list local
Check storage to use
pvesm status
Create and configure LXC container based on downloaded image. Ensure it’s an unprivileged container to protect our host and router running on it.
pct create 201 local:vztmpl/debian-11-standard_11.6-1_amd64.tar.zst --description "Debian LXC server" --hostname proteus --rootfs thinpool_vms:300 --unprivileged 1 --cores 4 --memory 12288 --ssh-public-keys /root/.ssh/tim.id_rsa.pub --net0 name=eth0,bridge=vmbr0,firewall=0,gw=172.17.10.1,ip=172.17.10.2/24,tag=10
Now configure networking on Proxmox’ vmbr0 with VLAN ID 10. This means the guest can only see traffic tagged for VLAN 10 (the management network).
# This does not work, cannot create network device on vmbr0.10
# pct set 201 --net0 name=eth0,bridge=vmbr0.10,firewall=0,gw=172.19.10.1,ip=172.19.10.2/24
# Does not work:
# pct set 201 --net0 name=eth0,bridge=vmbr0,firewall=0,gw=172.17.10.1,ip=172.17.10.2/24,trunks=10
# Works:
# pct set 201 --net0 name=eth0,bridge=vmbr0,firewall=0,gw=172.17.10.1,ip=172.17.10.2/24,tag=10
pct set 201 --onboot 1
Optional: only required if host does not have this set up correctly (could be because network was not available at init)
pct set 201 --searchdomain lan.vanwerkhoven.org --nameserver 172.17.10.1
If SSH into guest fails or takes a long time, this can be due to LXC / Apparmor security features which prevent mount
from executing. To solve, ensure nesting is allowed:
pct set 201 --features nesting=1
To enable Docker inside the LXC container, we need both nesting & keyctl:
pct set 201 --features nesting=1,keyctl=1
Start & log in, set root password, configure some basics
pct start 201
pct enter 201
passwd
apt install sudo vim
dpkg-reconfigure locales
dpkg-reconfigure tzdata
Add regular user, add to system groups, and set ssh key
adduser tim
usermod -aG adm,render,sudo,staff tim
mkdir -p ~tim/.ssh/
touch ~tim/.ssh/authorized_keys
chown -R tim:tim ~tim/.ssh
chmod og-rwx ~tim/.ssh/authorized_keys
cat << 'EOF' >>~tim/.ssh/authorized_keys
ssh-rsa AAAA...
EOF
# Allow non-root to use ping
setcap cap_net_raw+p $(which ping)
Update & upgrade and install automatic updates
sudo apt update
sudo apt upgrade
sudo apt install unattended-upgrades
# Comment 'label=Debian' to not auto-update too much
sudo vi /etc/apt/apt.conf.d/50unattended-upgrades
# Tweak some settings
cat << 'EOF' | sudo tee -a /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Remove-Unused-Kernel-Packages "true";
Unattended-Upgrade::Remove-New-Unused-Dependencies "true";
Unattended-Upgrade::Remove-Unused-Dependencies "true";
EOF
sudo unattended-upgrades --dry-run --debug
Install Docker. Need to use custom apt repo to get latest version which works inside an unprivileged LXC container (as proposed on the docker forums):
sudo apt remove docker docker-engine docker.io containerd runc docker-compose
sudo apt update
sudo apt install \
ca-certificates \
curl \
gnupg \
lsb-release
sudo mkdir -m 0755 -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo docker run hello-world
@TODO Pass-through USB devices
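For the USB pass-through TODO, the usual approach for an LXC guest is a cgroup device rule plus a bind mount of the device node in the container config. A sketch, assuming the FTDI smart-meter cable enumerates as /dev/ttyUSB0 (char major 188) on the host; device ownership inside the container still needs to line up with the uid mapping:
# Append to /etc/pve/lxc/201.conf (sketch, not applied yet)
lxc.cgroup2.devices.allow: c 188:* rwm
lxc.mount.entry: /dev/ttyUSB0 dev/ttyUSB0 none bind,optional,create=file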
Non-solutions
I also tried these options, which didn’t work with the older Docker version:
First, switching back to the legacy cgroup v1 hierarchy via the boot parameters:
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="quiet/GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0/' /etc/default/grub
This failed.
Docker inside an (unprivileged) LXC container is not officially supported, but can be made to work. Sources:
- https://forums.docker.com/t/docker-problem-in-unpriviledged-lxc-on-debian-11-2-bullseye/121685
- https://bobcares.com/blog/proxmox-docker-unprivileged-container/
- https://quibtech.com/p/run-docker-containers-in-proxmox-lxc/
- https://www.youtube.com/watch?v=Fc06qnL0Jgw
- https://jlu5.com/blog/docker-unprivileged-lxc-2021
Try newer version of docker as proposed on the docker forums
sudo apt-get install docker-compose-plugin docker-compose docker.io
Fails.
Try to install all packages:
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Works! So it was missing a package?!
Now try to go back to Debian’s own Docker packages (fewer apt repositories means more stability):
sudo apt-get remove docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Install & configure Debian server as VM
# Get ISO from https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/
ls /var/lib/vz/template/iso/
qm create 200 --name proteus --description "Debian VM server" --cores 4 --memory 12288 --net0 virtio,bridge=vmbr0,firewall=0,tag=10 --ide2 media=cdrom,file=local:iso/debian-11.6.0-amd64-netinst.iso --virtio0 thinpool_vms:300
# ipconfig0 did not work (it only applies to cloud-init images): --ipconfig0 gw=172.17.10.1,ip=172.17.10.2/24
qm set 200 -serial0 socket
qm set 200 --onboot 1
Open terminal via Spice/xterm.js, install image, remove image, and reboot
qm start 200
# in guest: install image as usual
qm set 200 --ide2 none
qm reboot 200
Add QEMU guest agent
qm set 200 --agent 1
qm agent 200 ping
Test docker
apt install docker.io docker-compose
sudo docker run hello-world
Works
Enable GPU sharing in VM:
# vi /etc/default/grub and append " intel_iommu=on i915.enable_gvt=1" to the existing line:
# GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_pstate=disable intel_iommu=on i915.enable_gvt=1"
update-grub && reboot
# Check for success
cat /proc/cmdline
dmesg | grep -e DMAR -e IOMMU
# Load modules
cat << 'EOF' >> /etc/modules
# Modules required for PCI passthrough
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
# Modules required for Intel GVT
kvmgt
exngt
vfio-mdev
EOF
reboot
# Pass through PCI --> Via GUI
# Not sure how this works on CLI, something like: qm set 200 --hostpci0 0000:00:02.0,mdev=i915-GVTg_V5_4
Expose bulk storage to Debian server
I prefer to keep the guest OS disks smallish so I can back them up. However, if I want to store bulk data I don’t have space. To solve this, there are three approaches to share storage from host to guest:
- Via Samba on host machine, mount in guest. Pro: always works. Con: more complex setup, increases host attack surface
- Via bind mount points. Pro: works well in LXC. Fast. Con: only LXC
- Via disk pass-through. Pro: works well in KVM (& LXC?). Fast. Con: cannot write from two guests simultaneously.
Automounting Samba in LXC guest didn’t work for me, giving error “Starting of mnt-bulk.automount not supported.” LXC containers are special, apparently. However I document the steps here for reference.
1. Share data via Samba
Mount Samba share automatically from pve host:
sudo apt install smbclient cifs-utils
cat << 'EOF' >>/root/.smbcredentials
user=sambarw
password=redacted
EOF
Automount, but ensure mounting doesn’t fail because network is not up yet.
sudo mkdir /mnt/bulk
sudo chown root:users /mnt/bulk/
sudo chmod g+rw /mnt/bulk/
cat << 'EOF' | sudo tee -a /etc/fstab
//pve.lan.vanwerkhoven.org/bulk /mnt/bulk cifs credentials=/root/.smbcredentials,rw,uid=tim,gid=users,auto,x-systemd.automount,_netdev 0 0
EOF
2. Share data via mount points (LXC only)
In the second approach, we mount something on the host and propagate it to the guest, or create a privileged container.
Mount points require some care regarding UID/GIDs (e.g. see documented on the proxmox wiki), but overall seem an easy method to get storage from host to guest.
What worked for me was adding a mountpoint using pct:
sudo mkdir /mnt/bulk
sudo chown tim:users /mnt/bulk
sudo chmod g+w /mnt/bulk
Make a user on the host (bulkdata:bulkdata) whose UID/GID we’ll propagate to the guest:
adduser --home /mnt/bulk --no-create-home --shell /usr/sbin/nologin --disabled-password --uid 1010 bulkdata
# adduser --home /mnt/backup/mbp --no-create-home --shell /usr/sbin/nologin --disabled-password --uid 1011 backupmbp
# adduser --home /mnt/backup/mba --no-create-home --shell /usr/sbin/nologin --disabled-password --uid 1012 backupmba
# adduser --home /mnt/backup/tex --no-create-home --shell /usr/sbin/nologin --disabled-password --uid 1013 backuptex
usermod -aG bulkdata tim
Set up UID/GID mapping to propagate users 1010–1020 to the same uid on the host (e.g. using this tool). N.B. this is only required if you want to write from both the host and guest. If you only write in (multiple) guests, you only need to ensure the user/group writing from the different guests have the same UID/GID.
cat << 'EOF' >>/etc/pve/lxc/201.conf
# uid map: map container uids 0..1009 to host uids 100000..101009
lxc.idmap = u 0 100000 1010
lxc.idmap = g 0 100000 1010
# map 10 uids starting at 1010 in the container straight through to the host (1010..1019 → 1010..1019)
lxc.idmap = u 1010 1010 10
lxc.idmap = g 1010 1010 10
# map the remaining 64516 uids: container 1020..65535 → host 101020..165535
lxc.idmap = u 1020 101020 64516
lxc.idmap = g 1020 101020 64516
EOF
Add the following to /etc/subuid and /etc/subgid (there might already be entries in these files, also for root):
cat << 'EOF' >>/etc/subuid
root:1010:10
EOF
cat << 'EOF' >>/etc/subgid
root:1010:10
EOF
Now mount the actual bind point
pct shutdown 201
pct set 201 -mp0 /mnt/bulk,mp=/mnt/bulk
pct start 201
and that’s it. Now we can continue configuring the services.
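To confirm the mapping works, numeric ownership of the bind-mounted path should show uid/gid 1010 both on the host and inside the container:
ls -ln /mnt/bulk                    # on the pve host
pct exec 201 -- ls -ln /mnt/bulk    # inside the container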
3. Via disk pass through (KVM)
Pass through bulk storage using volume pass-through with virtio (should be faster than SCSI or IDE):
qm set 200 -virtio1 /dev/disk/by-id/dm-name-pve-lv_bulk,backup=0,snapshot=0
#qm set 200 -scsi1 /dev/disk/by-id/dm-name-pve-lv_bulk,backup=0,snapshot=0
Proxmox hardening
Tips from Samuel’s Website and pveproxy(8) man page
Limit server access to specific IPs:
cat << 'EOF' >>/etc/default/pveproxy
# TvW 20230114 added for security reasons
DENY_FROM="all"
ALLOW_FROM="172.17.10.0/24"
POLICY="allow"
# For PVE-Manager >= 6.4 only.
LISTEN_IP="172.17.10.4"
EOF
Disable NFS:
cat << 'EOF' >>/etc/default/nfs-common
# TvW 20230114 disabled for security reasons
NEED_STATD=no
EOF
Install Unifi Network Application (controller) as LXC container
Install Unifi Network Application (Controller) on Debian (the only supported Linux platform) using the Unifi guide and the Alpine guide.
Get images using Proxmox’ Proxmox VE Appliance Manager:
pveam update
pveam available
pveam download local debian-11-standard_11.6-1_amd64.tar.zst #OR Alpine?
pveam list local
Check storage to use
pvesm status
Create and configure LXC container based on downloaded image. Ensure it’s an unprivileged container to protect our host and router running on it. Also configure networking, run on Proxmox’ vmbr0
with VLAN ID 10 in the Management VLAN.
pct create 202 local:vztmpl/debian-11-standard_11.6-1_amd64.tar.zst --description "Debian LXC Unifi Network Application" --hostname unifi --rootfs thinpool_vms:8 --unprivileged 1 --cores 2 --memory 2048 --ssh-public-keys /root/.ssh/tim.id_rsa.pub --net0 name=eth0,bridge=vmbr0,firewall=0,gw=172.17.10.1,ip=172.17.10.5/24,tag=10
pct set 202 --onboot 1
Optional: only required if host does not have this set up correctly (could be because network was not available at init):
pct set 202 --searchdomain lan.vanwerkhoven.org --nameserver 172.17.10.1
Start & log in, set root password, configure some basics
pct start 202
pct enter 202
passwd
apt install sudo vim
dpkg-reconfigure locales
dpkg-reconfigure tzdata
If SSH into guest fails or takes a long time, this can be due to LXC / Apparmor security features which prevent mount
from executing. To solve, ensure nesting is allowed:
pct shutdown 202
pct set 202 --features nesting=1
pct start 202
Hardening sshd is not required: by default, root is only allowed to login with pubkey authentication.
Install required packages to add Unifi apt source, then add new source & related keys
apt-get update && apt-get install ca-certificates apt-transport-https
echo 'deb https://www.ui.com/downloads/unifi/debian stable ubiquiti' | tee /etc/apt/sources.list.d/100-ubnt-unifi.list
wget -O /etc/apt/trusted.gpg.d/unifi-repo.gpg https://dl.ui.com/unifi/unifi-repo.gpg
Unifi (v7.3.83 in my case) has very specific MongoDB requirements:
unifi : Depends: mongodb-server (>= 2.4.10) but it is not installable or
mongodb-10gen (>= 2.4.14) but it is not installable or
mongodb-org-server (>= 2.6.0) but it is not installable
Depends: mongodb-server (< 1:4.0.0) but it is not installable or
mongodb-10gen (< 4.0.0) but it is not installable or
mongodb-org-server (< 4.0.0) but it is not installable
Prep for specific MongoDB version, see this guide. The MongoDB repo for Stretch (Debian 9) has the newest compatible version (3.6) with a matching pgp key, a bit newer than the 3.4 version as written in the Unifi guide. The PGP key for this repo will expire on 2023-12-09, not sure what will happen then.
wget -O /etc/apt/trusted.gpg.d/mongodb-repo.gpg https://pgp.mongodb.com/server-3.6.pub
echo "deb https://repo.mongodb.org/apt/debian stretch/mongodb-org/3.6 main" | tee /etc/apt/sources.list.d/mongodb-org-3.6.list
apt-get update
Install Unifi Network Application from apt, this takes 560 MB of disk space for the package & required dependencies (yeah, for just a controller).
apt-get update && apt-get install unifi
Enable, autostart, and start the Unifi service:
systemctl is-enabled unifi
systemctl enable unifi
systemctl start unifi
Update & upgrade and install automatic updates
apt update && apt upgrade
apt install unattended-upgrades
# Comment 'label=Debian' to not auto-update too much
vi /etc/apt/apt.conf.d/50unattended-upgrades
# Tweak some settings
cat << 'EOF' | sudo tee -a /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Remove-Unused-Kernel-Packages "true";
Unattended-Upgrade::Remove-New-Unused-Dependencies "true";
Unattended-Upgrade::Remove-Unused-Dependencies "true";
EOF
sudo unattended-upgrades --dry-run --debug
Migrate services
Install dependencies; prefer Python packages via apt for a system-wide install and potentially some extra security, since we don’t install from the public pip repository:
apt install jq curl python3-netcdf4
Service overview
Now:
- InfluxDB + data (port X) - via apt 1.6 or special repo 1.8
  - Port configuration from old proteus
  - Set up new influxdb with accounts (generic write user, read user, and admin)
- Worker scripts
  - All scripts:
    - Unify naming: <source>2<target>, e.g. knmi2influxdb
    - Update in-place with credentials, ideally with backwards compatibility
  - co2signal
    - normalize, separate secrets, add influxdb login: OK
    - tested: OK
  - knmi
    - normalize, separate secrets, add influxdb login
    - tested: OK
  - SBF capture
    - Check which scripts are being used, archive old ones
    - Read secrets from external file
  - epexspot
  - evohome
    - migrate to HA: yes?
  - hue
  - mkwebdata
  - mqtt2influxdb
  - multical
  - smeter
  - water_meter_reader
- Collectd (for data generation/collection) – on proxmox?
  - install on proxmox
  - migrate configuration
- Nginx + letsencrypt (port 80/443)
- Docker
  - portainer (port 9000) –> not required
  - Nextcloud (port 9080) –> run on proteus @ nextcloud.vanwerkhoven.org
  - bpatrik/pigallery2 (for personal photo sharing) (port 3090) –> run on proteus @ photos.vanwerkhoven.org
  - Home Assistant (port 8143) –> run on proteus @ homeassistant.lan.vanwerkhoven.org (VPN)
  - Grafana (port 3000) – via vendor apt repository? –> via docker @ grafana.lan.vanwerkhoven.org
  - lscr.io/linuxserver/unifi-controller (for Unifi AP management) –> docker on proteus @ unifi.lan.vanwerkhoven.org
- Mosquitto (glueing home automation) – on proteus
Later:
- Transmission (downloading torrents) – on proteus
- Plex/Jellyfin (HTPC) – needs hw accel, required running in privileged container
- smbd (for Time Machine backups) – on proteus
Service hardware requirements
- GPU: Jellyfin
- Bluetooth: host server & Home Assistant
- USB smart meter: host server & Home Assistant
- USB heat meter: host server
- USB Conbee Zigbee: Home Assistant
Get HW accel in guest/container: https://www.reddit.com/r/jellyfin/comments/s417qw/hardware_acceleration_inside_proxmox_lxc_not/
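The common approach (a sketch only, not applied here yet) is to allow the DRM character devices and bind-mount /dev/dri into the container, e.g. in /etc/pve/lxc/<id>.conf on the Proxmox host:
# allow character devices with major 226 (DRM) and bind-mount the render nodes
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
Inside the container, the user running Jellyfin/Plex then still needs membership of a group matching the host's video/render gid.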
Docker
First install docker (also see above)
sudo apt install docker.io docker-compose
Reverse proxy options for containers
Optional: prepare forwarding traffic from WAN to containers using a reverse proxy, following some best practices, e.g. using nginx-proxy.
- Expose & publish Docker container ports on host, then reverse proxy to specific port (e.g. -p 127.0.0.1:8000:8000)
  - Pro: already operational, least trust required, fastest solution
  - Con: occupies host ports that are never used, potentially exposes services to users with access to host, requires manual acme management
- Use internal Docker network to map reverse proxy (e.g. dynamically using nginx-proxy)
  - Pro: easy solution (well that was never a consideration /s), does not expose ports, automatic acme handling
  - Con: requires trust in 3rd party nginx implementation, slower(?) than native nginx, requires exposing Docker socket granting it root on host.
- Use Traefik to reverse proxy for Docker (bonus: built-in ACME challenge)
  - Pro: easy solution, does not expose ports, automatic acme/letsencrypt handling
  - Con: slower than nginx, new approach, requires exposing Docker socket granting it root on host.
I decided to go for option 1: most effort and most overkill (security/speed) for my situation :p
Portainer to ease container management
And optionally install portainer to help manage docker. Bind to localhost to ensure this service cannot be accessed from outside the machine:
sudo docker volume create portainer_data
sudo docker run -d \
--name portainer \
--restart=always \
-p 127.0.0.1:8000:8000 -p 127.0.0.1:9443:9443 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer-ce:latest
sudo docker ps
InfluxDB
Use (old) native Debian package for stability & fewest apt repositories
apt install influxdb-client influxdb
influxd restore -portable 20230124/
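For reference, the matching dump on the old server would have been made with InfluxDB 1.x's portable backup format, something like (directory name is just the date):
influxd backup -portable 20230124/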
Add users in Influxdb
CREATE USER influxadmin WITH PASSWORD 'pwd' WITH ALL PRIVILEGES
CREATE USER influxwrite WITH PASSWORD 'pwd'
GRANT WRITE ON collectd TO influxwrite
GRANT WRITE ON smarthomev3 TO influxwrite
CREATE USER influxread WITH PASSWORD 'pwd'
GRANT READ ON collectd TO influxread
GRANT READ ON smarthomev3 TO influxread
CREATE USER influxreadwrite WITH PASSWORD 'pwd'
GRANT READ,WRITE ON collectd TO influxreadwrite
GRANT READ,WRITE ON smarthomev3 TO influxreadwrite
Test account with curl
chmod o-r ~/.profile
cat << 'EOF' >>~/.profile
export INFLUX_USERNAME=influxadmin
export INFLUX_PASSWORD=pwd
EOF
curl -G http://localhost:8086/query -u influxwrite:pwd --data-urlencode "q=SHOW DATABASES"
Restore retention policies (https://web.archive.org/web/20230104021722/https://atomstar.tweakblogs.net/blog/17748/influxdb-retention-policy-and-data-downsampling)
SHOW RETENTION POLICIES ON collectd
CREATE RETENTION POLICY "always" ON "collectd" DURATION INF REPLICATION 1
CREATE RETENTION POLICY "five_days" ON "collectd" DURATION 5d REPLICATION 1 DEFAULT
# For Grafana viewing - see https://github.com/grafana/grafana/issues/4262#issuecomment-475570324
INSERT INTO always rp_config,idx=1 rp="five_days",start=0i,end=432000000i -9223372036854775806
INSERT INTO always rp_config,idx=2 rp="always",start=432000000i,end=3110400000000i -9223372036854775806
# Restore continuous queries
CREATE CONTINUOUS QUERY cq_60m_cpu ON collectd BEGIN SELECT mean(value) AS value INTO collectd.always.cpu FROM collectd.five_days.cpu GROUP BY time(1h), * END
CREATE CONTINUOUS QUERY cq_60m_cpufreq ON collectd BEGIN SELECT mean(value) AS value INTO collectd.always.cpufreq FROM collectd.five_days.cpufreq GROUP BY time(1h), * END
CREATE CONTINUOUS QUERY cq_60m_df ON collectd BEGIN SELECT mean(value) AS value INTO collectd.always.df FROM collectd.five_days.df GROUP BY time(1h), * END
CREATE CONTINUOUS QUERY cq_60m_interface ON collectd BEGIN SELECT mean(rx) AS rx, mean(tx) AS tx INTO collectd.always.interface FROM collectd.five_days.interface GROUP BY time(1h), * END
CREATE CONTINUOUS QUERY cq_60m_iwinfo ON collectd BEGIN SELECT mean(value) AS value INTO collectd.always.iwinfo FROM collectd.five_days.iwinfo GROUP BY time(1h), * END
CREATE CONTINUOUS QUERY cq_60m_load ON collectd BEGIN SELECT mean(longterm) AS longterm, mean(midterm) AS midterm, mean(shortterm) AS shortterm INTO collectd.always.load FROM collectd.five_days.load GROUP BY time(1h), * END
CREATE CONTINUOUS QUERY cq_60m_memory ON collectd BEGIN SELECT mean(value) AS value INTO collectd.always.memory FROM collectd.five_days.memory GROUP BY time(1h), * END
CREATE CONTINUOUS QUERY cq_60m_ping ON collectd BEGIN SELECT mean(value) AS value INTO collectd.always.ping FROM collectd.five_days.ping GROUP BY time(1h), * END
Home assistant
Migrate config from old machine
# Create backup of old config (HA Core)
sudo tar czvf ~/homeassistant.tar.gz -C ~homeassistant .homeassistant
# Move to new machine & right place
scp oldserver:homeassistant.tar.gz newserver:/var/lib/
# Extract on the new machine
cd /var/lib/ && sudo tar xzvf ./homeassistant.tar.gz
sudo mv .homeassistant homeassistant
sudo chown root:root homeassistant
sudo chmod og-rwx homeassistant/
Start docker container via docker run or docker compose:
# Run new docker, do not use --privileged for safety and easier running in LXC
# https://community.home-assistant.io/t/why-does-the-documentation-say-we-need-priviledged-mode-for-a-docker-install-now/336556/2
sudo docker run -d \
--name homeassistant \
--restart=unless-stopped \
-e TZ=Europe/Brussels \
-v /var/lib/homeassistant:/config \
--network=host \
ghcr.io/home-assistant/home-assistant:stable
cat << 'EOF' >> ~tim/docker/home-assistant-compose.yml
version: '3'
# https://www.home-assistant.io/installation/linux#docker-compose
# docker compose -f home-assistant-compose.yml up -d
services:
homeassistant:
container_name: homeassistant
image: "ghcr.io/home-assistant/home-assistant:stable"
volumes:
- /var/lib/homeassistant:/config
- /etc/localtime:/etc/localtime:ro
restart: unless-stopped
network_mode: host
#devices:
# - /dev/ttyUSB0:/dev/ttyUSB0
EOF
sudo docker compose -f home-assistant-compose.yml up -d
Migrate HA to MariaDB
@TODO Figure out how to set up MariaDB later.
Optimize configuration: add mariadb and influxdb, and tweak the recorder to only store relevant entities. See https://smarthomescene.com/guides/optimize-your-home-assistant-database/ and https://community.home-assistant.io/t/migrating-home-assistant-database-from-sqlite-to-mariadb/96895
# Remove
sudo apt remove mysql-server-8.0 mysql-server mysql-client-8.0 mysql-client-core-8.0 mysql-common
sudo apt-get autoremove --purge # purge leftover config & unused dependencies
sudo apt install mariadb-server
# Fix apparmor because of old mysql installation
# https://askubuntu.com/questions/1185710/mariadb-fails-despite-apparmor-profile
# https://stackoverflow.com/questions/40997257/mysql-service-fails-to-start-hangs-up-timeout-ubuntu-mariadb
echo "# TvW 20230127 fix apparmor issue mariadb" | sudo tee -a /etc/apparmor.d/usr.sbin.mysqld
echo "/usr/sbin/mysqld { }" | sudo tee -a /etc/apparmor.d/usr.sbin.mysqld
sudo apparmor_parser -v -R /etc/apparmor.d/usr.sbin.mysqld
sudo systemctl restart mariadb
# Reloading apparmor alone did not help, a reboot did
#sudo /etc/init.d/apparmor reload
sudo reboot
sudo mysql_secure_installation
## create database
mysql -e 'CREATE SCHEMA IF NOT EXISTS `hass_db` DEFAULT CHARACTER SET utf8mb4'
## create user (use a safe password please)
mysql -e "CREATE USER 'hass_user'@'localhost' IDENTIFIED BY 'pwd'"
mysql -e "GRANT ALL PRIVILEGES ON hass_db.* TO 'hass_user'@'localhost'"
mysql -e "GRANT usage ON *.* TO 'hass_user'@'localhost'"
Migrate: method 1, use only SQL
pip install sqlite3-to-mysql
sqlite3mysql -f ./home-assistant_v2.db -d hass_db -u hass_user -p
Migrate: method 2, todo
sqlite3 ~homeassistant/.homeassistant/home-assistant_v2.db .dump > hadump.sql
git clone https://github.com/athlite/sqlite3-to-mysql
Then point the Home Assistant recorder at the new database in configuration.yaml:
recorder:
  auto_purge: true
  purge_keep_days: 21
  auto_repack: true
  db_url: mysql://hass_user:pwd@localhost/hass_db?unix_socket=/var/run/mysqld/mysqld.sock&charset=utf8mb4
Grafana
We can either use apt or the docker image. I go for apt here so I can more easily re-use my letsencrypt certificate via /etc/grafana/grafana.ini.
sudo apt-get install -y apt-transport-https
sudo apt-get install -y software-properties-common wget
sudo wget -q -O /usr/share/keyrings/grafana.key https://apt.grafana.com/gpg.key
Add repo
echo "deb [signed-by=/usr/share/keyrings/grafana.key] https://apt.grafana.com stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list
Install
sudo apt-get install grafana
Start now & start automatically
sudo systemctl daemon-reload
sudo systemctl start grafana-server
sudo systemctl status grafana-server
sudo systemctl enable grafana-server.service
@TODO
Migrate configuration (see the sketch after this list)
- Install used plugin on new server
- Stop Grafana service on source and destination server
- Copy /var/lib/grafana/grafana.db from old to new server
- Check /etc/grafana/grafana.ini - OK
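A minimal sketch of that migration, assuming default apt paths on both machines and SSH access to oldserver:
# stop Grafana on both machines first
sudo systemctl stop grafana-server
# grafana.db holds dashboards, users and datasources
sudo scp oldserver:/var/lib/grafana/grafana.db /var/lib/grafana/grafana.db
sudo chown grafana:grafana /var/lib/grafana/grafana.db
sudo systemctl start grafana-server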
Nextcloud
Install regular Docker image (instead of the all-in-one image with possibly too much junk):
cat << 'EOF' | tee ~tim/docker/nextcloud-compose.yml
version: '2'
# https://github.com/nextcloud/docker#running-this-image-with-docker-compose
# docker compose -f nextcloud-compose.yml up -d
volumes:
nextcloud:
db:
services:
db:
image: mariadb:10.5
restart: always
command: --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW
volumes:
- db:/var/lib/mysql
environment:
- MYSQL_ROOT_PASSWORD=
- MYSQL_PASSWORD=
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
app:
image: nextcloud
restart: always
ports:
- 8080:80
links:
- db
volumes:
- nextcloud:/var/www/html
environment:
- MYSQL_PASSWORD=
- MYSQL_DATABASE=nextcloud
- MYSQL_USER=nextcloud
- MYSQL_HOST=db
EOF
sudo docker compose -f nextcloud-compose.yml up -d
@TODO Migrate Nextcloud setup (only config)
Enable file uploads >2M (see sketch below):
- In php.ini
- In nginx
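Something like the following should do it (a sketch; 1G is an arbitrary limit, and the php.ini path depends on the PHP version inside the Nextcloud image):
# nginx: in the server/location block proxying to Nextcloud
client_max_body_size 1G;
# php.ini inside the Nextcloud container (or a conf.d override)
upload_max_filesize = 1G
post_max_size = 1G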
Optional: make certain folders accessible outside Docker using bind mount, first as trial, then permanently on boot via /etc/fstab:
sudo mount -o bind /var/lib/docker/volumes/docker_nextcloud/_data/data/timlow/files/alexandra/ /media/alexandra
# Note the -a: append to /etc/fstab instead of overwriting it
cat << 'EOF' | sudo tee -a /etc/fstab
/var/lib/docker/volumes/docker_nextcloud/_data/data/timlow/files/alexandra/ /media/alexandra none bind 0 0
EOF
PiGallery2
@TODO: launch this, test docker-to-docker bind mount.
Configure the docker compose file for PiGallery2 only; we do the reverse nginx proxy ourselves. Furthermore, bind-mount the images directory directly to the source in the Nextcloud Docker volume.
cat << 'EOF' > ~tim/docker/pigallery2-compose.yml
version: '3.2'
# Version 3.2 required for long-syntax volume configuration -- see https://docs.docker.com/compose/compose-file/compose-file-v3/#volumes
# Source: https://github.com/bpatrik/pigallery2/blob/master/docker/README.md
# docker compose -f pigallery2-compose.yml up -d
services:
pigallery2:
image: bpatrik/pigallery2:latest
container_name: pigallery2
environment:
- NODE_ENV=production # set to 'debug' for full debug logging
volumes:
- "/var/lib/pigallery/config:/app/data/config" # CHANGE ME -> OK
- "db-data:/app/data/db"
# - "/media/alexandra:/app/data/images:ro" # CHANGE ME -> OK
- type: bind
source: /var/lib/docker/volumes/docker_nextcloud/_data/data/timlow/files/alexandra/
target: /app/data/images
read_only: true
- "/var/lib/pigallery/tmp:/app/data/tmp" # CHANGE ME -> OK
ports:
- 3010:80
restart: always
volumes:
db-data:
EOF
sudo docker compose -f pigallery2-compose.yml up -d
Add virtual host, something like below:
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name photos.vanwerkhoven.org;
location / {
include snippets/nginx-server-proxy-tim.conf;
client_max_body_size 1G;
proxy_pass http://127.0.0.1:3010;
}
include snippets/nginx-server-ssl-tim.conf;
ssl_certificate /etc/letsencrypt/live/vanwerkhoven.org/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/vanwerkhoven.org/privkey.pem; # managed by Certbot
}
Mosquitto
@TODO
Install
sudo apt install mosquitto mosquitto-clients
Port (migrate) the configuration:
cat << 'EOF' | sudo tee /etc/mosquitto/conf.d/tim.conf
# TvW 20190818
# From https://www.digitalocean.com/community/questions/how-to-setup-a-mosquitto-mqtt-server-and-receive-data-from-owntracks
connection_messages true
log_timestamp true
# https://www.digitalocean.com/community/tutorials/how-to-install-and-secure-the-mosquitto-mqtt-messaging-broker-on-ubuntu-16-04
# TvW 201908
allow_anonymous false
password_file /etc/mosquitto/passwd
listener 1883
EOF
cat << 'EOF' | sudo tee /etc/mosquitto/conf.d/ssl-tim.conf
# Letsencrypt needs different CA https://mosquitto.org/blog/2015/12/using-lets-encrypt-certificates-with-mosquitto/
# Or not?
#cafile /etc/ssl/certs/DST_Root_CA_X3.pem
certfile /etc/letsencrypt/live/home.vanwerkhoven.org/cert.pem
cafile /etc/letsencrypt/live/home.vanwerkhoven.org/chain.pem
keyfile /etc/letsencrypt/live/home.vanwerkhoven.org/privkey.pem
tls_version tlsv1.2
listener 8883
EOF
Port (migrate) the users:
sudo touch /etc/mosquitto/passwd
sudo chown mosquitto /etc/mosquitto/passwd
sudo chmod og-rwx /etc/mosquitto/passwd
cat << 'EOF' | sudo tee -a /etc/mosquitto/passwd
user:$6$SALT$7HASH==
EOF
Worker scripts
@TODO
Live DNS IP updater
@TODO migrate to new server & set live
Via gandi-live-dns-config.py
sudo install -m 600 -o tim -g tim /dev/null /etc/gandi-live-dns-config.py # equivalent to touch && chmod 600 && chown tim:tim
cat << 'EOF' | sudo tee /etc/gandi-live-dns-config.py
# my config
api_secret='secret API string goes here'
domains={'vanwerkhoven.org':['www','home','nextcloud','photos','alexandramaya']}
ttl='1800' # our IP doesn't change that often, 30min down is ~OK
ifconfig4='http://whatismyip.akamai.com' # returns ipv4
ifconfig6='' # disabled until we get IPv6 right for VPN/firewall/etc.
#ifconfig6='https://ifconfig.co/ip' # returns ipv6
interface='' # set empty because else we get local ipv6
EOF
# Add crontab entry
# TvW 20210927 Disabled because I want some subdomains ipv4-only (home) because
# of VPN. Also, if my IPv6 address changes I need to update router firewalling
# and port forwarding as well. -- Update: run all hostnames as ipv4 only for now
*/5 * * * * python3 /home/tim/workers/gandi-live-dns/src/gandi-live-dns.py >/dev/null 2>&1
Collectd
@TODO
Collect VyOS stats via SNMP
https://collectd.org/wiki/index.php/Plugin:SNMP https://support.vyos.io/en/kb/articles/snmpv3 https://docs.vyos.io/en/latest/configuration/service/snmp.html https://forum.vyos.io/t/difficulty-monitoring-vyos-through-snmp/4146
Set up SNMP on VyOS
Get SNMP browser - e.g. https://www.ireasoning.com/mibbrowser.shtml
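As a starting point, something like this might work (untested sketch; the community name 'homelab' and the collectd drop-in path are assumptions, 172.17.10.1 is the management-VLAN gateway configured above):
# On VyOS: enable a read-only SNMPv2 community
set service snmp community homelab authorization ro
commit
save
# On the collectd host, e.g. /etc/collectd/collectd.conf.d/snmp-vyos.conf:
LoadPlugin snmp
<Plugin snmp>
  <Data "std_traffic">
    Type "if_octets"
    Table true
    Instance "IF-MIB::ifDescr"
    Values "IF-MIB::ifInOctets" "IF-MIB::ifOutOctets"
  </Data>
  <Host "vyos">
    Address "172.17.10.1"
    Version 2
    Community "homelab"
    Collect "std_traffic"
    Interval 60
  </Host>
</Plugin>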
Nginx
Here we install and configure nginx. This DigitalOcean guide is a useful reference for nginx configuration.
More sources:
- https://linuxize.com/post/secure-nginx-with-let-s-encrypt-on-debian-10/
- https://community.letsencrypt.org/t/certbot-auto-no-longer-works-on-debian-based-systems/139702/7
Base install
sudo apt install nginx
Inspect, clean, and migrate nginx configuration:
sudo install -m 644 -o root -g root /dev/null /etc/nginx/conf.d/nginx-http-tim.conf # equivalent to touch && chmod 644 && chown root:root
cat << 'EOF' | sudo tee /etc/nginx/conf.d/nginx-http-tim.conf
# TvW 20230222 Additional default http block configuration settings, included automatically by default nginx.conf
# TvW 20200604 Disabled don't advertise version
server_tokens off;
# Add log format separate per virtual host so we can use goaccess to view who visits the server
# Parse using
# `goaccess /var/log/nginx/access.log --log-format=VCOMBINED -o report-all.html`
log_format vcombined '$host: $remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"';
access_log /var/log/nginx/access.log vcombined;
# TvW 20230222 expand gzip options - don't remember why, probably speed
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
EOF
sudo install -m 644 -o root -g root /dev/null /etc/nginx/snippets/nginx-server-site-tim.conf # equivalent to touch && chmod 644 && chown root:root
cat << 'EOF' | sudo tee /etc/nginx/snippets/nginx-server-site-tim.conf
# TvW 20230222 Default options for server blocks serving files
# include snippets/nginx-server-site-tim.conf;
# Add index.php to the list if you are using PHP
index index.php index.html index.htm index.nginx-debian.html;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
# This is cool because no php is touched for static content.
# include the "?$args" part so non-default permalinks doesn't break when using query string
#try_files $uri $uri/ /index.php?$args;
}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
location ~ /\.ht {
deny all;
}
# Cache control
location ~* \.(?:js|css|png|jpg|jpeg|webp|gif|ico)$ {
expires 30d;
add_header Cache-Control "public, no-transform";
}
EOF
sudo install -m 644 -o root -g root /dev/null /etc/nginx/snippets/nginx-server-proxy-tim.conf # equivalent to touch && chmod 644 && chown root:root
cat << 'EOF' | sudo tee /etc/nginx/snippets/nginx-server-proxy-tim.conf
# TvW 20230222 Default options for server blocks acting as reverse proxy. Should be part of location / { }
# include snippets/nginx-server-proxy-tim.conf;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $server_name;
#proxy_set_header X-Forwarded-Ssl on;
#proxy_set_header Upgrade $http_upgrade;
#proxy_set_header Connection "upgrade";
EOF
sudo install -m 644 -o root -g root /dev/null /etc/nginx/snippets/nginx-server-ssl-tim.conf # equivalent to touch && chmod 644 && chown root:root
cat << 'EOF' | sudo tee /etc/nginx/snippets/nginx-server-ssl-tim.conf
# TvW 20230222 Default options for server blocks serving ssl
# include snippets/nginx-server-ssl-tim.conf;
# Added 20190122 TvW Add HTTPS strict transport security
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
# Added 20190121 TvW Logjam attack - see weakdh.org
ssl_dhparam /etc/ssl/private/dhparams_weakdh.org.pem;
EOF
Fix logrotate conf to keep logs for a year (instead of 14 days):
cat << 'EOF' | sudo tee /etc/logrotate.d/nginx
/var/log/nginx/*.log {
# Rotate weekly instead of default daily
weekly
missingok
# Keep 52 instead of 14 files
rotate 52
compress
# Don't delay, compress after first rotation
# delaycompress
notifempty
create 0640 www-data adm
sharedscripts
prerotate
if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
run-parts /etc/logrotate.d/httpd-prerotate; \
fi \
endscript
postrotate
invoke-rc.d nginx rotate >/dev/null 2>&1
endscript
}
EOF
Parse logs into visually digestible data using Goaccess:
# --persist/--keep-db-files on all files parsed
# --restore/--load-from-disk on second & subsequent files parsed
mkdir -p /tmp/goaccess-{nextcloud,photos,all}/
# At reboot, run goaccess on all files, then run on latest file every 5min
zgrep --no-filename "^nextcloud.vanwerkhoven.org" /var/log/nginx/access.log* | nice -n 19 goaccess --log-format=VCOMBINED -o /var/www/html/stats/report-nextcloud.html --keep-db-files --db-path /tmp/goaccess-nextcloud/ -
zgrep --no-filename "^nextcloud.vanwerkhoven.org" /var/log/nginx/access.log | nice -n 19 goaccess --log-format=VCOMBINED -o /var/www/html/stats/report-nextcloud.html --load-from-disk --keep-db-files --db-path /tmp/goaccess-nextcloud/ -
zgrep --no-filename "^photos.vanwerkhoven.org" /var/log/nginx/access.log* | nice -n 19 goaccess --log-format=VCOMBINED -o /var/www/html/stats/report-photos.html --keep-db-files --db-path /tmp/goaccess-photos/ -
zgrep --no-filename "^photos.vanwerkhoven.org" /var/log/nginx/access.log | nice -n 19 goaccess --log-format=VCOMBINED -o /var/www/html/stats/report-photos.html --keep-db-files --load-from-disk --db-path /tmp/goaccess-photos/ -
zgrep --no-filename -v '^nextcloud.vanwerkhoven.org\|^photos.vanwerkhoven.org' /var/log/nginx/access.log* | nice -n 19 goaccess --log-format=VCOMBINED -a -o /var/www/html/stats/report-all.html --keep-db-files --db-path /tmp/goaccess-all/ -
zgrep --no-filename -v '^nextcloud.vanwerkhoven.org\|^photos.vanwerkhoven.org' /var/log/nginx/access.log | nice -n 19 goaccess --log-format=VCOMBINED -a -o /var/www/html/stats/report-all.html --keep-db-files --load-from-disk --db-path /tmp/goaccess-all/ -
# Optional in case of problems, use something like below (from: https://goaccess.io/faq#configuration)
# LC_TIME="en_US.UTF-8" bash -c 'goaccess /var/log/nginx/access.log --log-format=VCOMBINED -o report.html'
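To automate this, one option is wrapping the full and incremental runs in two small scripts (hypothetical names below) and driving them from cron:
# /etc/cron.d/goaccess (sketch)
@reboot     root /usr/local/bin/goaccess-full.sh
*/5 * * * * root /usr/local/bin/goaccess-latest.sh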
# Dump & inspect existing config
nginx -T
# Migrate config
scp -r oldserver:/etc/nginx/nginx.conf newserver:/etc/nginx/nginx.conf # if you don't have tweaks here, you might want to keep the vanilla configuration in case upstream improved it
scp -r oldserver:/etc/nginx/conf.d/ newserver:/etc/nginx/conf.d/
scp -r oldserver:/etc/nginx/modules-available/ newserver:/etc/nginx/modules-available/
scp -r oldserver:/etc/nginx/sites-available/ newserver:/etc/nginx/sites-available/
scp -r oldserver:/etc/nginx/sites-enabled/ newserver:/etc/nginx/sites-enabled/
Migrate Certbot. The recommended install method is snap, which has some FOSS issues (being closed source). Hence we stick with apt
for now, which has an older version (1.12.0) but should be fine (I was still using 0.40.0 on my old Ubuntu server).
Alternatives:
- Use snap
- Use another client
Install the certbot client; this installs both /etc/cron.d/certbot and a systemd timer, which can be seen by running systemctl list-timers (see this explanation).
sudo apt install certbot python3-certbot-dns-gandi
Two options:
- Get new certificate with maybe new account (preferred)
- Migrate certificates
New certificates
sudo apt install certbot python3-certbot-dns-gandi python3-certbot-nginx
sudo install -m 600 -o root -g root /dev/null /etc/letsencrypt/gandi.ini # equivalent to touch && chmod 600 && chown root:root
cat << 'EOF' | sudo tee /etc/letsencrypt/gandi.ini
# live dns v5 api key
certbot_plugin_gandi:dns_api_key=APIKEY
# optional organization id, remove it if not used
certbot_plugin_gandi:dns_sharing_id=SHARINGID
EOF
# Get certificate, use old plugin syntax because debian uses an old certbot client
sudo certbot certonly -a certbot-plugin-gandi:dns --certbot-plugin-gandi:dns-credentials /etc/letsencrypt/gandi.ini -d vanwerkhoven.org -d \*.vanwerkhoven.org --server https://acme-v02.api.letsencrypt.org/directory
# IMPORTANT NOTES:
# - Congratulations! Your certificate and chain have been saved at:
# /etc/letsencrypt/live/<domain>/fullchain.pem
# Your key file has been saved at:
# /etc/letsencrypt/live/<domain>/privkey.pem
# Your certificate will expire on 2023-05-24. To obtain a new or
# tweaked version of this certificate in the future, simply run
# certbot again. To non-interactively renew *all* of your
# certificates, run "certbot renew"
# Optional: Run nginx installer to install to servers, else install manually
sudo certbot run --nginx --certbot-plugin-gandi:dns-credentials /etc/letsencrypt/gandi.ini -d vanwerkhoven.org -d \*.vanwerkhoven.org --server https://acme-v02.api.letsencrypt.org/directory
# Optional: install automatic certificate renewal (also installed by default), either explicitly using the plugin, or implicitly via settings stored in /etc/letsencrypt/renewal/<domain>.org.conf
0 0 * * 0 certbot renew -q --authenticator dns-gandi --dns-gandi-credentials /etc/letsencrypt/gandi.ini --server https://acme-v02.api.letsencrypt.org/directory # explicitly use settings
0 0 * * 0 certbot renew -q # implicitly use settings
Migrate certificates
Transfer settings/certs, something like:
ssh proteus
sudo scp -r /etc/letsencrypt/* <target>
Didn’t work this out
Deploy Let’s Encrypt certificates
@TODO figure out how to propagate the certificate safely and automatically across services.
Push the certificate to PVE, options:
- Use SSH with unencrypted public key authentication only available to specific user
- Use shared disk mount / mount point, copy new certificates there, poll daily from receiving server
cp fullchain.pem /etc/pve/nodes/pve/pveproxy-ssl.pem
cp private-key.pem /etc/pve/nodes/pve/pveproxy-ssl.key
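One option (sketch only, not set up yet) is a certbot deploy hook: certbot runs any executable in /etc/letsencrypt/renewal-hooks/deploy/ after each successful renewal, so pushing to PVE over SSH could look like:
#!/bin/sh
# /etc/letsencrypt/renewal-hooks/deploy/push-to-pve.sh (hypothetical name; assumes an SSH key restricted to this task)
scp /etc/letsencrypt/live/vanwerkhoven.org/fullchain.pem root@pve:/etc/pve/nodes/pve/pveproxy-ssl.pem
scp /etc/letsencrypt/live/vanwerkhoven.org/privkey.pem root@pve:/etc/pve/nodes/pve/pveproxy-ssl.key
ssh root@pve systemctl restart pveproxy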
Transmission
@TODO
Jellyfin
@TODO
Various / scratch
Destroy VGs
vgdisplay
vgreduce --removemissing --force pve
vgremove pve
pvdisplay
pvremove /dev/sda1
XCP-NG
- Install XCP-NG (has Linux xcp-ng 4.19.0+1)
- Install Xen Orchestra
- Fix scaling governor:
  - xenpm get-cpufreq-para
  - xenpm set-scaling-governor ondemand
  - xenpm set-scaling-governor powersave
Partitioning scheme LVM + LUKS + UEFI
I’m using LVM to merge two SSDs into one volume. Additionally I’m encrypting the volumes (maybe). If you want to boot with UEFI, you need a partition layout like:
- disk0p1: UEFI (~100MB)
- disk0p2: /boot (~1900MB)
- LVM of disk0 and disk1 (1+5TB)
  - swap (1-2x RAM, i.e. 8GB)
    - Optional: encrypted swap
  - / (remainder, ~5TB)
    - Optional: encrypted /
Encrypting volumes requires entering the passphrase upon boot. This makes sense but is highly inconvenient. Maybe I should use user-space encryption instead.
Packages / configuration
Using USB network dongle under linux
Malfunctioning USB-C dongle (RTL8152B)
- Model as identified under Windows: ‘Realtek USB FE Family’
- Label on dongle ‘USB 2.0 to fast ethernet adatper Model NO: JCX-010-LAN 100’
- Chipset Realtek RTL8152B
Base situation:
- Not recognized in dmesg
- Not recognized under lsusb
sudo apt install firmware-realtek
Does not help
Trying https://github.com/awesometic/realtek-r8152-dkms
Build DKMS module ourselves
sudo apt install linux-headers-5.10.0-16-amd64
Also did not work.
Trying drivers from GitHub (user wget), with Proxmox PVE headers and build tools:
wget https://github.com/wget/realtek-r8152-linux/archive/refs/tags/v2.16.3.20221209.tar.gz
tar xvf v2.16.3.20221209.tar.gz
cd realtek-r8152-linux-2.16.3.20221209/
apt install pve-headers-$(uname -r)
apt install build-essential
make
make install
depmod -a
update-initramfs -u
Also does not work
Configuring ASIX AX88179 USB 3.0/C ethernet adapter
Chip specs: https://www.asix.com.tw/en/product/USBEthernet/Super-Speed_USB_Ethernet/AX88179
Should work in proxmox: https://forum.proxmox.com/threads/solved-the-problem-problem-with-2-usb-network-cards-asix-ax88179.101732/
Netgate thread: https://forum.netgate.com/topic/105696/intel-nuc-with-startech-usb-gigabit-nic-chipset-asix-ax88179/2
Review of USB ethernet dongles: https://www.virten.net/2020/09/tips-for-using-usb-network-adapters-with-vmware-esxi/
Enable Thunderbolt port
Power usage
- Right after clean Debian install, no monitor, wired internet, no WLAN, SSH enabled: 3.9-4.0W
Reduce power consumption
Future work
- Check hardware acceleration inside VM https://cetteup.com/216/how-to-use-an-intel-vgpu-for-plexs-hardware-accelerated-streaming-in-a-proxmox-vm/
#networking #nextcloud #nginx #security #server #smarthome #debian #vyos #proxmox #unix