McGarrah Technical Blog

Tailscale on Dell Wyse 3040 with Debian 12

I have been using Dell Wyse 3040s as the Tailscale nodes joining my homelab networks together. These awesome little systems draw very little power and are physically small enough to just plug in and go. Truly, deploying a WireGuard®-based VPN solution could not be any easier. I have four of these units connecting my homelab networks across three geographically diverse locations.
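To give a sense of how little setup each node needs, here is a minimal sketch using Tailscale's standard Debian install script; the subnet in --advertise-routes is a placeholder, not my actual network, and the subnet-router bits are only needed if the box routes for the whole LAN:

```shell
# Install Tailscale on Debian 12 with the official convenience script
curl -fsSL https://tailscale.com/install.sh | sh

# Bring the node up; --advertise-routes makes this box a subnet router
# for the local homelab network (192.168.50.0/24 is a placeholder)
sudo tailscale up --advertise-routes=192.168.50.0/24

# Subnet routers also need IP forwarding enabled
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf
```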

macOS Touch ID for sudo with tmux and DisplayLink

This is a bit of an out-of-place post, but I figured that if setting up Touch ID with sudo on my MacBook Pro stumped me, it would trip up others too, so it was worth a quick write-up. It is also worth having around for when I get a new MacBook Pro in the future.

So to start, I use a MacBook Pro with an M2 Pro as my daily driver machine at work. It is the closest I can get to a Linux machine in the office. I use sudo frequently enough that I liked the idea of Touch ID rather than typing a password each time. I encountered a couple of hiccups along the way with tmux, iTerm2, and DisplayLink that had to be fixed.
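For reference, the tmux side of the fix ends up looking roughly like this sketch, which assumes macOS Sonoma's /etc/pam.d/sudo_local override mechanism, a Homebrew-installed pam-reattach, and the Apple Silicon Homebrew paths (older macOS releases require editing /etc/pam.d/sudo directly):

```shell
# pam_reattach lets the Touch ID prompt reattach to the GUI session
# from inside tmux; without it, sudo falls back to a password prompt
brew install pam-reattach

# On macOS Sonoma and later, local PAM overrides go in sudo_local,
# which survives OS updates
sudo tee /etc/pam.d/sudo_local <<'EOF'
# pam_reattach must come before pam_tid; the module path below is
# the Apple Silicon Homebrew default
auth       optional       /opt/homebrew/lib/pam/pam_reattach.so
auth       sufficient     pam_tid.so
EOF
```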

Proxmox Ceph settings for the Homelab

What is Ceph? Ceph is an open-source software-defined storage system designed and built to address the block, file, and object storage needs of a modern homelab. Proxmox Virtual Environment (PVE) makes the initial configuration and setup of a hyper-converged Ceph cluster relatively easy.

Why would you want a hyper-converged storage system like Ceph? So that the PVE cluster running your virtual machines and Linux containers has a highly available shared storage service, which makes those guests portable between nodes and thus turns them into highly available services.

There is a significant learning curve in understanding how the pieces of Ceph fit together, and the Proxmox documentation does a decent job of helping you along. Proxmox VE sets some decent defaults for the Ceph cluster that are good for an enterprise environment. What those defaults do not do is reduce wear and load on homelab hardware. This is where I am going to try out a few things to reduce load and wear on my homelab equipment while maintaining a relatively high-availability environment.
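As a taste of the kind of knobs involved (these are illustrative values, not necessarily the settings this post lands on), scrub scheduling and the OSD memory target are the usual first candidates:

```shell
# Deep scrubs are the heaviest recurring I/O; stretching the interval
# from the default 7 days to 14 cuts disk wear roughly in half
ceph config set osd osd_deep_scrub_interval 1209600

# Confine regular scrubs to quiet overnight hours
ceph config set osd osd_scrub_begin_hour 1
ceph config set osd osd_scrub_end_hour 6

# The default OSD memory target is 4 GiB; small homelab nodes often
# need it lower (value is in bytes, 2 GiB here)
ceph config set osd osd_memory_target 2147483648
```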

My earlier post on a Ceph cluster rebalance issue came from figuring out an unbalanced cluster after an unusual data load. This post is focused on a regular running cluster that needs some optimization for the homelab.

Proxmox 8.2.2 Cluster on Dell Wyse 3040s

I want a place to test and try out new features and capabilities in Proxmox 8.2.2 SDN (Software-Defined Networking). I would also like to be able to test some riskier Ceph cluster configuration changes. I do not want to do either on the semi-production, Ceph-enabled Proxmox 8.2.2 cluster that I have mentioned in earlier posts. With 55 TiB of raw storage, 29 TiB of it loaded up with content, that cluster would be painful to rebuild or reload if I made a mistake while testing SDN or Ceph capabilities.
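For the SDN side, the experiments are along these lines, sketched with pvesh against the PVE 8 SDN API; the zone and vnet names are made up for illustration:

```shell
# Create a throwaway "simple" SDN zone and a vnet inside it
pvesh create /cluster/sdn/zones --zone testzone --type simple
pvesh create /cluster/sdn/vnets --vnet vnet0 --zone testzone

# SDN changes are staged in /etc/pve/sdn and only take effect
# once applied cluster-wide
pvesh set /cluster/sdn
```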

Test in Prod, what could go wrong?
