Measuring the WAL vs DB Performance Gap on Ceph USB OSDs
Nine of my fifteen Ceph OSDs use WAL-only acceleration while six use DB. I set out to measure the performance gap and discovered the real story isn't WAL vs DB — it's the USB 3.0 hardware ceiling that dominates everything. The matched-hardware comparison shows DB is 5-15% faster on reads, not the 32% that naive cross-node testing suggested.
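For anyone wanting to reproduce the comparison: a minimal sketch of benchmarking the two OSD groups separately, assuming custom CRUSH device classes and throwaway pools (every name and OSD id below is illustrative, not from my cluster).

```bash
# Tag the two groups with custom device classes (run
# `ceph osd crush rm-device-class` first if the OSDs already carry one).
ceph osd crush set-device-class usbdb  osd.0 osd.1 osd.2
ceph osd crush set-device-class usbwal osd.3 osd.4 osd.5

# One CRUSH rule and one pool per group.
ceph osd crush rule create-replicated rule-db  default host usbdb
ceph osd crush rule create-replicated rule-wal default host usbwal
ceph osd pool create bench-db  32 32 replicated rule-db
ceph osd pool create bench-wal 32 32 replicated rule-wal

# Seed data, then measure random reads on each pool.
rados bench -p bench-db  60 write --no-cleanup
rados bench -p bench-db  60 rand
rados bench -p bench-wal 60 write --no-cleanup
rados bench -p bench-wal 60 rand
```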
Hybrid Ceph Storage: SSD WAL/DB Acceleration with USB Drive Data
Running Ceph on USB drives sounds crazy until you put the WAL and DB on an SSD. Here's how separating metadata onto a Crucial MX500 transformed my 15-OSD homelab cluster from sluggish to surprisingly capable — at a fraction of all-SSD costs.
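A rough sketch of what that separation looks like at OSD-creation time, assuming you carve LVM volumes out of the SSD; the volume group, LV name, size, and device paths are illustrative.

```bash
# One DB logical volume per OSD on the shared MX500.
vgcreate ceph-db /dev/sda
lvcreate -L 30G -n db-osd0 ceph-db

# BlueStore data on the USB drive, RocksDB (which subsumes the WAL
# unless you split it out separately) on the SSD volume.
ceph-volume lvm create --bluestore \
    --data /dev/sdb \
    --block.db ceph-db/db-osd0
```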
When ZFS and Ceph Problems Collide: Diagnosing Overlapping Failures on Proxmox
A routine ZFS scrub alert on harlan turned into a multi-hour debugging session when a hostid mismatch fix collided with a pre-existing Ceph OSD failure from a dead USB drive. Here's how overlapping storage problems can mask each other and how to untangle them.
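The hostid half of that tangle in miniature; the pool name is illustrative, and `zgenhostid` wants `-f` if /etc/hostid already exists.

```bash
# ZFS refuses the pool because /etc/hostid no longer matches the id
# recorded in the pool labels.
zpool import            # complains: "pool was previously in use from another system"
zgenhostid -f           # rewrite /etc/hostid
zpool import -f tank    # force-import once so the new hostid gets recorded
```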
Ceph OSD Recovery After Power Failure: SAN Switch Was Dead the Whole Time
A power outage knocked my Ceph cluster from 15 healthy OSDs down to 4. The recovery took days of debugging — heartbeat cascades, a ceph.conf misconfiguration, and a dead SAN switch hiding behind NO-CARRIER flags on every node.
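The check that finally pointed at shared infrastructure rather than per-node NICs, sketched here; the interface name is illustrative.

```bash
# NO-CARRIER on the storage interface of *every* node at once implicates
# the switch, not fifteen simultaneous NIC failures.
ip -br link show                          # flags column shows NO-CARRIER
ethtool enp3s0 | grep 'Link detected'     # "no" with the cable plugged in = upstream problem
```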
USB Drive SMART Updates: Fast-Track to the GRUB Solution
New Seagate USB drives arrived for the Ceph cluster and predictably won't report SMART data. After months of production use, GRUB boot parameters are the only method that reliably survives kernel updates and cluster reboots.
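The shape of that GRUB fix, hedged: the usual lever is the usb-storage UAS-ignore quirk on the kernel command line, and the product ID below is a placeholder to fill in from `lsusb` (0bc2 is Seagate's vendor ID).

```bash
# In /etc/default/grub, append the quirk to the kernel command line so the
# enclosure falls back from UAS to plain usb-storage, which smartctl can query:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet usb-storage.quirks=0bc2:xxxx:u"
update-grub
reboot
smartctl -a /dev/sdb     # should now report SMART attributes
```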
Homelab Storage Economics: Ceph vs Single Drive Costs
Real-world cost-per-GB analysis of distributed Ceph storage versus single-drive solutions in a homelab — the same infrastructure investment framework applies at any scale.
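A toy version of that framework as a one-liner; every number here is hypothetical, including the 3x replication divisor.

```bash
# Cost per usable GB = total spend / (raw capacity / replication factor).
awk -v drives=15 -v unit_cost=50 -v raw_tb_each=5 -v replicas=3 'BEGIN {
    raw    = drives * raw_tb_each
    usable = raw / replicas
    printf "raw %d TB, usable %.1f TB, $%.3f per GB\n",
           raw, usable, (drives * unit_cost) / (usable * 1000)
}'
```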
Optimizing Ceph Performance in Proxmox Homelab
Performance tuning Ceph on USB storage and constrained hardware — mClock configuration, IOPS optimization, and the realities of USB 3.0 as a storage tier.
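One concrete mClock knob from that territory, as an illustration rather than a recommendation; the profile choice is a per-workload judgment call.

```bash
# Switch OSDs to the mClock scheduler (needs an OSD restart) and bias it
# toward client I/O over background recovery.
ceph config set osd osd_op_queue mclock_scheduler
ceph config set osd osd_mclock_profile high_client_ops
ceph config show osd.0 | grep mclock      # confirm what a daemon actually runs
```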
Managing Ceph Nearfull Warnings in Proxmox Homelab
Ceph's nearfull warnings are capacity planning signals, not emergencies, provided you understand the thresholds and respond before the cluster goes read-only.
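The thresholds in question, with their stock defaults; raising them buys time but adds no capacity, so treat any bump as a stopgap while you add OSDs or delete data.

```bash
ceph osd dump | grep ratio            # current nearfull/backfillfull/full ratios
ceph osd set-nearfull-ratio 0.85      # warning threshold (default 0.85)
ceph osd set-backfillfull-ratio 0.90  # OSDs refuse backfill (default 0.90)
ceph osd set-full-ratio 0.95          # writes stop cluster-wide (default 0.95)
```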
Proxmox 8 Lessons Learned in the Homelab
Hard-won lessons from running Proxmox from version 7.4 through 8.2.2 — upgrade gotchas, Ceph integration tips, and the patterns that apply at any scale.
Adding Ceph Dashboard to Your Proxmox Cluster
The Ceph Dashboard is essential for monitoring cluster health without SSH, but setting it up on Proxmox isn't straightforward. Here's how to get it working with SSL and proper authentication.
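The broad strokes of the CLI path, for orientation; on Proxmox the dashboard module ships as a separate Debian package, and the username and password file below are illustrative.

```bash
apt install ceph-mgr-dashboard
ceph mgr module enable dashboard
ceph dashboard create-self-signed-cert            # SSL without a real CA
echo -n 'change-me' > /root/dash-pass
ceph dashboard ac-user-create admin -i /root/dash-pass administrator
ceph mgr services                                 # prints the dashboard URL
```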
Upcoming Articles Roadmap: September–December 2025
A publishing schedule for the remaining 16 articles planned through the end of 2025, covering Proxmox, Ceph, hardware monitoring, CLI tools, and Jekyll optimization.
Complete Ceph Cluster Removal on Proxmox for the Homelabs
How to completely remove a broken Ceph cluster from Proxmox — every service, package, config file, and orphaned mount point — so you can start fresh.
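A compressed and destructive sketch of the teardown order, so only run it on a cluster you truly want gone; the package list is illustrative, and `pveceph purge` covers the Proxmox-side config.

```bash
systemctl stop ceph.target
umount /var/lib/ceph/osd/ceph-*      # orphaned OSD mounts, if any survived
pveceph purge                        # strips the Proxmox-managed Ceph config
apt purge -y ceph-mon ceph-mgr ceph-osd ceph-mds
rm -rf /etc/ceph /var/lib/ceph       # leftover keyrings, configs, OSD dirs
```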
Proxmox 8.2.4 Upgrade on Dell Wyse 3040s
Ceph Monitor refuses to start after a Proxmox 8.2.4 upgrade because root partitions hit 95% — fixed by clearing the apt cache, removing atop logs, and purging old PVE and Debian kernels.
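The three reclaims from that fix, roughly in payoff order; the kernel version string is illustrative, so list what is installed before purging, and never purge the running kernel.

```bash
apt clean                                        # drop cached .deb files
rm -f /var/log/atop/atop_*                       # months of atop daily logs
dpkg -l | grep -E 'pve-kernel|proxmox-kernel'    # see which kernels are installed
apt purge -y pve-kernel-6.2.16-3-pve             # illustrative: an old, non-running kernel
df -h /                                          # confirm the root partition recovered
```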
Proxmox Ceph settings for the Homelab
Tuning Ceph scrub and deep-scrub intervals to reduce wear on spinning rust drives in a homelab cluster — spreading daily scrubs over 7 days and weekly deep scrubs over 28 days.
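Those intervals expressed as cluster-wide config, in seconds; the 7- and 28-day values are the ones described above, while the 14-day ceiling between shallow scrubs is an illustrative choice.

```bash
ceph config set osd osd_scrub_min_interval 604800     # shallow scrubs: no sooner than 7 days
ceph config set osd osd_scrub_max_interval 1209600    # ...and no later than 14
ceph config set osd osd_deep_scrub_interval 2419200   # deep scrubs: every 28 days
```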
Proxmox 8.2.2 Cluster on Dell Wyse 3040s
Building a three-node Proxmox 8 + Ceph test cluster on Dell Wyse 3040 thin clients to safely evaluate SDN and Ceph configuration changes without risking the semi-production cluster.
Hard Drives for the Homelabs
Storage has hit a penny per GB. A 20TB renewed drive for $200 and a 20TB USB-C enclosure for $220 make the math work for a Ceph offsite backup strategy.
Proxmox 8.2 for the Homelabs
Building a Proxmox 8 + Ceph HA cluster on decade-old Dell OptiPlex 990 hardware with Seagate USB drives — the background, the hardware inventory, and why Ceph beat a Synology NAS.
Ceph Cluster Rebalance Issue
Fixing a severely imbalanced Ceph cluster where OSDs added in batches left the first three drives at 75-85% while newer ones sat nearly empty — by tuning backfill and recovery parallelism.
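The parallelism knobs in question, sketched; on releases using the mClock scheduler the override flag must be set before these values take effect, and the numbers are illustrative.

```bash
ceph config set osd osd_mclock_override_recovery_settings true
ceph config set osd osd_max_backfills 2          # concurrent backfills per OSD
ceph config set osd osd_recovery_max_active 4    # concurrent recovery ops per OSD
ceph -w                                          # watch the misplaced-object count drain
```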