Homelab Part 1: ORIGINS

During the first COVID lockdown, I found a new way to occupy myself and my wallet: I was going to learn how the internet works or go broke trying.

That was definitely not my original plan, but it’s what ended up happening, and it led to the rise and fall of my homelab & private cloud.

A homelab is any number of computers or servers that create an environment to build and test software – a virtual sandbox where you can play technological god and create or destroy virtual machines at will. Basically it’s a platform for learning. The name comes from the 500k-strong community of amateurs and sysadmins alike on r/homelab; if you break it, you can fix it. No need to worry about the threat of angry users or expensive downtime – bar the few family members you’ve convinced to use it.

Thanks to days of my life dedicated to my silly little server farm, I’ve practiced lifecycle management, containerization on Docker and Kubernetes, data migration, upgrades, downgrades, frontends, backends and most things in between. I am my own Infrastructure Management team. 

Naturally, my obsession started small… 

PHASE 1:

Rockstor Homelab V1
Homelab V1.0 circa 2015, not sure why I had so many switches.

The server journey actually began well before C19, around 2015, when I built my first NAS from a jumble of old computer parts. I’m not sure what my motivation was, but it was fun to stream my games over the network and put my new gigabit switch through its paces. It boasted a blazingly fast AMD Athlon CPU, 4GB RAM and two Seagate 2TB HDDs. It ran Rockstor with a simple mirror across the drives. Not exactly enterprise grade, but it gave me a taste. I justified 24/7 operation in my parents’ loft under the guise of backups for the various computers in the house – which to my credit it did perfectly, and came in very handy when the HDD in an iMac failed. It sat in the loft silently backing up for the next 4 years.

PHASE 1.1: A SLIGHT UPGRADE

Fast forward to late 2019 and I had the server itch once again. I scrounged a third 2TB HDD and migrated to TrueNAS so I could make a ZFS RAIDZ1 pool. I had absolutely no idea what any of that meant at the time, and there was no real reason for it apart from leaving school and suddenly being bored. A few months passed and I did some gap year-y things before being grounded due to C19. On top of my love of everything rackmount and itchy feet about the state of the current server, there was now a need to edit lots of photos for Tom, Dick & Harry. I bought a small Dell PowerEdge R310 on eBay with a view to creating a solid and redundant vault for all our data.
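For anyone as lost as I was back then, the arithmetic behind RAIDZ1 is simple enough to sketch in a few lines of Python. These are my own rough numbers (ignoring TB-vs-TiB and metadata overhead), not anything measured on the actual pool:

```python
# RAIDZ1 back-of-the-envelope: N drives, roughly one drive's worth of parity.
drives = 3        # the two original 2TB disks plus the scrounged third
size_tb = 2       # nominal size of each disk

raw_tb = drives * size_tb              # 6 TB of raw space
usable_tb = (drives - 1) * size_tb     # ~4 TB once parity is accounted for

print(f"raw: {raw_tb} TB, usable: ~{usable_tb} TB, survives one drive failure")
```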

PHASE 2: A SERVER! A REAL SERVER!

my first real server
My first proper server, 2019. Dual power supplies and all.

It didn’t have much horsepower, nor was it efficient… but it had 4 hot-swap 3.5” bays with blinky lights, which I thought was just the best thing ever. By this point I had learnt more about ZFS and was really keen on using it for all the data protection it provides. The server came with the upgraded PERC H700 RAID controller, which was nice, but ZFS much prefers direct access to the drives through an HBA. The usual recommendation is that if you must use a RAID card, you put each drive in its own single-disk RAID0 array and let ZFS do its magic from there. The danger with this approach is that if the RAID card fails, you need a replacement of the same model to import the arrays before TrueNAS can see the drives again. Another downside is that if you want to migrate to a different system – an HBA instead, for instance – the drives aren’t readable by anything but the RAID card, so if you want to keep your data you need a separate system to copy it to.
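If you’re wondering what “direct access” actually looks like in practice, one rough test is whether the OS can pull SMART data straight from each disk. This is just a sketch of my own, not something from the TrueNAS docs: the device names are hypothetical FreeBSD-style ones and you’d need smartmontools installed.

```python
# Rough check: can the OS talk to the bare drives, or only to a RAID card's
# virtual disks? Behind a PERC you typically need "-d megaraid,N" instead,
# which is exactly the situation ZFS would rather avoid.
import subprocess

DISKS = ["da0", "da1", "da2", "da3"]   # hypothetical names for the four bays

for disk in DISKS:
    dev = f"/dev/{disk}"
    result = subprocess.run(["smartctl", "-H", dev],
                            capture_output=True, text=True)
    if "PASSED" in result.stdout or "OK" in result.stdout:
        print(f"{dev}: SMART readable, so ZFS is seeing the bare drive")
    else:
        print(f"{dev}: no direct SMART, probably hiding behind a RAID controller")
```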

I could have bought an HBA with internal SAS ports as a drop-in replacement for the H700, but they were a little pricey and I was still naive – so I wasn’t 100% sure it would work (it would have). Since I was using consumer SATA drives, I instead opted for a less robust route, using SATA → SAS cables to connect the hot-swap backplane to the built-in SATA ports on the R310’s motherboard. It worked as expected and gave TrueNAS direct drive access, but it still made me uncomfortable since I assumed the on-board controller was really only meant for boot devices with little I/O. Of course it’s still enterprise gear, so I’m sure it would have been fine for a long time, but it just made me nervous about a failure. Since I now had direct drive access, a failed SATA port wasn’t actually a terrible issue, but I think I was warming myself up to the idea of another upgrade. I installed a few plugins via the TrueNAS web UI like Nextcloud, Plex and MineOS and was happy for a while.

PHASE 3: UNLIMITED POWER(VAULT)

So there were still those SATA → SAS cables, ever present at the back of my mind. I was also looking to the future and knew I’d need (I didn’t need anything) to expand at some point, and I only had 4 bays in the R310. I knew the NetApp disk shelves were a popular option, but consequently they were expensive; so I scoured eBay for a while and found some alternatives. I had the choice of a Dell PowerVault MD1000 or an EMC VNX array. Both were similarly priced (very cheaply at the time, since Chia Coin mining wasn’t around yet), but while researching the EMC I found some stories with scary words like ‘interposers’ and ‘520b sector sizes’. I had absolutely no clue what they meant, but I wasn’t going to waste any money finding out.

I knew I could get the MD1000 to work because I’d seen a pixelated Russian video from 2012 of it working, with the OS seeing the individual drives. Perfect! That was enough evidence for me. The PowerVault was snapped up and promptly arrived on my doorstep. I’d later wish I’d chosen the EMC, but hindsight is a blessing and it still works to this day… so I can’t complain. It even came with all 15 caddies.

my first disk shelf :)
My MD1000 in action

Now the only issue was connecting it to my R310 in a way that would allow ZFS direct drive access. The controller on the PowerVault used SAS SFF-8470, but there are basically no HBAs with SFF-8470 connectors since, at the time of the MD1000’s release, it was RAID or GTFO. Even if I could find one it would be more than 10 years old and needlessly expensive. So I took a risk and bought a Dell PERC H200 and an SFF-8088 to SFF-8470 cable (I did my research and figured out that’s what the Russians were probably using (espionage)).

The H200 was cheap and plentiful, and I’d read that it could quite easily be crossflashed to LSI 2008 firmware, which turns it into an HBA-only card. As for the cable, I confirmed the different connectors use the same communication standard – so they’re cross compatible apart from the speed, which is limited to the lowest common denominator (3Gb/s per lane on 8470 compared to 6Gb/s on 8088). I bought the H200 pre-crossflashed and the whole thing miraculously worked out of the box. I installed the drives from the R310 and they were imported by FreeNAS like nothing had changed. Great Success!
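The speed hit sounds worse than it is. Both connectors carry four SAS lanes; the link just trains at the older 3Gb/s per-lane rate. A quick back-of-the-envelope, using my own figures rather than anything I measured:

```python
# Rough throughput of a 4-lane SAS 1.0 wide port after 8b/10b encoding overhead.
lanes = 4
line_rate_gbps = 3.0       # SAS 1.0 per-lane rate at the SFF-8470 end
encoding = 8 / 10          # 8b/10b line coding eats 20% of the raw rate

usable_mb_s = lanes * line_rate_gbps * encoding * 1000 / 8
print(f"~{usable_mb_s:.0f} MB/s across the wide port")   # ~1200 MB/s

# Fifteen spinning disks could in theory outrun that sequentially, but over
# gigabit Ethernet (~125 MB/s) the cable was never going to be my bottleneck.
```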

It’s worth mentioning that the fans on the MD1000 are stupidly loud. So loud that I had to move the operation from the loft to the garage, which at least had a brick wall between it and the rest of the house. Even with this you could still hear a faint whine in the middle of the night if you listened hard. To me, it was the sound of progress… to my parents it was the sound of the energy bill rising. This is the main reason I wish I’d chosen the EMC. The fans alone must consume at least 40W at idle, and I’m sure the dual power supplies designed for 15 x 15K SAS disks are horribly inefficient too. To this day I can only guess at the power consumption, and fear keeps me from buying a power meter. Out of sight, out of mind.
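Since I refuse to buy a power meter, here’s the kind of guess I do in my head instead. Every number below is assumed (both the idle draw and the unit price), but it shows why the energy bill comment stings:

```python
# Pure guesswork: annual running cost of a disk shelf idling 24/7.
idle_watts = 250          # assumed: PSUs + fans + 15 drives ticking over
price_per_kwh = 0.30      # assumed GBP per kWh; check your own tariff
hours_per_year = 24 * 365

kwh_per_year = idle_watts / 1000 * hours_per_year
print(f"{kwh_per_year:.0f} kWh/year, roughly £{kwh_per_year * price_per_kwh:.0f} a year")
```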

I even tried to tamper with the fans and unplug one on each PSU, but it’s surprisingly smart and ramps up the remaining fans to compensate. With all the fans on full blast you have to shout over it. Stupid.

So now my storage situation was bulletproof!

PHASE 4: ENTER THE HYPERVISOR

the homelab grows
Grandma’s hypervisor pictured running pfSense

Now that we were well into the age of Zoom, I managed to convince Grandma that a laptop would be much more convenient than her clunky desktop. I feel slightly guilty about that, as she asked for her old one back after a while, but in the meantime it gave me something to play around with. Initially I loaded pfSense and installed an Intel 1000 quad-port network card so I could use it as a router. Great fun to see all the metrics, although the default state of pfSense is locked down tight. Perfect for a business that wants to protect its network, but not great for a home environment where various games and other devices want UPnP and a more open network. I tried configuring it to be open, but couldn’t get it quite right, and my brother was becoming increasingly pissed that he couldn’t play certain games.
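For what it’s worth, the UPnP side is easy to sanity-check from any machine on the LAN. This sketch uses the third-party miniupnpc Python bindings (not something I used at the time, purely an illustration) to see whether the router will hand out a port mapping the way a console or game expects:

```python
# Quick UPnP sanity check using the miniupnpc bindings (pip install miniupnpc).
import miniupnpc

upnp = miniupnpc.UPnP()
upnp.discoverdelay = 200            # ms to wait for devices to answer
found = upnp.discover()             # broadcast an SSDP search on the LAN
print(f"found {found} UPnP device(s)")

upnp.selectigd()                    # pick the Internet Gateway Device (the router)
print("external IP:", upnp.externalipaddress())

# Ask for a temporary mapping the way a game would (25565 = Minecraft, fittingly).
ok = upnp.addportmapping(25565, "TCP", upnp.lanaddr, 25565, "homelab test", "")
print("port mapping added" if ok else "router refused the mapping")
```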

Now one of the most elusive and frustrating issues I have faced emerges.

After a honeymoon with pfSense, a messy divorce was on the horizon. Out of nowhere I started getting massive packet loss and latency spikes on the order of 5 seconds. The internet was unusable. I tried absolutely everything, thought I’d found the problem several times and was then bitterly disappointed when it persisted. After many broadband quality monitors, forum discussions and even a new modem from Virgin Media, the problem was still there. pfSense had to go. However, its return was eagerly anticipated.

So now I had an extra computer that I didn’t know what to do with. Virtual machines are interesting… but what do I use them for? I have no idea, but let’s give it a go. I’d been watching Tom Lawrence on YouTube, who is a big advocate of the open source fork of the Citrix hypervisor: XCP-NG. I installed it on Grandma’s desktop and started having a play. I think the first thing I did was make a Nextcloud server from scratch on Ubuntu. Of course I was only copying and pasting commands into the terminal, but this was a massive jump: using the command line for the first time. Did I know what I was doing? Absolutely not. At first I thought you had to install three databases (I didn’t even know what a database was), and it was only on about the third time I rebuilt Nextcloud from scratch that I realised you had to pick just one. Nevertheless, it worked (SSL and all) and I desperately tried to get people to use it so I could load test it (no takers). I then worked out that I could use the R310 as a storage backend for virtual machines, but couldn’t do much else as I’d already run out of RAM. Grandma’s hypervisor sat running for a few months before it was agreed that I’d switch her back from her new laptop.
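If you’re about to make the same mistake I did: Nextcloud wants exactly one database backend (SQLite, MySQL/MariaDB or PostgreSQL), chosen once at install time. Roughly what that step looks like when scripted via its occ tool, with every path and credential below being a placeholder rather than anything from my actual setup:

```python
# Sketch of an unattended Nextcloud install via occ. Note the single
# --database flag: you pick one backend, not three. All values are placeholders.
import subprocess

subprocess.run(
    [
        "sudo", "-u", "www-data", "php", "/var/www/nextcloud/occ",
        "maintenance:install",
        "--database", "mysql",            # one of: sqlite, mysql, pgsql
        "--database-name", "nextcloud",
        "--database-user", "nc_user",
        "--database-pass", "change-me",
        "--admin-user", "admin",
        "--admin-pass", "change-me-too",
    ],
    check=True,
)
```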

Of course I was “too busy” to switch her back until I had found a suitable replacement for the hypervisor.

PHASE 5: AN UNSUITABLE REPLACEMENT

My growing stack of servers, icecap melters at the bottom.

I thought it might be a good idea to email all the IT companies in the area and ask if they had any old enterprise gear they’d removed from client installs. Lo and behold, one of them did!

I was now the lucky owner of two iceberg melters – a Dell PowerEdge 2950 and 1950. The 1950 was a lost cause, with only one low-power potato for a CPU and 2 GB of memory. The 2950 had similar specs, but had room for improvement. I bought a second matching CPU and 24GB of DDR2 memory (yeah… circa 2006). A dinosaur in enterprise terms.

The latest XCP-NG wouldn’t even run on it because it was so old, but I was curious to try another hypervisor anyway, so I decided to give VMware ESXi a go. The latest version also wouldn’t run on my space heater, but a slightly older release was certified to work. I exported the Nextcloud VM from XCP-NG and got it up and running on the 2950. It was a little slow for sure. I even tried installing a Windows VM on it to see what it could do, but it was hopeless. There were no two ways about it: it was seriously lacking in horsepower. It was also consuming 350W. The garage was now the hottest room in the house by far, which was not sustainable in the slightest.

I even built a makeshift server rack out of scrap wood to try and control the heat and noise, but it didn’t help. It did look slightly cooler now though!

This went on for a month or so, but I was looking for a successor all the while. Eventually I came across a Dell R610 for £70. I drove an hour up the M11 to get my server and ended up coming home with two, for the bargain price of £100.

the r610s!
1 of 2 big boy servers!

Now we enter the new age of the homelab! More ramblings coming in part 2.
