Self-Hosting my own Cloud Storage: FreeNAS, Nextcloud, and Tailscale

Until recently, I’ve never really felt the urge to access the contents of my DIY NAS from outside of my own network. The way that I’ve used my NAS, it has been simpler to use services like Google Drive to have access to my most critical data on my various computers, tablets, and phones.

Between COVID-19 and my new job being 100% remote, I have spent nearly all of my time on my own network. If anything, I have less of a need to access my NAS’s contents remotely. However, I’m hopeful about our vaccination efforts and I’m cautiously optimistic there might be light at the end of that tunnel. My growing catalog of content (primarily video footage) has long since eclipsed what can be stored on Google Drive or Dropbox—and even if I could buy the space, I’d much rather invest the cash into improving my NAS!

In my blog about implementing Tailscale at home, I installed Tailscale on my OpenWRT router and used Tailscale’s relay node feature to allow my other Tailscale nodes to use all of the resources on my network. But SMB’s performance over the Internet isn’t that great, and I wanted to make sure that important content—like my blog—continued to be synchronized across all of the machines that I’d want to be able to access it from.

I wound up deciding that I’d try and host my own cloud storage using Nextcloud and access it via Tailscale. This decision was instigated by the shift in what I was asking of my own DIY NAS and encouraged by the numerous questions I’ve been asked about self-hosting cloud storage over the years.

My DIY NAS is running FreeNAS-11.2-U8, has an Avoton C2550 CPU, 32GB of RAM, and a 10Gb NIC. Everything in this article was written in the context of using my NAS. I would expect that the same—or similar—steps would work with different versions of FreeNAS (now known as TrueNAS Core), but your mileage may vary.

Please share your experiences with different versions in the blog’s comments!

Plugin vs. Jail vs. Virtual Machine

The most difficult decision I made was whether to use the Nextcloud plug-in for FreeNAS/TrueNAS, to create a FreeBSD jail, or to host it within a Bhyve virtual machine. Each option had its own benefits and drawbacks, and as I explored this topic, I experimented with all three.

  • Nextcloud Plug-in: Setting up the plug-in for FreeNAS/TrueNAS was incredibly easy; I had Nextcloud up and running in mere moments. I even briefly exposed my Nextcloud instance to the Internet through my router. I had hoped it would be a simple task (for me) to add the Tailscale client to the jail created by the Nextcloud plug-in, but I quickly learned that wouldn’t be the case.
  • FreeBSD Jail: Having run into challenges tinkering with the Nextcloud plug-in, I figured I could just install and host my own Nextcloud alongside Tailscale from inside a jail on my NAS. Setting up Nextcloud in this jail was easy, but the Tailscale client wound up being difficult. Tailscale would crash any time I launched it, and I wasn’t having much success debugging it on my own, or finding helpful information to stumble through resolving it. Unfortunately for me, I’m simply not a savvy enough FreeBSD user to get Tailscale functioning.
  • Bhyve Virtual Machine: I would have preferred running Nextcloud and Tailscale in a FreeBSD jail, mostly because it’s less resource-intensive. But after not having much luck, I ultimately decided that I would use the Bhyve hypervisor to host a virtual machine running Ubuntu 20.04. I am much more familiar with Linux than FreeBSD, and I’m infinitely more capable of finding answers to the questions I run into, since Ubuntu is fully supported by both Tailscale and Nextcloud.

Why is Brian hosting the Nextcloud Virtual Machine on his DIY NAS instead of his Homelab server?

This is a good question! All things being equal, I would choose to host Nextcloud on my homelab server. But I’ve made exceptions to this rule before! For example, there are things (Home Assistant, Octoprint, etc.) in my house running on Raspberry Pis that probably should be running in a virtual machine on my Homelab server instead.

Ultimately, the reason my Nextcloud VM is running on my DIY NAS is: all of you! I’m quite grateful that so many people find their way to my blog when researching various DIY NAS topics. I know that hosting your own cloud storage is a huge point of interest in the NAS community, so it made sense to do it in a way that a fellow NAS enthusiast might want to follow!

Besides, I routinely advocate building over-the-top DIY NAS machines with more processing power than a NAS might need. Using that extra CPU power to host a cloud storage VM is a great way to leverage that extra capability.

Creating a Virtual Machine to run Nextcloud and Tailscale

On my DIY NAS, I logged into the FreeNAS interface and created a virtual machine. I allocated the VM a single virtual CPU and 512MB of RAM, created a new 64GB disk image, pointed it at the Ubuntu 20.04 ISO, and started the virtual machine up.

Using the FreeNAS UI’s built-in VNC viewer, I attempted to begin the installation and immediately ran into this error:

Initramfs unpacking failed: write error
Failed to execute /init (error -2)
Kernel panic - not syncing: No working init found. Try passing init= option to kernel. See Linux Documentation/admin-guide/init.rst for guidance.

The culprit was the fact that I had allocated only 512MB of RAM (the minimum suggested for Nextcloud), while Ubuntu Server’s minimum hardware requirements call for a gigabyte of RAM. Increasing the virtual machine’s RAM to a gigabyte got me past this error.

For the most part, I simply paged my way through the Ubuntu 20.04 Server setup, picking mostly default options, naming my server nextcloud, and setting up my username and password. On the Featured Server Snaps screen, I was delighted to see an option for Nextcloud, but as a result of my experimentation I decided to skip installing the Nextcloud Snap at this point.

The installation rebooted the virtual machine; after the reboot completed, the VM was a little annoyed with me, saying that it wasn’t able to unmount the CDROM and telling me to eject the installation media. My solution was probably a bit brutish: I shut down the virtual machine and deleted the CDROM device entirely. Crude, perhaps, but very effective.

I started the Virtual Machine up one more time, confirmed that I was able to log in via FreeNAS’s built-in VNC client, and then switched over to using SSH for the remainder of my work.

Screenshots: choosing the VM wizard type; guest OS, server name, and boot method; allocating virtual CPUs and RAM; creating a virtual disk image; virtual NIC setup; mounting the boot media; first boot of the VM in the VNC viewer.

Installing and Configuring Tailscale on the Nextcloud VM

On my freshly installed Ubuntu 20.04 virtual machine, the first thing that I set out to do was to get Tailscale up and working. As I had learned from my prior experience with setting up Tailscale on other devices, it was incredibly easy.

I simply followed the few steps from the Setting up Tailscale on Ubuntu 20.04 LTS (focal) documentation and my new virtual machine was up and running without any fuss.
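For reference, those steps boil down to just a couple of commands. This is a sketch; Tailscale’s install script and package repositories change over time, so defer to their current documentation:

```shell
# Tailscale's documented one-line installer for Ubuntu/Debian
curl -fsSL https://tailscale.com/install.sh | sh

# Start the client; this prints a login URL that associates
# the machine with your Tailscale account
sudo tailscale up
```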

Set up a ZFS Dataset and NFS Share on my NAS to house my Nextcloud Data

My intention all along was to create a dataset on my DIY NAS, share it with the Nextcloud VM, and configure Nextcloud to store all of its data on that shared path, primarily because what I valued was the data being stored and not the virtual machine itself. In the long run, the dataset saves me storage space (for example, the space wasted by over-allocating a virtual hard drive, plus the administrative work of growing that virtual hard drive as Nextcloud accumulates data), it enables me to set up ZFS snapshots for my Nextcloud data, and I’m poised to include this Nextcloud dataset in what I’m already backing up to Backblaze B2. Here’s what I did on the NAS:

  1. Created a user and group to own the Nextcloud data directory.
  2. Created a new ZFS dataset on my NAS for the Nextcloud data.
  3. Set the permissions on the ZFS dataset, setting the User and Group to match what was created in the prior step.
  4. Created an NFS share, pointed it at the Nextcloud dataset’s path (/mnt/volume1/nextcloud), restricted it to the Nextcloud VM’s hostnames (nextcloud and nextcloud.lan), and set the Mapall User and Mapall Group to the user and group created in the first step.
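Under the hood, FreeNAS translates that share into FreeBSD’s /etc/exports format; the end result looks roughly like this (the paths, hostnames, and the nextcloud user/group are specific to my setup):

```
/mnt/volume1/nextcloud -mapall=nextcloud:nextcloud nextcloud nextcloud.lan
```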

Screenshots: setting up the user/group; adding the ZFS dataset; setting dataset permissions; creating the NFS share.

Mount the NFS share to the Nextcloud Virtual Machine

Having created the dataset and NFS share on the NAS, I swapped over to the Nextcloud VM to mount the share. I went through a few different iterations before landing on these steps. I wound up learning that the local directory for mounting the NFS share needed to live under /mnt or /media, thanks to how Snap confinement works.

  1. Installed the nfs-common package on the Nextcloud VM (sudo apt update and sudo apt install nfs-common)
  2. Created a new Nextcloud data directory (mkdir -p /mnt/nextcloud) on the Nextcloud VM.
  3. Changed ownership of the new Nextcloud data directory to root (sudo chown -R root:root /mnt/nextcloud) on the Nextcloud VM.
  4. Changed Permissions on the new Nextcloud data directory (sudo chmod 0770 /mnt/nextcloud)
  5. (Optional) Confirmed that the new NFS share existed, could be mounted, and that files could be viewed/edited/deleted on the Nextcloud VM.
    1. Validated the presence of the NFS share via showmount -e drteeth.lan.
    2. Manually mounted the NFS share via sudo mount drteeth.lan:/mnt/volume1/nextcloud /mnt/nextcloud.
    3. Created, viewed, and deleted a test file in the /mnt/nextcloud/ path.
    4. Manually unmounted the NFS share via sudo umount /mnt/nextcloud.
  6. Set up mounting the NFS share at boot by editing /etc/fstab.
  7. Executed sudo mount -a to mount the newly added line from /etc/fstab.
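Condensed into commands, the steps above look roughly like this. The drteeth.lan hostname and paths match my setup, and the _netdev mount option is my addition to make sure the mount waits for networking:

```shell
# 1. Install the NFS client tooling
sudo apt update && sudo apt install -y nfs-common

# 2-4. Create and lock down the local mount point
sudo mkdir -p /mnt/nextcloud
sudo chown -R root:root /mnt/nextcloud
sudo chmod 0770 /mnt/nextcloud

# 5. (Optional) Sanity-check the share against the NAS
showmount -e drteeth.lan
sudo mount drteeth.lan:/mnt/volume1/nextcloud /mnt/nextcloud
sudo umount /mnt/nextcloud

# 6-7. Mount at boot: append a line to /etc/fstab, then mount everything
echo 'drteeth.lan:/mnt/volume1/nextcloud /mnt/nextcloud nfs defaults,_netdev 0 0' | sudo tee -a /etc/fstab
sudo mount -a
```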

Installing the Nextcloud Snap and configuring it to use the custom Nextcloud Data Directory

In my stumbling and tinkering, I deleted everything and started over from scratch a few times. I had made the mistake of too excitedly installing Nextcloud and immediately starting to use it before figuring out how to make it use the network share that I had mounted for its data storage.

But in following the Nextcloud Snap’s directions for changing the data directory to use another disk partition, I wound up overlooking two equally important details:

  1. I needed to connect the removable-media Snap to Nextcloud.
  2. The local path to my share needed to exist beneath either /mnt or /media on my Nextcloud VM.

Not understanding these two details had me scratching my head at a couple of different points, running into permissions errors, and flailing while trying to get Nextcloud working. Here’s what ultimately worked:

  1. Executed the Nextcloud Snap Installation (sudo snap install nextcloud)
  2. Connected the removable-media interface to the Nextcloud Snap (sudo snap connect nextcloud:removable-media)
  3. Edited the Nextcloud autoconfig (/var/snap/nextcloud/current/nextcloud/config/autoconfig.php) and updated the directory variable to ‘/mnt/nextcloud’.
  4. Restarted the Nextcloud PHP Service (sudo snap restart nextcloud.php-fpm)
  5. Opened the Nextcloud VM’s URL from my browser.
  6. Set up the primary administrator’s account in the Nextcloud UI.
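In command form, with the data directory changed before Nextcloud’s first run:

```shell
# 1. Install the Nextcloud Snap (don't open it in a browser yet!)
sudo snap install nextcloud

# 2. Let the confined Snap reach paths under /mnt and /media
sudo snap connect nextcloud:removable-media

# 3. Point the not-yet-run installer at the NFS mount by editing
#    the 'directory' value in autoconfig.php to '/mnt/nextcloud'
sudo nano /var/snap/nextcloud/current/nextcloud/config/autoconfig.php

# 4. Restart PHP so the change takes effect, then finish setup in a browser
sudo snap restart nextcloud.php-fpm
```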

Now What?!

The possibilities are really endless! First, I need to get the Nextcloud client installed on my desktop computer, laptop, phone, and tablet. After that, I’m curious about using Tailscale’s sharing to provide some cloud storage to family and friends. I’m very interested in untangling my rat’s nest of synchronization tasks and cloud storage providers and relocating my blog’s storage into Nextcloud. The same goes for my recent FPV quadcopter footage; I’d like it uploaded to Nextcloud so that I can more easily edit those videos.

Final Thoughts

This blog was supposed to be mostly about Nextcloud, but I can’t stop raving about Tailscale. Prior to using Tailscale, hosting my own cloud storage solution was going to be too much of an investment, in both dollars and time! The sum total of effort in setting up a VM, configuring Nextcloud, maintaining SSL certificates, opening ports on my firewall, and dealing with any fallout from aggravating my Internet service provider was just too much. Combining Nextcloud with Tailscale eliminated or mitigated many of those hassles.

I’m excited to tinker with Nextcloud, and at the rate that Tailscale keeps announcing new features, I’m excited to see what feature I’ll get to try on this VM next! For the time being, I’m going to prioritize how I can leverage Nextcloud to make my most important data more ubiquitous, but I’m open to any possibilities!

Are you using Nextcloud with your NAS in order to host your own cloud storage? How have you tackled keeping your data synchronized between many devices? Are you interested in Nextcloud—but haven’t yet taken the plunge? What’s standing between you and adoption of Nextcloud? What sort of functionality would you like to see featured in future blogs? Let me know down in the comments below, I’d love to hear your thoughts!

Tailscale: A VPN that even Brian can use!

I built and blogged about my own DIY NAS back in 2012, and I’ve been building a new DIY NAS once or twice a year ever since. One of the most frequently asked questions across those blogs has been, “How do you access your DIY NAS from the Internet, Brian?”

My answer to that question has always been “I don’t.” For most of the time I’ve had a DIY NAS, I just didn’t have much need or interest in accessing its contents from outside my home. At first, I only used my DIY NAS for backing up the computers that I had at home, but slowly over time, I’ve transitioned to using it as the primary storage for all of my data.

Consequently, accessing my data remotely has been more and more important over the years. Primarily, I’ve used services like Google Drive or Dropbox to access critical data and synchronize changes made to it across all my machines. For quite a while this has been both easy and cheap. But I started creating more content, especially videos for my YouTube channel, and this solution slowly began to break down and become more expensive.

Over the past two years, I’ve made a few halfhearted attempts to install and configure a VPN endpoint within, or at the edge of, my own network. Every time that I tried, I ran into issues—mostly all related to my lack of expertise—and set it aside to figure out another day.

For a while now, a few people have been encouraging me to check out Tailscale. In fact, Pat’s been telling me routinely about how he’s made his life easier with Tailscale and insisting that it was really simple to set up. While I have had no reason to doubt Pat’s assessment, I’ve also learned that there’s a cornucopia of topics that Pat thinks are painfully simple which completely short-circuit my brain.

What is Tailscale?

Over on Tailscale’s website they describe Tailscale as “A secure network that just works. Zero config VPN. Installs on any device in minutes, manages firewall rules for you, and works from anywhere.”

You create an account with Tailscale, you install a client on each machine, associate those clients with your account, and Tailscale encrypts traffic between any of your endpoints.

Tailscale was as easy as Pat made it sound every time he told me about it. In fact, I think it was easier!

How am I using Tailscale?

Out of curiosity, I set up Tailscale on a few different devices and without any effort I had Tailscale up and running on my phone, my tablet(s), two different Raspberry Pis, my laptop, my desktop computer, and on my OpenWRT router.

Remote Desktop access to my Tradewars 2002 game server

A couple years ago, I wrote a nostalgic blog about Tradewars 2002 which convinced me to spin up my own Tradewars game server on a small virtual machine hosted in Azure. For a long time, I’ve lived with exposing more than just the Tradewars game server’s port in order to remotely access the machine.

Implementing Tailscale on the game server let me close down the port(s) that I had exposed—and probably never should have—on the TW2002 virtual machine.

This is something that I could’ve accomplished in the Azure portal on my own by setting up some firewall rules in Azure and also on the virtual machine. But it was so much easier to just completely close it down and use Tailscale instead.

As an added benefit, my super-secret outer space trading strategy is now happening over an encrypted tunnel!


If you’ve read my two-part series of blogs about Pi-KVM then you’re already aware that Pi-KVM is an awesome little project that allows you to build an inexpensive KVM-over-IP using a Raspberry Pi 4 Model B (2gb version), a video-capture device, and an assortment of USB cables.

Combining the Pi-KVM and Tailscale is a really compelling pairing which demonstrates the value of both products. If I had a friend who was having a computer problem in another location (near or far), I could give them the Pi-KVM, they could hook it to their computer and network, and thanks to Tailscale I could access the Pi-KVM interface remotely. I could access the machine’s BIOS, boot from an ISO, or remotely access the native operating system without much effort at all.

A shotgun approach to access the contents of my DIY NAS remotely

Tailscale’s recommended approach is to put the Tailscale client on all of your devices and assemble a mesh network of connected endpoints. This assumes that you’re able to install the client directly on each machine. Unfortunately, FreeNAS (or TrueNAS) does not include the Tailscale client in its base operating system, and they don’t really want you tinkering with the operating system at all, so the Tailscale-recommended approach is not as straightforward.

In evaluating my options, I knew I’d need to use Tailscale’s relay node feature if I wanted to access my NAS via Tailscale. It seemed like I had a few options for where to run the relay node:

  1. A jail hosted on my FreeNAS server.
  2. A virtual machine on my homelab server.
  3. On my OpenWRT router.

For the time being, I have opted for the third option. I knew that running it on my own OpenWRT router was possible thanks to Pat’s blog about putting Tailscale on his Mango OpenWRT router.

I configured the relay node to relay for the subnet that my NAS (and the rest of my home network) is on. I decided that if I was going to compromise by moving away from Tailscale’s mesh of encrypted network endpoints, then I would do so in a way that provided the maximum possible utility. Used this way, Tailscale behaves much like a traditional VPN: everything on my home network is accessible from outside of my network, provided the computer I’m using (like my laptop) is running a Tailscale client too.
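On a typical Linux relay node, advertising a subnet boils down to enabling IP forwarding and one flag on tailscale up. The 192.168.1.0/24 subnet here is a placeholder for your own; on OpenWRT the setup involves the extra steps from Pat’s blog, and the advertised route also has to be approved in Tailscale’s admin console:

```shell
# Allow the node to forward traffic on behalf of the tailnet
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

# Advertise the home subnet to the rest of the tailnet
sudo tailscale up --advertise-routes=192.168.1.0/24
```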

So what do I think?

Ultimately, I would’ve preferred adhering to the full-Tailscale method and installing clients on every machine that I want to be reachable from my other machines running Tailscale. My lack of understanding of FreeBSD and of how FreeNAS is architected were significant enough obstacles that I made a compromise.

I suspect that a FreeNAS enthusiast could create a jail, install the FreeBSD Tailscale package in the jail, and tighten the scope of the Tailscale relay node down to only the IP of the NAS. Perhaps this will be the topic of a future blog? Or even better, maybe somebody will answer this question in the comments below?!

Regardless of my obstacles with FreeNAS, Pat absolutely was correct—Tailscale makes all of this so much easier! I’ve installed Tailscale on devices with very different hardware, running a number of different operating systems, and numerous different use cases. All of it was really straightforward and easy to set up. I didn’t have to create any firewall rules—it all just worked.

I’m excited about the possibilities that Tailscale presents. They recently added endpoint sharing as a Public Beta, which is a really useful concept that I’m going to be exploring as part of future blogs. So please stay tuned!

Pi-KVM: Controlling a 4-port KVM and setting up Tailscale

In a previous blog, I raved about how awesome Pi-KVM is. If you’re not aware, Pi-KVM is an open-source project that allows you to turn a Raspberry Pi into an IP-KVM.

You plug the Pi-KVM into your network and into a computer and then from anywhere else on that network, you can remotely control that computer as if you’re sitting in front of it, including doing things like accessing the remote machine’s BIOS.

When I did my initial research, I learned that building a Pi-KVM can be done incredibly inexpensively—it would cost less than $80 to build one from the recommended parts list. By building mine from a CanaKit Raspberry Pi 4 Model B (4GB) Pro Kit, I wound up spending quite a bit more than that $80. Regardless of how much I spent, I felt like I got a great value out of what I built.

Enhancing my Pi-KVM Setup

After building a Pi-KVM for Pat for Christmas, I was hooked and quickly built one for myself. Following that, I built another one that I could easily use with other computers outside of my office. Most notably, I wanted one to use in my “recording studio” where I do most of the work in assembling my DIY NAS builds.

I also began contemplating improving the Pi-KVM I was planning to use with my DIY NAS and my homelab servers in my office. When I started out, I planned on just sharing the same Pi-KVM with both machines, but the more I thought about it, the more I realized that I wanted to avoid having to swap the cables between the two servers.

This got me thinking. Either I’d need to build yet another Pi-KVM, or find a KVM switch that I could trigger from Pi-KVM to swap between the two hosts.

Adding a 4-port KVM Switch, the ezcoo EZ-SW41HA-KVM

Thanks to Novaspirit Tech on YouTube and his Q&A video about the Pi-KVM, I was already aware of what I wanted to try next. Among the topics discussed in the video was an inexpensive 4-port KVM switch made by ezcoo, the ezcoo EZ-SW41HA-KVM. This KVM switch has its own USB management interface which allows for firmware updates and switching which port is active on the KVM switch.

What set this particular switch apart from others was that Pi-KVM has functionality built-in that can issue the commands to the ezcoo EZ-SW41HA-KVM. You can configure the web interface’s elements to suit your needs and then pick which of the four ports is active with your Pi-KVM.
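Pi-KVM’s ezcoo support is configured through its GPIO subsystem in /etc/kvmd/override.yaml. The trimmed sketch below is from my notes, not a definitive reference: the device path and channel labels are from my setup, and some ezcoo models also need a protocol option, so check Pi-KVM’s GPIO documentation before copying it:

```yaml
kvmd:
    gpio:
        drivers:
            ez:
                type: ezcoo          # Pi-KVM's built-in ezcoo switch driver
                device: /dev/ttyUSB0 # the switch's USB management port
        scheme:
            ch0_led:
                driver: ez
                pin: 0
                mode: input          # lights up when channel 0 is active
            ch0_button:
                driver: ez
                pin: 0
                mode: output         # "press" to switch to channel 0
                switch: false
        view:
            table:
                - ["#NAS", ch0_led, ch0_button|Switch]
```

Repeating the led/button/view entries for pins 1 through 3 exposes all four ports as buttons in the Pi-KVM web interface.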

By using my Pi-KVM, the ezcoo EZ-SW41HA-KVM, some HDMI cables, some USB A-to-B cables, a couple of VGA-to-HDMI adapters, and a generous helping of obsessive-compulsive cable management, I now have a 4-port KVM-over-IP setup functioning.

Parts List

A Pi-KVM can be built for under $80, and for an additional $150, your Pi-KVM can be extended to work with up to 4 different computers. This was ideal for me because I often wind up tinkering with a third—and sometimes fourth—computer at my desk, particularly when I’m working on one of my DIY NAS builds.

Setting up Tailscale on my Pi-KVM

Tailscale is a simple and easy-to-use VPN service built atop WireGuard. By installing and configuring the Tailscale client on a device, you can access that device over a secure VPN connection from any other machine running the Tailscale client. Pi-KVM has incorporated Tailscale as a configurable option. Once it’s set up and associated with your Tailscale account, you can access your Pi-KVM from another machine as long as both are connected to the Internet.
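Pi-KVM’s Tailscale setup amounts to a handful of commands run over SSH. This is a sketch from memory, so treat the package name as an assumption and defer to Pi-KVM’s documentation:

```shell
# Pi-KVM's root filesystem is read-only by default; make it writable
rw

# Install and start the Tailscale client (package name per Pi-KVM's
# docs at the time I set this up; verify before running)
pacman -S tailscale-pikvm
systemctl enable --now tailscaled
tailscale up

# Return the filesystem to read-only
ro
```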

Being able to remotely access your Pi-KVM is a handy way to remotely access a machine without having to directly expose it to the Internet. This isn’t a critical feature to me, but I think it’s really quite interesting. I don’t really access anything on my network from outside of my house, but having that as an easy option is really intriguing.

Pat’s been telling me to check out Tailscale for ages. Every time that he tells me about it, it has sounded really interesting—but I didn’t really have a good use-case for it. But now that I’ve set it up for using with my Pi-KVM, I know that it’s a matter of time before I’m using it with other machines too!

What’s Next?

After writing two blogs about it, I’m still quite excited about the Pi-KVM project. The Pi-KVM team is working on its own hardware to build in a bunch of extra features and is preparing to have that hardware manufactured. I’m excited enough about what’s been shared that I’ve signed up to preorder it. As I worked on this second blog, Novaspirit Tech published a video reviewing the Pi-KVM v3 hardware. Here are a few things I plan to do with my Pi-KVM:

  • Build another Pi-KVM for use on my workbench in my studio
  • Set up network passthrough to share the Raspberry Pi’s wireless interface for use where a network drop isn’t convenient.
  • Upgrade one (or more?) of my Pi-KVMs to the version 3 of the hardware
  • Other ideas? What would you do with a Pi-KVM that I haven’t considered yet? Please share your ideas in the comments, I’d love to see them!

Final Thoughts

Between building and using three different Pi-KVMs, reading about the Pi-KVM version 3 hardware, and now seeing it in action, I am doubling down on my previous position. I have no reservations whatsoever about capitalizing on the opportunity to pre-order the version 3 hardware once I can. I’m also happy that I can help support Pi-KVM on Patreon.

Pi-KVM is an awesome project. Pi-KVM has joined OctoPi and HomeAssistant at the top of my list of favorite Raspberry Pi projects. I’m excited to see what’s in store for the future of Pi-KVM.


I think that Pi-KVM is an interesting enough project that as part of this pair of blogs, I’m going to give away three kits which will hopefully get someone well on their way to building their own Pi-KVM. Each kit will contain:

Here are the giveaway details (note: there’s a new way to enter that’s unique to this blog!):

Raspberry Pi 4 Model B (2GB version) with Customized 3D-Printed Case Giveaway

Pi-KVM: an inexpensive KVM over IP

I recall griping to Pat one day when he was over for dinner that I wished everything had an IPMI interface, or that nothing did. Only two of the computers at my house have an IPMI interface; all the rest do not. When the DIY NAS: 2020 Edition was burning in, I had to get up and go into the other room to see what it was up to, but I didn’t think it was worth the price premium to move up to a motherboard that included IPMI among its features.

Pat nodded in agreement and remarked, “There’s this neat project, Pi-KVM, that lets you build a cheap IP-KVM out of a Raspberry Pi! You should check it out.” We talked about different possible uses for it and then got busy repairing a quadcopter or playing some video games.

A few days later, I recalled this conversation while wracking my brain for a Christmas gift idea for Pat and immediately decided that I’d build a Pi-KVM for him. As a bonus, I’d get to play with his Christmas present a bit before deciding whether I wanted to build one of my own!

KVM over IP (IP-KVM)

A KVM (or KVM switch) is basically a device that allows you to share a keyboard, video display, and mouse between multiple computers. For a very long time, I had my own DIY NAS, primary workstation, and work laptop all plugged into a KVM switch. Whenever I needed to use one of those three computers, I’d hit a button and the KVM would switch between them.

An IP-KVM is very similar: you plug the IP-KVM device into a computer’s video output and USB ports, but then you access it over a network. The keyboard, mouse, and display that you’re using aren’t actually plugged into the remote computer.

Both a KVM and an IP-KVM are superior to remote access software (TeamViewer, Remote Desktop, VNC, etc.) because you’re accessing the actual hardware remotely. You’re able to see the machine POST, access the BIOS, and watch it load the operating system. Most remote access options require the operating system to be up and running first. In plenty of scenarios, that simply isn’t good enough.


The most controversial part of my DIY NAS build blogs is how frequently I recommend consumer-grade hardware. When people advocate for using enterprise hardware, the two reasons that resonate the most with me are support from the manufacturer (potentially including longer warranties) and the fact that most server-grade motherboards have an IPMI interface.

Simplifying things a bit—maybe tremendously—IPMI is quite similar to having a built-in IP-KVM incorporated on the motherboard. The motherboard has a dedicated network interface that gets plugged into a switch and assigned an IP address by your router. Using a web browser or an IPMI client, you’re able to access this interface and interact with the hardware as if you were sitting in front of the computer with a keyboard, mouse, and monitor plugged into it.

The DIY NAS: 2016 Edition featured the first motherboard I used that had an IPMI interface. That feature (among others) helped convince me to use the same motherboard when I upgraded my personal NAS later the same year. That upgrade also meant I was able to retire my KVM switch and all of its cables. When I built my homelab server, I made sure to pick a motherboard that included an IPMI interface. Since then, do you know how often I’ve successfully used those IPMI interfaces? Almost never!

Thankfully, I’ve rarely actually needed to use the IPMI interfaces. Both machines have been tremendously stable and do their jobs without much interaction from me. However, each time that I have attempted to access their IPMI interfaces, I have run into minor issues. When I encountered these difficulties, I simply reverted to old behavior and grabbed my spare monitor and keyboard from the closet, for two reasons: it was less effort, and the IPMI web interfaces have been pretty terrible.

When I bought an extension for my desk, a new switch for my 10Gb network, and moved around both my DIY NAS and homelab servers, I decided to just leave the IPMI interfaces disconnected and removed the network cables on each machine. It’s almost like I knew that a couple months later, I would be tinkering with something I liked way better than any IPMI interface that I’d used.


So what’s Pi-KVM all about, anyway? It’s an open-source project for building your own IP-KVM. So far, Pi-KVM has been through a couple of different hardware variations, all built around different Raspberry Pi models and a varying amount of do-it-yourself electronics. The current hardware version (v2) can be built around either a Raspberry Pi 4 Model B (2GB or higher) or a Raspberry Pi Zero W. Depending on which Raspberry Pi option you pick, you’ll also need a video-capture device and some USB cables/adapters.

When it’s all said and done, the Raspberry Pi is connected to your computer’s display output and USB ports. You pull up the Pi-KVM web interface in your browser, and you’re then in control of the remote computer as if you’re physically standing right there. It’s really quite fantastic!

I looked for a comparable off-the-shelf piece of equipment, but there’s really nothing quite like it. I suspect that there’s just not much consumer demand for IP-KVM hardware right now. For most consumers, there are acceptable enough methods for accessing computers remotely, like VNC, Remote Desktop, and many others.

Nevertheless, after assembling the Pi-KVM that I gave to Pat for Christmas, I was immediately convinced that I wanted at least one for myself. Seeing these features in action was what sold me:

  • Incredibly easy-to-build hardware (version 2)
  • A really responsive, easy-to-use web interface
  • Low latency
  • CD-ROM or flash drive emulation to pass through to the connected host

These features aren’t all-encompassing, either! They’re just the features that I immediately zeroed in on. There’s a whole cargo container full of other features that I haven’t leveraged yet, too. The ATX controls sound really intriguing—having the ability to remotely press the power and reset buttons seems like it could come in really handy. Securely accessing my Pi-KVM from the Internet sounds interesting, but I’d rather not open ports on my router’s firewall in order to do so. However, there is a Tailscale client available, and the idea of being able to access Pi-KVM from any device that I have a Tailscale client running on seems fascinating. Plus, Pat keeps telling me about how Tailscale makes these kinds of things easy, so this sounds like an excellent opportunity to prove Pat correct!

Brian’s Pi-KVM Parts List

When I ordered parts for Pat’s (and then again for my own) Pi-KVM, I made a mistake (or did I?) and bought a Raspberry Pi 4 Kit with 4GB of RAM. A Raspberry Pi 4 2GB meets Pi-KVM’s hardware requirements and would’ve worked just fine.

I decided that having 2GB of extra RAM might be useful in case there was other functionality that I wanted to add to my Pi-KVM down the road. Maybe one of these five awesome headless Raspberry Pi uses is a good candidate to run alongside my Pi-KVM?

Brian Spent Too Much Money!

It’s important to keep in mind that I wasn’t a very thrifty shopper and wound up spending way more than I needed to; a Pi-KVM can easily be built for about $80! This can be done by more closely following Pi-KVM’s suggested hardware list.

I mentioned before that I couldn’t really find a comparable product when I searched for one. About the closest thing I could find were USB crash cart adapters. The one that I found is over $200 more than what I paid, doesn’t allow remote access over the network, is VGA-only, has a much smaller set of features, and requires a custom application installed on the machine you’re accessing the remote machine from.

A price tag of under $80 is inexpensive enough that I’d gladly lend my Pi-KVM to friends who need my help with something on their PCs. It’s cheap enough that I’m definitely going to build another one just to have a spare on hand for whenever it might be handy, like when I’m working through one of my DIY NAS builds!

What’s Brian think? I’m all in on Pi-KVM!

This is all Pat’s fault; he suggested I look into Pi-KVM awhile back. Once I did, I knew I wanted to build my own. In building one for both Pat and myself, I’ve also learned that the Pi-KVM project is working on its own hardware, and now I know that I want that too. Their hardware will include an extra Ethernet interface to act as a pass-through, its own low-latency video-capture ability, wider hardware support for finicky BIOSes, and many other features.

I’ve signed up to pre-order it and I’ve also become a Patron of pikvm on Patreon. The next iteration of hardware is going to make a fine upgrade—and another blog—down the road when I get my hands on it. It might be a fun project to design my own 3D-printed case for, or even maybe collaborate with Pat and mill something on his CNC machine.

But wait, there’s more!

In Novaspirit Tech’s YouTube review of the Pi-KVM and his subsequent video Q&A about Pi-KVM, he mentioned that Pi-KVM can also interact with a traditional KVM to allow you to switch between numerous different machines. I have purchased the ezcoo EZ-SW41HA-KVM 4-port KVM switch, a couple VGA-to-HDMI adapters, HDMI cables, and USB cables to hook into it.

Adding the ezcoo KVM switch on to my own Pi-KVM is something I’m looking forward to building and blogging about in the very near future!

Raspberry Pi 4 Model B (2GB version) with Customized 3D-Printed Case Giveaway

DIY NAS: EconoNAS 2020

For a long time, I’ve been building two different NAS builds every year: an impressive premium DIY NAS with a premium price tag and a more economical DIY NAS that’s more friendly to the bank account. The whole point of all of these different DIY NAS builds has always been to encourage people to build their own do-it-yourself network attached storage (NAS) server. Got a big budget and want to build something to show off? I’ve got a DIY NAS for that! Or do you want to squeeze every little drop of storage out of your budget? Well, I’ve got a DIY NAS for that too!

In true 2020 fashion, the DIY NAS: 2020 Edition found itself neglected until very late in the year. This created the likelihood that there would be no EconoNAS at all in 2020. Rather than skipping the EconoNAS in 2020, I decided that I would frantically write a blog containing the parts list of what I would have put in the EconoNAS.


A long time ago, in a blog post far, far back in history, I wrote about a fictitious DIY NAS that I had not actually built. Because of this, I wound up picking a CPU that physically fit in the motherboard and was listed as compatible in the motherboard’s specifications, but would not boot unless the BIOS was sufficiently up to date. At least one person shared an experience where they received a motherboard with a BIOS version old enough that it required a BIOS update before that CPU could be used.

I was mortified to learn this, and I continue to be embarrassed about it to this day. Since that mishap, I’ve built each DIY NAS that I’ve blogged about and I have done everything in my power—assuming my bank account allows it—to make sure that I was able to assemble what I’m recommending to others. Here at the end of 2020, that streak is temporarily coming to an end.

I did not build this year’s EconoNAS. While I’m quite confident that these parts will all work together to build a great machine, I think it’s worth warning you all that I did not actually build it. I put in quite a bit of effort to try and confirm hardware compatibility and also to seek out others’ shared experiences with this same hardware. Hopefully, someone will build this DIY NAS and share their experience in the comments below!

Motherboard & CPU

Originally, I was really tempted to build the EconoNAS using the exact same motherboard as the DIY NAS: 2020 Edition, but its supported CPUs were just too expensive for my tastes. I couldn’t justify the expense that would’ve been incurred by the motherboard and CPU alone. Instead, I seized the opportunity to save money on both components.

I wound up selecting the GIGABYTE B450 Aorus M (specs). The price, the wide range of supported CPUs, the Micro ATX form factor, the support for up to 128GB of RAM, and the six onboard SATA3 ports all made it easy to choose the B450 Aorus M motherboard.

Of all the components in the DIY NAS: 2020 Edition, the CPU was by far the most over-the-top. My friend, Pat, wrote in his NAS building tips and rules of thumb blog that “Serving files doesn’t require a gargantuan processor.” In putting together this parts list for an economical NAS build, I wanted to focus in on that point—because I’m not quite sure if Pat believes that I actually comprehend it!

The AMD Athlon 3000G CPU (specs) is a 3.5GHz dual-core CPU. Compared to the AMD Ryzen 9 3900X CPU that I used in the DIY NAS: 2020 Edition it is dirt-cheap to purchase (about 11% of the price of the AMD Ryzen 9 3900X) and power-sipping to run (35W TDP vs 105W TDP). Inexpensive to purchase and inexpensive to operate are hallmarks of a good decision in building an economical NAS.


For RAM, I opted to match the 32GB of memory found in the DIY NAS: 2020 Edition, but went with more affordable non-ECC RAM by choosing a Crucial 32GB (16GBx2) DDR4 2666 MT/s (PC4-21300) kit (specs). The TrueNAS CORE hardware suggestions list 8GB as a bare minimum, and the bare minimum is usually what I opt for in building an economical NAS. However, I wanted to build a NAS that measured up to its big brother (as long as the budget allowed it), and giving the file system more RAM to cache data in is a great way to squeeze extra performance out of the budget.

Case and Power Supply

The Fractal Design Node 804 (specs) is routinely suggested to me for NAS builds—and for good reason—it’s a fantastic case. But it lacks a feature that I appreciate in my “big” NAS builds—hot-swap bays—and it’s always been a little bit more expensive than I’d like to spend on a case for an economical build. However, this year I splurged on the case.

If I were rebuilding my NAS today, and I couldn’t 3D-print my own custom Mini-ITX NAS case, then the Fractal Design Node 804 would be at or near the top of my list. I’ve been tempted to incorporate it into a NAS build for a while. Selfishly, I’m a little disappointed I’m not actually building this NAS, because I really would have liked to build a NAS out of it.

The Fractal Design Node 804’s key features are:

  • 8x 3.5” Internal HDD bays
  • 2x 2.5” Internal HDD bays
  • Fits Micro ATX or Mini ITX motherboards
  • Fits a standard ATX Power supply up to 260mm deep.

I’m not a huge fan of power supplies—I never have been. They’re not fun to shop for because it’s hard to quantify the differences between a $30 power supply and a $150 power supply. The features that do differentiate power supplies are infuriating to me: modular cables and LEDs.

I picked the Corsair VS500 power supply (specs) because it was from a manufacturer that I recognized/respected, its wattage (500W) would be roughly twice the amount of power the CPU and eight hard disk drives would consume (about 235 watts), and it seemed to be fairly well reviewed from what I was able to gather.
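The rough sizing math behind that choice can be sketched in a few lines of Python. The 25W-per-drive figure below is my own assumption for a 3.5” hard drive at spin-up, not a measured number, but it lines up with the ~235 watt estimate above:

```python
# Rough PSU sizing for the EconoNAS. The ~25W per-drive figure is an
# assumed spin-up draw for a 3.5" hard drive, not a measured value.
CPU_TDP_W = 35             # AMD Athlon 3000G TDP
DRIVE_COUNT = 8
SPINUP_W_PER_DRIVE = 25

peak_draw_w = CPU_TDP_W + DRIVE_COUNT * SPINUP_W_PER_DRIVE
psu_w = 500                # Corsair VS500
headroom = psu_w / peak_draw_w

print(f"Estimated peak draw: {peak_draw_w}W; PSU headroom: {headroom:.1f}x")
```

With roughly 2x headroom, there’s wattage left over for the motherboard, RAM, and fans, which this back-of-the-napkin estimate ignores.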

Host Bus Adapter and Cables

When I built the DIY NAS: 2020 Edition, I thought to myself: “Getting rid of the IBM M1015 SAS Controller and specialty SAS cables will be an easy cost savings for the EconoNAS.”

But after pricing out all the components, I realized that it’d only cost about $100 to add an enterprise-grade HBA to support 8 additional hard disk drives. Being able to fill up every drive bay in the Fractal Design Node 804 by adding an IBM M1015 SAS Controller and a pair of SFF-8087 to SATA forward breakout cables seemed like a very good return on investment for another $100.

If five drive bays (after reserving at least one SATA port for the OS drive) are enough for you, this is your best place to start saving money on your own EconoNAS!


In my DIY NAS builds, there are two types of storage used: storage for the operating system and the actual storage that is used and exposed to the network. Over the years, I’ve come to the conclusion that it’s easy to make specific suggestions for the operating system’s storage device and it’s foolhardy to try and make specific suggestions with regards to the actual drives that data is being stored on.


In the past year (or longer), using a USB flash drive as the OS drive in TrueNAS CORE has gone from not suggested to outright discouraged. In 2020, I adjusted my NAS builds and said goodbye to my beloved SanDisk Ultra Fit USB drive. In the DIY NAS: 2020 Edition, I used a pair of M.2 SSDs to mirror the OS. The M.2 SSDs were not expensive, but their use prevented the simultaneous use of some of the onboard SATA ports. To maximize the number of SATA devices the EconoNAS could accommodate, I opted to pick a 2.5” SATA SSD to hold the operating system.

A good, economical choice is the Kingston 120GB A400 SSD (specs). A better—but less economical (and still a good value)—choice would be to mirror TrueNAS CORE across a pair of SSDs. I think a single SSD is a fine choice due to the quality and reliability of modern SSDs. Failure of the OS drive is further mitigated by the fact that the TrueNAS CORE configuration is automatically backed up daily, and it is possible to manually back up the system configuration as well.
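A manual backup can be as simple as copying the configuration database off the NAS over SSH. Here’s a minimal sketch; it relies on FreeNAS/TrueNAS CORE keeping its configuration database at /data/freenas-v1.db, and the hostname and destination directory are placeholders you’d change for your own network:

```python
import datetime
import subprocess

# Sketch of a manual off-NAS config backup. FreeNAS/TrueNAS CORE stores
# its configuration database at /data/freenas-v1.db on the boot device.
# The SSH hostname below is a placeholder; substitute your NAS's address.
CONFIG_DB = "/data/freenas-v1.db"
NAS_SSH = "root@nas.local"

def backup_filename(today=None):
    # Date-stamped local filename, e.g. truenas-config-2020-12-31.db
    today = today or datetime.date.today()
    return f"truenas-config-{today:%Y-%m-%d}.db"

def backup_config(dest_dir="."):
    # Copy the config database off the NAS using scp (SSH access must
    # be enabled on the NAS for this to work).
    target = f"{dest_dir}/{backup_filename()}"
    subprocess.run(["scp", f"{NAS_SSH}:{CONFIG_DB}", target], check=True)
    return target
```

You could run something like this from a cron job on another machine, so a dead OS drive never costs you more than a reinstall and a config restore.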

NAS Hard Disk Drives

How much storage you need and how expensive it is make it difficult to present an economical choice. The hard drives with the best price-per-terabyte usually carry a higher price tag, and the fewer drives that you buy, the higher the percentage of your storage that winds up being dedicated to redundancy. Buying cheap hard drives sometimes means settling for obsolete or refurbished drives. Here are some tips that I have for buying hard drives for an economical NAS.

  1. Shuck External Drives: External hard drives tend to be cheaper than their internal counterparts and tend to go on sale more often. While only the hard drive manufacturers can tell exactly why they’re less expensive, many people speculate that it’s in part because external HDDs typically have a much shorter warranty period. Whatever the reason, shucking an external hard drive is oftentimes the best value in acquiring storage for your NAS.
  2. Two Drives of Redundancy: Regardless of your NAS build, I recommend that you pick a configuration that has two drives’ worth of redundancy. There are a few different ways to achieve this, but my favorite is RAID 6. When using ZFS on TrueNAS CORE, the equivalent is raidz2.
  3. Lots of Smaller HDDs > Fewer Larger HDDs: While the largest hard drives might seem like the best value, you’ll get far more usable storage by buying a larger number of smaller drives. As an example, consider two different configurations of a 48TB (raw) array, each with two disks of redundancy (RAID 6/RAID-Z2), using either the 12TB Western Digital Red Plus or the 6TB Western Digital Red Plus.
    1. 8x 6TB HDDs (RAID-Z2)
      1. Cost: about $1,432 ($179.00/ea)
      2. Usable Storage: 36TB
      3. Price per Usable TB: $39.78
    2. 4x 12TB HDDs (RAID-Z2)
      1. Cost: about $1,220 ($304.99/ea)
      2. Usable Storage: 24TB
      3. Price per Usable TB: $50.83
  4. Try and Avoid Buying Drives from the same Batches: This is a challenge and there’s no guarantee, but when you buy the same model hard drive all from the same vendor, the chances are that all of the drives you bought came from the same batch. If there was a flaw in that batch, you should expect to see that flaw across all of those drives. You can increase your chances of avoiding this by: buying the same model of hard drive from different vendors or by buying equivalent hard drives from different manufacturers.
  5. Read Backblaze’s HDD Stats: I can’t stress this enough. Backblaze buys an enormous number of drives and shares the data about those drives with everyone. Let these statistics guide you in assessing the hard disk drives you’re considering. They are an awesome resource.
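The arithmetic behind tip #3 is easy to reproduce for whatever drives you’re comparing. Here’s a quick sketch using the Western Digital Red Plus prices quoted above (prices will certainly have changed since this was written):

```python
def raidz2_cost_breakdown(drive_count, drive_tb, drive_price):
    # RAID-Z2 dedicates two drives' worth of capacity to parity, so
    # usable capacity is (n - 2) drives. Returns total cost, usable TB,
    # and price per usable TB.
    cost = drive_count * drive_price
    usable_tb = (drive_count - 2) * drive_tb
    return cost, usable_tb, cost / usable_tb

# Two ~48TB (raw) arrays, each with two disks of redundancy:
for count, size, price in [(8, 6, 179.00), (4, 12, 304.99)]:
    cost, usable, per_tb = raidz2_cost_breakdown(count, size, price)
    print(f"{count}x {size}TB: ${cost:,.2f} total, {usable}TB usable, "
          f"${per_tb:.2f}/usable TB")
```

Plug in the drive sizes and the prices you’re actually seeing, and the best value for your own build should fall right out.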

Final Parts List

Component         Part Name                                                            Count  Cost
Motherboard       GIGABYTE B450 Aorus M (specs)                                        1      $92.99
CPU               AMD Athlon 3000G (specs)                                             1      $99.99
Memory            Crucial 32GB Kit (16GBx2) DDR4 (specs)                               1      $148.29
Case              Fractal Design Node 804 (specs)                                      1      $149.98
OS Drive          Kingston 120GB A400 SATA 3 2.5” SSD (specs)                          1      $24.99
Host Bus Adapter  IBM ServeRAID M1015 SAS/SATA Controller (specs)                      1      $70.96
SAS Cable         Internal Mini SAS to SATA Cable (SFF-8087 to SATA Forward Breakout)  2      $11.99
Power Supply      Corsair VS500 (specs)                                                1      $49.99
                                                                                TOTAL:       $661.17

NAS Software

In my mind, there are two attributes of a NAS that differentiate it from other computers: a large number of hard drives, and some sort of user-friendly interface to help you manage it. Both attributes are a bit subjective; you could definitely build a NAS with only one hard drive in it, and you could install nearly any operating system on the server and add some sort of method for sharing files across your network.

But, in my opinion, what really turns your DIY NAS into a competitor to the QNAP, Synology, Drobo, etc. NAS offerings is being able to easily manage the DIY NAS without sinking much time into performing any command-line wizardry and mastering the different products that make a computer into a functioning NAS.


I’ve been using TrueNAS CORE (formerly known as FreeNAS) in my own NAS and in nearly every single DIY NAS that I’ve built since. TrueNAS CORE’s interface runs atop the FreeBSD operating system, and aside from the interface, its next most important feature is the ZFS file system. TrueNAS CORE is bundled with everything you need to get started: you can create a redundant array of hard drives, share the content with all of the computers on your network, and take advantage of a comprehensive plug-in library.

TrueNAS is not without its drawbacks. The ZFS file system does not like SMR (shingled magnetic recording) drives, and many, if not most, of the inexpensive hard drives out there are SMR drives. PMR (perpendicular magnetic recording) drives are strongly recommended when working with ZFS, and they tend to be pricier than SMR drives, which isn’t the most budget-friendly. Additionally, I’ve sometimes found the community of users to be closed off to the value of DIY builders building their NAS out of consumer-grade components.

None of these drawbacks have discouraged me in the past. I was (and have been) willing to run my own NAS on hardware that you’re more likely to find in a home user’s computer, rather than a server in a datacenter.

The Rest

I’m perpetually curious about NAS distributions, and while I clearly favor TrueNAS, there’s no shortage of equally viable alternatives. I’ve tinkered with a few, but not extensively enough to keep up with all the new features of all the different NAS distributions out there. If you find TrueNAS isn’t your cup of tea, here’s a list of a few others you might want to look at:

There are no bad choices here; all of the options look pretty good. When I built my first DIY NAS back in 2012, I planned to experiment with a few options to see what was out there. Unfortunately for all the others, I tried FreeNAS first and didn’t feel it was critical for me to try any of the others once I had everything set up and working.

What’s Brian Think?

I’m every bit as excited about the DIY NAS: EconoNAS 2020 as I am with the DIY NAS: 2020 Edition, but for different reasons:

It demolishes off-the-shelf solutions

The key features of the DIY NAS: EconoNAS 2020 (listed below) put it at a sticker price of right around $500. At that price point, I can’t find a single off-the-shelf NAS that matches up well. The closest equivalents that I could find were the QNAP TS-873-8G and the QNAP TS-932PX-4G, but both are expensive (between $550 and $900), have less compute horsepower, less memory, fewer (or no) upgrade options, and far fewer options for customization. In comparison, the DIY NAS: EconoNAS 2020 has:

  • 10 Internal Drive Bays (8x 3.5” bays and 2x 2.5” bays)
  • AMD Athlon 3000G CPU
  • 32 GB of DDR4 RAM
  • 120GB SSD for the operating system
  • Upgrade Options: more HDDs, more CPU, more RAM, 10Gbit network, etc.
  • A lower price tag of $513

Update (03/14/20): As I expected, prices have climbed dramatically for the motherboard, the case, and the CPU! This has driven the price up to just over $650. If you’re a good shopper, I think you can beat the prices I’m seeing on Amazon right now. But I’d also encourage you to tweak the blueprint and look for better values on these components—especially the CPU and case!

Extremely flexible

If you wanted something equally as potent as this year’s over-the-top DIY NAS, it’d be as easy as swapping out the CPU (see the caution below about BIOSes and CPU support), and it’d still be a bit more economical by skipping the expensive case, the premium low-profile CPU cooler, and the two M.2 SSDs. If this list of parts does not meet your compute needs, it’d be pretty inexpensive (around $180) to add an AMD Ryzen 1600AF CPU and the GPU from the DIY NAS: 2020 Edition and turn this EconoNAS into a little mini homelab server.

Upgrades, upgrades, and more upgrades

Nearly everything about the DIY NAS: EconoNAS 2020 can be upgraded, either before you build it or down the road as new releases push the prices down on all hardware:

  • CPU: The motherboard supports a tremendous number of different CPUs, too many to list succinctly.
    • Note: When picking or upgrading a CPU, make sure to research which BIOS is needed for which CPUs and chart out your upgrade path. What happens when a BIOS upgrade is needed to support a newer CPU? What happens if making that update renders your old CPU incompatible?
  • RAM: Upgradeable to 128GB of RAM total.
  • More HDDs: With a bigger case, this build could support up to 14 SATA devices (6 onboard, 8 on the IBM M1015).
  • PCI Express: one PCI Express x16 slot and one PCI Express x1 slot are available to do things like building your own inexpensive 10Gb network.

I would build the EconoNAS for myself and I’d tell my friends to build it too

If for some reason I needed to replace my NAS today, or if a friend asked me about building their own NAS, I’d be really tempted by all the bells and whistles of the DIY NAS: 2020 Edition, but ultimately I’d wind up building this EconoNAS instead of the bananas DIY NAS that I built earlier in the year.

What do you all think? Which of the two would you be inclined to build? Would you build the DIY NAS: 2020 Edition? Or would you build this more economical variation on the same theme? Please let us know in the comments below!


I’ve wound up giving away every NAS that I’ve built in the past few years. But because I didn’t build the EconoNAS this year, I can’t give it away. However, the giveaway for the DIY NAS: 2020 Edition was still ongoing at the time this EconoNAS blog was published. Check out the giveaway details below. Entering a giveaway and winning the DIY NAS: 2020 Edition is quite budget-friendly!

Update (01/01/2021): I rang in the New Year by picking a winner in the giveaway of the DIY NAS: 2020 Edition! I hope that everyone joins me in congratulating Matt H. of Orlando, Florida for winning the TrueNAS giveaway! Matt’s visit to my YouTube page (I hope he clicked like and subscribe!) in early December is the entry that wound up turning him into the winner of the DIY NAS. Altogether, 1,636 people combined for 7,893 entries into this year’s giveaway. Thanks to everyone for making the giveaway a big success!

DIY NAS: 2020 Edition


For many years now, I’ve been building, blogging about, and giving away DIY network-attached storage (NAS) builds. I started down this path when I couldn’t find a relevant and recent parts list to follow when building my first DIY NAS back in 2012. In blogging about my own experience building that NAS, I surprisingly found myself atop Google’s search rankings for terms like “DIY NAS.” Ever since, I’ve been regularly building and blogging about my different DIY NAS builds. My perpetual hope is to encourage potential DIY NAS enthusiasts to design and build their own custom DIY NAS solutions.

When I got done with last year’s DIY NAS build, I thought that I had built the most bananas over-the-top DIY NAS build that I could possibly imagine. Upon finishing it, I committed to myself that the 2020 DIY NAS build would be far more restrained.

But then I built my first AMD-based DIY NAS, the 2019 EconoNAS, and in building that NAS I realized that the extreme flexibility of AMD’s CPU architecture was quite well suited for DIY NAS enthusiasts to take advantage of. When I published last year’s EconoNAS blog, I remarked to myself, “Well, I guess I have to build an even more bananas AMD DIY NAS now, don’t I?!”

And then 2020 happened…

…and thankfully my wife, son, and I remained healthy—something I wish for all of you too. As a Type 1 Diabetic, I’m among those at the highest risk for bad outcomes. As I began working from home, I thought I’d be able to spend a few of the hours I previously spent commuting each week working on blogs instead.

But what I found was the opposite—surviving a pandemic, being a productive remote worker, plus trying to help keep an eye on my feral 4-year-old son was a tremendous energy drain. I did my best to emulate my amazing wife and focused my efforts on our household, and unfortunately the DIY NAS: 2020 Edition suffered for it.

All of the hardware that I’d purchased so far languished in my “other office” while I re-acclimated to this new normal!

If you’ve ordered anything online recently, you’ve certainly noticed that COVID-19 has disrupted a lot of the availability of items. This is especially true of the components in the DIY NAS: 2020 Edition. I was disappointed to see how much of it has been difficult to find in stock. If you decide to emulate this build, please use the comments below and help each other out in finding vendors with the parts in stock—or suitable replacements for hard-to-find components!

Case and Power Supply

For every NAS build, I always like to lead off with the key component, which in most years is the motherboard, especially given my preference for smaller motherboards and integrated CPUs. But this year is different! In late 2019, SilverStone contacted me and asked if I’d review the SilverStone CS381 (specs) if they sent one to me.

The SilverStone CS381 is really impressive on paper. There’s room in the case for a total of 12 hard disk drives, with 8 of those drives accessible in hot-swap drive bays. The case accommodates Micro-ATX, Mini-ITX, and Mini-DTX motherboards. While I prefer smaller cases for my DIY NAS builds, the CS381 is not a large case by any stretch of the imagination. Moreover, its bigger footprint allows it to accommodate expansion cards up to 267mm long or even a 240mm radiator for a water-cooling setup.

Of the many components in the DIY NAS: 2020 Edition, I’ve been most excited about getting my hands on the SilverStone CS381. Will it warrant an update to Brian’s Top 3 DIY NAS Cases? Stay tuned!

The power supply was a bit of a headache, but only because I didn’t pay attention to the details of the SilverStone CS381 and initially bought a full-size ATX power supply instead of the SFX or SFX-L power supply that the case supports. I wound up choosing the be quiet! BN639 SFX-L power supply, primarily because of its wattage and price. I am not a fan of modular power supplies. I’d much rather use a couple of zip ties to manage an extraneous power cord or two than have to dig around my office months—or even years—down the road to find where I stored the extra cables. Unfortunately for me, there just wasn’t a non-modular option within what I was shopping for; I would’ve gladly spent an extra $5-10 on one.

Motherboard, CPU, and CPU Cooler

One of the things that I was most excited about after selecting the SilverStone CS381 as the case was the additional motherboards that I’d get to shop for. Being able to include Micro-ATX motherboards more than doubled the number of motherboards that met the criteria of what I feel is important for a DIY NAS build. Since I’d already decided I wanted to build a DIY NAS with an AMD CPU and that I’d picked out a case that could support up to 12 hard disk drives, my ideal criteria for the motherboard was:

  • Mini-ITX or Micro-ATX
  • AMD AM4 CPU Socket
  • Support for 12 SATA Devices
  • Documented support for ECC RAM
  • Support for M.2 SSD(s)

Of the criteria, I knew that the 12 SATA devices and the documented support of ECC RAM would present the biggest challenges. While AMD Ryzen CPUs support ECC RAM, it’s not necessarily implemented on every motherboard, nor is it something that motherboard marketing departments have put a lot of effort into featuring in their marketing materials. In doing my research, the best piece of advice I read was to read reviews of the motherboards and focus on whether the reviewers tested the ECC functionality.

In regards to the SATA devices, I knew that I wasn’t going to find a motherboard that supported 12 SATA devices—especially at a “reasonable” price point. Moreover, I also wanted to use M.2 SSD(s) for the operating system, and that typically knocks out the use of some of the available SATA controllers on the motherboard.

With all of that in mind, I very quickly narrowed in on the ASRock X570M Pro4 motherboard (specs). In reading about the motherboard, I was confident it supported ECC RAM, and I liked that it had enough onboard SATA to support the SilverStone CS381’s 8 hot-swap bays. The motherboard very nearly met all of my ideal criteria by itself and at a fairly reasonable price. The only criteria it wasn’t able to meet—support to fill up all of the case’s internal and external drive bays—would get handled in the rest of my hardware purchases.

If you wind up using this build as a template for your own DIY NAS, be aware that ASRock has several X570-based motherboards with similar model numbers. Specifically, there’s an ASRock X570 Pro4, which is an ATX form factor and will not fit in the case.

Before you purchase, double-check to make sure that you are buying the ASRock X570M Pro4—the “M” is important! I nearly made this exact same mistake myself and decided to post this update after a commenter shared their unfortunate experience and wanted to warn others.

In building a bananas AMD NAS, I instantly scrolled to the top of AMD’s processor offerings and observed to myself, “Now that’s just a bit TOO bananas, Brian.” But in doing a bit of browsing of benchmarks of high-end CPUs, I was drawn to the price-to-performance of the AMD Ryzen 9 3900X (specs).

Moreover, whatever way you slice it, the AMD Ryzen 9 3900X is complete and absolute overkill for the processing power needs of a NAS. The selection of this processor really makes the machine capable of much more than just being a NAS. I’d encourage people who follow this blueprint to fully leverage the extra processing power to experiment with virtualization and host things that complement the storage capabilities.

An eager patron on Patreon started building his own DIY NAS from my parts list and helped me realize (thanks, Alex!) that because I’d selected the SilverStone CS381, I’d need a low-profile CPU cooler. Because I wound up selecting the AMD Ryzen 9 3900X, I opted for what’s widely regarded as one of the best low-profile AM4 CPU cooling solutions: the CRYORIG C7 Cu (specs). The CPU’s 105W TDP convinced me that I’d need to pack as much cooling as possible into the space I was allowed.


Very little expense was spared in the building of the DIY NAS: 2019 Edition, but I intentionally saved a few dollars by going with the bare minimum of recommended RAM. In fact, had I experienced any difficulty benchmarking that NAS, I was ready to buy more RAM and write about both of those decisions.

In making sure this year’s DIY NAS was more bananas than the prior year’s, significantly upgrading the RAM was a no-brainer for me. I picked two 16GB DIMMs of DDR4 2666MHz PC4-21300 unbuffered ECC RAM (specs) for the DIY NAS: 2020 Edition. A total of 32GB of RAM would be sufficient for the needs of this year’s DIY NAS build, although I would advise the virtual machine enthusiast to consider more, depending on the number and workload of the virtual machines they plan to run.

In this year’s DIY NAS, I’d also think that RAM would be one of the places where you’d see some opportunity to find savings. With my prior builds, especially the EconoNAS builds, I’ve been routinely pleased with how everything performs when using RAM that’s at the minimum side of the hardware recommendations.

Host Bus Adapter and Cables

My ideal motherboard would’ve had enough onboard controllers to support 12 SATA devices and two M.2 SSDs. That motherboard likely doesn’t exist, or comes with such a price tag that I’d never even consider it. Rather than try and find it, I opted to add a host bus adapter (HBA) to support the additional devices that I wanted the DIY NAS: 2020 Edition to handle. I chose an IBM M1015 (specs) to add those additional drives. The IBM M1015 is widely recommended for use with FreeNAS/TrueNAS (once you reflash its firmware) and adds support for an additional 8 SATA devices.

As is always the case, neither the motherboard nor the case includes enough SATA cables to support all of the drive bays. I complemented the standard SATA cables that shipped with the motherboard with two 3-packs of 18” SATA3 cables with locking latches. But because I’d purchased the IBM M1015, I’d need more cables than just the extra SATA cables.

Because of the drive backplane inside the SilverStone CS381 and the IBM M1015, an additional type of cable was needed. Effectively, the cable needs to connect from the two SFF-8087 Mini-SAS ports on the IBM M1015 to the two SFF-8643 Mini-SAS ports on the drive backplane inside the SilverStone CS381 case.

When everything was all said and done, the SilverStone CS381’s 8 external bays would be handled by the IBM M1015 and the 4 internal bays would be handled by the SATA controller included on the ASRock X570M Pro4.


One of the drawbacks of choosing from the tippy top of AMD’s CPU offerings is that you lose the integrated graphics option. While I’m a big fan of integrated graphics in DIY NAS builds, none of AMD’s compelling CPUs include it. I went online and found the least expensive low-profile PCI-e GPU that I could: the MSI Gaming GeForce GT 710 1GD3H LPV1 (specs).

I picked the MSI Gaming GeForce GT 710 1GD3H LPV1 so that someone could follow along at home and build their own DIY NAS. If I were building this for myself, I’d strongly consider rummaging around my spare parts bin. Or potentially, just borrowing a GPU from another machine, getting it up and running, and then running the NAS headless indefinitely into the future.


TrueNAS CORE Drives

The constant from my very first DIY NAS build to the DIY NAS: 2019 Edition has been my use of the SanDisk Fit and Cruzer Fit flash drives to hold the FreeNAS/TrueNAS OS. In my own NAS, I’ve mirrored the USB boot device and have recommended others do the same for years.

In building a ridiculous NAS, it seemed like I should wade into uncharted territory and consider something a bit more stable and durable than a USB flash drive. For this year’s DIY NAS, I decided I’d pick a pair of Corsair Force Series MP500 120GB M.2 SSDs (specs) with the intention of mirroring the OS across both of the SSDs like I’ve done for years on my trusty USB flash drives.

Update (03/14/21): The 120GB Corsair Force drive has been pretty hard for some to find. The 240GB variant, the Corsair Force Series MP500 240GB, is now cheaper than what I paid for the 120GB version and seems like it would be a fine substitution for those of you wanting to use the DIY NAS: 2020 Edition as a blueprint. I’ve updated the parts list below to reflect the 240GB version.

Final Parts List

Component Part Name Count Cost
Motherboard ASRock X570M Pro4 specs 1 $169.99
CPU AMD Ryzen 9 3900X specs 1 $429.99
CPU Cooler CRYORIG C7 Cu specs 1 $101.00
Memory Crucial 16GB DDR4 DIMM 2666 MHz / PC4-21300 ECC (CT16G4WFD8266) specs 2 $87.98
Case SilverStone Technology CS381B specs 1 $349.99
Host Bus Adapter IBM Serveraid M1015 SAS/SATA Controller 46M0831 specs 1 $70.96
Power Supply be quiet! BN639 600W SFX L Power Supply specs 1 $119.90
OS Drive Corsair Force Series MP500 120GB specs 2 $109.00
OS Drive Corsair Force Series MP500 240GB specs 2 $99.99
SATA Cable BENEI SATA3 18” Straight-through Cable with Locking Latch (3 pack) N/A 2 $6.99
SAS Cable Internal Mini SAS SFF-8087 to Mini SAS High Density HD SFF-8643 N/A 2 $13.30
GPU MSI GT 710 1GD3H LPV1 specs 1 $69.94
TOTAL: $1,728.29

NAS Hard Disk Drives

For the majority of DIY NAS builders, the most expensive component that you’ll wind up buying is the hard disk drives. More importantly, how much storage a DIY NAS builder needs and how much redundancy they need are both very personal decisions. Because of these factors, I’ve decided to stop buying hard drives for each year’s DIY NAS build and instead make some recommendations.

Here are a few tips and considerations to aid in picking out hard drives for your DIY NAS:

  1. Quantify how much data you need to store and how quickly you accrue additional data.
  2. Assume that you’ll be replacing drives at or near the end of their warranty.
  3. With FreeNAS/TrueNAS, growing your array is easiest by replacing smaller drives with bigger ones.
  4. Decide how much redundancy you want within your array (Note: Brian strongly recommends at least 2 drives’ worth of redundancy!)
  5. Buy drives from different manufacturers and/or vendors to try and maximize the chance that your different drives came from different batches.

Going through each of the above should give you an idea of how much storage capacity your array should need at the end of the hard drives’ warranty period.
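To put some rough numbers to those tips, here’s a back-of-the-envelope sketch of the kind of math involved. The drive count, drive size, and redundancy level below are illustrative assumptions, not recommendations for your build:

```shell
#!/bin/sh
# Back-of-the-envelope usable capacity for a single RAID-Z2 vdev.
# All numbers below are made-up examples, not sizing advice.
drives=8        # total drives in the vdev
size_tb=8       # raw capacity per drive, in TB
parity=2        # drives' worth of redundancy (tip #4: at least 2!)

usable_tb=$(( (drives - parity) * size_tb ))
echo "Roughly ${usable_tb} TB usable before ZFS overhead"
```

Keep in mind that ZFS metadata, padding, and the usual guidance to keep pools under 80% full will shave a healthy chunk off that raw estimate.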

Please also keep in mind that I have zero qualms about putting consumer-grade hard drives into a DIY NAS. Most people who find themselves looking at my DIY NAS builds are probably already storing all of their data on consumer-grade hard drives. It’s important to remember that the “I” in RAID originally stood for inexpensive, and it’s indisputable that consumer-grade hard drives are usually the best value when it comes to price per terabyte.

However, it’s also worth understanding and doing a bit of research comparing and contrasting the shingled magnetic recording (SMR) and perpendicular magnetic recording (PMR)—sometimes also known as conventional magnetic recording (CMR)—technologies used in hard drives. SMR drives achieve higher data density, but because data is laid down much like shingles on a roof, there’s a substantial performance decrease when certain tracks of data are written: the tracks being changed plus their neighboring tracks need to be read and rewritten.

I won’t proclaim to fully understand it, but the real-world implication is that ZFS doesn’t necessarily play nicely with all SMR drives. This recently came to a head when Western Digital sneakily started using SMR in its Red drives. When making hard drive suggestions for NAS builds using ZFS, I tend to suggest CMR drives.

Hardware Assembly, BIOS Configuration, and Burn-In


Putting the DIY NAS: 2020 Edition together was a rather straightforward event. I spent a couple hours over five different nights and had everything put together without too much frustration. The most difficult part of the assembly was easing the motherboard into the case, thanks to the sheer weight of the CRYORIG C7 Cu. Between the heft of the cooler and the horizontal support pieces spanning the SilverStone CS381’s interior, I had to incrementally inch the motherboard onto the case’s standoffs to get it mounted and aligned.

Reflashing the IBM M1015

Reflashing the IBM M1015 was by far the most challenging part of assembling and configuring the DIY NAS: 2020 Edition. The IBM M1015 has been around for ages and has been a go-to choice for DIY NAS builders for nearly as long. The one problem with the card is that it’s recommended you reflash it with a different firmware to put it into IT mode—especially if you’re using ZFS.

There’s no shortage of “how-to” guides on getting this done. When I did my research and bought the card, I thought it’d be no big deal and something I’d be done with in a matter of minutes, but it actually wound up being a bigger hassle than that.

At first, I read through a few guides:

  1. /r/DataHoarder: Flashing an IBM M1015 to IT mode
  2. How-to: Flash LSI 9211-8i using EFI shell
  3. ServeTheHome: IBM ServeRAID M1015 Part 4: Cross flashing to a LSI9211-8i in IT or IR mode

The problem with all of these guides is that the content has been around for so long that a lot of it has become stale. Links are dead, technologies have changed, and weird manufacturer-specific incompatibilities made working through any of these guides impossible!

Basically, I wound up using a mish-mash of steps from all of these guides to bungle my way through what was needed for the hardware that I picked out. I’m not going to try and reinvent the wheel by adding my own guide and muddying the waters further. But here are the important parts that I learned:

  1. Made a FreeDOS-bootable USB disk using the utility Rufus, and formatted it FAT32
  2. Extracted the various utilities (DOS and EFI) and firmwares to the USB disk.
  3. Used the v1 (important) version of the Tianocore EDK2 Shell_Full.efi, renamed it to Bootx64.efi, and placed it in the /efi/boot path on the USB drive.
  4. Used Legacy-mode to boot into FreeDOS on the USB drive to perform the steps using the megacli.exe and megarec.exe executables to preserve the SAS ID and clear the HBA’s memory.
  5. Rebooted and booted from the USB drive using UEFI-mode.
  6. Ran the steps using the sas2flash.efi utility for flashing the firmware and to restore the SAS ID.
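For the curious, the commands at the heart of those steps looked roughly like the following. This is a hedged sketch assembled from the guides above: the firmware filenames (sbrempty.bin, 2118it.bin) come from the extracted LSI packages, and the SAS address shown is a placeholder for the one printed on your card’s sticker.

```shell
# From FreeDOS (legacy boot): write an empty SBR and wipe the card's flash
megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0

# After rebooting into the UEFI shell on the same USB drive:
# flash the IT-mode firmware, then restore the card's SAS address
sas2flash.efi -o -f 2118it.bin
sas2flash.efi -o -sasadd 500605bxxxxxxxxx
```

Don’t run any of this against a card you aren’t prepared to brick; triple-check the firmware files match your exact card before flashing.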

BIOS Configuration

Over the years, there’s been a handful of critical BIOS settings that have wound up being a game-changer for the different DIY NAS builds, and I’ve always made sure to capture those kinds of changes as part of these blogs. However, the DIY NAS: 2020 Edition proved to be rather simple.

The only thing I changed in the BIOS was the boot order of the devices. I set the NVMe drive to be the primary boot device, then used the boot-menu function key anytime I needed to boot from the USB drive while installing TrueNAS CORE.


In my rush to get the DIY NAS: 2020 Edition into my rear-view mirror, I opted to run Memtest86+ overnight using the default configuration. My two concerns in building my DIY NAS machines are flaky hardware and poor installation.

The next morning, Memtest86+ had completed five passes with zero errors, and after rebooting into the BIOS, I didn’t have any concerns with any of the temperatures it reported.

TrueNAS Installation

Way back in 2012, I was disappointed in the information that was out there regarding FreeNAS. In the time since, lots of people (me included, I hope!) have shared their experiences setting up FreeNAS/TrueNAS CORE. More importantly, the content that iXsystems has created and shared is quite helpful.

It seemed inefficient to try and recreate the same content, especially when iXsystems has done such a good job with theirs. Please take a look at How to Set up and Install TrueNAS CORE and check out the FreeNAS and TrueNAS YouTube channel.


When I started building DIY NAS systems, I was particularly interested in the throughput and power consumption of my machines. Over the years, I’ve learned a couple of things:

  1. Network is your first bottleneck: Year after year, nearly every single DIY NAS I’ve built has easily saturated the Gigabit network interface that the overwhelming majority of our computers are connected with.
  2. Power consumption depends on usage: The single biggest power-consuming component in the DIY NAS is the CPU (about 105W), but it’s important to consider that a typical 7200 RPM hard drive can draw up to 25 watts. I put a lot of effort into gathering the same data from every DIY NAS build that I blog about, but those tests don’t really reflect how I use my own DIY NAS, and, more importantly, probably don’t reflect how you’ll wind up using yours.

Nevertheless, it’s still fun to grab all of the video I’ve recorded for the DIY NAS: 2020 Edition, copy it over to the NAS, and see it saturate the gigabit network interface on the NAS! Mission accomplished!


I love playing with the new hardware—it’s the most fun part of any DIY NAS build that I wind up doing. But getting a chance to evaluate the latest version of TrueNAS Core (formerly known as FreeNAS) is a huge perk! I’m always very reluctant to make changes to my own NAS, since it has become my primary place to store data. Having a sandbox machine to evaluate the latest and greatest offering is a huge value to me.

For the first time ever, I wished that I had a stopwatch running. From the time that I plugged in the USB drive with the TrueNAS CORE installer ISO on it and turned on the DIY NAS: 2020 Edition to the point where I was copying files over to the NAS itself was definitely under 30 minutes. I was surprised at how smoothly it went.

The release notes for TrueNAS CORE 12 contain a few items which have me intrigued. I’m particularly interested in where they state “Virtually every area of the platform has been updated and includes some major performance improvements, including SMB, iSCSI, ZFS and more.” But on top of that, the polish and refinement of the TrueNAS interface is a nice upgrade in itself.

It is probably worth pointing out that along the way, I was asked about this on Twitter by @JonathonMoore:

Similar to what Jonathon reported, I also found that TrueNAS CORE’s reporting wasn’t working particularly well. I didn’t see any of the graphs get populated with data and when I tried clicking around to the other categories of reports, it didn’t seem to have any effect in the TrueNAS interface. If reporting and monitoring are important to you, you may want to wait for what’s coming next.


If you thought I went overboard when I built the DIY NAS: 2019 Edition, then you’re definitely going to think that I completely overdid the DIY NAS: 2020 Edition. If this is what you think, I agree with you! This year’s DIY NAS build is going to go down as a missed opportunity for a course correction by returning to a more pragmatic approach to the DIY NAS.

Instead, I outdid last year’s DIY NAS build in nearly every regard—especially the price:

  • AMD Ryzen 9 3900X CPU (12 cores, up to 4.6GHz)
  • 8 hot-swap drive bays.
  • TrueNAS CORE 12.0 (mirrored on two 120GB NVMe SSDs)
  • A price tag of over $1,700 (and climbing!)

This machine is ultimately much more of a homelab server than it is a NAS. It has plenty of potential to handle quite a bit of computing responsibility beyond network-attached storage.

If you’re thinking that this is just too much money to spend on a network-attached-storage device, I agree with you. If you’re thinking that this is a top-notch homelab server to run your NAS (among other things) with a price tag to match it, I agree with you too! The AMD Ryzen 9 3900X did not break a sweat in anything that I asked it to do. There’s a tremendous amount of room to grow into with the DIY NAS: 2020 Edition.

The DIY NAS: EconoNAS 2020 blog—but no build—shortly followed the publishing of this over-the-top NAS. Among the things I liked most about the DIY NAS: 2020 Edition was its AM4 socket and the wide variety of CPUs that it could support. I tried to leverage that flexibility in picking out components for the EconoNAS.

What do you all think of the DIY NAS: 2020 Edition? Is it way too overboard for you, or do you think there’s a ton of potential and you’re excited to build your own little data center inside it? I’d love to hear in the comments below!


Update (01/01/2021): I rang in the New Year by picking a winner in the giveaway of the DIY NAS: 2020 Edition! I hope that everyone joins me in congratulating Matt H. of Orlando, Florida for winning the TrueNAS giveaway! Matt’s visit to my YouTube page (I hope he clicked like and subscribe!) in early December is the entry that wound up turning him into the winner of the DIY NAS. Altogether, 1,636 people combined for 7,893 entries into this year’s giveaway. Thanks to everyone for making the giveaway a big success!


Maturing my Inexpensive 10Gb network with the QNAP QSW-308S

About four years ago, I built an inexpensive 10Gb network for my computer, DIY NAS, and homelab server. In building the homelab server around surprisingly-affordable used Intel Xeon CPUs, I discovered there was quite a bit of inexpensive enterprise network hardware to be found on eBay.

Ultimately, I wound up spending around $120 in order to have dedicated 10Gb links between each of the three computers and I’ve been pretty pleased with it ever since.

I found inexpensive switches too, but rarely with enough 10Gb ports. Worse, all of the inexpensive switches that I found were meant to be installed in a rack and had a large footprint. For years, I’ve been pretty adamant about not wanting to dedicate that much square footage to computer infrastructure in my home.

Nitpicking my 10Gb Network

At a time when the hardware for 10Gb Ethernet over CAT6 was costing around $150-200 per port, I’d built a 10Gb network of my own across three machines cheaper than it would’ve been to add a single 10Gb network card (for CAT6) to one of my computers.

That being said, there were still a few minor annoyances that irked me:

  1. Windows acted…funny: After plugging in my new NICs, I noticed that if I rebooted the computer, I’d lose connectivity between my NAS and homelab servers. The easiest method I found to resolve this was to shut down and power off my computer and then power it back on. Additionally, I noticed that my screen saver stopped working on my computer.
  2. It wasn’t plug and play: I know some will scoff at this, but my preference when it comes to networking is that I plug it in—and it just works. I had to set up each of the 6 network interfaces (2 per computer) to use static IPs, to make sure there weren’t any conflicts with my router’s DHCP addresses, and then I used hosts files on each machine to help me remember where I wanted network traffic directed.
  3. All the network cables: Each computer has 3 to 4 network cables (1x CAT5 for connectivity to the rest of my network—including the Internet, 2x 10Gb cables, and 1x additional CAT5 for the IPMI interface on the NAS/homelab machines).

In the grand scheme of things, these were, and continue to be, no big deal. The worst of them is Windows’ behavior; a bit of research suggests that drivers were to blame and that there wasn’t much hope for updated drivers for the discontinued NIC that I had purchased. These Windows-specific side effects were easily managed by power cycling my computer after each reboot and by manually locking the computer.

Enter the QNAP QSW-308S

I’m mostly familiar with the QNAP brand from all of my NAS-related research. I frequently find QNAP NAS hardware comparable to my own DIY NAS builds and use it as a comparison point. A few weeks ago, I was absentmindedly scrolling through Amazon and a product listing caught my eye: a 3-Port 10G SFP+ and 8-Port Gigabit Switch.

I did a double take at the price of $159 and remarked to myself, “That can’t be right,” and scrolled back up to check the product out in more detail. I was surprised to find that what I read was correct; the product listing was for a QNAP QSW-308S 10GbE Switch, with 3-Port 10G SFP+ and 8-Port Gigabit Unmanaged Switch. With 3xSFP+ ports, 8xGigabit ports, and a smaller form factor, the QNAP QSW-308S’ specifications were ideal for a small office like my own.

Granted, it’d been several years since I looked into SFP+ switches—there’s a greater-than-zero chance that I’ve simply been oblivious to network gear pricing and unaware that there are now switches on the market which align with my needs. Regardless, I was surprised to learn that switches like the QNAP QSW-308S existed—and that they were pretty affordable.

Did I need a 10Gb Switch?

As it turns out, the answer to the question of whether I needed a new switch or not was: “Yes!”—but probably not for the reason you may be assuming. I had already conquered two of my pain points in setting up the 10Gb links among the three computers that I wanted on my 10Gb network. Four years later, the QNAP QSW-308S was too late to solve those two problems for me. I also felt there wasn’t any reason to hope that adding a 10Gb switch would resolve the Windows-specific issues I had encountered.

However, I bought the QNAP QSW-308S right away for a less obvious reason: rearranging my office. For a long time my DIY NAS and homelab server have sat on and next to a largely unused desk in my office. But at the end of September, I began a new job that will have me permanently working from home. I’ve slowly been repurposing the neglected desk to be my office space. But this desk quickly became too crowded to accommodate everything that was on top of it: my DIY NAS, a tablet stand, a Google Home Mini, my work laptop, docking station, monitor, keyboard, and mouse.

For the first few weeks of my new job I’ve felt a bit like I was working in a cramped server closet! I decided that I would invest some money in my office space. I bought a matching desk extension with the intention of moving the NAS and homelab machine a few feet further to the right. But because I was using direct-attach copper cables for my 10Gb network, I was already at or near the specification’s length limit for those cables.

In order to maintain my prior arrangement, I’d need to spend money on new media to interconnect my computers. I could’ve done something like purchase six SFP+ to RJ-45 Transceivers and some CAT6e cables (up to 10M), but that would’ve wound up costing over $250 all by itself. Buying a QNAP QSW-308S, placing it in the middle of where the computers are interconnected, and spending a little bit of time reconfiguring my network interfaces was going to be quite a bit cheaper.

Installing and Testing the QNAP QSW-308S

All of my other desks were assembled, had stuff on them, and I didn’t want to move them—so I took the easiest route and installed the QNAP QSW-308S at the back of the new desk, on the side closest to the desk it sits next to. My new switch would be located in nearly the same place that my homelab server had previously occupied. The network cable had to reach about 7 to 9 feet to my computer and then about 3 to 4 feet in the opposite direction to reach my DIY NAS and homelab servers on the opposite side of the new desk.

Once I had the new switch installed and the desk in position, I powered it up, plugged my computer into it, and powered my computer back on. On my computer, I disabled my Gigabit network adapter and updated the 10Gb interface to use DHCP. Just like I had hoped, it simply worked. My computer obtained an IP address from the router and I successfully tested my connectivity to the Internet.

What came next took a little while longer. With my desks firmly entrenched in their positions, and all the various cables behind my two desks meticulously (some might even say obsessive-compulsively) cable-managed, unplugging and removing the cables (10Gb, 1Gb, USB, and power), moving the computers, and then neatly plugging the cables back in was quite a lot of work!

Finally, once the computers were all plugged in, I reconfigured their interfaces to use DHCP and confirmed that my NAS, the homelab server, and my few virtual machines were accessible within the network and could connect out to the Internet.


I performed a pair of simple, crude tests to make sure that I was seeing throughput from my network that I’d be happy with. First, I fired up iperf with my homelab server as the server and my desktop as the client. I wasn’t surprised at all to see it fully utilize the 10Gb link—but I found it every bit as satisfying as when I saw it four years ago.
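Reproducing this kind of spot check amounts to a one-liner on each machine. The sketch below assumes the newer iperf3 is installed on both ends, and the address is a placeholder for the server’s 10Gb interface:

```shell
# On the homelab server: listen for test traffic
iperf3 -s

# On the desktop: push traffic at the server for 30 seconds
# (10.10.2.2 is a placeholder address)
iperf3 -c 10.10.2.2 -t 30
```

On a healthy 10Gb link you should see the client report somewhere north of 9 Gbits/sec.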

The next benchmark was throughput to the NAS—important to me because my NAS is the primary storage for all of my data. I measured this throughput using IOMeter, simulating the read of a file more than double the size of the NAS’s RAM. I monitored this from both my DIY NAS’ web interface and from inside Windows’ Task Manager.

Note: The FreeNAS widget reports throughput in bytes per second (Bps) and Task Manager reports it in bits per second (bps)

Frankly, I was a bit shocked with the results of my read test from the NAS. The 477.78MBps reported by FreeNAS is equivalent to just over 3.8Gbps and matched what I was seeing in Windows’ Task Manager on my computer. I was surprised because this exceeded similar tests that I ran back in 2016 by a considerable margin. Back in 2016, a similar test measured at about 300MBps; 478MBps is roughly 59% faster. I had absolutely no expectation of a performance increase from adding the QNAP QSW-308S to my 10Gb network, but I got one!
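The unit conversion that makes those two numbers agree is easy to fumble, so here it is spelled out (rounding 477.78MBps to 478):

```shell
#!/bin/sh
# FreeNAS reports bytes per second; Task Manager reports bits per second.
# Multiplying by 8 puts them in the same units.
mb_per_sec=478                       # ~477.78 MBps, rounded
mbit_per_sec=$(( mb_per_sec * 8 ))   # 3824 Mbps, i.e. just over 3.8 Gbps
echo "${mb_per_sec} MBps is about ${mbit_per_sec} Mbps"
```

Whenever a NAS benchmark looks suspiciously slow or fast, checking which unit each tool reports in is the first thing worth doing.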

Final Thoughts

When I built my inexpensive 10Gb network, I was excited to show how inexpensive it was to build a small network of 2—3 computers. I was quite impressed at how budget-friendly it was to have 10Gb connections to my DIY NAS and homelab servers available. But would I have still called it a bargain if I increased the price tag by an additional $160 to add a 3-port 10Gb switch?

Today, I think the QNAP QSW-308S is an excellent value. If it had been available to me back in 2016, I would have quickly purchased it and still would’ve felt that incorporating 10Gb into my home network for less than $300 would’ve been a great deal.

Does the existence of a switch like the QNAP QSW-308S make you more likely to build a 10Gb Ethernet network of your own? Or are you opting to wait for the price of 10Gb over CAT6 to come down? What other sorts of inexpensive faster-than-Gigabit networking options are you considering? I’d love to hear how you’ve built your own high-speed networks at home in the comments below!

Replacing my IFTTT Applets with Node-RED and Home Assistant

This is part of a series of blogs that I wrote after IFTTT announced their “Pro” subscription and restrictions on free accounts, which made it impossible for me to continue using their service. Given what they’ve done recently, I would strongly discourage everybody from using IFTTT.

If you’re interested in the details of how I got to this point, check out these blogs too:

  1. Ditching IFTTT for Home Assistant
  2. Replacing my IFTTT Applets with Automations in Home Assistant

So Far, Half of My Automated Tasks Have Been Recreated in Home Assistant

When I deleted all of my applets on IFTTT, I essentially had four different automated processes that I was using on a regular basis. In my prior blog, I recreated the first two of those automated tasks in Home Assistant:

  1. Porch and Staircase Lights: Turn the lights on just before sunset and off just after sunrise.
  2. Office Lights: Turn on my office’s lights when I get home and turn them off when I leave the house.
  3. 3D Prints: At the completion of a 3D print, turn the red cherry light above my printer on for 30 seconds.
  4. Pat’s Tweets: Send a notification to my phone with links to Pat’s tweets.

In my research into Home Assistant, I saw a lot of people supplementing it with Node-RED, so I decided that I’d try both the built-in automation scripting and Node-RED when creating my automated tasks.

What Is Node-RED?

I consumed a lot of people’s guides for using Home Assistant, and a number of them had complemented Home Assistant with Node-RED and spoke very highly of it. While I was aware of Node-RED’s existence prior to embarking on this journey, I don’t think I could’ve answered this question very well. Now that I’ve tried to tackle working with Node-RED, please understand that I’m still probably not qualified to answer the question well!

On Node-RED’s web page, they say that “Node-RED is a flow-based programming tool…” In consuming the content I’d come across, I liked the flowchart-like interface and immediately recognized that I would prefer developing my automated tasks inside this interface more than I would inside of a text editor.

At one point, I asked Pat, “Hey, do you think I should check out Node-RED?” and Pat answered my question with a question (Don’t you hate that?!), “What does Node-RED do that Home Assistant doesn’t do?” At the time, I didn’t have an answer to Pat’s question. I figured that the best way to find an answer was to create a few automated tasks using Node-RED.

3D-Printing Lighting

IFTTT isn’t really designed to handle automations with multiple steps as easily as it handles a single piece of automation. When I created my 3D-printing automation, it used multiple services (eWeLink, Google Drive, YouTube, Twitter, and more) to accomplish a number of tasks, and it resembled a Rube Goldberg machine, though nothing as amazing as this one that Guinness World Records shared on YouTube.

The automation I put together wound up being convoluted enough that I instantly deleted all of the steps except these: turning the red cherry light on, waiting a few seconds, and then turning it back off.

Even then, with those 2—3 steps, it wasn’t all that reliable, and a few months later that simpler automation mysteriously stopped working entirely. The cherry light would turn on, but never turn off. My office is on the front side of my house, and I was more than a little worried a 3D print would finish late at night and the cherry light would stay lit all night, potentially worrying my neighbors. As a result, I unplugged that light a couple of months ago and it has been dormant since.

Since originally writing that automation, I installed an IKEA Tertial lamp at my 3D printer to improve the lighting for my time-lapse photography of the 3D prints. Naturally, I plugged this new lamp into a Sonoff S31 smart outlet and wanted to incorporate the new lamp into my automation.

To start off, I decided that this new automation would perform the following steps.

  1. When the print begins
    1. Turn on the IKEA Tertial lamp.
  2. When the print ends:
    1. Turn on the Cherry light.
    2. Wait a few seconds.
    3. Turn off the Cherry light.
    4. Turn off the IKEA Tertial lamp.

In creating this automation, I got to flex one of the benefits of Node-RED. I could use one node to monitor the state of OctoPrint Printing and then use the switch node based on the two possible states (on and off) and create two different sequences of nodes to execute based on those two states.

Even better yet, debugging my new 3D-printing automation was infinitely easier using Home Assistant and Node-RED! Between the inject node and debug node, I was able to understand exactly what was going on inside my sequence.

So how did it turn out?

Pat’s Tweets

I’ve been using IFTTT combined with Pushover to send myself a push notification with each of Pat’s tweets for a really long time. This task doesn’t really fit under the umbrella of “home automation,” and I was a bit apprehensive that it wouldn’t be possible using Home Assistant. Home Assistant definitely has Twitter integration, but in my initial tinkering, I didn’t discover an obvious way to trigger automation based on Pat’s Twitter activity.

At first, I was complacent and thought I’d just leave this running as one of the three free applets that IFTTT was allowing. However, by this time I was really motivated to delete my IFTTT account and I wanted to demonstrate to others that there are options available that complement what Home Assistant does.

A little bit of tinkering quickly made it obvious that I’d be able to move this automation from IFTTT using Node-RED. Essentially, I needed to use the Twitter node, a function node, and the Pushover node to replace what I’d been doing in IFTTT. Within the function node, I wrote a little bit of code to set the message variables that the Pushover node needed, using the Tweet object returned by the Twitter node.
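A function node along those lines might look like the sketch below. It’s JavaScript because that’s what Node-RED function nodes use; the property names are my assumptions about the Twitter and Pushover nodes’ message shapes, so check them against your own debug-node output before relying on them:

```javascript
// Sketch of a Node-RED function node body, wrapped in a named function here
// so it can run standalone. Property names are assumptions; verify with a
// debug node wired to the Twitter node's output.
function buildPushoverMessage(msg) {
    const tweet = msg.tweet || {};
    msg.topic = "New tweet from @patsheadcom";       // Pushover title
    msg.payload = tweet.text || String(msg.payload); // Pushover message body
    return msg;
}
```

Inside Node-RED itself, only the function body (everything between the braces) goes into the function node’s editor.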

Creating Automated Tasks in Node-RED vs. Home Assistant

In comparing and contrasting the built-in scripting in Home Assistant with the features and functionality in Node-RED, I think that Node-RED is hands down the better choice. In my very basic discovery, here are a few of the things that I liked about creating these automated tasks in Node-RED:

  1. Node-RED’s interface is much nicer to work in.
  2. Node-RED’s scripting features seem to be broader than what Home Assistant can do.
  3. You’re able to hook into seemingly anything that Home Assistant is capable of from Node-RED.
  4. Node-RED has a wide array of functionality which isn’t available in Home Assistant’s scripting (for example: polling Twitter for Pat’s tweets!)

However, there were a couple of minor observations that I had after working with both. I’m not experienced enough to know if these are actual limitations of using Node-RED with Home Assistant, or if it’s just an uninformed suspicion on my part.

I suspect that as I learn more about using Node-RED and Home Assistant together, I’ll find these aren’t actual limitations and that more experience will allow me to develop better automation that accounts for these observations.

  1. Node-RED runs alongside Home Assistant, but independently of it.
  2. None of my Node-RED sequences show up in Home Assistant’s Logbook as being executed the way Home Assistant’s built-in automations show up in the Logbook.

What’s Next?

Now that I’ve completely replaced and enhanced my old IFTTT automations, I get to do more fun things with home automation. Since my whole family is now working from home permanently, most of the energy savings we’d get from home automation have been negated. But there are benefits to home automation beyond energy savings. In no particular order, here are a few things I’d like to start working on:

  1. Expose my Home Assistant server using Tailscale: Pat and I have both been interested in Tailscale for a while, and Pat’s experience has me encouraged about using Tailscale to access a number of things in my network remotely.
  2. Convert all my Home Assistant Automations to Sequences in Node-RED: I’m definitely impressed with Node-RED and I’m way happier doing the development inside its interface.
  3. Get Home Assistant working with Google: My home security system is working with Google Home, and there’s a bevy of sensor data out there that I’d like to be able to work with. Most importantly, the exterior door sensors.
  4. Level up to using Smart Light Switches: One of my best home automation ideas is to use the temperature data from my idle 3D printer and smart thermostat to turn my office ceiling fan on (or off). I also want to use smart switches to control the lighting on my house’s exterior.
  5. DIY Some Smart Things?!: The Internet is chock-full of DIY-able projects to enhance Home Automation: bed sensors, remotely opening/closing blinds, temperature data, motion detection, etc.

All of these things will help me start making my home smarter! I’d like to get to the point where the house’s lighting is automated well enough that someone rarely has to touch a light switch in the rooms we use most frequently.

Adios, IFTTT!

When I received the email from Sonoff about eWelink’s “VIP” service, I was discouraged—but I was not surprised. Having learned IFTTT was trying to make more money by charging vendors to use their platform, I begrudgingly accepted that I should expect to see the hardware I wanted to use be more expensive if it worked with IFTTT. I was not surprised when IFTTT announced their paid model. If they had success charging the hardware vendors, why wouldn’t they also try and charge the consumers of their service?

I had wanted to delete my IFTTT account from the moment these changes were announced, but I attempted to be pragmatic and retained it while I used Home Assistant and Node-RED to automate all the things that my IFTTT applets had been responsible for. Once I had them recreated and working, I was free to follow through with what I had set out to do!

Don’t Miss Out on the Giveaway!

Update (11/16/20): A winner has been found! It took a few tries, but Brian C. from Florida was picked late last week. In that time, Brian and I ironed out the shipping details, and just this morning I dropped the Raspberry Pi kit in the mail. The package is now on its way to Florida. Congratulations, Brian, and have fun with Home Assistant!

In my first blog about Home Assistant, I tested getting things running on a Raspberry Pi kit that I’d purchased. But since I don’t really have a need for it, I’m going to be giving it away on Halloween.

Home Assistant + Raspberry Pi 4 Kit Giveaway

Replacing my IFTTT Applets with Automations in Home Assistant

Much to my chagrin, IFTTT implemented a subscription model and greatly restricted what it’d allow users to consume for free, and the hardware vendor for my favorite smart switches also announced that only its paid subscribers would be able to access their IFTTT integrations.

Because I didn’t want to pay to subscribe to a service and then pay for an additional subscription to use my hardware with that service, I decided to ditch IFTTT entirely and switch to Home Assistant.

The setup of Home Assistant and getting it configured to be able to automate the same tasks that IFTTT was assisting with was simple and straightforward.

Would transferring over my automated tasks to Home Assistant be as easy?

Things I had automated in IFTTT

It took two to three dozen different applets in IFTTT to automate a few tasks at my house:

  1. Porch and Staircase Lights: Turn the lights on just before sunset and off just after sunrise.
  2. Office Lights: Turn on my office’s lights when I get home and turn them off when I leave the house.
  3. 3D Prints: At the completion of a 3D print, turn the red cherry light above my printer on for 30 seconds.
  4. Pat’s Tweets: Send a notification to my phone with links to Pat’s tweets.

Moving my automation off IFTTT is good, but enhancing it is way better!

I like all of these automations, but they were each shaped by the restrictions in IFTTT. The downside of IFTTT’s simplicity is that it made it cumbersome and difficult to do more things with different services without creating multitudes of applets.

This convoluted nature is demonstrated in my blog about incorporating my 3D printer into my IFTTT home automation. For each print, I was turning lights off and on and sharing a time-lapse video of the print in a tweet. But due to IFTTT’s offerings, it took a number of different services (my smart outlets, Google Drive, YouTube, Twitter, etc.) and dozens of applets strung together like a Rube Goldberg machine. It was complicated and error-prone enough that I immediately turned off most of the applets in the automation.

Recreating these automated tasks would also give me the opportunity to explore how they could be further simplified and improved.

Creating new Automations within Home Assistant

One of the things that initially drew me to IFTTT was that it was incredibly easy. I could set up and create really simple automated tasks from inside a mobile app. I didn’t need to be intimately familiar with a particular scripting language as a prerequisite for getting started.

Sharing how easy it was to get Home Assistant installed and working is helpful, but that work is a drop in the bucket compared to the effort required to create and maintain the automated jobs. I ended my previous blog with these questions left unanswered:

  • Would I have to abandon any of my automated tasks on IFTTT?
  • How difficult would it be to recreate my automation in Home Assistant’s interface?
  • Would developing more complex automations become convoluted like they had using IFTTT?

Porch, Staircase, and Office Lights

I have smart bulbs and smart outlets that I use to light a few areas in our house: my office’s complementary lighting, the porch, and a small table lamp near the staircase in our house. As my first automations in Home Assistant, I figured I could create rule(s) for each of these groups of lights.

Essentially, the automation is all very simple: an event (my location or the position of the sun) triggers the devices to turn on or off depending on the type of trigger. In total, I created six automations to handle all of these lights. In this blog, we’ll walk through the automation that I use to turn on the lights in my office when I arrive home.

Brian’s Office: Turn on lights when Brian enters Home

I wrote, modified, and rewrote some of these first few automations more than once. As I progressed, I realized that I would need a better naming convention and better descriptions so that related automated tasks were grouped together.


When I installed Home Assistant, I set up my home’s address and created a user for myself. When I installed the Home Assistant app on my iPhone, I logged on as that user. As a result, Home Assistant could then track my location well enough to know when I’m entering or exiting my home. I used this trigger to kick off the automation to turn on my office’s lights.


Because I’m only interested in recreating my crude IFTTT applets, I didn’t have any need to delve into Home Assistant’s conditions. As a result, I didn’t use conditions in any of the automations that I first created.

If I wanted to—and I do—I could use conditions to home in on whether or not the lights in my office actually need to be turned on when I return home. As an example, because I like to capture time-lapse recordings of my 3D prints, I leave the lights in my office on during a 3D print. Conditions could be added to my office lighting’s automation to check whether a 3D print was running and avoid needlessly turning the lights off or on.

Back when I used IFTTT, a number of my 3D prints’ time-lapse videos were ruined because I left the house mid-print. I will be able to avoid that in the future thanks to Home Assistant’s conditions.
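In Home Assistant's YAML, a condition like that is only a few lines. This is a sketch assuming the OctoPrint integration exposes a `binary_sensor.octoprint_printing` entity; your entity ID may differ:

```yaml
# Only run the automation when a 3D print is NOT in progress.
condition:
  - condition: state
    entity_id: binary_sensor.octoprint_printing
    state: "off"
```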


Next up was adding the action(s) to be executed when the automation is triggered. At first, I tinkered with the idea of adding an action for each individual device I wanted to turn on in my office. But ultimately I decided that using a group made more sense and kept the action simpler.
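Put together, the whole automation amounts to only a handful of lines of YAML behind the scenes. Here's a sketch, assuming a `person.brian` entity and a `light.office` group; both entity IDs are placeholders for whatever exists in your own setup:

```yaml
- alias: "Brian's Office: Turn on lights when Brian enters Home"
  trigger:
    - platform: zone
      entity_id: person.brian
      zone: zone.home
      event: enter
  action:
    - service: light.turn_on
      entity_id: light.office
```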

Two Down and Two More to Go!

I created, modified, and tested these first new rules an hour or two after hitting the publish button on my blog about ditching IFTTT for Home Assistant. Hopping in the car, driving down the block, looking at my Home Assistant logs (from the mobile app), driving back home, and looking at the logs again broke up a bit of the weekend’s COVID-19 monotony.

Really, the only gotcha that I encountered was realizing that I needed to expose my Home Assistant installation to the Internet. I discovered this the first time I left the house: I was off my WiFi before the mobile app could feed the GPS data to Home Assistant for it to determine that I was leaving. The DuckDNS add-on for Home Assistant made all of this easy and even has its own Let’s Encrypt features built in to enable SSL encryption between Home Assistant and its clients.
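For reference, the DuckDNS add-on's configuration is short. It looks something along these lines, with the token and domain as placeholders; the field names may vary between add-on versions, so check the add-on's documentation tab:

```yaml
lets_encrypt:
  accept_terms: true
  certfile: fullchain.pem
  keyfile: privkey.pem
token: your-duckdns-token
domains:
  - your-subdomain.duckdns.org
seconds: 300
```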

If I’d been a little more patient, I would’ve been excited to try and use Tailscale to access my Home Assistant server remotely, like Pat had done with his own machines. But I opted to go with DuckDNS because I was already moving so much faster than I could write blogs about!

What’s Next?

I managed to recreate half of my automated tasks from IFTTT in a matter of minutes, which was all the encouragement I needed. As I put the finishing touches on this blog, I’m already developing the remaining pieces of automation and I’ll capture their creation in the next blog. Rather than using Home Assistant’s built-in scripting, I’m going to evaluate authoring and executing these other two automated tasks in Node-RED.


When I was surfing through the videos, how-tos, and other guides that I came across in my Home Assistant research, I saw lots of people mentioning Node-RED and sharing development of their own automation from the Node-RED interface. The automated tasks that I saw were impressive in both their complexity and the ease of developing them, especially because of the flowchart-like interface.

As I prepared to publish my first Home Assistant blog, a few people suggested that I check out Node-RED. Because of their recommendations, I decided that I’d make sure to author half of my automated tasks using Node-RED and share what I thought as part of this blog. But as I started tinkering, I realized that using Node-RED was going to require its very own blog!

To be continued…

Setting up automations within Home Assistant was very straightforward and easy. I was able to create six automations to accomplish what took dozens of applets in IFTTT. What I created was simpler and more straightforward, and best of all, it was all orchestrated inside my own network by Home Assistant instead of a mish-mash of cloud-based services with IFTTT stitched between them all.

Before I started, I was supremely confident that I’d be able to easily migrate all of my lights’ and 3D printer’s automated tasks from IFTTT to Home Assistant without any difficulty. The fact I had these new automations created, tested, and functioning in a matter of minutes confirmed my assumption.

I asked a few questions at the start of this blog; let’s see if I’ve been able to answer them!

  • Would I have to abandon any of my automated tasks on IFTTT? To be determined—but probably not!
  • How difficult was it to recreate my automation in Home Assistant’s interface? Not difficult at all.
  • Would developing more complex automations become convoluted like they had using IFTTT? Creating automations in Home Assistant is a little less user-friendly, but less complicated—especially when multiple actions are required.

This is the second blog in a series; make sure you keep reading the other blogs in this series!

  1. Ditching IFTTT for Home Assistant
  2. Replacing my IFTTT Applets with Automations in Home Assistant
  3. Replacing my IFTTT Applets with Node-RED and Home Assistant

Make sure you also check out the details below on the Raspberry Pi kit that I’m giving away! The winner will be picked on Halloween!


Home Assistant + Raspberry Pi 4 Kit Giveaway

Ditching IFTTT for Home Assistant

For the past five years, my home automation has been pretty basic. I was happily using a few Sonoff S31 smart outlets and IFTTT to do some really simple things like turn off my office lights when I left the house and back on again when I returned home.

But then, in the past couple of months, two things happened:

  1. eWeLink announced their VIP plan for about $10/year and that all of their IFTTT integrations would only be available as part of this plan.
  2. IFTTT announced that it was going to restrict the number of “applets” you could run for free to three, and that if you wanted more, you’d have to sign up for their $9.99/month professional plan.

Basically, what I had been doing for free was now going to cost me $10 a year from the eWeLink/Sonoff team and another $120 a year from IFTTT. I always assumed that IFTTT would eventually try and entice me to pay for their services, but I expected it’d be by offering more features—not by extorting the hardware vendors and me into paying for what we’d already been using.

There isn’t really another way to say this. I think how IFTTT has operated recently is flat-out scummy. They’re trying to double-dip by charging both hardware vendors and their users to use their platform. That’s their decision to make, but my decision in response is to spend my money elsewhere.

On top of that, their value proposition isn’t even all that valuable to me. I subscribe to a lot of monthly services that are around $5 to $10 a month: Netflix, YouTube Music, Hulu, etc. I get infinitely more use out of each of them than I ever would from IFTTT. IFTTT’s value proposition isn’t even playing the same sport, let alone in the same ballpark.

It became glaringly obvious that I needed to migrate away from IFTTT—and fast!

This wasn’t even what I wanted to be working on!

Most of the traffic to my website is related to my yearly builds of DIY NAS machines, and I’ve had the parts for the DIY NAS: 2020 Edition picked out and waiting to be built for a really long time. So far this year, that blog has been delayed a bit more every time I turn around! Something equally important or exciting always seems to be capturing my attention. Whether it was COVID-19, building a new quadcopter, a weekend of eSkateboard fun, or finding a new “day job,” I keep getting distracted from the DIY NAS: 2020 Edition!

If you’re interested in the DIY NAS: 2020 Edition, keep reading! There’s a little bit of DIY NAS overlap in my home automation interests and a surprise at the end of the blog that you might both be interested in and familiar with!

What was I looking for in my Home Automation?

In thinking about what I wanted to do with my home automation next, I tried to set up a few criteria. I wanted to be able to:

  1. Recreate the crude automation accomplished in my applets on IFTTT.
    1. Turning the lights in my office off and on based on my location.
    2. Turning the smart bulbs on my porch on and off based on the sun’s position in the sky.
    3. Briefly turn on the cherry light above my 3D printer after every completed print.
  2. Reduce (or eliminate) my dependence on “freemium” services from 3rd parties.
  3. Orchestrate the home automation on my own hardware within my own network.
  4. Cost less than $130/year, preferably a lot less.

Enter Home Assistant

One of my drone-flying friends, Tom (aka SpacePants FPV on YouTube), is in the process of buying a house. A few weeks back he asked Pat and me about our own home automation. Pat’s been using OpenHAB for quite a while and shared some of his experiences and advice. I confessed to Tom that my home automation was rudimentary at best and that I didn’t necessarily recommend that he follow in my footsteps.

Tom came back a few days later and announced in our Discord server that he had decided he was going to use Home Assistant. Tom also suggested that I look into it. So you know what? I did check it out, and I was impressed! I have wanted to level up my home automation for a long time, but I let my lack of knowledge intimidate me and keep me complacent. But what I saw in looking at Home Assistant’s documentation and the content that other enthusiasts have shared convinced me to give Home Assistant a detailed look.

I punched “Home Assistant Sonoff” into Google, and one of the first results was a video on DrZzs’ YouTube channel claiming that Sonoff devices can work with Home Assistant without changing the firmware! By the end of the video’s introduction, I had already made up my mind to give Home Assistant a try.


One of the things that instantly drew me to Home Assistant was its plethora of installation options—both the number of installation methods supported by Home Assistant and the variety of installation guides crafted by its community.

Brian’s Kludge Virtual Machine

As you might know, I built a dual Xeon homelab server four years ago. In that time, I’ve tinkered with a few virtual machines, but the only thing my homelab server has been doing on a regular basis is hosting my Plex media server. Both my Plex server and my homelab server have been woefully underutilized. I was excited to try Home Assistant because it had a virtual machine image for the KVM hypervisor, which is what’s running on my homelab machine.

Normally, for my virtual machines, I wind up creating one or more iSCSI devices on my DIY NAS and using those for the new VM’s storage. However, in my haste and excitement about getting started, I wound up accidentally hosting the Home Assistant KVM image in a random Samba share on my NAS. While it works just fine, it bothers me that I’m not adhering to my own standards.

Upon realizing I’d set up my own Home Assistant VM in a bit of a kludgy manner, I intended to delete it and start all over from scratch. But I was surprised to see that Home Assistant had already discovered some sensor inputs automatically, and I quickly got distracted and started tinkering with Home Assistant. Eventually, I will add the iSCSI device and move the contents of the hard drive over to it—but that’s for later. Right now I’m having too much fun with Home Assistant to work on sorting that out.

Raspberry Pi

Of the recommended options, running Home Assistant on a Raspberry Pi is among the most popular. For the sake of writing this blog, I picked up a Raspberry Pi 4 4GB Starter Kit. The kit includes nearly everything you need to host a Home Assistant server: the Raspberry Pi 4 4GB, a 32GB MicroSD card, a USB MicroSD card adapter, a case, a power supply, heat sinks, a fan, and a mini HDMI to HDMI cable.

I was excited to see that the Raspberry Pi 4 kit was sufficient to run Home Assistant and that the kit was less expensive than the projected yearly cost for IFTTT (about $120/year). My own hardware cost to adopt Home Assistant was $0.00 thanks to prior investments in my homelab machine. But I was still encouraged to learn that buying hardware dedicated to Home Assistant would still be a better option than continuing on with IFTTT’s premium plan.

From past tinkering with other Raspberry Pi images, I suspected that getting Home Assistant to run on a Raspberry Pi 4 would be much easier than my somewhat-convoluted virtual machine. I wasn’t surprised at all to confirm that it was every bit as easy as I expected it to be. I wrote the image to the MicroSD card, assembled the kit, put the card into the Raspberry Pi, plugged it into my network, and turned it on. It booted up, started loading Home Assistant, and its web interface was available to start configuring a few minutes later!

Observe all the Things!

I’ve tried to encourage Pat to write more blogs about his home automation. Many of the things he has done with his OpenHAB server have been fascinating to listen to him talk about. For example, he created automation that detected when he launched a full-screen game in Steam and dimmed the lighting in his office for a better gaming experience.

Pat had a great tidbit of advice for Tom and me, and I want to share it: “Don’t worry initially about writing automation; instead, focus on getting as much data input as possible.” This is great advice because ultimately the key difference between what Pat has achieved and what I’ve achieved with our respective automation is the amount of actionable data. If I had plumbed as many data points into my home automation as Pat has, I would’ve quit using IFTTT years ago! The amount of data that Pat has available in his OpenHAB setup simply wasn’t available to me across my variety of cobbled-together 3rd-party services.

With a tiny bit of manual configuration and a bit of automated wizardry, my Home Assistant is now monitoring:

  1. A couple of network devices, thanks to UPnP allowing their discovery. (Note to self: disabling UPnP might be a good idea!)
  2. All sorts of data points from my mobile phone via the Home Assistant iOS mobile app.
  3. My Prusa I3 MK3 3D printer via Home Assistant’s OctoPrint integration.
  4. My Ring doorbell via Home Assistant’s Ring integration.
  5. My Tile Bluetooth trackers via Home Assistant’s Tile integration.
  6. All of my Sonoff smart outlet devices using the SonoffLAN project installed via HACS (Home Assistant Community Store).
  7. Our two different iRobot Roomba vacuums using Home Assistant’s iRobot Roomba integration.

Setting up Home Assistant to work with these devices was surprisingly easy. I didn’t have to re-flash any of my devices’ firmware. I didn’t have to do any work at the command-line on the Home Assistant virtual machine. I didn’t really have to do much at all—I added integrations and it just worked. Frankly, I was—and still am—amazed at how easy it was to get hooked into my devices.

What’s up next?

  1. Rebuild and enhance all of my IFTTT automation: I had a few tasks that I automated with twenty or so IFTTT “applets”: I would toggle my office lights based on my location, I would toggle the smart light bulbs on my porch at sunset and sunrise, and I would set off the red cherry light above my 3D printer whenever it completed a print.
  2. Level up my Home Automation: So far, my home automation has been pretty simple. I’d really like to make it smarter and expand its use outside of my office. I’d like to start looking at smart light switches to replace the switches throughout the house and maybe start using some door sensors. That way, when I go out at midnight to let the dogs out before bed, the house’s lights in the back yard could automatically come on.


If you’re using IFTTT today, you really need to check out Home Assistant. So far, everything about it has impressed me and I’ve really only started scratching the surface.

I’m assuming that most folks reading this blog don’t have an underutilized homelab server like I do. But even if you have to buy a Raspberry Pi 4 4GB starter kit, the hardware is a more cost-effective expenditure than continuing on with IFTTT’s paid model.

But beyond that, moving away from IFTTT makes everything a bit simpler. IFTTT’s ease of use was a big benefit, but its simplicity is also a hindrance. It was incredibly convoluted to automate turning on my cherry light, waiting 30 seconds, and turning it off each time a 3D print completed. It took six applets and three different services in IFTTT to accomplish this, and it wasn’t always reliable.

What sorts of hardware are missing from my home automation that I need to incorporate next? What kinds of automation do you think I should look into adding with the hardware that’s currently available? I’d love to hear about your own home automation projects and goals down in the comments below!

This is the first blog in a series; make sure you keep reading the other blogs in this series!

  1. Ditching IFTTT for Home Assistant
  2. Replacing my IFTTT Applets with Automations in Home Assistant
  3. Replacing my IFTTT Applets with Node-RED and Home Assistant



I’ve been burned in the past where I recommended things that I thought would work—but didn’t. Ever since, I’ve been buying and trying things before I recommend them whenever I possibly can. I bought the Raspberry Pi 4 4GB Starter Kit knowing that I would want to recommend using it with Home Assistant, but I’m not going to be using it.

You might be asking yourself, “What happens when Brian doesn’t need the things he buys for his blogs?” and the answer to that is easy! I give them away! If you’re interested at all, here are the details on the giveaway. I’ll be drawing the winner on Halloween. Good luck!

Home Assistant + Raspberry Pi 4 Kit Giveaway