Getting More Serious about Lasers: Ortur Aufero Laser 2


Over the past few years, Pat and I each have collected 3D-printers, Pat acquired a CNC machine, and we recently chipped in together for a LumenPNP pick-and-place machine for the Ooberlights project.

Between the two of us, we have a decent collection of machinery that we can use for different methods of fabrication. For a long time, I’ve been wondering what I might buy next (other than my Prusa XL 3D-printer, of course). I had narrowed down my most likely choices, and it has been a coin flip between my own CNC machine and some sort of laser cutter.

The only thing that’s held me back from making this purchase is the cost, both in the impact to my bank balance and in the time I’d need to invest in learning to use each machine. A machine of the size I’d want, capable of working with the materials I want to work with, is going to be expensive, and I have little to no expertise with either kind of machine.

So when I was contacted and asked if I wanted to review the Ortur Aufero Laser 2, I saw it as an excellent opportunity to try and answer this question once and for all!

Ortur Aufero Laser 2

Several years ago, I reviewed a very inexpensive laser engraver, the NEJE DK-8-KZ. It was fun to review, but it wasn’t very capable. Its limitations prevented me from incorporating it into any of my making. Skimming through its specifications, I realized that the Aufero Laser 2 was much, much, much more capable than what I had used in the past:

  • Workable area: 390mm × 390mm
  • Laser: 4,500–5,500 mW Short Focus Laser Module (LU2-4-SF)
  • Speed: 0–10,000 mm/min
  • Cuttable materials: Plywood, pine board, paperboard, black acrylic, leather, felt cloth, etc…
  • Engravable materials: Food, MDF, paperboard, black acrylic, leather, stainless steel, powder-coated metal, stone, etc…
  • Price: $399 (on sale for $369)

On paper, I thought that the Aufero Laser 2 seemed quite capable without breaking the bank!


Assembly

Before agreeing to review the Aufero Laser 2, I was curious about what was in the box. I was also curious about how challenging it would be to assemble and to learn how to use. As part of my preparation, I went spelunking through the first page of Google results and found myself encouraged by the fact that nearly every result said the laser engraver was easy to assemble. But given my level of competence, would I think it was easy to assemble too?

The assembly of the Aufero Laser 2 was exactly as I expected: incredibly simple. There were a handful of bolts, two at each of the frame’s four corners, and a pair of bolts (and washers) on each side of the engraver’s X-axis. The laser module’s installation was incredibly straightforward. A few power cables needed to be plugged into the laser module and each of the stepper motors. Finally, there was a tiny bit of cable management that needed to happen.

The assembly did not take much time, and I was ultimately successful, but not without a couple relatively minor problems.

  1. Laser Module ground cable: The laser module power cable’s ground is held in place by one of the four screws that go through the laser module’s plastic lid and down into the module’s metal body. The problem is that there’s a tiny countersink in the lid so that the screw sits flush. As I tightened that screw back down toward flush, I noticed that the ground connector was being deformed and coming loose. However, the screw is quite long, and I quickly realized that it did not need to be fully flush to do its job. I backed the screw out, straightened the ground connector, and then screwed it in tightly enough to hold the connector in place, but not so tight that it deformed again.
  2. Short X-Axis Bolts: The four bolts that attach the X-axis to the frame seemed quite short. I had a very difficult time getting the included M5 nuts to bite down on those bolts with the included wrench; difficult enough, in fact, that I am pretty certain I stripped the threading on two of the nuts. Thankfully, these M5 nuts are the same size as what I use to hold the propellers on my 5-inch quadcopters. I have a collection of spare colorful aluminum M5 nuts and decided to use four of those rather than the M5 nuts provided in the kit.

As you can see from this time-lapse video, I spent the largest amount of time struggling with the bolts on the X-axis. I invested that time because I was interested in being thorough in this review. For others buying the Aufero Laser 2, I’d recommend using a proper 8mm driver to try and avoid the issues I ran into by stripping those M5 nuts.

Laser-Engraving Software

The Ortur website lists two applications to control the Aufero Laser 2: LightBurn and LaserGRBL. Pat and Alex have both spoken highly of LightBurn, so I opted to give it a try. I watched a few of the videos from the LightBurn Software channel on YouTube, and within a few minutes I had proved that I could control the Aufero Laser 2 from my laptop. LightBurn worked well enough that I decided against trying out LaserGRBL. Once my trial license expires, I’ll be purchasing LightBurn.

My first few engraving jobs

My first few engraving jobs were all successful, mostly. Using Ortur’s materials reference spreadsheet for the LU2-4-SF laser module, I started off experimenting with engraving “briancmoses.com” into some scrap cardboard.

I was a tiny bit concerned about cutting through the cardboard and into my table below it, so I set the laser engraver atop another layer of cardboard. I didn’t secure the piece I was cutting very well, and the laser module’s shield moved my cardboard piece a bit. The power setting also wound up being a bit conservative. It definitely engraved into the cardboard, but so faintly that you could only make out the text at certain angles.

My second engraving job was a bit bolder. I decided that I wanted to engrave my site’s logo into the cardboard and then also cut around the logo, removing the engraved piece from the cardboard entirely.


It took me a couple of attempts to accomplish this well. On the first attempt, I left the laser power at the same conservative setting for the engraving, so once again my logo was very difficult to see. However, the power of the cut was nearly perfect. It cut straight through the cardboard and even scorched the second layer of cardboard that I had beneath it.

My second attempt at engraving my face into the cardboard and then cutting around it was even better. The laser was moving fast enough and set to a low enough power that it engraved—but did not burn—the top layer of the cardboard box.

While the result was still on the faint side, I was rather impressed with how it turned out. I had expected that cardboard would char quickly and that my attempts would all turn out quite well done. I was super impressed that the top layer(s) of the cardboard were removed without anything being charred by the laser.

As soon as I agreed to review the Aufero Laser 2, I started brainstorming things to try and create as part of this blog. A long time ago, I had a laminated QR code with an NFC tag that had our house’s WiFi credentials in it. When guests visited, we could just hand them that card and they could join our WiFi access point. I decided that I wanted to engrave something similar into wood.


Conclusion

I’m really impressed with the Ortur Aufero Laser 2. Before this review, I was interested in buying a laser cutter, but I had no idea what I was doing. I was especially worried about the cost, as powerful laser cutters get expensive very quickly.

Before being made aware of it, I was mostly oblivious to the fact that laser engravers like the Ortur Aufero Laser 2 existed. If I had known about it sooner, I’m pretty certain that I would already own one, especially knowing exactly how much it can do.

Using the Aufero Laser 2 has me very interested in buying an even bigger laser some day down the road. When—or if—that day comes, I am confident that buying the bigger laser will not replace the Ortur Aufero Laser 2.

But watt wait, there’s more!

The Ortur Aufero Laser 2 was not the only thing that was sent to me. I was also sent the Ortur YRR 2.0 Rotary Roller for cylinder engraving. Once I finish this blog and tidy up my studio, I’m going to get busy using the rotary roller, thinking of ideas to put it to use, and reviewing it too!

What sorts of projects would you use the Ortur Aufero Laser 2 and/or the Ortur YRR 2.0 Rotary Roller for? I’d love to hear your ideas in the comments below, on social media (Twitter, Facebook, or Instagram), or over in the Butter, What?! Discord server!

I like ESPresense so much…


…that I want to make it even easier for other Home Assistant users to enhance their own home automation!

I’ve been using ESPresense in my home automation for 3-4 months now. A few months ago, I wrote a blog about how easy adding presence detection using ESPresense was for me. Based on the feedback I’ve gotten in the comments and on social media, others seem to agree!

What is ESPresense?

I explain it in more detail in my earlier blog, but you use ESPresense to create a tracking base station by flashing ESPresense to a supported ESP32 development board. Depending on its configuration, ESPresense then relays information about nearby Bluetooth devices to Home Assistant.

In my case, I have several ESPresense base stations in different rooms that are tracking my Apple Watch SE and reporting the watch’s location back to Home Assistant as I move around the house.

ESPresense was the final piece of the puzzle that I needed to fully automate the lights and ceiling fan in my office. Since setting it up a few months ago, I rarely—if ever—have needed to use either switch.

What’s the catch, Brian?

To be honest, I don’t think there is a catch. ESPresense is an open-source project, the hardware isn’t difficult or expensive to obtain, I have a 3D printer, and there are lots of free designs for 3D-printed cases for the ESP32 D1 Mini.

As I see it, there are a few very minor obstacles for the Home Automation enthusiast who wants to deploy ESPresense base stations in their home:

  1. Access to a 3D Printer to make cases.
  2. Acquiring a supported ESP32 development board.
  3. Flashing the ESP32 development board with ESPresense.
  4. Configuring and deploying ESPresense Base Stations.

I am going to try and remove obstacles!

My experience with ESPresense has been so positive that I wanted to try and provide a shortcut around as many of these minor obstacles as I can for other Home Automation enthusiasts.

No 3D printer? No problem!

A few people have contacted me asking about how they can buy a 3D-printed case for their ESPresense Development boards. I encouraged them to see if there’s a 3D-printer at a nearby makerspace or library that they can use.

But that advice might not be very helpful. Learning the workflow of 3D printing and then successfully 3D printing requires an investment of time. Places that do on-demand 3D printing could be an option here, but they’re usually pretty expensive.

I figured I’d address this by designing my own case for the ESP32 D1 Mini and selling them on my Tindie Store.

Unit price for the ESP32 D1 Mini cases is $3.00 when ordering 5 or more.

This friction-fit case snaps together around the ESP32 D1 Mini. There are two holes in the case so that you can see the ESP32 D1 Mini’s LED and access the reset button. There are seven different styles for both the top and bottom sides of the case: solid, horizontal bars, vertical bars, diagonal bars, ESP32, Bluetooth logo, and WiFi logo.


If you’re curious about another color of filament or a different style of case, just ask! If there’s enough demand, I’m happy to buy the filament or put in the effort to meet your needs.

Sourcing ESP32 Hardware and Flashing ESPresense

Let me start off by saying this: It is not difficult at all to flash ESPresense to an ESP32 development board! As part of writing my first ESPresense blog, I probably flashed and re-flashed different ESP32 development boards 20+ times as I tinkered with the project. I absolutely never ran into any difficulties.

That being said, I’ve bricked my fair share of different devices with seemingly innocuous firmware updates. I’ve also accidentally bought the wrong hardware for small electronics projects like ESPresense too. If you’re worried about buying the hardware and flashing it with ESPresense, then consider buying one of my ESPresense Base Stations. Each base station includes:

  • An ESP32 D1 Mini pre-flashed with the latest ESPresense release.
  • A 3D-printed ESP32 D1 Mini Case
  • A USB power adapter
  • A short Micro USB Data/Power Cable

Like I did with the ESP32 D1 Mini cases, I am selling the ESPresense Base Stations on Tindie. I am hopeful people will find some value in being able to skip having to be concerned about sourcing the hardware and flashing ESPresense on their own.

Unit price for the ESPresense Base Stations is $12.00 when ordering 5 or more.

What’s Next?

I’m pretty curious about branching out! In my office’s lighting automation, I have a separate motion sensor that turns the lights on and an ESPresense node that keeps the lights on until just after I leave the room.

There are empty pins on the ESP32 D1 Mini, and ESPresense supports PIR motion, radar motion, temperature, ambient light, weather, and weight sensors. For other rooms in my house, I’d love to add a motion sensor to my ESPresense Base Stations and combine those two functions into a single piece of hardware. It’d be especially fun if that meant I got to design a new 3D-printed case to accommodate additional sensor types.

Whether you wind up buying my ESPresense Base Station(s) or you decide to go ahead and do it yourself, I hope this blog encourages you to check out the incredible presence-detecting capabilities of Home Assistant and ESPresense.

What sort of home automation projects have you done using an ESP32? I’d love to hear other home automation problems that people have solved with their own ESP32 development boards. Or come join the #home-automation channel in the Butter, What?! Discord server and tell us all about them!

ESPresense: Easy Room Detection for Home Assistant


I recently installed a Wink Relay in my office and “hacked” it to work with Home Assistant. This completed one of my home automation goals: all of my office’s lighting and its ceiling fan are accessible to Home Assistant for automating tasks.

Ever since first installing Home Assistant, I’ve been wanting to automate turning on my ceiling fan. I very easily threw together this Node-RED flow. On each update of the temperature from my office’s Zooz 4-in-1 sensor, it uses the temperature to decide whether to turn the fan on or off.

There’s a shortcoming with this flow, though. Turning on a fan doesn’t lower the temperature in the room, it only makes it feel cooler thanks to the evaporative effect of air moving over your skin. The only time this automation would be beneficial was if someone (me) was already in the room.

Room-Specific Presence Detection in Home Assistant

Thanks to the Home Assistant iOS App and the GPS features in my iPhone 12 Pro Max, my Home Assistant installation has a pretty good idea of when I’m home. For the longest time, I’ve used this as a condition to either turn the lights on or off inside my office.

But my phone doesn’t necessarily know what room I’m in. I don’t think the GPS is accurate enough, and I definitely don’t want to try and figure out the GPS coordinates for the boundaries of each of the rooms in my house. This is problematic, but even more problematic is the fact that my phone isn’t always near me. I routinely leave my phone in other rooms as I nomadically wander around my house during the day.

I was in need of an easier—and better—method to implement room-specific presence detection inside Home Assistant.

Enter ESPresense

Thanks to a reply to one of my tweets from @HolgBarath, I learned of the existence of the project, ESPresense. After it caught my attention, I took a look at the ESPresense website, the ESPresense GitHub repository, and watched a few videos on YouTube. I immediately knew that I wanted to give it a closer look.

What is ESPresense? On their website, they say it’s “An ESP32 based presence detection node for use with the Home Assistant mqtt_room component for localized device presence detection.”

ESPresense accomplishes its goal by providing an interface to easily flash their firmware onto an ESP32 development board, which enables the ESP32 board to monitor nearby Bluetooth low-energy devices. Scatter a few of those ESP32 devices across your house and set up the Bluetooth device(s) in Home Assistant you want to track and you’re ready to unlock the room presence achievement!

I have two Bluetooth devices that are pretty much attached to me all the time: my Apple Watch SE and my Medtronic 770G Insulin Pump. Of those two devices, I figured the watch was the better device to use ESPresense to track.

Brian implements ESPresense at home

I am relieved to report that setting up ESPresense was easy enough and well-documented enough that I don’t think there’s much need for this blog to turn into a how-to guide. The ESPresense install page has all the information you need to get started, including Everything Smart Home’s excellent video on ESPresense embedded in the page.

The software prerequisites for ESPresense are pretty straightforward. I already had my own functional Home Assistant installation, which includes an MQTT server. For the hardware, I decided that I’d use the following to build my ESPresense base stations:

  1. D1 Mini NodeMCU ESP32 ESP-WROOM-32 Development board (5 pieces) ($34.99)
  2. UorMe 1A 5V Single Port USB Power Adapters (6 pieces) ($10.96)
  3. Spater 6” Micro USB Sync Cable (5 pieces) ($7.98)
  4. A 3D-printed ESP32 D1 Mini case:

Altogether, I wound up spending $60 and some time on my 3D printer to add Bluetooth tracking to 5 different rooms in my house. I definitely could’ve done it cheaper too. I didn’t really need all the USB power adapters or cables, as I probably have plenty of both stashed somewhere in the house.

Flashing ESPresense onto my ESP32 boards was a snap from their Install page. Their website allows you to flash the ESP32 with the latest version of ESPresense from right inside the browser and to open a serial terminal connection to the ESP32 after it is done flashing.

For the most part, everything went as smoothly as I expected from the documentation. I thought I’d share a few things that I encountered along the way that might have made it even smoother.

Bluetooth Chatter: I have a lot of Bluetooth devices in my office: my insulin pump, watch, phone, work laptop, personal laptop, smart speaker, etc. Figuring out the Bluetooth details to create the sensor in Home Assistant wound up being a bit of a challenge. I used a couple of different methods to try and sort that out.

  • Used MQTT Explorer, connected to my MQTT server on Home Assistant, to monitor the espresense/devices topic (a scripted version of this idea is sketched just after this list).
  • Took my laptop, watch, and an ESP32 board to a room with no BLE devices and used the ESPresense Terminal to determine the Bluetooth IDs
  • Bluetooth scanner apps were recommended in a couple of different places, and I expected them to be helpful, but I didn’t find them to be especially useful. Everyone’s mileage may vary!
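
If you’d rather script that discovery step than watch MQTT Explorer, here’s a minimal Python sketch of the same idea. It assumes a paho-mqtt 1.x client, made-up MQTT hostname and credentials, and the espresense/devices/# topic layout described in the ESPresense documentation; adjust all of those for your own broker and ESPresense version.

```python
# discover_devices.py -- a quick sketch, not part of the ESPresense project.
# Written against paho-mqtt 1.x; the host, credentials, and topic layout below
# are assumptions you should adapt to your own MQTT broker and ESPresense version.
import json
import paho.mqtt.client as mqtt

MQTT_HOST = "homeassistant.local"  # hypothetical hostname
MQTT_USER = "mqtt-user"            # hypothetical credentials
MQTT_PASS = "mqtt-password"

seen = {}  # device id -> last room that reported it


def on_connect(client, userdata, flags, rc):
    # Every base station publishes the devices it hears under espresense/devices/
    client.subscribe("espresense/devices/#")


def on_message(client, userdata, msg):
    # Topics look roughly like espresense/devices/<device-id>/<room>
    parts = msg.topic.split("/")
    if len(parts) < 4:
        return
    device_id, room = parts[2], parts[3]
    try:
        distance = json.loads(msg.payload).get("distance")
    except ValueError:
        distance = None
    if seen.get(device_id) != room:
        seen[device_id] = room
        print(f"{device_id:40s} heard by {room} (~{distance} m)")


client = mqtt.Client()
client.username_pw_set(MQTT_USER, MQTT_PASS)
client.on_connect = on_connect
client.on_message = on_message
client.connect(MQTT_HOST, 1883)
client.loop_forever()
```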

ESPresense’s very active development and automatic updates: By default, the auto-update feature is enabled on the ESPresense base station. It is also a very active project on GitHub. The combination of these two factors might occasionally work against you. On the day I was setting everything up for the first time, a release happened that caused my ESP32s to repeatedly crash and be quite unreliable. I wound up disabling the auto-update and using the ESPHome-Flasher to flash an earlier, more stable, version.

Each base station required calibration: This should be expected—especially in areas of the house where base stations are near each other. I had to fine-tune each base station’s Maximum Distance to Report (in meters). It’s worth pointing out that the reported distance is an approximation based on the Bluetooth signal’s RSSI (Received Signal Strength Indicator). I ended up using Home Assistant’s developer tools to monitor the state and attributes of the sensor I created while I walked around each room.
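
For the curious, distance estimates like this typically come from a log-distance path-loss model along the lines of the Python sketch below. This is the textbook formula, not necessarily the exact math ESPresense uses internally (its reference RSSI and absorption settings are configurable), so treat the numbers as illustrative.

```python
# rssi_distance.py -- the textbook log-distance path-loss estimate, just to show
# why RSSI-based distances are approximations. ESPresense's internal math and
# its configurable reference RSSI / absorption values may differ from this sketch.


def estimate_distance(rssi: float, rssi_at_1m: float = -59.0, path_loss_n: float = 2.5) -> float:
    """Estimate distance in meters from a received signal strength in dBm."""
    return 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_n))


if __name__ == "__main__":
    for rssi in (-55, -65, -75, -85):
        print(f"RSSI {rssi:4d} dBm -> ~{estimate_distance(rssi):.1f} m")
```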

When it was all said and done, I had ESPresense base stations in my office, the master bedroom, the living room, and our dining room.

What about that Ceiling Fan Automation?

Incorporating a presence condition into the automation was a snap! I wound up adding a node to that flow to check which room ESPresense detected my watch in. In order for the fan to get turned on in my office, two conditions would now need to be met: the temperature would need to be over 75 degrees, and my watch would need to be nearest to the ESPresense base station in my office.
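
Stripped of the Node-RED specifics, the decision that the flow makes on each temperature update boils down to something like this little Python sketch. The 75-degree threshold and the room name are just my values; treat them as placeholders.

```python
# fan_logic.py -- a plain-Python restatement of the two-condition check in my
# Node-RED flow. The threshold and room name are specific to my setup.

OFFICE_ROOM = "office"
TEMP_THRESHOLD_F = 75.0


def fan_should_be_on(office_temp_f: float, watch_room: str) -> bool:
    """Only run the fan when the office is warm AND my watch is nearest the office node."""
    return office_temp_f > TEMP_THRESHOLD_F and watch_room == OFFICE_ROOM


# 78 degrees with the watch in the office -> True; the same temperature from the kitchen -> False
print(fan_should_be_on(78.0, "office"), fan_should_be_on(78.0, "kitchen"))
```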

We had a rather warm day last week, and the automation worked great. I was working on writing this blog and noticed that the fan turned on. As the day progressed, I wandered in and out of my office to do other tasks. It was awesome to see that the ceiling fan was on when I was in the office—but off when I was somewhere else.

Final Thoughts

I enjoyed implementing ESPresense enough that I went ahead and ordered another 5-pack of the D1 Mini ESP32 boards. I don’t necessarily need them, but I like the idea that we could have ESPresense base stations in every room in our house. Adding presence detection in Home Assistant for about $12 per room is a tremendous value!

Reliable room-based presence detection is going to open the door for creating better automation that hasn’t been available to me before:

  1. Combining motion detection and room presence to turn the lights on in my office, keep them on, and turn them off shortly after I leave the office.
  2. Creating new automation to automatically turn off the lights in my office when it’s empty.
  3. Using my iPhone’s charging status and room presence in the bedroom to deduce whether I’m in bed.
  4. Personalizing automations for other members of the household.

I have enjoyed using ESPresense so much that I’ve already published a second blog about how much I like ESPresense. In that blog, I talk about my experience after using ESPresense for 3-4 months, and I discuss listing two products on my Tindie store.

What other kinds of ideas am I overlooking? If you had presence detection enabled in your smart home, what kind of Bluetooth devices would you use for presence detection? What kind of tasks would you automate using presence detection? I’d love to hear what you think; share your ideas in the comments below!

A few weeks with TrueNAS SCALE


About two months ago, I was putting the finishing touches on assembling and burning in the DIY NAS: 2022 Edition. Since then, I’ve been working on a couple tasks:

  1. Build some confidence in the hardware that I purchased.
  2. Establish some confidence in the two latest TrueNAS SCALE release candidates.

From the beginning, my plan was to create a fresh install of TrueNAS SCALE, export my 7-drive pool from my old NAS, import it into the DIY NAS: 2022 Edition, and rebuild everything from scratch. In addition to that, I also wanted to move my virtual machines from my Homelab server onto this new DIY NAS.


Since building it, the DIY NAS: 2022 Edition has been stable. However, I’ve had one hiccup. I experienced a single read error on two different hard drives one night (2 read errors total). One of those errors pushed the total error count of one of the two hard disk drives past a threshold, and ZFS kicked the drive out of the pool.


Out of an abundance of caution, I ran long SMART tests on both drives, examined the SMART data, and felt confident that I would be able to put the supposedly degraded drive back into the array. After the resilvering was complete, I ran another set of long SMART tests on both drives, and I haven’t had any issues since.

In the couple of weeks since those errors occurred, I haven’t seen anything like them again, and I’ve been using my NAS pretty heavily. I’m not at all concerned about these errors. I often use my blog as a personal reference, so I’m noting them here for when—or if—they occur again.

TrueNAS SCALE does everything that I have been using TrueNAS CORE for

Above everything else, I needed TrueNAS SCALE to be able to replicate all of the features of TrueNAS CORE that I rely on. In the event that I couldn’t get SCALE to do this, I was not going to make any compromises and I planned to revert back to CORE immediately.

I’ve been using my DIY NAS as my primary storage for years now. I had a suite of snapshot tasks for the data stored on my NAS, SMB shares, an NFS share, cloud sync tasks to back up my NAS to Backblaze, a VM operating as a Tailscale Relay Node, and Nextcloud running in a VM and accessible via Tailscale.

It took me a handful of hours spread over a few days to recreate all of this from scratch on the DIY NAS: 2022 Edition. Setting everything up from scratch was effortless, thanks to the SCALE UI. The folks at iXsystems have done such a commendable job of keeping the management interface consistent between SCALE and CORE that rebuilding my NAS from scratch was a straightforward task.

TrueNAS SCALE quickly demonstrated to me that it was up to the task of meeting all of the needs that FreeNAS and TrueNAS CORE have met so well since I built my very first DIY NAS.

I was not able to consolidate all of my Virtual Machines onto TrueNAS SCALE

Between my Homelab and TrueNAS CORE machines, I had a few Virtual Machines: a Tailscale VM that I used as a Relay Node, Plex, and my Home Assistant virtual machine. I chose to deprecate the Tailscale Relay Node VM. I installed the “official” Plex app, I had some misadventures with the Nextcloud “apps” (more on that later), and ultimately wound up creating a new Nextcloud Virtual Machine to run on the DIY NAS: 2022 Edition.

After all of that, the only thing left running on my Homelab machine was my Home Assistant VM. I have a lot of Zwave home-automation devices which are controlled by a Zooz S2 USB stick (ZST10 700). On my Homelab machine, I’ve been passing through this single USB device to the Home Assistant virtual machine. But when it came time to do the same under SCALE, I learned it was going to be a bit trickier.

Currently, you cannot pass through an individual USB device using the SCALE interface. You can do it from the command-line, but anything you do at the command-line will get wiped out the next time the virtual machine restarts. The supported workaround is you need to pass through the entire USB controller to that virtual machine. Technically, I could potentially make this work but I’m currently unwilling to implement it. Redirecting the entire USB controller doesn’t seem like a very good solution. I’m fairly certain that other USB devices (keyboard, mouse, my UPS, etc.) that I wanted to plug into my NAS would get redirected to a virtual machine.

For the time being, I’m giving up my hope of decommissioning my Homelab server. I will continue to use it to host virtual machines that TrueNAS SCALE can’t support until SCALE resolves these requests. From the speculation I’ve read, it’ll be quite a few months before SCALE will support passthrough of an individual USB device to a virtual machine.

TrueNAS SCALE’s Apps Experience

I was pretty excited that TrueNAS SCALE was leveraging containerization, and even more so that they were going to try and make it simpler via its apps. I was pleased when Nextcloud and iXsystems announced that Nextcloud would be officially supported on TrueNAS. I think companies collaborating like this will have a lot of benefit for their users.

I installed two official apps from the TrueNAS SCALE catalog: Plex and Nextcloud. As I mentioned previously, Plex was up and running without any issues in mere moments. The official Nextcloud app was a bit different.

Because my apps and VMs are being installed to an SSD mirror, I needed to use the Nextcloud option to use an external data source (a dataset on my 7x drive array). But anytime I picked the options to use an external storage location, the file-permissions inside the Nextcloud folder wound up being incorrect. It took a little bit of tinkering inside the container to change the file permissions for both the Nextcloud directories as well as my external data in order to get it working.

Because I worried that any future update of the Nextcloud app would mean fixing those permissions all over again, I thought, out of curiosity, that I’d try the Nextcloud app from a different catalog: TrueCharts.

TrueCharts

TrueCharts is a separate organization that has created an impressive catalog for use inside TrueNAS SCALE. Initially, I was excited because it appeared that there was a pretty substantial amount of documentation, including some guided walkthrough videos on YouTube. But ultimately, I found the documentation and videos to be pretty spartan, and I was frustrated by the number of bad links that all 404’ed on TrueCharts’s website.

Regardless of the state of the documentation, I was able to get their Nextcloud app functional on my NAS using my external storage. However, what I really wanted to do was share my Nextcloud app with friends and family using Tailscale, and that wasn’t working for me. I never thought this was TrueCharts’s fault, but I lacked the expertise to get to the bottom of it on my own, and I was going to need some help.

I had been a member of the TrueNAS Discord server for a while before installing SCALE. I was curious about listening in on what people had to share about SCALE. I was especially curious to hear about what people were doing with “apps” under SCALE. One of the recurring themes was users who were frustrated with how TrueCharts’s Discord server was being moderated. I mostly chalked it up to people being frustrated, but the repeating pattern was troubling.

When I ran into issues with TrueCharts’s Nextcloud app not listening on my NAS’s Tailscale IP address, I experienced this firsthand. Aware of the emphasis on following rules, I read through everything that I could, scoured each channel’s pinned posts, and searched all the past threads before creating my own support thread. Within a few moments, I was told what I was doing was hacky, unsupported, my support thread was archived, and I was told to go ask my question in a different “experts” channel. I quickly found that I lacked the ability to even post in the experts channel, so I messaged the moderator who gave me those directions. Their response was to scold me for using direct messages. Without any other options, I decided to delete the TrueCharts Nextcloud app and leave their Discord server.

A week or so later, I got a chance to privately discuss this experience with a different moderator of the TrueCharts Discord. It was a pleasant conversation, and I appreciate their admission that it was handled poorly and must have been either a bug or a mistake.

My experience with TrueCharts sapped any interest that I had in their app catalog. Even worse, it had a negative impact on my opinion of TrueNAS SCALE. I hope that the TrueCharts community matures, grows, and evolves to the point where experiences like mine are rare. However, that journey is just beginning for TrueCharts. Until they complete that journey, I’d recommend exploring all other possible options before relying on anything from the TrueCharts app catalog.

Things seem to break in TrueNAS SCALE when running Tailscale on the host

One of the biggest reasons that I’ve been interested in TrueNAS SCALE is to install Tailscale on the host, use Tailscale to access my NAS from outside of my network, and share the VMs and apps that I’m running on my NAS. In migrating over to my new NAS, I learned that I wasn’t going to be able to share my Tailscale relay node with my friends and family. To accomplish what I wanted to do, I’d need to install Tailscale on the host itself.

It’s unsupported, but it is possible to install Tailscale on the TrueNAS SCALE host, which is an improvement over CORE. For a while, it even seemed to be working okay. I could access the official Plex app and all of SCALE’s hosted services (the web-management interface, SSH, SMB shares, etc.), but in the days after installing Tailscale, I wound up running into several other problems:

  1. Tailscale got wiped out when I upgraded TrueNAS SCALE from 22.02-RC1 to 22.02-RC2
  2. The Nextcloud app from TrueCharts only listens on the IP address of the host itself; any traffic to the host’s Tailscale IP is ignored.
  3. I had one instance where my SMB shares stopped being accessible until I stopped Tailscale.
  4. Others reported issues with Tailscale preventing Kubernetes from starting up

This has been a tremendous disappointment to me. Tailscale has been so easy that I figured if I could get it installed on the host, everything on the host would be accessible on my Tailnet. But that didn’t turn out to be the case. I knew what I was trying wasn’t supported and that I wouldn’t get much help with it, so I searched iXsystems’ Jira page for a Tailscale feature request to upvote and found this:

If I want to host something on my NAS and share it via Tailscale, it will need to go in its very own virtual machine. This is how I was handling sharing things with TrueNAS CORE, and I’m content to keep doing it while using TrueNAS SCALE. I’m not thrilled with this, but I can accept it—for now.

Will Brian be using TrueNAS SCALE in 2023?

For as long as I have been using TrueNAS CORE (or FreeNAS), it has met all of my requirements. I’ve never even seriously considered looking at anything else. I like the ZFS file system, I like how easy TrueNAS CORE makes it to manage ZFS, and I remain excited about the possibility of completely consolidating my Homelab and NAS machines onto one platform.

But the minute I saw the “hyperconverged” buzzword in SCALE’s marketing material, my expectations for SCALE shifted a bit. After installing SCALE, I expected that I’d be able to retire my Homelab machine. I also expected that I’d be able to access and share all the applications hosted on my TrueNAS machine using Tailscale. Unfortunately for me, those expectations haven’t been met—at least, not yet. If the following happens this year, I’ll be really excited:

  • Incorporate support for Tailscale on the host for accessing/sharing the core SCALE services and any apps.
  • Enable features to leverage KVM’s pre-existing support for passthrough of unique USB devices to a Virtual Machine.
  • Expansion of the list of “official” apps in the TrueNAS SCALE catalog.
  • A maturation of the TrueCharts community to err on the side of inclusion, rather than exclusion.
  • Development of other catalogs of apps to complement and compete with the current catalogs.

At the end of this year, I fully expect that I’ll still be using TrueNAS SCALE. By the time 2023 rolls around, I fully expect that many of these items will either have been resolved or will have a roadmap laid out for their resolution under SCALE.

Final Thoughts

Don’t let the words I’ve invested in the two areas where TrueNAS SCALE disappointed me mislead you. Overall, I’m incredibly satisfied with TrueNAS SCALE. It fell short on a couple of my expectations, but I was also expecting that SCALE would far exceed what I have historically asked my own NAS to do. If I were asked to make a recommendation, I’d encourage others to take a look at TrueNAS SCALE first. I think its hardware support and its unrealized potential make it a great choice for DIY NAS builders today.

That being said, SCALE did not live up to my expectations, either. If my expectations aren’t being met by the end of this year, it will be time to seriously consider alternatives like UNRAID, Proxmox, or even building my own homebrew server from scratch.

After reading about my experience with TrueNAS SCALE, what do you all think? If you’re a TrueNAS CORE (or FreeNAS) user, are you excited about switching over to TrueNAS SCALE, or are you sticking with TrueNAS CORE? If you’re a prospective new builder, do you have a preference for SCALE or CORE? I’m interested to hear what you think down in the comments!

DIY NAS: 2022 Edition


Early on in 2021, I started thinking about my own DIY NAS. I had 3D-printed a case, the MK735, a year or so earlier and I had been waiting for a good excuse to transplant my NAS into this awesome case.

As my DIY NAS kept reliably functioning, I decided that I would stop being patient and force the issue via upgrades. I calculated that the best upgrade for my NAS would be to max out the RAM. At the time, I was operating under the assumption that there would be all sorts of inexpensive secondhand DDR3 UDIMMs on eBay. But much to my chagrin, there were none to be found—especially not inexpensively. Buying the four 16GB DDR3 UDIMMs to upgrade to 64GB of RAM (from 32GB) was going to cost me $800!

I wasn’t opposed to spending $800 (or more!) to upgrade my NAS, but I would need to get more value out of the upgrade than just doubling the amount of RAM, so I decided that the purpose of this DIY NAS build would be to replace and upgrade what has served me so well the past few years.

Update (01/13/2022): Pat and I devoted an episode to the DIY NAS: 2022 Edition on the Butter, What?! channel on YouTube. Check it out!

Motherboard and CPU

Every year, I spend a ton of time searching for the ideal motherboard for a DIY NAS build—and every year it is a challenge. Once I decided that I would be keeping this DIY NAS for my personal use, it became almost impossible to find the perfect motherboard. I spent hours trawling through manufacturers’ websites, online vendors’ advanced search functions, and staring at my bank balance, looking for a motherboard that met these criteria:

  • Mini-ITX form factor
  • Integrated CPU, preferably passively cooled
  • Onboard support for at least 9 SATA drives
  • A substantial processor upgrade over my old Avoton C2550
  • No felonious assault of my bank account

I was searching for a unicorn. After trawling through every single Mini-ITX motherboard from 5-6 manufacturers, I concluded that my ideal motherboard doesn’t exist. My criteria very quickly narrowed the field down to motherboards that were prohibitively expensive or that would’ve required considerable concessions. To make matters worse, the motherboards that were closest to meeting my criteria simply could not be found in stock anywhere.

Ultimately, I wound up choosing the Supermicro X11SDV-4C-TLN2F (specs), as it checked off most of the boxes for my criteria:

  • Mini-ITX form factor
  • Intel Xeon D-2123IT (4-cores, 8 threads, 2.2GHz, 8MB cache, and 60W TDP)
  • Up to 512GB ECC LRDIMM
  • Up to 8 SATA devices (4 onboard, 4 with OCuLink)
  • 1x PCI-E 3.0 X8
  • Vendors had listed it as low as $599

Unfortunately, the Supermicro X11SDV-4C-TLN2F would not support all 9 of my HDDs/SSDs. However, it did carry a feature my current motherboard doesn’t have—two onboard 10Gb network interfaces—which freed up the PCI-e slot for an HBA and kept it a viable candidate. The integrated Xeon D-2123IT is quite a bit more powerful than my current Atom C2550, and the CPU is passively cooled. The fact that a few vendors had it listed at a somewhat reasonable price ($550-$600) encouraged me to place an order.

If finding the motherboard was difficult, actually buying it at a reasonable price was damn near impossible. Everywhere I found it listed either had an exorbitant price tag or the vendors with a reasonable price tag wanted you to wait for them to custom order it from Supermicro.

At first, I went the special-order route—but chose to cancel my order after weeks went by with no updates and the supposed estimate kept creeping up as more time passed. I wound up finding the Supermicro X11SDV-4C-TLN2F listed on eBay and decided to buy it from there for about $650, but that listing has since climbed up to nearly $800!

Case

Most years, shopping for the case for the year’s DIY NAS build blog is as much of a challenge as the motherboard. Not because good options don’t exist, but because I seem to have used most common NAS cases already. When a manufacturer makes a good design, it will continue to be popular for years to come without much (or any) refinement.

The minute that I decided I’d be keeping this year’s DIY NAS to replace the DIY NAS I built back in 2016, I knew right away what case I’d be using: the 3DWebe.com MK735. The MK735 is so perfectly suited for my personal NAS that I printed my own MK735 and have been waiting for a chance to use it. Among the MK735’s many features, here are my favorites:

  • It is 3D-printable!
  • 7x 3.5” drive bays
  • 2x 2.5” drive trays
  • Drives under motherboard design
  • 3 independently cooled chambers: motherboard, hard drives, and power supply
  • A grill-laden design that allows for excellent airflow

Don’t have a 3D printer? Or don’t want to spend the time (and money) to manufacture and build your own DIY NAS case? I don’t blame you; 3D-printing a DIY NAS case is no small feat! If it is helpful, I wrote a blog about my favorite DIY NAS cases that contains a few ideas of other cases you might want to try.

Power Supply

Picking out a power supply for this year’s DIY NAS build was easy, too! Thanks to all of my DIY NAS building over the years, I’ve accidentally bought one or two ATX power supplies when I actually needed an SFX power supply instead. I have been saving those extra power supplies, assuming I’d eventually be able to use one of them.

In this case, I had purchased a SilverStone ET550-HG (specs) for a prior DIY NAS build. The 550-watt power supply should be more than ample, considering my current DIY NAS is powered by a 300-watt power supply.

I’m hopeful that the SilverStone ET550-HG’s 80 Plus Gold certification will be an upgrade in terms of power efficiency and that its fans run quieter than the 1U Power supply that I’m currently using today.

RAM

The whole reason I decided to keep the DIY NAS: 2022 Edition for myself can be found in RAM. Originally, I wanted to upgrade my DIY NAS from 32GB of RAM up to 64GB. As part of doing that upgrade, I decided I’d swap my NAS into the MK735.

As part of the new NAS build, I still wanted to reach 64GB of RAM. I achieved this by using 4 of the Micron 16GB DDR4-2666 ECC RDIMM (specs). While 2666MHz RDIMMs are supported by the Supermicro X11SDV-4C-TLN2F, they will only operate at the fastest speed supported by the motherboard, 2400MHz. It just turned out that when I was shopping, 2666MHz RAM was priced more competitively than its 2400MHz counterparts.

Host Bus Adapter and Cables

Because I had nine SATA drives that I wanted to bring over into the DIY NAS: 2022 Edition, a Host Bus Adapter (HBA) card was going to be required. If I were building my own NAS from scratch today, I probably would not have gone this route. Instead, I probably would have matched up a pair of SSDs with 6x 12TB (or larger) HDDs to achieve the same amount of storage that I currently have.

To achieve what I wanted, I was going to need to buy an HBA. In the DIY NAS: 2020 Edition, I purchased an IBM M1015 HBA and used it to make sure that every possible drive bay of its case could be filled. In order to use the IBM M1015 with TrueNAS, it is suggested that you reflash it in IT Mode to give the filesystem unfettered access to the drives themselves. Flashing that firmware wasn’t a tremendously difficult task—but it was challenging enough that I searched for something that had already been flashed for use with ZFS, and I found this on Amazon: LSI 9211-8i P20 IT Mode for ZFS FreeNAS unRAID 6Gbps SAS HBA (specs). Spending $10-$20 more than I spent on the IBM M1015 was an easy decision when it took me two to three hours just to flash the IBM M1015.

To go along with the LSI 9211-8i, I picked up a pair of Mini SAS to SATA cables (SFF-8087 to SATA Forward Breakout). All seven of my data drives would be plugged into the HBA, the motherboard’s SATA ports would be used for the OS drives, and between the HBA and motherboard there’d be 7 empty SATA ports for potential future use.

Storage

TrueNAS Scale Drives

Up until last year, I had been running the OS on USB flash drives (often mirrored across two thumb drives) on all of my DIY NAS builds. For a while now, this has been in contradiction to the suggested hardware recommendations. There’s apparently enough being written to the OS drive now that USB drives are likely to wear out, especially inexpensive flash drives.

Inexpensive SSDs are not difficult to find—but the really inexpensive ones tend to sell out and never be restocked, which makes it a challenge for me to recommend a particular model. Equally challenging is that SSDs keep increasing in capacity and value: even assuming you can find an appropriately sized SSD (32-64GB) or SATA DOM, you’re not really getting the best value when buying it.

For me, there was just too much potential value to pass up, so I bought a pair of Crucial MX500 1TB SSDs to store the operating system on. At the time I bought the SSDs, I figured that I’d try and maximize their value by pre-partitioning the SSDs to use for the OS, ZFS cache (L2ARC or SLOG), and/or some fast storage on the NAS.

Do you think this will work out? Or do you think these drives will wind up being 99.97% wasted? Keep on reading to find out!

NAS Hard Disk Drives

The most important parts of your NAS build are the hard drives. Regardless of your budget, your storage will likely wind up being the most expensive component of your DIY NAS. It’s also impossible to make a one-size-fits-all recommendation to every prospective DIY NAS builder. Rather than make specific recommendations of which hard drives to purchase, I like to make suggestions:

  1. Measure and project your storage needs: How many hard drives you need for your DIY NAS ultimately depends on how much storage you currently need and how much you project that will increase over time (there’s a quick back-of-the-envelope sketch after this list). Oftentimes, spending more money now will be cheaper in the long run.
  2. Understand your data’s importance: Knowing the importance of your data is critical to choosing how many HDDs to buy. Ask yourself, “What happens if someone steals my NAS?” Your answer to this question should help you understand what your hardware redundancy and backup plan should be. If your data is critical to you, your budget should include both hardware redundancy and some sort of off-site backup.
  3. Buy CMR Hard Drives for use with ZFS: Many Western Digital customers were shocked when their so-called “NAS-grade” drives were unreliable in their NAS systems as a result of Western Digital surreptitiously sneaking SMR technology into their Red products. Typically, the easy way to avoid this is by purchasing NAS-grade hardware, but the better advice is to buy drives using Conventional Magnetic Recording (CMR). Personally, I don’t really care if my HDDs are NAS-grade or not; if I find consumer-grade CMR HDDs, I’d happily store my data on them.
  4. Consider Shucking External HDDs: Usually the most cost-efficient method of buying hard drives is by buying external USB hard drives and removing the hard disk drive from inside the enclosure.
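
Here’s the kind of back-of-the-envelope math I have in mind for suggestion #1, as a small Python sketch. The current usage, growth rate, and drive sizes below are made-up numbers for illustration; plug in your own, and keep in mind that RAID-Z2 usable space is roughly (number of drives minus two) times the drive size, before ZFS overhead.

```python
# storage_projection.py -- illustrative numbers only; substitute your own.

current_tb = 6.0        # how much data you store today (TB)
growth_per_year = 0.30  # projected annual growth rate
years = 5

needed_tb = current_tb * (1 + growth_per_year) ** years
print(f"Projected need in {years} years: {needed_tb:.1f} TB")

# Roughly how many 12 TB drives would that take in a RAID-Z2 pool?
drive_tb = 12.0
drives = 4  # starting at four drives, a sensible minimum for RAID-Z2
while (drives - 2) * drive_tb < needed_tb:
    drives += 1
print(f"~{drives} x {drive_tb:.0f} TB drives in RAID-Z2 "
      f"gives roughly {(drives - 2) * drive_tb:.0f} TB usable (before ZFS overhead)")
```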

Here are a few recent deals that I’ve seen which have been compelling. I’ll be maintaining and updating this throughout as I learn about good deals on hard drives:

I purchased some hardware you probably won’t need, too!

Once I decided to keep this DIY NAS for my own use, I knew I would be buying a few components that I wouldn’t necessarily recommend that others buy.

RJ45 to SFP+ Transceivers

Five years ago, I built out an inexpensive 10Gb network using some secondhand 10Gb SFP+ network cards and setting up a point-to-point network between my desktop, my Homelab machine, and my NAS. About a year ago, I added a 10Gb SFP+ switch to my network, which simplified things quite a bit.

Unfortunately (for me) the Supermicro X11SDV-4C-TLN2F’s 10Gb interfaces are both RJ-45. As a result, I bought a 2-pack of iPolex 10G SFP+ RJ45 Copper Transceivers to allow the DIY NAS: 2022 Edition to work with my network.

LEDs

I’ve been impressed by builds that put LEDs inside computers. I think, when done well, they look pretty awesome. But I have never been tempted to do it myself. I don’t think a computer should be a focal point of your office’s decor. My preference has always been that computers are black or beige boxes, preferably out of view. But ever since starting The Butter, What?! Show with Pat, I’ve been looking for ways to make the background of my office a bit more interesting to look at and to try and feature my enthusiasm for DIY NAS topics. As a result, I had a brainstorm and asked myself, “What if I put LEDs into my DIY NAS?”

We livestream the recording of The Butter, What?! Show on the first Tuesday of every month at 9 p.m. Central Time. What we wind up recording gets broken up into weekly episodes that are published to YouTube on Mondays. Come join us!

Naturally, there’s no hardware on Supermicro X11SDV-4C-TLN2F for controlling the LEDs, so I picked out the Nexlux WiFi Wireless LED Smart Controller to drive the LEDs. I purposefully picked something that I thought would work well with my Home Assistant server and began thinking of ways that I could incorporate my NAS into my Home Automation.

I bought the same LEDs that I’m currently using on the back of the sound-absorbing panels in my “recording studio.” I’ve been happy with how the SUPERNIGHT LED Strip Lights have worked out so far. I’m pretty certain that I’ll have quite a few feet of LEDs left over. What sort of LED lighting projects should I incorporate into my Home Automation? Let me know in the comments below! To round out my LED product purchases, I picked up a 4-pin molex to 2.1mm barrel jack to power the LED controller and LED strip from the computer’s power supply.

Final Parts List

| Component | Part Name | Qty | Cost |
|---|---|---|---|
| Motherboard | Supermicro X11SDV-4C-TLN2F (specs) | 1 | $775.00 |
| CPU | Intel Xeon Processor D-2123IT (specs) | N/A | N/A |
| Memory | Micron 16GB DDR4-2666 ECC RDIMM (MTA18ASF2G72PDZ-2G6E1) (specs) | 4 | $67.98 |
| Case | 3Dwebe.com MK735 (specs) | 1 | $19.99* |
| Host Bus Adapter | LSI 9211-8i P20 (specs) | 1 | $89.00 |
| Power Supply | SilverStone SST-ET550-HG (specs) | 1 | $64.63 |
| OS Drive | Crucial MX500 1TB 2.5” SSD (specs) | 2 | $99.99 |
| SAS Cable | Internal Mini SAS SFF-8087 to Mini SAS High Density HD SFF-8643 | 2 | $11.99 |
| Misc. Network | 10G SFP+ RJ45 Copper Transceiver | 2 | $50.98 |
| LED Controller | Nexlux WiFi Wireless LED Smart Controller | 1 | $10.99 |
| LED Lights | SUPERNIGHT LED Strip Lights, 5M SMD 5050 | 1 | $14.99 |
| Molex to 12V Power Cable | CRJ 4-Pin Male Molex to 12V DC 5.5mm x 2.1mm | 1 | $5.99 |
| TOTAL: | | | $1,565.41 |


[Photo gallery: all the parts; the MK735; the MK735 with its door open; the Supermicro X11SDV-4C-TLN2F; the Micron 4x16GB DDR4-2666 ECC RDIMMs; the LSI 9211-8i HBA and cables; the 2x Crucial MX500 1TB SSDs; the LED lights and WiFi LED controller; and the SilverStone ET550-HG power supply.]


Hardware Assembly, BIOS Configuration, and Burn-In

Assembly

Normally, when describing the assembly of a DIY NAS build, I strive to be really detailed for people who might choose to build their own DIY NAS using some of the parts from my blueprint, especially the case. 3D-printing and building the MK735 has been one of my favorite projects, but it took a lot of time, effort, and materials. I don’t expect that many people reading this blog will be using the MK735, so I am keeping the assembly notes a bit brief.

The two most difficult parts of assembling my DIY NAS in the MK735 were mounting the motherboard itself and mounting the HBA. The trouble that I ran into in mounting the motherboard was twofold. First, it fits like a glove—there’s very little play for the motherboard once it is down in its tray. Second, you wind up threading the holes as you drive the screws into them for the first time. That makes putting the motherboard’s screws in pretty challenging—if I had to do it all over again, I’d remember to pre-thread each of these holes. However, it’s important not to over-tighten the screws, either; otherwise, you might strip the hole itself.

There’s enough room for the HBA, but I found that I couldn’t connect the cables when the HBA was installed. This is due to the MK735’s very compact nature—and I always expect these kinds of challenges when working with a small form factor case. Removing the HBA, installing the cables, and then bending the cables 180 degrees around the edge of the HBA allowed me to install the HBA.

I wound up deciding that I would save the installation of the LED controller and LED lights for a future blog. I think there’s an entire blog’s worth of content between installing the LEDs, incorporating them into my home automation via HomeAssistant, and automating some NAS-specific tasks. Plus there’s some exciting overlap with our Ooberlights project too!

Here is a time-lapse video of the hardware assembly, TrueNAS installation, and the configuration I did to set up a simple share on the DIY NAS: 2022 Edition. And if you’re really interested, I also made a nearly real-time version of the assembly video too after a few people had requested it previously.

BIOS Configuration

In the DIY NAS builds of the past, I seem to recall a sneaky BIOS configuration change that needed to be made in order to get the DIY NAS to function. I noted that and said to myself, I need to remember this when I write the blog! Every year since, I’ve kept this heading in the blog for similar sneaky configurations, but I’m still relieved that pretty much the only change I ever make in the BIOS is to change the order of which devices it’ll boot from—and this year’s NAS build was no different.

Burn-In

Typically, my DIY NAS builds aren’t in my possession for very long, so I like to torment the hardware a little bit to make sure there’s nothing wrong with it. However, since I’m planning to use the DIY NAS: 2022 Edition as a replacement for my current NAS, I can be a little more relaxed with my burn-in. I plan to run the DIY NAS: 2022 Edition (using some old drives) next to my current NAS for an extended period and evaluate its performance. Once I’m fully confident in the new machine, I will migrate my drives and settings over to the new NAS.

Regardless, I always burn-in any computer I build by running Memtest86+ for three single-threaded passes, and this year’s NAS build was no exception.

TrueNAS SCALE

I have been excited about TrueNAS SCALE since its announcement. What is TrueNAS SCALE and how does it differ from TrueNAS CORE (formerly known as FreeNAS)? I haven’t really used TrueNAS SCALE enough to pretend to have any expertise, but here’s a really high-level stab at some important features from my point of view (a DIY enthusiast):

  • Maturity: TrueNAS CORE (FreeNAS) has been around for a long time and TrueNAS SCALE just recently produced its first release candidate.
  • Operating System: TrueNAS CORE is built atop FreeBSD and TrueNAS SCALE is built atop Debian
  • Both use ZFS: OpenZFS is at the root of both CORE and SCALE. In my opinion, that makes them both pretty interchangeable for my usage.

Personally, I’m excited about TrueNAS SCALE for a few reasons:

  1. SCALE’s “hyperconverged” approach seems ideal for what many DIYers—myself included—are hoping to do with their NAS and/or Homelab builds.
  2. I’m less incompetent using Linux than I am using FreeBSD.
  3. SCALE should allow me to retire my separate Homelab server
    1. Migrating KVM VMs over to my TrueNAS SCALE box
    2. Replacing existing VMs (Plex, HomeAssistant, Nextcloud) with Linux Containers

Installing TrueNAS SCALE

In picking out components for the DIY NAS: 2022 Edition, I chose to buy a pair of Crucial MX500 1TB SSDs because the value of larger SSDs couldn’t be passed up. But in its current form, TrueNAS SCALE partitions and uses 100% of the OS drive’s space—which negates that potential value.

When I decided to use a pair of 1TB SSDs to act as the OS drive, I was determined to try and find a method that would allow me to create an appropriately sized partition on the SSDs for TrueNAS SCALE and then partition the remaining space to use for fast storage and/or ZFS caching (L2ARC and SLOG).

At any rate, I found this guide to partitioning an SSD as part of the TrueNAS SCALE install in the /r/truenas subreddit. Effectively, you modify the TrueNAS installation script to constrain the size of the boot partition created during the installation. After installing TrueNAS, you use the command line to partition the remaining space, create a ZFS pool from those new partitions, export the pool, and finally import the pool using the TrueNAS SCALE web interface. Following this guide worked perfectly for me, with one wrinkle: the specific line which needed editing. In the 9 months since this post was shared on Reddit, the contents of /usr/sbin/truenas-install have changed. The author probably predicted this would change and shared the script’s location in the source code repository—so it was easy to go figure out where that line could be found in the current version of the script.
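
If it helps to visualize the post-install half of that guide, here’s a rough Python sketch of the zpool steps involved. The partition paths and pool name are hypothetical, and this is not the guide’s procedure verbatim; follow the actual guide (and the current truenas-install source) rather than this sketch.

```python
# make_fast_pool.py -- a rough sketch of the post-install steps, NOT the guide's
# script. It assumes you already carved a spare partition out of each SSD (the
# sda4/sdb4 paths below are hypothetical) after constraining the installer's
# boot partition, and that the final import happens in the SCALE web UI.
import subprocess

PARTITIONS = ["/dev/sda4", "/dev/sdb4"]  # hypothetical leftover-space partitions
POOL_NAME = "fast"


def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


# Mirror the two leftover partitions into a new pool...
run(["zpool", "create", POOL_NAME, "mirror", *PARTITIONS])
# ...then export it so it can be imported cleanly through the TrueNAS SCALE UI.
run(["zpool", "export", POOL_NAME])
```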

I named the pool made out of partitions on the SSDs “fast” and then created a RAID-Z2 pool named “slow” out of 7 high-mileage spare HDDs left over from my upgrades to larger hard drives. After that, I created a new Local User for myself, created a Local Group called shareusers, and added my account to that new group. On each pool, I created datasets (fast-share and slow-share) and set the datasets’ permissions so that the shareusers group could read, write, and delete files contained in that dataset.
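For anyone curious what that looks like outside of the web interface, a rough command-line equivalent is below. I did all of this through the TrueNAS SCALE UI, the disk names below are placeholders, and TrueNAS really prefers that pools, users, and permissions be managed through its own interface rather than the shell.

    # the "slow" pool: 7 old HDDs in RAID-Z2 (disk names are placeholders)
    zpool create slow raidz2 sdc sdd sde sdf sdg sdh sdi

    # one dataset per pool for the network shares
    zfs create fast/fast-share
    zfs create slow/slow-share

    # a shared group, my account added to it, and group read/write on the datasets
    groupadd shareusers
    usermod -a -G shareusers brian           # "brian" standing in for my local user
    chgrp -R shareusers /mnt/fast/fast-share /mnt/slow/slow-share
    chmod -R 2770 /mnt/fast/fast-share /mnt/slow/slow-share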

Finally, I opened up each share in Windows Explorer, created a file, renamed that file, and then deleted that file to confirm that the share and file permissions had been set up correctly.

Just like that, my very first TrueNAS SCALE machine was up and running and ready to meet some network-attached-storage needs!

What’s Next, Brian?

In most years, I’d spend the end of the blog sharing some of the conclusions that I’ve reached based on the performance of the DIY NAS build, its cost, and how it compares to other products like it. I’m every bit as confident making those declarative statements about the DIY NAS: 2022 Edition, but I’d rather take the opportunity to demonstrate them throughout 2022. So what’s next? More DIY NAS blogs, that’s for sure! I’m going to be running the DIY NAS: 2022 Edition in parallel alongside my current NAS for a while. Typically, I rush to do some throughput testing in an effort to demonstrate the performance of the new machine for prospective buyers. But because I’m keeping the DIY NAS: 2022 Edition for my personal use, I’ve got a few ideas for new blog topics:

  1. Comparing/contrasting the performance of my old NAS and the new one.
  2. My migration from TrueNAS CORE to TrueNAS SCALE
  3. Using TrueNAS SCALE’s “apps” to retire Virtual Machines
  4. Consolidating my NAS and Homelab onto the same hardware
  5. Incorporating my NAS into my Home Automation

What other kinds of DIY NAS/Homelab content would you be interested in? If you’ve been interested in building your home server, what’s been holding you back? I would love to hear about your own DIY NAS and Homelab journeys in the comments below. Even better, come join the Butter, What?! Discord server and share your thoughts in the #diynas-and-homelab channel!

Building my own Drone Race Timer: RotorHazard on Delta 5 Hardware

| Comments

For most of the past five years, one of my favorite hobbies has been flying first-person view (FPV) quadcopters. It’s been a fun little obsession which ticks off a lot of my favorite things: going fast, doing tricks, building things, and perpetually tweaking them.

Back in 2017, Pat and I taught a class at a local makerspace for building quadcopters which caught the eye of the local drone-racing group, Dallas Drone Racing, and we got a chance to chat with the group’s leadership. Since then, I’ve spectated several races, including watching Pat race his own quadcopter.

I’ve also had the chance to hang out a few times with Alex Vanover, of Drone Racing League fame, and each time he’s encouraged me (and everyone else) to get out to Dallas Drone Racing’s field and start racing. I give Alex credit for convincing Pat to try racing a couple years ago—and one of these days, I’ll join them in racing. I’ve got parts for two racing quadcopters that have been waiting to get assembled for a couple months now.

However, I am NOT a Racer

Let’s get this admission out of the way early: I am slow—very slow. Real quadcopter racing looks like this DVR recording of Alex Vanover’s FPV feed. This is what Alex saw in his goggles while running some laps on a track set up for a recent MultiGP race:

I won’t even burden you with a comparison video of my own DVR footage of what I call “racing.” Among the people that I fly with regularly, just about everybody is better and faster than I am. Ultimately, the person that I’m competing against is myself. As long as I’m consistently improving, I’m perfectly happy with how I’m performing. If I just so happen to beat Pat along the way, well, that’s just icing on the cake!

I feel the need, the need to understand my speed!

If I’m to improve, there’s no shortcut. I need to spend more time flying fast, predetermined courses. Being able to invest the time it takes to improve will always be my biggest challenge. I have far too many responsibilities, hobbies, and other distractions working against me. However, there are some efficiencies to be found.

A few of us race in Velocidrone every Thursday night for fun. I’m convinced that this extra “stick time” has been a tremendous help improving my flying both virtually in the simulator and also in real life. If you’re interested in joining us on simulator night, come join our Discord server’s #drone channel!

But what’s also been helpful is the timing built into the simulator. Being able to discern between what feels faster and what’s actually faster is huge. Recently, Joshua Bardwell reviewed the ImmersionRC LapRF Puck on his YouTube channel. I was nearly ready to buy the LapRF Puck based on Joshua’s review, but then he mentioned that a do-it-yourself open-source lap timer existed and provided links to it. Knowing that DIY solutions existed piqued my curiosity, so I checked it out!

Enter the Delta 5 Race Timer

Disclaimer: While I ultimately had a great experience with the hardware for the Delta 5 Race Timer, the software wound up being a bit of a different story. It seems as if the Delta 5 Race Timer is a bit of a digital ghost town. There haven’t been any recent changes on GitHub, no responses to submitted issues, the Facebook group seems to be either gone or closed, and most of the tutorials are a bit dated. If you’re interested in building your own DIY race timer, please make sure to read this entire blog to avoid the problems I ran into!

The Delta 5 Race Timer seemed like it was right up my alley! It’s an open-source project to build your own DIY race timer using a custom PCB and some fairly common electronics components. Here’s a parts list and prices of the components (or equivalent components) that I wound up ordering. Thanks to my stash of bits and pieces from prior Raspberry Pi, Arduino, quadcopter, and custom computer projects, I was able to avoid having to buy several of the components.

Product | Qty | Brian’s Vendors | Alternative Vendors
Delta 5 Race Timer PCB (set of 5-10) | 1 | $4.90 | N/A
Raspberry Pi 4 (2GB version) | 1 | $56.99 | $35.00
Geekstory Mini Nano V3.0 ATmega328P (set of 4) | 1 to 2 | $14.99 ($3.75/ea) | $3.96/ea
3D-printed D5RT Housing – 4 and 8 Node by FreeFormFPV | 1 | N/A | N/A
Boscam RX5808 Video Receivers | 4 to 8 | $5.64 | $5.99
Pololu 5V, 2.5A Step-Down Voltage Regulator | 1 | $11.95 | N/A
Pololu 3.3V, 2.5A Step-Down Voltage Regulator | 1 | $11.95 | N/A
Elegoo Resistor Kit Assortment, 0 Ohm-1M | 1 | $11.99 | $15.99
M3x30mm Hex Socket Flat Head Countersunk Bolts Screw (set of 25) | 1 | $9.49 | $6.83
Dupont Connectors, Male-to-Male and Male-to-Female Kit | 1 | $4.49 | $6.94
Jumper Wires: Female to Female | 1 | $6.49 | $3.49
XT-60 Pigtail (Male) | 1 | $4.22 | $3.22
2-pin Standard Jumpers (set of 120) | 1 | $5.99 | $1.25

Just by looking at the Delta 5 Race Timer’s parts list, it was fairly obvious to me that I could potentially get some great value out of investing a few extra dollars and a little bit of work at my soldering station! Especially when I realized that I already owned most of the wires and connectors, a Raspberry Pi, and a spare Arduino Nano or two. Regardless of how many components I already owned, I still think the DIY race timer is a great value:

  • ImmersionRC LapRF Personal Race Timing System: $99.99 (price per node: $99.99)
  • 4-Node DIY Race Timer (Delta 5 hardware): $166.01 (price per node: $41.50)
  • 8-Node DIY Race Timer (Delta 5 hardware): $203.56 (price per node: $25.45)

Delta 5 Hardware Assembly

All things considered, the assembly of the hardware went smoothly, despite my clumsy solder-work. Part of what compelled me to take on this project was realizing that it would also provide some great practice to increase my soldering skill by a little bit.

TweetFPV on YouTube has a great three-part tutorial on assembling the Delta 5 Race Timer’s hardware and software.

It took me an entire Saturday evening after my son went to bed to assemble the Delta 5 Race Timer hardware using 4 nodes. But I’m reasonably confident that if I were to do it again, I would be considerably faster. Nothing about the build was complicated, but my soldering was pretty slow, inefficient, and required a lot of double-checking. While I may have been slow, the work paid off! After soldering everything together, but before putting in the video receivers and Arduino Nanos, I powered it up and measured the voltages with my multimeter. I was pleasantly surprised to see the appropriate voltages at the different test points on the PCB. Later on that week, I put it all together and powered it on for the first time:


Delta 5 Software Installation and Configuration

My biggest disappointment in this project was that the Delta 5 Race Timer appears to be abandoned. The project hasn’t seen any new changes on GitHub in over 2 years, and a number of issues have been opened without really receiving much attention. At first I was a little concerned, but I figured that I had enough knowledge that I could follow the old directions and get it working.

I wound up running into one of the open issues on GitHub, and while I managed to apply some Google-fu and seemingly moved past a problem, I’m not really certain if I actually solved that problem or caused a different one. I never could get the Delta 5 Race Timer web interface functioning. The logs were full of error messages that made me think that my Raspberry Pi’s operating system had packages that were much newer than what would work with the Delta 5 Race Timer project. After tinkering with it for a few hours across a couple days, I was beginning to worry that I’d need to seek out someone in the FPV community to let me copy the SD card image from their working Delta 5 Race Timer.

Thankfully for me, one of my Google searches led me down a rabbit hole, and I stumbled across someone asking questions very similar to my own. Somebody had a very simple answer to that question; they suggested taking a look at a project called RotorHazard instead of trying to get the Delta 5 Race Timer functioning.

RotorHazard Race Timer Saves the Day

The minute that I saw the RotorHazard project page, I breathed a sigh of relief knowing that I was not going to have to fumble through resuscitating a stale project, a task I wasn’t well equipped to accomplish. You might be asking, “What’s RotorHazard?” and I read this answer from someone on their Facebook page: “RotorHazard is basically Delta 5 Race Timer 2.0”. Given what I’ve seen of RotorHazard so far, I think that answer might be a bit too modest!

After skimming through RotorHazard’s project page, I began to wish that I had come across it much sooner. Thankfully, there wasn’t any harm done, as the Delta 5 Race Timer hardware setup is 100% compatible with RotorHazard. If I had the chance to do it all over again, I probably would be inclined to build my own using the RotorHazard S32_BPill PCB instead. The hardware design seems more sophisticated, and I especially liked the fact that the RotorHazard PCB can support up to 8 video receivers as opposed to daisy-chaining two Delta 5 Race Timer PCBs together to get 8 nodes. The RotorHazard S32_BPill PCB’s features include replacing the 4 to 8 Arduino Nanos with a single STM32 processor.

Beyond these key attributes, there’s an assortment of other features that I would have also been interested in:

  • Better 3D-printed case design with more cooling
  • Support for power/battery monitoring
  • Compatibility with LED strips and other LED options

All that being said, I don’t plan to make any changes to my hardware in the near term!

Let’s Race!

The weekend prior to publishing this blog, Pat came over and we set up an indoor track inside my house. We set up my RotorHazard race timer, and within a few minutes, we were racing laps around the track that we laid out in my house. We had to tweak the calibration of the video receivers and adjust the placement of the RotorHazard timer once or twice. But once we had everything dialed in, we were having a riot racing around the house.


It was awesome listening to our times get read off by my laptop as we zipped around the somewhat-complicated course that we set up. Moreover, I can totally see bringing this out to an open field, setting up a crude course outside, and doing the exact same thing.

What’s Next?

Thanks to having quite a few of the parts laying around the house already, I was able to buy enough parts to build an 8-port race timer for about $125-150. Because of the size of our group and the extra effort it takes to get eight quadcopters in the air at the same time, I opted to only include 4 ports in mine. But if I ever wanted to grow mine to eight ports, I have enough spare components to make it happen.

I plan on doing a bit more research. My next goal is to use the Raspberry Pi 4’s wireless interface to act as a WiFi access point. That way when we’re in the field, I can fire up the race timer and access it from my laptop, without any Internet access.
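I haven’t actually set this up yet, so take the following as a rough sketch of where I expect to start rather than a proven recipe; the SSID, passphrase, and addresses are all placeholders.

    sudo apt update && sudo apt install hostapd dnsmasq

    # /etc/hostapd/hostapd.conf -- broadcast a standalone network for the timer
    #   interface=wlan0
    #   ssid=rotorhazard
    #   hw_mode=g
    #   channel=6
    #   wpa=2
    #   wpa_key_mgmt=WPA-PSK
    #   wpa_passphrase=race-timer

    # /etc/dnsmasq.d/rotorhazard.conf -- hand out addresses to laptops that join
    #   interface=wlan0
    #   dhcp-range=10.0.0.10,10.0.0.50,12h

    # give wlan0 a static address, then enable both services
    sudo ip addr add 10.0.0.1/24 dev wlan0
    sudo systemctl unmask hostapd
    sudo systemctl enable --now hostapd dnsmasq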

I really enjoyed our racing session. We didn’t really do any “real” racing; we wound up using the “Open Practice” mode. Nevertheless, it was a riot racing Pat (and myself) around the house. As more of my drone-flying buddies are fully vaccinated, I’m looking forward to inviting a couple more people over and seeing what it’s like with four pilots racing at the same time!

What do you all think? If you’re interested in racing, what do you do to try and measure your own speed when casually practicing? Is this a project that you’d be interested in building on your own? Is a product like the ImmersionRC LapRF Personal Race Timing System more suited for what you’re wanting to do? Or maybe you’re timing yourself a different way? Let us all know in the comments below!

Self-Hosting my own Cloud Storage: FreeNAS, Nextcloud, and Tailscale

| Comments

Until recently, I’ve never really felt the urge to access the contents of my DIY NAS from outside of my own network. The way that I’ve used my NAS, it has been simpler to use services like Google Drive to have access to my most critical data on my various computers, tablets, and phones.

Between COVID-19 and my new job being 100% remote, I have spent nearly all of my time on my own network. If anything, I have less of a need to access my NAS’s contents remotely. However, I’m hopeful about our vaccination efforts and I’m cautiously optimistic there might be light at the end of that tunnel. My growing catalog of content (primarily video footage) has long since eclipsed what can be stored on Google Drive or Dropbox—and even if I could buy the space, I’d much rather invest the cash into improving my NAS!

In my blog about implementing Tailscale at home, I installed Tailscale on my OpenWRT router and used Tailscale’s relay node feature to allow my other Tailscale nodes to use all of the resources on my network. But SMB’s performance over the Internet isn’t that great, and I wanted to make sure that important content—like my blog—continued to be synchronized across all of the machines that I’d want to be able to access it from.

I wound up deciding that I’d try and host my own cloud storage using Nextcloud and access it via Tailscale. This decision was instigated by the shift in what I was asking of my own DIY NAS and encouraged by the numerous questions I’ve been asked about self-hosting cloud storage over the years.

My DIY NAS is running FreeNAS-11.2-U8, has an Avoton C2550 CPU, 32GB of RAM, and a 10Gb NIC. Everything in this article was written in the context of using my NAS. I would expect that the same—or similar—steps would work with different versions of FreeNAS (now known as TrueNAS Core), but your mileage may vary.

Please share your experiences with different versions in the blog’s comments!

Plugin vs. Jail vs. Virtual Machine

The most difficult decision I made was whether to use the Nextcloud plug-in for FreeNAS/TrueNAS, to create a FreeBSD Jail, or to host it within a Bhyve virtual machine. Each option had its own benefits and drawbacks, and as I explored this topic, I experimented with all three options.

  • Nextcloud Plug-in: Setting up the plug-in for FreeNAS/TrueNAS was incredibly easy; I had Nextcloud up and running in mere moments. I even briefly exposed my Nextcloud VM to the Internet through my router. I had hoped it would be a simple task (for me) to add the Tailscale client to the jail created by the Nextcloud plug-in, but quickly learned that wouldn’t be the case.
  • FreeBSD Jail: Having run into challenges tinkering with the Nextcloud plug-in, I figured I could just install and host my own Nextcloud alongside Tailscale from inside a jail on my NAS. Setting up Nextcloud in this jail was easy—but the Tailscale client wound up being difficult. Tailscale would crash any time I launched it, and I wasn’t having much success debugging it on my own—or finding helpful information to help me stumble through resolving it. Unfortunately for me, I’m simply not a savvy enough FreeBSD user to get Tailscale functioning.
  • Bhyve Virtual Machine: I would have preferred running Nextcloud and Tailscale in a FreeBSD Jail, mostly because it’s less resource-intensive. But after not having much luck before, I ultimately decided that I would use the Bhyve hypervisor to host a virtual machine. For that virtual machine, I chose Ubuntu 20.04 as its operating system. I am much more familiar with Linux than FreeBSD and infinitely more capable of finding answers to the questions I run into, since Ubuntu is fully supported by both Tailscale and Nextcloud.

Why is Brian hosting the Nextcloud Virtual Machine on his DIY NAS instead of his Homelab server?

This is a good question! All things being equal, I would choose to host Nextcloud on my homelab server. But I’ve made exceptions to this rule before! For example, there are things (Home Assistant, Octoprint, etc.) in my house running on Raspberry Pis that probably should be running in a virtual machine on my Homelab server instead.

Ultimately, the reason my Nextcloud VM is running on my DIY NAS is: all of you! I’m quite grateful that so many people find their way to my blog when researching various DIY NAS topics. I know that hosting your own cloud storage is a huge point of interest in the NAS community, so it made sense to do it in a way that a fellow NAS enthusiast might want to follow!

Besides, I routinely advocate building over-the-top DIY NAS machines with more processing power than a NAS might need. Using that extra CPU power to host a cloud storage VM is a great way to leverage that extra capability.

Creating a Virtual Machine to run Nextcloud and Tailscale

On my DIY NAS, I logged into the FreeNAS interface and created a virtual machine. I allocated the VM a single virtual CPU and 512MB of RAM, created a new 64GB disk image, pointed it to the Ubuntu 20.04 ISO I picked, and started the virtual machine up.

Using the FreeNAS UI’s built-in VNC viewer, I attempted to begin the installation and immediately ran into this error:

Initramfs unpacking failed: write error
Failed to execute /init (error -2)
Kernel panic - not syncing: No working init found. Try passing init= option to kernel. See Linux Documentation/admin-guide/init.rst for guidance.

The culprit was that I had only allocated 512MB of RAM (the minimum suggested for Nextcloud), while Ubuntu Server’s minimum hardware requirements call for a gigabyte of RAM. Increasing the virtual machine’s RAM to a gigabyte moved me past this error.

For the most part, I simply paged my way through the Ubuntu 20.04 Server setup picking primarily default options, naming my server nextcloud and setting up my root username and password. On the Featured Server Snaps screen, I was delighted to see that there was an option for Nextcloud, but as a result of my experimentation, I recommend skipping the Nextcloud Snap at this point.

The installation rebooted the virtual machine, and after the completion of the reboot, the VM was a little annoyed at me and said it wasn’t able to unmount the CDROM and then told me to eject the installation media. My solution for this was probably a bit brutish: I shut down the virtual machine and deleted the CDROM device entirely. A bit crude, perhaps, but still very effective.

I started the Virtual Machine up one more time, confirmed that I was able to log in via FreeNAS’s built-in VNC client, and then switched over to using SSH for the remainder of my work.


[Screenshots: choosing the VM wizard type; guest OS, server name, and boot method; allocating virtual CPUs and RAM; creating a virtual disk image; virtual NIC setup; mounting the boot media; first boot of the VM in the VNC viewer]


Installing and Configuring Tailscale on the Nextcloud VM

On my freshly installed Ubuntu 20.04 virtual machine, the first thing that I set out to do was to get Tailscale up and working. As I had learned from my prior experience with setting up Tailscale on other devices, it was incredibly easy.

I simply followed the few steps from the Setting up Tailscale on Ubuntu 20.04 LTS (focal) documentation and my new virtual machine was up and running without any fuss.
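Those few steps boil down to very little typing. Here’s the condensed version (the convenience script below is the quick path; Tailscale’s documentation also walks through adding their apt repository by hand):

    curl -fsSL https://tailscale.com/install.sh | sh
    sudo tailscale up    # prints a login URL to associate this VM with your Tailscale account
    tailscale status     # the nextcloud VM should now show up alongside your other nodes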

Set up a ZFS Dataset and NFS Share on my NAS to house my Nextcloud Data

My intention all along was to create a dataset on my DIY NAS, to share it with the Nextcloud VM, and then to configure Nextcloud to use that shared path to store all of its data, primarily because what I value is the data being stored, not the virtual machine itself. In the long run, keeping the data in its own dataset saves me some storage space (for example, the space wasted by over-allocating a virtual hard drive, plus the administrative work of growing that virtual drive as Nextcloud accumulates data), it enables me to set up ZFS snapshots of my Nextcloud data, and it positions me to include the Nextcloud dataset among what I’m already backing up to Backblaze B2.

  1. Created a user and group to which access to the Nextcloud data directory would be mapped.
  2. Created a new ZFS dataset on my NAS for the Nextcloud data.
  3. Set the permissions on the ZFS dataset, setting the User and Group to match what was created in the prior steps.
  4. Created an NFS share, pointed it to the Nextcloud dataset’s path (/mnt/volume1/nextcloud), restricted it to the Nextcloud VM’s hostnames (nextcloud, nextcloud.lan, and nextcloud.briancmoses.com.beta.tailscale.net), and set the Mapall User and Mapall Group to the User and Group created in the first step.

[Screenshots: setting up the user/group, adding the ZFS dataset, setting dataset permissions, creating the NFS share]


Mount the NFS share to the Nextcloud Virtual Machine

Having created the dataset and NFS share on the NAS, I swapped over to the Nextcloud VM to mount the share. I went through a few different iterations before landing on these steps. I wound up learning that the local directory the NFS share gets mounted in needs to live under /mnt or /media, thanks to how Snap confinement works.

  1. Installed the nfs-common package on the Nextcloud VM (sudo apt update and sudo apt install nfs-common)
  2. Created a new Nextcloud data directory (mkdir -p /mnt/nextcloud) on the Nextcloud VM.
  3. Changed ownership of the new Nextcloud data directory to root (sudo chown -R root:root /mnt/nextcloud) on the Nextcloud VM.
  4. Changed Permissions on the new Nextcloud data directory (sudo chmod 0770 /mnt/nextcloud)
  5. (Optional) Confirmed that the new NFS share existed, could be mounted, and that files could be viewed/edited/deleted on the Nextcloud VM.
    1. Validate presence of NFS share via showmount -e drteeth.lan.
    2. Manually mounted the NFS share via sudo mount drteeth.lan:/mnt/volume1/nextcloud /mnt/nextcloud.
    3. Created, viewed, and deleted a test file in the /mnt/nextcloud/ path.
    4. Manually unmounted the NFS share via sudo umount /mnt/nextcloud.
  6. Set up mounting the NFS share at boot by editing /etc/fstab (an example entry is sketched after this list).
  7. Executed sudo mount -a to mount the newly added line from /etc/fstab.
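For step 6, the /etc/fstab entry ends up looking something like the line below, using the NAS hostname and paths from the steps above; step 7’s sudo mount -a then mounts it without a reboot.

    drteeth.lan:/mnt/volume1/nextcloud  /mnt/nextcloud  nfs  defaults  0  0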

Installing the Nextcloud Snap and configuring it to use the custom Nextcloud Data Directory

In my stumbling and tinkering, I deleted everything and started over from scratch a few times. I had made the mistake of too excitedly installing Nextcloud and immediately starting to use it before figuring out how to make it use the network share that I had mounted for its data storage.

But in following the Nextcloud Snap’s directions on changing the data directory to use another disk partition, I wound up overlooking two equally important details:

  1. I needed to connect the removable-media Snap to Nextcloud.
  2. The local path to my share needed to exist beneath either /mnt or /media on my Nextcloud VM.

Not understanding these two details had me scratching my head at a couple different points, running into permissions errors, and flailing while trying to get Nextcloud working.

  1. Executed the Nextcloud Snap Installation (sudo snap install nextcloud)
  2. Connected the removable-media to Nextcloud Snap (sudo snap connect nextcloud:removable-media)
  3. Edited the Nextcloud autoconfig (/var/snap/nextcloud/current/nextcloud/config/autoconfig.php) and updated the directory variable to ‘/mnt/nextcloud’.
  4. Restarted the Nextcloud PHP Service (sudo snap restart nextcloud.php-fpm)
  5. From my browser I opened the Nextcloud VM’s URL
  6. I set up the primary administrator’s account in the Nextcloud UI.
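Condensed into commands, those steps look roughly like this. The sed pattern assumes the stock formatting of autoconfig.php; editing the file by hand works just as well.

    sudo snap install nextcloud
    sudo snap connect nextcloud:removable-media
    # point the data directory at the NFS-backed mount *before* opening Nextcloud in a browser
    sudo sed -i "s|'directory' =>.*|'directory' => '/mnt/nextcloud',|" \
        /var/snap/nextcloud/current/nextcloud/config/autoconfig.php
    sudo snap restart nextcloud.php-fpm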

Now What?!

The possibilities are really endless! First, I need to get the Nextcloud client installed on my desktop computer, laptop, phone, and tablet. But after that, I’m curious about using Tailscale’s sharing to maybe provide some cloud storage to family and friends. I’m very interested in untangling my rat’s nest of synchronization tasks and cloud storage providers and relocating my blog’s storage into Nextcloud. The same goes for my recent FPV quadcopter footage; I’d like it uploaded to Nextcloud so that I can more easily edit those videos.

Final Thoughts

This blog was supposed to be mostly about Nextcloud, but I can’t stop raving about Tailscale. Prior to using Tailscale, hosting my own cloud storage solution was going to be too much investment—of dollars and time! The sum total of effort in setting up a VM, configuring Nextcloud, maintaining SSL certificates, opening ports on my firewall, and dealing with any fallout from aggravating my Internet service provider was just too much. Combining Nextcloud with Tailscale eliminated or mitigated many of those hassles.

I’m excited to tinker with Nextcloud, and at the rate that Tailscale keeps announcing new features, I’m excited to see what feature I’ll get to try on this VM next! For the time being, I’m going to prioritize how I can leverage Nextcloud to make my most important data more ubiquitous, but I’m open to any possibilities!

Are you using Nextcloud with your NAS in order to host your own cloud storage? How have you tackled keeping your data synchronized between many devices? Are you interested in Nextcloud—but haven’t yet taken the plunge? What’s standing between you and adoption of Nextcloud? What sort of functionality would you like to see featured in future blogs? Let me know down in the comments below, I’d love to hear your thoughts!

Tailscale: A VPN that even Brian can use!

| Comments

I built and blogged about my own DIY NAS back in 2012, and I’ve been building a new DIY NAS once or twice a year ever since. One of the most frequently asked questions across those blogs has been, “How do you access your DIY NAS from the Internet, Brian?”

My answer to that question has always been “I don’t.” For most of the time I’ve had a DIY NAS, I just didn’t have much need or interest in accessing the contents of my NAS from outside my home. At first, I only used my DIY NAS for backing up the computers that I had at home. But slowly over time, I’ve transitioned to using my DIY NAS as my primary storage for all of my data.

Consequently, accessing my data remotely has been more and more important over the years. Primarily, I’ve used services like Google Drive or Dropbox to access critical data and synchronize changes made to it across all my machines. For quite a while this has been both easy and cheap. But I started creating more content, especially videos for my YouTube channel, and this solution slowly began to break down and become more expensive.

Over the past two years, I’ve made a few halfhearted attempts to install and configure a VPN endpoint within, or at the edge of, my own network. Every time that I tried, I ran into issues—mostly all related to my lack of expertise—and set it aside to figure out another day.

For a while now, a few people have been encouraging me to check out Tailscale. In fact, Pat’s been telling me routinely about how he’s made his life easier with Tailscale and insisting that it was really simple to set up. While I have had no reason to doubt Pat’s assessment, I’ve also learned that there’s a cornucopia of topics that Pat thinks are painfully simple which completely short-circuit my brain.

What is Tailscale?

Over on Tailscale’s website they describe Tailscale as “A secure network that just works. Zero config VPN. Installs on any device in minutes, manages firewall rules for you, and works from anywhere.”

You create an account with Tailscale, you install a client on each machine, associate those clients with your account, and Tailscale encrypts traffic between any of your endpoints.

Tailscale was as easy as Pat made it sound every time he told me about it. In fact, I think it was easier!

How am I using Tailscale?

Out of curiosity, I set up Tailscale on a few different devices and without any effort I had Tailscale up and running on my phone, my tablet(s), two different Raspberry Pis, my laptop, my desktop computer, and on my OpenWRT router.

Remote Desktop access to my Tradewars 2002 game server

A couple years ago, I wrote a nostalgic blog about Tradewars 2002 which convinced me to spin up my own Tradewars game server on a small virtual machine hosted in Azure. For a long time, I’ve lived with exposing more than just the Tradewars game server’s port in order to remotely access the machine.

Implementing Tailscale on the game server let me close down the port(s) that I had exposed—and probably never should have—on the TW2002 virtual machine.

This is something that I could’ve accomplished in the Azure portal on my own by setting up some firewall rules in Azure and also on the virtual machine. But it was so much easier to just completely close it down and use Tailscale instead.

As an added benefit, my super-secret outer space trading strategy is now happening over an encrypted tunnel!

Pi-KVM

If you’ve read my two-part series of blogs about Pi-KVM then you’re already aware that Pi-KVM is an awesome little project that allows you to build an inexpensive KVM-over-IP using a Raspberry Pi 4 Model B (2GB version), a video-capture device, and an assortment of USB cables.

Combining the Pi-KVM and Tailscale is a really compelling pairing which demonstrates the value of both products. If I had a friend who was having a computer problem in another location (near or far), I could give them the Pi-KVM, they could hook it to their computer and network, and thanks to Tailscale I could access the Pi-KVM interface remotely. I could access the machine’s BIOS, boot from an ISO, or remotely access the native operating system without much effort at all.

A shotgun approach to access the contents of my DIY NAS remotely

Tailscale’s recommended approach is to put the Tailscale client on all of the devices and assemble a mesh network of connected endpoints. This assumes that you’re able to install the client directly on each machine. Unfortunately, FreeNAS (or TrueNAS) does not include the Tailscale client in their base operating system and they don’t really want you tinkering with the operating system at all, so the Tailscale-recommended approach is not as straightforward.

In evaluating my options, I knew I’d need to use Tailscale’s relay node if I wanted to be able to access my NAS using Tailscale. It seemed like I had a few options using Tailscale as a relay node:

  1. A jail hosted on my FreeNAS server.
  2. A virtual machine on my homelab server.
  3. On my OpenWRT router.

For the time being, I have opted for the third option. I knew that running it on my own OpenWRT router was possible thanks to Pat’s blog about putting Tailscale on his Mango OpenWRT router.

I configured the relay node to relay for the subnet that my NAS (and the rest of my home network) is on. I wound up deciding that if I was going to compromise by moving away from Tailscale’s mesh of encrypted network endpoints, then I would do so in a way that provided the maximum possible utility. In this way, I was using Tailscale pretty similarly to a traditional VPN—everything on my network at home is accessible from outside of my network—provided the computer I’m using it from (my laptop) is running a Tailscale client too.
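On the relay node itself, that compromise boils down to advertising the home subnet to the rest of my Tailscale network. Roughly speaking (the subnet below is a placeholder for whatever your LAN actually uses):

    # advertise the home LAN to the rest of the tailnet
    tailscale up --advertise-routes=192.168.1.0/24
    # then approve the advertised route for this node in the Tailscale admin console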

So what do I think?

Ultimately, I would’ve preferred adhering to the full-Tailscale method and installing client(s) on every machine that I want to be reachable from my other machines running Tailscale. My lack of understanding of FreeBSD and how FreeNAS is architected were significant enough obstacles that I made a compromise.

I expect that it is possible that a FreeNAS enthusiast could create a jail, install the FreeBSD Tailscale package in the jail, and tighten the scope of the Tailscale relay node down to only the IP of the NAS. Perhaps this will be a topic of a future blog? Or even better—maybe somebody will answer this question in the comments below?!


Regardless of my obstacles with FreeNAS, Pat absolutely was correct—Tailscale makes all of this so much easier! I’ve installed Tailscale on devices with very different hardware, running a number of different operating systems, and numerous different use cases. All of it was really straightforward and easy to set up. I didn’t have to create any firewall rules—it all just worked.

I’m excited about the possibilities that Tailscale presents. They recently added endpoint sharing as a Public Beta, which is a really useful concept that I’m going to be exploring as part of future blogs. So please stay tuned!

Pi-KVM: Controlling a 4-port KVM and setting up Tailscale

| Comments

In a previous blog, I raved about how awesome Pi-KVM is. If you’re not aware, Pi-KVM is an open-source project that allows you to turn a Raspberry Pi into an IP-KVM.

You plug the Pi-KVM into your network and into a computer and then from anywhere else on that network, you can remotely control that computer as if you’re sitting in front of it, including doing things like accessing the remote machine’s BIOS.

When I did my initial research, I learned that building a Pi-KVM can be done incredibly inexpensively—it would cost less than $80 to build one from the recommended parts list. By building mine from a CanaKit Raspberry Pi 4 Model B (4GB) Pro Kit, I wound up spending quite a bit more than that $80. Regardless of how much I spent, I felt like I got a great value out of what I built.

Enhancing my Pi-KVM Setup

After building a Pi-KVM for Pat for Christmas, I was hooked and quickly built one for myself. Following that, I built another one that I could easily use with other computers outside of my office. Most notably, I wanted one to use in my “recording studio” where I do most of the work in assembling my DIY NAS builds.

I also began contemplating improving the Pi-KVM I was planning to use with my DIY NAS and my homelab servers in my office. When I started out, I planned on just sharing the same Pi-KVM with both machines, but the more I thought about it, the more I realized that I wanted to avoid having to swap the cables between the two servers.

This got me thinking. Either I’d need to build yet another Pi-KVM, or find a KVM switch that I could trigger from Pi-KVM to swap between the two hosts.

Adding a 4-port KVM Switch, the ezcoo EZ-SW41HA-KVM

Thanks to Novaspirit Tech on YouTube and his Q&A video about the Pi-KVM, I was already aware of what I wanted to try next. Among the topics discussed in the video was an inexpensive 4-port KVM switch made by ezcoo, the ezcoo EZ-SW41HA-KVM. This KVM switch has its own USB management interface which allows for firmware updates and switching which port is active on the KVM switch.

What set this particular switch apart from others was that Pi-KVM has functionality built-in that can issue the commands to the ezcoo EZ-SW41HA-KVM. You can configure the web interface’s elements to suit your needs and then pick which of the four ports is active with your Pi-KVM.

By using my Pi-KVM, the ezcoo EZ-SW41HA-KVM, some HDMI cables, some USB A to B cables, a couple VGA-to-HDMI adapters, and a generous helping of obsessive-compulsive cable management, I now have a 4-port IP-over-KVM setup functioning.

Parts List

A Pi-KVM can be built for under $80, and for an additional $150, your Pi-KVM can be extended to work with up to 4 different computers. This was ideal for me because I often wind up tinkering with a third—and sometimes fourth—computer at my desk, particularly when I’m working on one of my DIY NAS builds.

Setting up Tailscale on my Pi-KVM

Tailscale is a simple and easy-to-use VPN service built atop WireGuard. By installing and configuring the Tailscale client on a device, you can access that device over a secure VPN connection from any other machine running the Tailscale client. Pi-KVM has incorporated Tailscale as a configurable option. Once it is set up and associated with your Tailscale account, you can access your Pi-KVM from another machine as long as it is connected to the Internet.

Being able to remotely access your Pi-KVM is a handy way to remotely access a machine without having to directly expose it to the Internet. This isn’t a critical feature to me, but I think it’s really quite interesting. I don’t really access anything on my network from outside of my house, but having that as an easy option is really intriguing.

Pat’s been telling me to check out Tailscale for ages. Every time that he tells me about it, it has sounded really interesting—but I didn’t really have a good use-case for it. But now that I’ve set it up for use with my Pi-KVM, I know that it’s only a matter of time before I’m using it with other machines too!

What’s Next?

After writing two blogs about it, I’m still quite excited about the Pi-KVM project. Pi-KVM is working on their own hardware to build in a bunch of extra features and are preparing to have the hardware manufactured. I’m excited enough about what’s been shared that I’ve signed up to preorder that hardware. As I have been working on this second blog, Novaspirit Tech published a video reviewing the Pi-KVM v3 hardware. Here are a few things I plan to do with my Pi-KVM:

  • Build another Pi-KVM for use on my workbench in my studio
  • Set up network passthrough to share the Raspberry Pi’s wireless interface for use where a network drop isn’t convenient.
  • Upgrade one (or more?) of my Pi-KVMs to the version 3 of the hardware
  • Other ideas? What would you do with a Pi-KVM that I haven’t considered yet? Please share your ideas in the comments, I’d love to see them!

Final Thoughts

Between building and using three different Pi-KVMs, reading about the Pi-KVM version 3 hardware, and now seeing it in action, I am doubling down on my previous position. I have no reservations whatsoever about capitalizing on the opportunity to pre-order the version 3 hardware once I can. I’m also happy that I can help support Pi-KVM on Patreon.

Pi-KVM is an awesome project. Pi-KVM has joined OctoPi and HomeAssistant at the top of my list of favorite Raspberry Pi projects. I’m excited to see what’s in store for the future of Pi-KVM.

Giveaway

I think that Pi-KVM is an interesting enough project that as part of this pair of blogs, I’m going to give away three kits which will hopefully get someone well on their way to building their own Pi-KVM. Each kit will contain:

Here are the giveaway details (Note: There’s a new way to enter that’s unique to this blog!):

briancmoses.com: Raspberry Pi 4 Model B (2GB version) with Customized 3D-Printed Case Giveaway

Pi-KVM: an inexpensive KVM over IP

| Comments

I recall griping at Pat one day when he was over for dinner that I wished everything had an IPMI interface or that nothing did. Only two of the computers at my house have an IPMI interface; all the rest do not. When the DIY NAS: 2020 Edition was burning in, I had to get up and go into the other room to see what it was up to, but I didn’t think it was worth the price premium to move up to a motherboard which included IPMI among its features.

Pat nodded in agreement and remarked, “There’s this neat project, Pi-KVM, that lets you build a cheap IP-KVM out of a Raspberry Pi! You should check it out.” We talked about different possible uses for it and then got busy repairing a quadcopter or playing some video games.

A few days later, I recalled this conversation while wracking my brain for a Christmas gift idea for Pat and immediately decided that I’d build a Pi-KVM for him. As a bonus, I’d get to play with his Christmas present a bit before deciding if I wanted to build one of my own!

KVM over IP (IP-KVM)

A KVM (or KVM switch) is basically a device that allows you to share a keyboard, video, and mouse between two or more computers. For a very long time, I had my own DIY NAS, primary workstation, and work laptop all plugged into a KVM switch. Whenever I needed to use one of those three computers, I’d hit a button and the KVM would switch over to it.

An IP-KVM is very similar: you plug the IP-KVM device into a computer’s keyboard, video, and mouse connections, but then you access it over a network. The keyboard, mouse, and display that you’re using aren’t actually plugged into the remote computer.

Both a KVM and an IP-KVM are superior to remote access software (TeamViewer, Remote Desktop, VNC, etc.) because you’re accessing the actual hardware remotely. You’re able to see the machine POST, access the BIOS, and watch it load the operating system. Most remote access options require the operating system to be up and running first. In plenty of scenarios, that’s simply not enough.

IPMI

The most controversial part of my DIY NAS build blogs is how frequently I recommend consumer-grade hardware. When people advocate for using enterprise hardware, the two reasons that resonate the most with me are support from the manufacturer (potentially including longer warranties) and that most server-grade motherboards have an IPMI interface.

Simplifying things a bit—maybe tremendously—IPMI is quite similar to having a built-in IP-KVM incorporated on the motherboard. The motherboard has a dedicated network interface that gets plugged into a switch and assigned an IP address by your router. Using a web browser or an IPMI client, you’re able to access this interface and interact with the hardware as if you were sitting in front of the computer with a keyboard, mouse, and monitor plugged into it.
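To make that a bit more concrete: the same BMC behind that web interface can also be driven from the command line with a tool like ipmitool. The address and credentials below are placeholders, but the commands themselves are standard ipmitool fare.

    ipmitool -I lanplus -H 192.168.1.60 -U ADMIN -P password chassis power status
    ipmitool -I lanplus -H 192.168.1.60 -U ADMIN -P password sensor list
    ipmitool -I lanplus -H 192.168.1.60 -U ADMIN -P password sol activate   # serial-over-LAN console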

The DIY NAS: 2016 Edition was the first motherboard I used with an IPMI interface. That feature (among others) helped convince me to use the same motherboard when I upgraded my personal NAS later the same year. That upgrade also meant that I was able to retire my KVM switch and all of its cables. When I built my homelab server, I made sure to pick a motherboard that included an IPMI interface. Since then, you know how much I’ve successfully used those IPMI interfaces? Almost zero!

Thankfully, I’ve rarely actually needed to use the IPMI interfaces. Both machines have been tremendously stable and do their jobs without much interaction from me. However, each time that I have attempted to access their IPMI interfaces, I have run into minor issues. When I encountered these difficulties, I simply reverted to old behavior and grabbed my spare monitor and keyboard from my closet for the following reasons: it’s less effort, and the IPMI’s web interfaces have been pretty terrible.

When I bought an extension for my desk, a new switch for my 10Gb network, and moved around both my DIY NAS and homelab servers, I decided to just leave the IPMI interfaces disconnected and removed the network cables on each machine. It’s almost like I knew that a couple months later, I would be tinkering with something I liked way better than any IPMI interface that I’d used.

Pi-KVM

So what’s Pi-KVM all about anyways? It’s an open-source project for building your own IP-KVM. So far, Pi-KVM has been through a couple different hardware variations. All of the hardware variations have been built around different Raspberry Pi models and a varying amount of do-it-yourself electronics. The current hardware version (v2) can be built around either a Raspberry Pi 4 Model B (2GB or higher) or a Raspberry Pi Zero W. Depending which Raspberry Pi option you pick, you’ll also need a video-capture device and some USB cables/adapters.

When it’s all said and done, the Raspberry Pi is connected to your computer’s display and USB ports. You pull up the Pi-KVM web interface in your browser, and you’re then in control of the remote computer as if you’re physically standing right there. It’s really quite fantastic!

I looked for a comparable off-the-shelf piece of equipment, but there’s really nothing quite like it. I suspect that there’s just not much consumer demand for IP-KVM hardware right now. For most consumers, there are acceptable enough methods for accessing computers remotely, like VNC, Remote Desktop, and many others.

Nevertheless, after assembling the Pi-KVM that I gave to Pat for Christmas, I was immediately convinced that I wanted at least one for myself. Seeing these features in action was what sold me:

  • Incredibly easy to build the hardware (version 2)
  • The web interface was really responsive and easy to use.
  • The latency was low.
  • CD-ROM or Flash Drive emulation to pass through to the connected host.


These features aren’t all-encompassing either! They’re just the features that I immediately zeroed in on. There’s a whole cargo container full of other features that I haven’t leveraged yet too. The ATX controls sound really intriguing—having the ability to remotely press the power and reset buttons seems like it could come in really handy. Securely accessing my Pi-KVM from the Internet sounds interesting, but I’d rather not open ports on my router’s firewall in order to do so. However, there is a Tailscale client available. The idea of being able to access Pi-KVM from any device that I have a Tailscale client running on seems fascinating. Plus, Pat keeps telling me about how Tailscale makes these kinds of things easy, so this sounds like an excellent opportunity to prove Pat correct!

Brian’s Pi-KVM Parts List

When I ordered parts for Pat’s (and then again for my own) Pi-KVM, I made a mistake (or did I?) and bought a Raspberry Pi 4 Kit with 4GB of RAM. A Raspberry Pi 4 2GB meets Pi-KVM’s hardware requirements and would’ve worked just fine.

I decided that having 2GB of extra RAM might be useful in case there was other functionality that I wanted to add to my Pi-KVM down the road. Maybe one of these five awesome headless Raspberry Pi uses are good candidates to run alongside my Pi-KVM?

Brian Spent Too Much Money!

It’s important to keep in mind that I wasn’t a very thrifty shopper and wound up spending way more than I needed to. A Pi-KVM can easily be built for about $80! This can be done by more closely following the suggested hardware list:

I mentioned before that I couldn’t really find a comparable product when I searched for one. About the closest thing I could find were USB Crash Cart Adapters, like this one from Startech.com. This crash cart adapter is over $200 more than what I paid, doesn’t allow remote access over the network, is only VGA, has a much smaller set of features, and requires a custom application installed on the machine you’re accessing the remote machine from.

A price tag of under $80 is inexpensive enough that I’d gladly lend my Pi-KVM to friends who need my help with something on their PCs. It’s cheap enough that I’m going to definitely build another one just to have a loose spare for whenever it might be handy. For example, when I’m working through one of my DIY NAS builds!

What’s Brian think? I’m all in on Pi-KVM!

This is all Pat’s fault; he suggested I look into Pi-KVM awhile back. Once I did, I knew I wanted to build my own. In building one for both Pat and myself, I’ve also learned that Pi-KVM is working on their own hardware, and I now know that I want that too. Their hardware will include an extra ethernet interface to act as a pass-through, its own low-latency video capture ability, wider hardware support for finicky BIOSes, and many other features.

I’ve signed up to pre-order it and I’ve also become a Patron of pikvm on Patreon. The next iteration of hardware is going to make a fine upgrade—and another blog—down the road when I get my hands on it. It might be a fun project to design my own 3D-printed case for, or even maybe collaborate with Pat and mill something on his CNC machine.

But wait, there’s more!

In Novaspirit Tech’s YouTube review of the Pi-KVM and his subsequent video Q&A about Pi-KVM, he mentioned that Pi-KVM can also interact with a traditional KVM to allow you to switch between numerous different machines. I have purchased the ezcoo EZ-SW41HA-KVM 4-port KVM switch, a couple VGA-to-HDMI adapters, HDMI cables, and USB cables to hook into it.

Adding the ezcoo KVM switch on to my own Pi-KVM is something I’m looking forward to building and blogging about in the very near future!

Giveaway

briancmoses.com: Raspberry Pi 4 Model B (2GB version) with Customized 3D-Printed Case Giveaway