Building a virtualized NAS & DVR with ESXi 6.0 / Part 4: Setting up and running FreeNAS

Introduction

I’ve been running FreeNAS virtualized on my VMware ESXi 6.0 whitebox for quite a while — since last spring, I believe. I’m actually quite surprised at how simple and trouble-free the system has been. In this post I’ll describe what I did and how the system has been set up.

If you’re not familiar with the previous parts in this series, you can check them out via the links below.

Setting up the FreeNAS VM

I started by creating a new virtual machine via vSphere Client. I selected FreeBSD 64-bit as the guest OS and gave the VM two CPU cores and eight gigabytes of memory. For the FreeNAS OS, I created an eight-gigabyte virtual disk in the datastore, as that is the minimum size according to the FreeNAS 9.3 documentation.

The important bit when setting up the FreeNAS VM is configuring PCI passthrough for the SATA controller that is hosting the disks for the NAS. It essentially means handing the VM exclusive control of the device without having any virtualization layers in between. Allowing a NAS VM direct access to a SATA controller and its disks is necessary to have the system work reliably and with good performance. Building a NAS with ZFS on virtual disks is a recipe for disaster, at least according to most people in the FreeNAS community.

Configuring PCI passthrough in itself is easy. The controller first has to be marked for passthrough at the host level (the host’s Configuration -> Advanced Settings in vSphere Client) and the host rebooted; after that I simply added a PCI device to the VM and selected the physical SATA controller from the list.
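
For reference, the host’s PCI devices can also be listed over SSH on the ESXi box itself. This is purely an optional sanity check; the address and device names below are illustrative of what an ASM1061-based controller might report, not exact output:

[root@esxi:~] esxcli hardware pci list
...
   Address: 0000:03:00.0
   ...
   Vendor Name: ASMedia Technology Inc.
   Device Name: ASM1062 Serial ATA Controller
...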

Installing and configuring FreeNAS

After downloading the FreeNAS installation media, I transferred it to the ESXi datastore via SSH and SCP. To get the VM to boot from the media, I mounted the ISO file in the virtual DVD drive (Edit VM Settings -> Hardware -> CD/DVD drive -> Device Type -> Datastore ISO File) and set the device to connect at power on. I also set the VM to start in BIOS (Options -> Boot Options -> Force BIOS Setup) so I could get the machine to boot from the virtual DVD drive.
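
As a side note, the transfer itself is a single scp command once SSH is enabled on the host. The datastore name below (datastore1) is just an example; adjust it and the ISO file name to match your setup:

$ scp FreeNAS-9.3-RELEASE-x64.iso root@esxi:/vmfs/volumes/datastore1/

ESXi datastores are mounted under /vmfs/volumes/ on the host, so the ISO ends up directly in the datastore root.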

The actual FreeNAS installation was very straightforward. I only had to choose the disk to install on, set a root password and press Enter a few times. After that the VM booted, FreeNAS started and the console told me which IP address I could use to access the web-based configuration interface.

From here on, it was a case of configuring FreeNAS as on any other machine. The single Western Digital Red 3 TB SATA drive, which was connected to the passed-through SATA controller, was recognized by the OS without any problems. I was able to wipe it, create a ZFS volume and some ZFS datasets, all from within the web interface. I also created user accounts, configured the required services (e.g. CIFS/Samba for sharing, SSH for remote access and S.M.A.R.T. for disk health checks) and set up periodic ZFS snapshots.
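
Everything above was done in the web interface, but the same things are easy to verify from the FreeNAS shell. A quick sanity check I could have run (Tank is my pool name):

[root@freenas ~]# zfs list
[root@freenas ~]# zfs list -t snapshot

The first command lists the pool and its datasets with their space usage; the second lists the snapshots created by the periodic snapshot tasks.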

I didn’t run into anything out of the ordinary. I was able to connect to the CIFS shares and read and write files with speeds around 90-100 MB/s. Everything seemed to work perfectly.

Adding a second drive for ZFS mirroring

I still had my old dedicated NAS (see Part 1 / Background) running during this build – I didn’t want to retire it until I was absolutely sure about the new virtual NAS setup. After testing the new NAS for a few weeks without any problems, I felt I had reached that point. This meant that I could decommission the old NAS and add its Western Digital Red 3 TB disk to the new NAS for ZFS mirroring.

The plan was to attach the second disk, wipe (erase) it and convert the single-disk ZFS pool to a mirror. I started by identifying the current disk in the new NAS to make sure I wouldn’t wipe the wrong disk. By checking Storage -> Volumes -> View disks I could see that the current disk in the pool was ada1. I also took note of the disk serial. Looking at View volumes -> Select volume -> Volume status I could see the status of the single-disk ZFS pool, which was displayed as “stripe”.

Single-disk ZFS pool status
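
The serial number can also be read from the shell with smartctl, which ships with FreeNAS. This is a handy double-check before wiping anything; the output below is abbreviated and the serial redacted:

[root@freenas ~]# smartctl -i /dev/ada1
...
Device Model:     WDC WD30EFRX-...
Serial Number:    WD-...
...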

After connecting the second disk to the SATA controller and booting up the machine, FreeNAS recognized the disk and named it ada0. I could then proceed to wipe the disk.

Wiping the disk to be added

Next it was time to add the fresh disk to the ZFS pool and enable mirroring. Unfortunately, the FreeNAS web interface did not support this, so I had to do it from the console with the help of instructions I found on the FreeNAS forums.

First, the disk had to be partitioned with GPT: a small swap partition followed by the main ZFS partition. The 2 GB swap matches what FreeNAS itself creates on its data disks. (Note: make sure you point these commands at the correct disk!)

[root@freenas ~]# gpart create -s gpt /dev/ada0
ada0 created

[root@freenas ~]# gpart add -i 1 -b 128 -t freebsd-swap -s 2g /dev/ada0
ada0p1 added

[root@freenas ~]# gpart add -i 2 -t freebsd-zfs /dev/ada0
ada0p2 added

Checking the partition result:

[root@freenas ~]# gpart list

Geom name: ada0
...
scheme: GPT
Providers:
1. Name: ada0p1
   Mediasize: 2147483648 (2.0G)
   ...
   rawuuid: 96264414-cd7a-11e5-b97a-000c29370f26
   ...
2. Name: ada0p2
   Mediasize: 2998445412352 (2.7T)
   ...
   rawuuid: a77dde2c-cd7a-11e5-b97a-000c29370f26
   ...

Next I checked the current pool status.

[root@freenas ~]# zpool status
  pool: Tank
 state: ONLINE
 ...

 NAME                                         STATE READ WRITE CKSUM
 Tank                                         ONLINE   0     0     0
   gptid/50af541b-95fa-11e5-a5d3-000c29370f26 ONLINE   0     0     0

I also printed the labels of the disks so I could copy-paste them to the upcoming ZFS command.

[root@freenas ~]# glabel status
                                      Name  Status  Components
gptid/50af541b-95fa-11e5-a5d3-000c29370f26     N/A      ada1p2
gptid/96264414-cd7a-11e5-b97a-000c29370f26     N/A      ada0p1
gptid/a77dde2c-cd7a-11e5-b97a-000c29370f26     N/A      ada0p2

After that came the scary bit: attaching the new disk to the pool. Here is an excerpt from the zpool manual:

     zpool attach [-f] pool device new_device

Attaches new_device to an existing zpool device. The existing device cannot be part of a raidz configuration. If device is not currently part of a mirrored configuration, device automatically transforms into a two-way mirror of device and new_device.  If device is part of a two-way mirror, attaching new_device creates a three-way mirror, and so on. In either case, new_device begins to resilver immediately.

So, here goes nothing…

[root@freenas ~]# zpool attach Tank /dev/gptid/50af541b-95fa-11e5-a5d3-000c29370f26 /dev/gptid/a77dde2c-cd7a-11e5-b97a-000c29370f26

[root@freenas ~]# zpool status
  pool: Tank
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress
        2.30G scanned out of 1.56T at 78.5M/s, 5h46m to go
        2.30G resilvered, 0.14% done

 NAME                                         STATE READ WRITE CKSUM
 Tank                                         ONLINE 0 0 0
  mirror-0                                    ONLINE 0 0 0
   gptid/50af541b-95fa-11e5-a5d3-000c29370f26 ONLINE 0 0 0
   gptid/a77dde2c-cd7a-11e5-b97a-000c29370f26 ONLINE 0 0 0 (resilvering)

After a couple of hours:

[root@freenas ~]# zpool status
 pool: Tank
 state: ONLINE
 scan: resilvered 1.56T in 4h17m with 0 errors

 NAME                                         STATE READ WRITE CKSUM
 Tank                                         ONLINE 0 0 0
  mirror-0                                    ONLINE 0 0 0
   gptid/50af541b-95fa-11e5-a5d3-000c29370f26 ONLINE 0 0 0
   gptid/a77dde2c-cd7a-11e5-b97a-000c29370f26 ONLINE 0 0 0
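
As a final check, I could also have started a manual scrub at this point to verify that both halves of the mirror read back cleanly. FreeNAS creates a periodic scrub task for the volume anyway, but running one by hand is a one-liner:

[root@freenas ~]# zpool scrub Tank
[root@freenas ~]# zpool status
  ...
  scan: scrub in progress
  ...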

Impressions

After running FreeNAS virtualized with this setup for close to a year, I haven’t run into a single problem. It has been completely reliable. It generally performs very well, although for some reason the read speeds fluctuate a bit between 90 and 115 MB/s while write speeds remain more constant.

The FreeNAS community seems to be quite wary of virtualizing FreeNAS — the general opinion seems to be “don’t attempt it, you will regret it”. Granted, I can’t comment on larger, more complex use cases with higher loads and requirements, but in my case I really couldn’t be happier with my setup.

Building a virtualized NAS & DVR with ESXi 6.0 / Part 3: Installing ESXi

Introduction

It’s time to install ESXi 6.0 on my whitebox build. VMware ESXi, as you most likely already know, is a hypervisor that runs multiple virtual machines on a single physical computer. In my case I’m going to use it to run FreeNAS and MythTV, among other things.

If you haven’t read the previous parts in this series, be sure to check them out.

Creating a customized installation ISO

In order to install VMware ESXi, I needed an installation ISO. One can be downloaded from VMware’s site, but the problem is that ESXi has very limited hardware support out of the box. Without a supported NIC (and the one on my consumer-grade ASRock motherboard is not supported), ESXi cannot be installed.

As I mentioned in part 1, the H97M Anniversary motherboard’s Realtek NIC and the ASM1061 SATA controller can be made to work with ESXi. This is done by creating a customized installation ISO which is injected with community-supported drivers (see here and here).

Creating the ISO is made easy with the excellent ESXi-Customizer-PS script. However, the script requires installing Microsoft PowerShell and VMware PowerCLI first.

A further thing to note is that PowerCLI requires the PowerShell execution policy to be RemoteSigned.


If the execution policy is not set, ESXi-Customizer-PS results in the following error:

File ESXi-Customizer-PS-v2.4.ps1 cannot be loaded because running scripts is disabled on this system. For more information, see about_Execution_Policies at http://go.microsoft.com/fwlink/?LinkID=135170

The execution policy can be set by running the following command in PowerShell:

PS S:\ESXi> Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process

Execution Policy Change
The execution policy helps protect you from scripts that you do not trust. Changing the execution policy might expose
you to the security risks described in the about_Execution_Policies help topic at
http://go.microsoft.com/fwlink/?LinkID=135170. Do you want to change the execution policy?
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "N"): y

After this, ESXi-Customizer-PS can be run without problems. Note the net55-r8168 and sata-xahci packages passed via the -load parameter, which injects them into the new ISO.

PS S:\ESXi> .\ESXi-Customizer-PS-v2.4.ps1 -v60 -vft -load net55-r8168,sata-xahci -log 'S:\ESXi\ESXi-Customizer-PS.log'

Script to build a customized ESXi installation ISO or Offline bundle using the VMware PowerCLI ImageBuilder snapin
(Call with -help for instructions)

Logging to S:\ESXi\ESXi-Customizer-PS.log ...

Running with PowerShell version 4.0 and VMware vSphere PowerCLI 6.0 Release 3 build 3205540

Connecting the VMware ESXi Online depot ... [OK]

Connecting the V-Front Online depot ... [OK]

Getting Imageprofiles, please wait ... [OK]

Using Imageprofile ESXi-6.0.0-20160204001-standard ...
(dated 02/20/2016 01:45:53, AcceptanceLevel: PartnerSupported,
For more information, see http://kb.vmware.com/kb/2132154.)

Load additional VIBs from Online depots ...
   Add VIB net55-r8168 8.039.01-napi [New AcceptanceLevel: CommunitySupported] [OK, added]
   Add VIB sata-xahci 1.34-1 [OK, added]

Exporting the Imageprofile to 'S:\ESXi\ESXi-6.0.0-20160204001-standard-customized.iso'.
 Please be patient ...

All done

Now the customized ISO was ready to be used.

Creating a bootable USB drive

The next step was to create a bootable installation drive. The idea is to boot from a USB drive and install ESXi onto the same drive.

There are several tools available for creating bootable USB drives from ISOs but Rufus seems to work best with UEFI systems.


After Rufus was finished, I was able to boot the soon-to-be ESXi box from the newly formatted USB drive.

Installing and configuring ESXi

The actual ESXi installation went smoothly.

After ESXi was up and running, I could connect to it with the vSphere Client.


First I entered the free license key (you can get one from VMware’s website). Then I set up the SSD as a datastore.
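
The new datastore can also be confirmed from the ESXi shell. This is optional, and the output below is trimmed and illustrative (the volume name is whatever you chose when adding the storage):

[root@esxi:~] esxcli storage filesystem list
Mount Point                           Volume Name  ...  Type
------------------------------------  -----------  ...  ------
/vmfs/volumes/...                     datastore1   ...  VMFS-5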

After that I set up the host cache on the SSD. Host caching means that memory-constrained hosts can use a fast SSD for swapping. I chose to use 30 GB for the cache; don’t ask me why, it just felt like a good number. To be honest I don’t actually know if I need host caching, since I have lots of memory and my one and only datastore is an SSD (so there will be no swapping to slower HDDs anyway). I just wanted to try the feature out.

I also went through the other settings, including time configuration, where I set up NTP to make sure the server stays on time.

Finally I wanted to make sure the server starts up without problems with just the power and LAN cables connected. It did. The ESXi box ran fine and was ready for some VMs, the first being FreeNAS.

Building a virtualized NAS & DVR with ESXi 6.0 / Part 2: Hardware Assembly

Introduction

In this post I describe the hardware assembly of my ESXi 6.0 whitebox, which (hopefully) should run FreeNAS and MythTV. If you haven’t read the first part of this series, be sure to check it out as it goes into more detail about what I’m doing and how I selected my hardware.

Preliminary assembly and testing

As you may remember from the first part of this series, I decided to base my build on these components:

Some of you might wonder why my list doesn’t include a computer case. The reason is simple: I wanted to limit my risks by testing the core components first. I didn’t want to end up in a situation where, for example, the motherboard wouldn’t play nice with ESXi and I’d have to swap it for something else, perhaps with a different form factor.

I started out by assembling the motherboard, CPU, cooler and RAM on an old testing bench. Then I hooked up the PSU along with the power brick and made sure the computer started and I could get into UEFI without problems. Once there, the first thing I wanted to check was VT-d support, which I was able to enable successfully (phew!). I also took a look at the temperature sensors and noticed that the 212 EVO cooler could easily handle the idle temperature load of the CPU when using the lowest fan speed setting, which made the cooler pretty much silent.

Because I was still actively using my old MythTV server, and didn’t want to risk losing the data on my old FreeNAS disk, I continued my tests without the tuner card and with only one of the storage HDDs (the new one). My plan was to start with the FreeNAS VM anyway (since the MythTV VM would use shared storage from FreeNAS) and I could always add the second HDD later on.

Selecting a case

After playing around with the test bench setup for a few days, I was ready to order a case. One of my goals for this project was to build a small ESXi server. I wasn’t able to use a mITX motherboard since I needed at least two PCIe slots (one for the tuner card and one for the SATA controller) but I still wanted to get as close to mITX size as I could.

The biggest problem I had was that 99% of cases have space for a traditional PSU. That was wasted space for me since I had my PicoPSU and external power brick. There are smaller mITX cases designed to be used with external PSUs, but I couldn’t find similar mATX cases – which of course is understandable, since mATX computers typically use more power-hungry components.

Finally I found the SilentiumPC Brutus Q30, which seemed to have everything I needed: an extremely compact size (28 cm height, 26 cm length, 20 cm width), space for three disks (two HDDs and one SSD), and the PSU slot over the motherboard, meaning I might be able to fit my huge CPU cooler in there since I didn’t have a PSU. I emailed the Polish case manufacturer about the maximum cooler height when not using a PSU, and they replied promptly with measurements indicating my cooler would fit just barely, with something like 0.5-1 cm to spare. As an added bonus, the case was very affordable – slightly over 40 euros at a local retailer. I was sold. Or rather the case was.

Putting it all together

When I received the case, I was pleasantly surprised by its looks and build quality. It’s simple but stylish. The only minor issue I had was that I’d have liked to mount both HDDs on the case floor, but unfortunately it only had mounting holes for one 3.5″ and one 2.5″ disk. Oh, and the front logo sticker was halfway peeled off. Not a big deal.

I had no problems assembling the components in the case. The CPU cooler fit perfectly with the heatpipes clearing the side panel and the fan blowing air through the heatsink toward the PSU opening at the back.

Regarding cooling, you may note that the case has no separate intake fan. A fan can be mounted on the floor, but that option is gone if you need the space for HDDs. Also, in a normal build the PSU fan sucks hot air from inside the case and pushes it out the back; without a PSU I have to rely solely on the CPU cooler fan, which, despite being aimed at the PSU opening, can leave hot air circulating inside the case.

I decided to solve this by creating a duct which would guide the hot air from the CPU cooler (and the case) straight through the PSU opening. Some fairly rigid paper, scotch tape and a pair of scissors was all it took, and it worked great.

Hardware impressions

The box works flawlessly and is completely silent when idling. The CPU fan can’t be heard – only when the HDDs are accessed can you hear that something is going on.

After some light hardware testing, trying out some stuff with a standalone FreeNAS installation and dabbling with a quick VMware ESXi test installation, my power meter showed that the box consumes a total of 40 watts when idling, with a peak of 75 watts.

So far I’m very happy with these results.

In the next parts of this series I’m going to be covering ESXi and FreeNAS with SATA controller passthrough.

Building a virtualized NAS & DVR with ESXi 6.0 / Part 1: Introduction and Hardware

Introduction

In this series I describe my attempt to learn VMware ESXi by building a small, low-power virtualization box running FreeNAS and MythTV. I have pretty much no previous knowledge of virtualization and ESXi, so I’ve started basically from scratch by googling and reading stuff around the web. This means that I learn as I go and I fully expect to make some mistakes along the way.

Background

I’ve been running a MythTV DVR (digital video recorder) for 6+ years. It’s a small, silent box in the corner of my living room that records TV programs, plays back videos and acts as a music server. It has gone through several hardware changes and currently contains, among other things, an ITX form factor motherboard, a low-power Sandy Bridge i3 CPU and a quad DVB-T2 PCIe tuner.

A year or two ago I built myself a small NAS server based on FreeNAS. I tried to keep it as simple as possible since I didn’t need anything too special: I just wanted a centralized place to hold some documents, videos and music, and some storage space for backups. So unlike the eight-disk monster builds with SAS controllers, ECC memory and whatnot, which you can read about around the web, I bought a second-hand motherboard with an integrated AMD E350 CPU, 4 GB of cheap RAM, a single Western Digital Red 3 TB drive and a Chinese PicoPSU knockoff (meaning I run the computer off a laptop power brick). Worked like a charm.

Now, both of these systems work fine. Great even. So I didn’t have any immediate need to change anything.

I had become interested in virtualization some time ago and started wondering if I should try out some stuff just to learn. Sure, you can download and install VirtualBox on just about any computer, which I had tried a while back, but it didn’t seem interesting and exciting enough. I wanted to learn ESXi, and what better way than to build a dedicated virtualization whitebox which would incorporate the functionalities of the MythTV PVR / HTPC and my NAS.

So I decided on the requirements for the build:

  • It should support running a MythTV backend with DVB recording capabilities
  • It should support running FreeNAS with ZFS, and have good performance (meaning an 80+ MB/s transfer rate)
  • It should have some spare capacity for playing around with other virtualized computers
  • It should be quiet and power-efficient, and for an extra challenge, be small in size so I can hide it in the corner of my home office (so no huge full tower case, thank you)
  • It shouldn’t cost a fortune

Selecting the hardware

I knew from my research that VT-d was the important keyword when selecting hardware for the build. It stands for Intel Virtualization Technology for Directed I/O and is Intel’s implementation of IOMMU. It allows the virtualization host to assign certain hardware resources directly to a virtual machine. In ESXi, the feature is called PCI passthrough: you can essentially give a VM direct access to PCI devices, provided that the hardware supports VT-d.

Why is this important? For one, the MythTV VM needs access to the physical DVB tuner hardware to be able to record TV broadcasts. Also, giving FreeNAS direct, exclusive access to a SATA controller should yield the same sort of performance and reliability as when building a standalone, non-virtualized NAS box.

Both the CPU and the motherboard have to support VT-d. Finding a VT-d compatible CPU is easy; most i7 and i5 CPUs support it. Finding a reasonably priced consumer motherboard that supports VT-d is a bit harder. Most motherboards support VT-x, the basic virtualization technology, but not VT-d. And even a chipset that supports VT-d is not enough, since the feature also has to be supported by the motherboard’s UEFI. Most motherboards do not have VT-d UEFI support; some support it in one UEFI version, only to have it removed in a later update. No idea why.

After some digging I found out that I’d have the best chance with ASRock’s motherboards. I finally found the H97M Anniversary, which had everything I needed: VT-d support (I checked the UEFI section of the manual), a Realtek RTL8111GR NIC which can be made to work with ESXi, 3 PCIe slots, mATX form factor and a very reasonable price (around 70 euros).

ASRock H97M Anniversary

I also found a good deal on a second-hand i7-4785T CPU, which has 4 cores, 8 threads and a very low 35 W TDP, which translates to low power usage and silent operation. Along with the CPU came a Cooler Master Hyper 212 EVO cooler which, due to the huge size of its heatsink, should be able to cool the low-power CPU at a very low fan RPM – or perhaps even without a fan at all (might be worth trying out at some point).

Cooler Master Hyper 212 EVO cooler

For RAM I bought 16 GB of DDR3, which I thought should be enough for a start: I wanted to give FreeNAS 8 GB, the MythTV backend 2-4 GB and the rest would be free for playing around with VMs. I still had room for two more sticks of RAM in case I needed more.

I decided to use my PicoBox Z3-ATX-200 (a Chinese PicoPSU knockoff) as the power supply (PSU). It has many advantages over traditional PSUs: silent operation, high efficiency, takes up very little space etc. Now I know some of you will gasp at the thought of running a virtualization box with an i7 four-core CPU on a Chinese PicoPSU that cost around thirty bucks. I realize I’m taking somewhat of a gamble here, but the PSU worked without a hitch in the FreeNAS box and I’m running it with a high-quality laptop power brick. The combination is rated for 120 W, which is comfortably more than what I anticipated the entire box would use.

PicoBox Z3-ATX-200 PSU

For host storage I decided to use a combination of a USB stick for the ESXi installation and an old SSD I had lying around for logs and the VM datastore. My reasoning behind installing on a USB stick was that I could buy two sticks and take regular backups (clones) of the one currently in use. If the primary stick were ever to fail, I could just pop in the backup stick.
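
Cloning the stick is simple on any Linux machine with dd. A minimal sketch, assuming the primary stick shows up as /dev/sdX and the spare as /dev/sdY (double-check with lsblk first, as dd overwrites without asking):

$ sudo dd if=/dev/sdX of=esxi-usb-backup.img bs=4M status=progress
$ sudo dd if=esxi-usb-backup.img of=/dev/sdY bs=4M status=progress

The first command images the primary stick to a file; the second writes that image onto the spare.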

I know some people have built all-in-one ESXi & FreeNAS setups where the ESXi host is booted from a USB stick and the motherboard SATA controller is passed through to a FreeNAS VM, which then shares storage back to the host over a virtual storage network using iSCSI or NFS. My first thought is that it sounds risky: if the FreeNAS VM is down for any reason, ESXi loses access to its datastores, which I assume means other VMs go down as well. I didn’t like the idea of being that dependent on the FreeNAS VM, so I decided the safest bet would be to have a separate datastore disk for ESXi. This meant that ESXi would use the onboard SATA controller and I had to get a separate PCIe controller for the FreeNAS VM.

Finding a suitable SATA controller was a bit tricky since it has to be supported by ESXi. Natively ESXi supports fancy LSI controllers that cost three-figure amounts, but no cheap consumer-grade alternatives. However, I found out that by using some ESXi installation modifications, courtesy of the ESXi homelab community, support for various consumer SATA chipsets could be added. So I decided to try my luck and order a no-name PCIe SATA controller based on the ASM1061 chipset from eBay for a very reasonable 10 euros. As for disks, I wanted to use my old WD Red 3 TB disk, but this time ZFS mirrored, so I went out and bought another one.

ASM1061-based PCIe SATA controller

As for the MythTV VM, I decided to use my existing TBS 6285 quad-tuner DVB-T2 PCIe card from my old MythTV build. The VM would primarily use the NAS for storage but run on a virtual disk big enough to hold a few days’ worth of recordings, in case there are problems with the NAS.

TBS 6285 Quad DVB-T2 PCIe tuner

Here’s a summary of the hardware I ended up using:

I’ll be putting the hardware together in the next part of the series, so stay tuned.