Building a virtualized NAS & DVR with ESXi 6.0 / Part 4: Setting up and running FreeNAS

Introduction

I’ve been running FreeNAS virtualized on my VMware ESXi 6.0 whitebox for quite a while — since last spring, I believe. I’m actually quite surprised how simple and trouble-free the system has been. In this post I’ll describe what I did and how the system has been set up.

If you’re not familiar with the previous parts of this series, you may want to check them out first.

Setting up the FreeNAS VM

I started by creating a new virtual machine via vSphere Client. I selected FreeBSD 64-bit as the guest OS and gave the VM two CPU cores and eight gigabytes of memory. For the FreeNAS OS, I created an eight-gigabyte virtual disk in the datastore, as that is the minimum size according to the FreeNAS 9.3 documentation.
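For the record, those choices boil down to just a few entries in the VM’s .vmx file. A minimal sketch, assuming the usual key names (I configured everything through the vSphere Client rather than editing the file by hand):

guestOS = "freebsd-64"
numvcpus = "2"
memsize = "8192"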

The important bit when setting up the FreeNAS VM is configuring PCI passthrough for the SATA controller that is hosting the disks for the NAS. It essentially means handing the VM exclusive control of the device without having any virtualization layers in between. Allowing a NAS VM direct access to a SATA controller and its disks is necessary to have the system work reliably and with good performance. Building a NAS with ZFS on virtual disks is a recipe for disaster, at least according to most people in the FreeNAS community.

Configuring PCI passthrough in itself is easy. I simply chose to add a PCI device and selected the physical SATA controller from the list.
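If you want to double-check that you’re picking the right controller, one way (assuming SSH access to the ESXi host) is to list the host’s PCI devices from the ESXi shell, for example with lspci or esxcli hardware pci list, and look for the AHCI/SATA entry. The host name in the prompt below is just a placeholder:

[root@esxi:~] lspci | grep -i ahci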

Installing and configuring FreeNAS

After downloading the FreeNAS installation media, I transferred it to the ESXi datastore via SSH and SCP. To get the VM to boot from the media, I mounted the ISO file in the virtual DVD drive (Edit VM Settings -> Hardware -> CD/DVD drive -> Device Type -> Datastore ISO File) and set the device to connect at power on. I also set the VM to start in BIOS (Options -> Boot Options -> Force BIOS Setup) so I could get the machine to boot from the virtual DVD drive.
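For reference, getting the ISO onto the datastore is a one-liner along these lines (the host name, datastore name and ISO file name are placeholders):

scp FreeNAS-9.3-STABLE.iso root@esxi:/vmfs/volumes/datastore1/ISO/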

The actual FreeNAS installation was very straightforward. I only had to choose on which disk to install, what root password to set and press enter a few times. After that, the VM booted, FreeNAS started and the console told me which IP address I could use to access the web-based configuration interface.

From here on, it was a case of configuring FreeNAS as on any other machine. The single Western Digital Red 3 TB SATA drive, connected to the passed-through SATA controller, was recognized by the OS without any problems. I was able to wipe it, create a ZFS volume and some ZFS datasets, all from within the web interface. I also created user accounts, configured the required services (e.g. CIFS/Samba for file sharing, SSH for remote access and S.M.A.R.T. for disk health checks) and set up periodic ZFS snapshots.
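All of this happens in the web interface, but under the hood it boils down to ordinary ZFS operations. As a rough illustration only (the dataset name here is made up; the pool name matches my setup), the equivalent shell commands would look something like this:

[root@freenas ~]# zfs create Tank/media
[root@freenas ~]# zfs set compression=lz4 Tank/media
[root@freenas ~]# zfs list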

I didn’t run into anything out of the ordinary. I was able to connect to the CIFS shares and read and write files with speeds around 90-100 MB/s. Everything seemed to work perfectly.

Adding a second drive for ZFS mirroring

I still had my old dedicated NAS (see Part 1 / Background) running during this build – I didn’t want to retire it until I was absolutely sure about the new virtual NAS setup. After testing the new NAS for a few weeks without any problems, I felt I had reached that point. This meant that I could decommission the old NAS and add its Western Digital Red 3 TB disk to the new NAS for ZFS mirroring.

The plan was to attach the second disk, wipe (erase) it and convert the single-disk ZFS pool into a mirror. I started by identifying the current disk in the new NAS to make sure I wouldn’t wipe the wrong one. Checking Storage -> Volumes -> View Disks showed that the current disk in the pool was ada1, and I also took note of its serial number. Looking at View Volumes -> Select volume -> Volume Status showed the status of the single-disk ZFS pool, which was displayed as “stripe”.

Single-disk ZFS pool status
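For extra peace of mind, the device-to-serial mapping can also be checked from the FreeNAS shell, for example like this (ada1 being the existing disk in my case):

[root@freenas ~]# camcontrol devlist
[root@freenas ~]# smartctl -i /dev/ada1 | grep -i serial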

After connecting the second disk to the SATA controller and booting up the machine, FreeNAS recognized the disk and named it ada0. I could then proceed to wipe the disk.

Wiping the disk to be added

Next it was time to add the fresh disk to the ZFS pool and convert it to a mirror. Unfortunately, the FreeNAS web interface did not support this, so I had to do it from the console, following instructions I found on the FreeNAS forums.

First, the disk had to be partitioned with GPT, with a small swap partition and the main partition (note: make sure you point to the correct disk!).

[root@freenas ~]# gpart create -s gpt /dev/ada0
ada0 created

[root@freenas ~]# gpart add -i 1 -b 128 -t freebsd-swap -s 2g /dev/ada0
ada0p1 added

[root@freenas ~]# gpart add -i 2 -t freebsd-zfs /dev/ada0
ada0p2 added

Checking the partition result:

[root@freenas ~]# gpart list

Geom name: ada0
...
scheme: GPT
Providers:
1. Name: ada0p1
   Mediasize: 2147483648 (2.0G)
   ...
   rawuuid: 96264414-cd7a-11e5-b97a-000c29370f26
   ...
2. Name: ada0p2
   Mediasize: 2998445412352 (2.7T)
   ...
   rawuuid: a77dde2c-cd7a-11e5-b97a-000c29370f26
   ...

Next I checked the current pool status.

[root@freenas ~]# zpool status
  pool: Tank
 state: ONLINE
 ...

 NAME                                         STATE READ WRITE CKSUM
 Tank                                         ONLINE   0     0     0
   gptid/50af541b-95fa-11e5-a5d3-000c29370f26 ONLINE   0     0     0

I also printed the GPT labels of the disks so I could copy-paste them into the upcoming zpool attach command.

[root@freenas ~]# glabel status
                                      Name  Status  Components
gptid/50af541b-95fa-11e5-a5d3-000c29370f26     N/A      ada1p2
gptid/96264414-cd7a-11e5-b97a-000c29370f26     N/A      ada0p1
gptid/a77dde2c-cd7a-11e5-b97a-000c29370f26     N/A      ada0p2

After that came the scary bit: attaching the new disk to the pool. Here is an excerpt from the zpool manual:

     zpool attach [-f] pool device new_device

Attaches new_device to an existing zpool device. The existing device cannot be part of a raidz configuration. If device is not currently part of a mirrored configuration, device automatically transforms into a two-way mirror of device and new_device.  If device is part of a two-way mirror, attaching new_device creates a three-way mirror, and so on. In either case, new_device begins to resilver immediately.

So, here goes nothing…

[root@freenas ~]# zpool attach Tank /dev/gptid/50af541b-95fa-11e5-a5d3-000c29370f26 /dev/gptid/a77dde2c-cd7a-11e5-b97a-000c29370f26

[root@freenas ~]# zpool status
  pool: Tank
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress
        2.30G scanned out of 1.56T at 78.5M/s, 5h46m to go
        2.30G resilvered, 0.14% done

 NAME                                         STATE READ WRITE CKSUM
 Tank                                         ONLINE 0 0 0
  mirror-0                                    ONLINE 0 0 0
   gptid/50af541b-95fa-11e5-a5d3-000c29370f26 ONLINE 0 0 0
   gptid/a77dde2c-cd7a-11e5-b97a-000c29370f26 ONLINE 0 0 0 (resilvering)

After a couple of hours:

[root@freenas ~]# zpool status
 pool: Tank
 state: ONLINE
 scan: resilvered 1.56T in 4h17m with 0 errors

 NAME                                         STATE READ WRITE CKSUM
 Tank                                         ONLINE 0 0 0
  mirror-0                                    ONLINE 0 0 0
   gptid/50af541b-95fa-11e5-a5d3-000c29370f26 ONLINE 0 0 0
   gptid/a77dde2c-cd7a-11e5-b97a-000c29370f26 ONLINE 0 0 0

Impressions

After running FreeNAS virtualized with this setup for close to a year, I haven’t run into a single problem. It has been completely reliable. It generally performs very well, although for some reason the read speeds fluctuate a bit between 90 and 115 MB/s while write speeds remain more constant.

The FreeNAS community seems to be quite wary of virtualizing FreeNAS; the general opinion seems to be “don’t attempt it, you will regret it”. Granted, I can’t comment on larger, more complex use cases with higher loads and requirements, but in my case I really couldn’t be happier with my setup.

