NAS Part 2

In December of 2010 I married the love of my life.  She brought the woman’s touch into my home.  She also brought all her data!  My poor struggling original NAS was already 80% full.  There was no way I was going to rip all her movies, M*A*S*H episodes, and everything else onto my poor old NAS.  About this time I set out in search of new solutions.

In a previous life I was a systems administrator, and I once had a very bad day on the job.  When I arrived at work, one of the drives in our 18-drive RAID5 array on a Sun Solaris server had failed.  Being the good administrator I was, I carefully replaced the drive and started a rebuild.  About halfway through the rebuild, another drive in the array failed.  RAID5 protects against only one drive failure.  The second failure was too much for the array, and it went down, never to come back up.

In hindsight, creating a RAID5 array with 18 drives was not a smart move and we shouldn’t have done it.  We had incorrectly convinced ourselves that because we were using enterprise drives, the likelihood of two drives failing at the same time was low.  What we hadn’t fully considered was how much stress the rebuild would put on the remaining drives.

Haunted by that experience, I worried that if one of the drives in my RAID5 failed, another might fail during the rebuild and I would lose my data.  I had backups, so it wouldn’t have been a disaster, but it would have been a major inconvenience.  Add to that the fact that I am a nerd, and I didn’t want to take the risk.

Armed with this information I wrote a list of requirements for my new NAS:

  • At least 8 TB of storage
  • Ability to have 2 drives fail and no data loss
  • Hot swap capable
  • Still quiet and energy efficient

About this point in time I started reading about the various options available as far as operating systems and software are concerned.  I read about Unraid, ZFS, FreeNAS, OpenMediaVault, and others.  In the past I had worked a lot with Solaris and the *BSDs, and I have to admit I missed playing with that technology.  This pushed me towards OpenSolaris or FreeNAS so that I could use ZFS with a RAIDZ2 array.  RAIDZ2 would allow me to lose two drives and still have no data loss.  ZFS supports files up to 16 exbibytes (giga -> tera -> peta -> exa).  ZFS also has scrubbing to protect against bit rot.  All in all, it had a TON of features I wanted to play with that my Linux md RAID didn’t have.
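As a rough sketch of what that looks like on the command line (the device names here are assumptions, and on FreeNAS you would normally do this through the web UI rather than by hand):

```shell
# Create a RAIDZ2 pool named "tank" from six disks.
# Any two of the six can fail with no data loss.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# Start a scrub: ZFS reads every block, verifies its
# checksum, and repairs silent corruption from parity.
zpool scrub tank

# Check pool health and scrub progress.
zpool status tank
```

A periodic scrub (weekly or monthly via cron) is what actually delivers the bit-rot protection; without it, corruption is only noticed when a block happens to be read.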

I placed an order for the following components:

  • 1 x LIAN LI PC-Q25B Black Aluminum Mini-ITX Tower Computer Case
  • 1 x APC BE650G1 650 VA Back-UPS 650
  • 1 x SILVERSTONE 500W ATX Power Supply
  • 2 x 8 GB Crucial Ballistix DDR3 1600 Ram Sticks
  • 1 x ASUS C60M1-I Mini-ITX Motherboard (embedded AMD Fusion C-60 APU)
  • 6 x Seagate Barracuda 7200 ST3000DM001 3TB 7200 RPM Hard Drives
  • 1 x Intel EXPI9301CT 1000 Mbps PCI-Express Network Card
  • 1 x Silverstone PP05 Short Cable Set for Modular PSU
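With six 3 TB drives in RAIDZ2, two drives’ worth of capacity goes to parity, leaving roughly four drives’ worth usable.  A back-of-the-envelope sketch (real pools lose a bit more to metadata and to the TB-vs-TiB gap):

```python
def raidz2_usable_tb(num_drives: int, drive_tb: float) -> float:
    """Approximate usable capacity of a RAIDZ2 vdev.

    RAIDZ2 dedicates two drives' worth of space to parity,
    so usable space is roughly (n - 2) * drive size.
    """
    if num_drives < 4:
        raise ValueError("RAIDZ2 needs at least 4 drives")
    return (num_drives - 2) * drive_tb

# Six 3 TB Seagate drives -> about 12 TB usable.
print(raidz2_usable_tb(6, 3.0))  # 12.0
```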

One of the coolest things about the C60M1-I was that it had 6 onboard SATA connectors, so I wouldn’t need any sort of I/O expansion card.

I assembled the system after all the components arrived.  I can’t say enough good things about the Lian-Li case.  It is a very high-quality piece of equipment and worth the price.  I have never had such a nice case before, and I will seriously consider them for my next PC build.

I built this server in May of 2013.

[Build photos: IMG_7417, IMG_7416, IMG_7415, IMG_7413]

Overall I was very happy with this build, except for the mistakes I outline below.

Pros:
  • ~ 12 TB of usable storage
  • Ability to lose two drives without loss of data
  • Lian-Li case/backplane is Hot Swap capable
  • Fast
  • Quiet
  • Doesn’t use much electricity
  • Better airflow = cooler drives (they hover around 37 °C)
  • ZFS Awesomeness

Cons:
  • Lian-Li case can’t hold more than 8 drives
  • No ECC
  • No Encryption
  • Limited Expandability

Honestly, the biggest issues with this NAS stemmed from my lack of understanding of ZFS at the time I built it.  ZFS can do some really bad things if memory gets corrupted.  See the ECC vs. non-ECC RAM and ZFS discussion for more information.
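To see why checksumming cuts both ways, here is a toy illustration (standard-library hashing, not actual ZFS code): a single flipped bit on disk changes the block’s checksum, which is how a scrub detects bit rot.  But if non-ECC RAM corrupts the data *before* the checksum is computed, the bad data gets stored as “valid” and the checksum can no longer save you.

```python
import hashlib

def checksum(data: bytes) -> str:
    # ZFS checksums every block (fletcher or SHA-256);
    # SHA-256 stands in for that here.
    return hashlib.sha256(data).hexdigest()

block = b"important family photos"
stored_sum = checksum(block)  # computed at write time

# Bit rot on disk: flip one bit in the stored block.
rotten = bytes([block[0] ^ 0x01]) + block[1:]

# A scrub recomputes the checksum, sees the mismatch,
# and repairs the block from RAIDZ2 parity.
print(checksum(rotten) != stored_sum)  # True: corruption detected
```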

Another related issue is that my processor (the AMD C-60) doesn’t support the AES-NI instruction set.  This means I couldn’t encrypt my drives and still get acceptable performance.

The lack of ECC and AES-NI could be fixed by moving to an AMD Kabini GX Mini-ITX board, but as of January 2014 they are still not readily available.

At the core, all of these issues come about because I chose commodity hardware over server hardware.  My next build will overcome these limitations, but at the cost of more expensive components, more electricity usage, and more noise; that is a post for another day.