How many times have you thought “gee, if I had an SSD on my build box things would sure go a heck of a lot faster”?

Sometimes it’s true, sometimes it’s not, and the results can be a bit surprising.

Over the past few days I’ve done a full build of SmartOS from source three times. In all cases the build machine was an HP DL165G7 with 32 cores of AMD Opteron 6274, 192 GB of RAM and an LSI 9211-8i disk controller. The SmartOS guest build environment was identical for each build and configured with 64 GB of RAM.

The only difference was in the configuration of the disks. The three configurations were:

  • 8 x 600 GB 10k RPM SFF SAS drives, configured as a single RAIDZ2 vdev (roughly analogous to RAID6)

  • 6 x 600 GB 10k RPM SFF SAS drives plus 2 x 200 GB SAS SSDs, configured as three mirror vdevs (a stripe across three mirrors) with a mirrored SSD slog

  • 8 x 200 GB SAS SSDs; taking the installer defaults resulted in two three-device RAIDZ1 vdevs (a stripe of two RAID5-like arrays) plus two spares.
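For reference, the second configuration could be created by hand with something like the sketch below. The pool name and device names are hypothetical (a real system would have its own c#t#d# identifiers), and on SmartOS the pool is normally built for you at install time:

```shell
# Three two-way mirror vdevs striped together, plus a mirrored SSD slog.
# Device names are placeholders for illustration only.
zpool create tank \
    mirror c0t0d0 c0t1d0 \
    mirror c0t2d0 c0t3d0 \
    mirror c0t4d0 c0t5d0 \
    log mirror c0t6d0 c0t7d0
```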

Here are the results:

smartos build - 8 disk raidz2, no slog

lofi compressing usr file system

real    3h19m16.88s
user    17h17m0.14s
sys     15h55m26.87s

smartos build - 3 x 2-drive mirror vdevs, mirrored ssd slog

lofi compressing usr file system

real    3h22m16.50s
user    16h51m32.19s
sys     14h44m19.74s

smartos build - 2 x 3-drive raidz1, two spares, SSD (took default at installation for 8 drives)

lofi compressing usr file system

real    3h16m3.91s
user    16h54m0.59s
sys     14h26m54.59s
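One detail worth pulling out of these numbers: in each build the combined user + sys time is roughly ten times the wall-clock time, so on average only about 10 of the 32 cores were busy. The quick calculation below (using the RAIDZ2 build’s times) shows how to derive that:

```python
# Average CPU concurrency of a build: (user + sys) / real.
# Times are copied from the 8-disk RAIDZ2 build output above.

def to_seconds(h, m, s):
    """Convert hours/minutes/seconds components to total seconds."""
    return h * 3600 + m * 60 + s

real = to_seconds(3, 19, 16.88)
user = to_seconds(17, 17, 0.14)
sys_ = to_seconds(15, 55, 26.87)

concurrency = (user + sys_) / real
print(f"average concurrency: {concurrency:.1f}x")  # roughly 10 of 32 cores busy
```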

Counterintuitively, a good filesystem with decent caching and write-aggregation characteristics (plus plenty of RAM to cache in, and no synchronous writes) is almost as fast as an all-SSD pool, at least for this workload.
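The synchronous-write point also explains why the slog configuration didn’t help: an slog only accelerates synchronous writes, and a compile workload issues almost none. If you want to check this for your own workload on illumos/SmartOS, something like the following sketch works; the dataset name is hypothetical:

```shell
# Check the sync policy on the build dataset (dataset name is a placeholder).
zfs get sync zones/build

# Watch ZIL (synchronous write) activity while the build runs; if the
# counters stay near zero, an slog device will not speed things up.
zilstat 1 10
```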

The obvious takeaway from this exercise is that there’s no substitute for carefully benchmarking your actual workload to see whether the technology you intend to apply delivers the results you want.