One of the things we're doing a fair amount of these days is shlepping around ZFS snapshots (both incremental and full) between machines in the same or adjacent racks. Unfortunately, sending them over an SSH connection like this:
zfs send zones/med@now | ssh remotehost "zfs recv zones/med"
gets a little pokey, because the crypto in the ssh implementation on SmartOS runs on a single core. In my particular case the encryption is also unnecessary (though that's a choice that should be made wisely).
I wish the SmartOS people would get, uh, smart and migrate to current OpenSSH HPN-SSH patches with the ability to turn off encryption (i.e. only use crypto for the authentication handshake and not for the payload). I'm sure the argument against this is that people would be stupid about what they used it for and have data breaches. As for me, I'm pretty sure that the NSA isn't in my rack.
Until then, though, there's netcat. Written by my pal Hobbit a couple of decades ago, nc is a nice way to hook tcp/ip sockets to Unix pipes. This gets us goodput between a couple of DL160G6es with four disk RAIDZ2 of around 90 MByte/sec (via gigabit ethernet). My suspicion is that the gating factor is how fast we can write to the disks. Nice:
18:16:13   1.04G   zones/med@201410291800-KEEPME
18:16:14   1.14G   zones/med@201410291800-KEEPME
18:16:15   1.23G   zones/med@201410291800-KEEPME
18:16:16   1.32G   zones/med@201410291800-KEEPME
18:16:17   1.41G   zones/med@201410291800-KEEPME
18:16:18   1.50G   zones/med@201410291800-KEEPME
You probably want to block off outside access to the port you use at your router/firewall so as to avoid shenanigans. Here's the cheat sheet:
On the receiving machine, set up a listener:
nc -l -p 9999 | zfs receive zones/med
Then on the sending machine set up a zfs send (verbose so you can gawk):
zfs send -v zones/med@now | nc -w 20 192.0.2.10 9999
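The same plumbing carries the incremental streams mentioned at the top unchanged. A sketch, assuming a common earlier snapshot (called @prev here, a hypothetical name) already exists on both sides:

```shell
# Receiving machine: same listener as before
nc -l -p 9999 | zfs receive zones/med

# Sending machine: -i streams only the blocks that changed between
# @prev and @now; the receiver must already have @prev for it to apply
zfs send -v -i zones/med@prev zones/med@now | nc -w 20 192.0.2.10 9999
```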