I’ve been shopping around for a while for a decent log-eater-and-analysis tool for use in the BroColo.

I gave some cursory thought to using Splunk, but Splunk License vs. Second-Hand 1-Ton Diesel Pickup Truck seems to be an either-or proposition money-wise.

Graylog2 seems to have some mindshare, so I figured I’d give it a whirl and put some syslog (mostly mail logs) into it for subsequent analysis.

The web GUI is fairly intuitive, and there’s Elasticsearch under the hood, so the search grammar is Lucene. I’ve already found some interesting things in my mail environment that need to be cleaned up (subtly misconfigured stuff that was gunking up the logs periodically, etc.). So far so good, right?

Wrong. Graylog2 is off the table for security reasons.

Aside from the fact that it’s a big honking Java .jar (which always makes me worry), I had my first misgivings about it when I found this little gem in /etc/graylog/server/server.conf:

# You MUST specify a hash password for the root user (which you only need to initially set up the
# system and in case you lose connectivity to your authentication backend)
# This password cannot be changed using the API or via the web interface. If you need to change it,
# modify it in this file.
# Create one by using for example: echo -n yourpassword | shasum -a 256
# and put the resulting hash value into the following line
root_password_sha2 = e3c652f0ba0b4801205814f8b6bc49672c4c74e25b497770bb89b22cdeb4e951

What year is it anyway? Apparently some people still haven’t learned the lesson of LinkedIn’s epic fail: you don’t use unsalted hashes for password storage. Moreover, there’s no reason to roll your own scheme for password storage, given that there are plenty of splendid alternatives out there such as bcrypt, scrypt, and PBKDF2. Read the popular press and some blog entries and pick one that tickles your fancy; don’t implement your own KDF or hash function unless your name is Rivest or you work for the NSA or something, mkay? Oh wait, those guys get it wrong too. I am not joking here; this sort of thing makes me foam at the mouth.
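For contrast, this is roughly what doing it right looks like, using nothing but the Python standard library’s PBKDF2. To be clear, these `hash_password`/`verify_password` helpers are my own illustration, not anything Graylog2 ships; the storage format and iteration count are arbitrary choices for the sketch:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000) -> str:
    """Derive a salted PBKDF2-HMAC-SHA256 hash; salt and params travel with the digest."""
    salt = os.urandom(16)  # fresh random salt per password defeats rainbow tables
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"pbkdf2_sha256${iterations}${salt.hex()}${dk.hex()}"

def verify_password(password: str, stored: str) -> bool:
    """Re-derive with the stored salt/iterations and compare in constant time."""
    _algo, iters, salt_hex, dk_hex = stored.split("$")
    dk = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), int(iters)
    )
    return hmac.compare_digest(dk.hex(), dk_hex)
```

Note that hashing the same password twice yields two different strings (different salts), which is exactly the property the `root_password_sha2` scheme above lacks.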

I figured I could suck it up and still make myself reasonably happy via the simple expedient of front-ending the whole affair with nginx to handle the SSL layer and demand a client-side X.509 certificate (a HOWTO for the latter will be the subject of a future blog post, promise!).
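The front-end I had in mind was something along these lines. The hostnames and paths are placeholders, and the web interface port is whatever your Graylog2 install actually listens on; the interesting directives are the `ssl_verify_client` pair, which refuse any browser that can’t present a cert signed by your private CA:

```nginx
server {
    listen 443 ssl;
    server_name graylog.example.com;

    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    # Demand a client-side X.509 certificate signed by our own CA
    ssl_client_certificate /etc/nginx/ssl/client-ca.crt;
    ssl_verify_client on;

    location / {
        proxy_pass http://127.0.0.1:9000;  # Graylog2 web interface, loopback only
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```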

Turns out, though (after some head-scratching and difficulty with the initial turnup), that this is not the case. See, the client JavaScript (or whatever it is that’s running in the browser) demands to talk directly to Graylog2’s REST interface on the IP address and port configured in server.conf (TCP/12900 by default), rather than passing its API calls over the same TCP/443 connection that you (think you) logged in over.

SSH-tunneling my entire communication with Graylog2 seems to be a non-starter. Putting in a shim there would be an undertaking - basically making Graylog2 support some split-brain configuration that the developers never really imagined. And I don’t have the option of hiding the API from the whole world by binding it to 127.0.0.1 (my original plan, since my ingest was to be exclusively syslog).
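For the record, the knob in question (as I recall it from the server.conf of the version I was testing; the exact key name may vary between releases) looks something like this, and binding it to loopback is precisely what breaks the browser-side API calls:

```
# server.conf - REST API listen address (TCP/12900 by default).
# Bound to loopback it's hidden from the world, but then the JavaScript
# running in your browser can't reach it either:
rest_listen_uri = http://127.0.0.1:12900/
```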

I could maintain fine-grained firewall rules to cover the places I might happen to be, and do whatever’s necessary to get https:// working with the REST API, because God only knows what kind of material might be getting passed back and forth over it… or I could resign myself to using a VPN whenever I wanted to use it.

This is clearly “enterprise grade” architecture at its worst. Unacceptable for those of us from the SP world.

The search continues…