BLTs and Monitoring Systems: Best When Homemade (or Why You Should Be Using Nagios)
I'm tempted to grab your attention by opening with something jaded and cool, like "Monitoring sucks", or maybe even something jaded, cool, and punchy like "Your monitoring sucks", and then work back toward how, if you purchase my Nagios book and follow my patented 1024-step plan, it will cease to suck, or, at the very least, exhibit some quantifiably lower coefficient of suckiness. Along the way I'd rant about a gaggle of my own straw-man pet peeves with systems monitoring—all of which, for reasons you'll see in a few paragraphs, would be of questionable relevance to you.
But we're all engineers here, and beyond (I hope) the need for punchy marketing chicanery. That's a relief, because I genuinely enjoy systems monitoring, to the extent that I would fail utterly in any attempt to be jaded, cool, and irrelevant where that subject is concerned. Especially irrelevant; my heart just wouldn't be in it, and anyway my book doesn't work that way. Well, it is possible (dare I say probable) that if you read it, your monitoring might suck less, but it isn't about replacing your methodology or tools with mine.
If you don't have a systems monitoring methodology, or any tools, feel free to replace that gaping void in your—probably harried and unhappy—life with those outlined in my book, but my point is that systems monitoring is a different sort of undertaking from, say, running web servers. If you need to run a web server, you want a book on Apache or IIS, and you'll want it to outline the various configuration options in a way that's somehow superior to the official documentation (which is not a high bar for either Apache or IIS), and you'll want a side order of do's and don'ts, and pithy war stories to go with that, and maybe even some nice appendices you can rip out and tape to your wall for quick reference.
If, by comparison, you want to MONITOR a web server, then you have a few thousand choices to make. Your decisions will be highly dependent on your goals, knowledge, the server architecture, the infrastructure within which it resides, and what exactly you're using your web server for (cleartext/SSL, static/dynamic content, simple/3-tier, Java/PHP/Ruby, etc.). An IIS admin at a Fortune 500 company with a server room will have a different set of requirements than an Apache admin at a cloud-based startup. In fact, two different admins in the exact same environment can have wildly varying answers, and they might both be right. The perfect monitoring system is, I humbly suggest, not a thing you can buy. It is a point on a curve that plots flexibility as a function of ease of use. My point ≠ yours.
And here is an important corollary: Monitoring systems that work are always customized. They tend not to be turn-key monolithic programs with menus and check-boxes; they are modular frameworks, loosely tied together, often composed of pieces run by disparate departments, and sometimes visualized as a singular entity. In the same way that restaurant BLTs are always inferior to the ones you make at home, functional monitoring systems—the ones that make everyone happy—are homemade.
Now you can see why my own gaggle of pet peeves would be irrelevant to you; your ideal monitoring system isn't the same as mine. So the classic tool-book approach, where you and I have a sort of tacit understanding of what you're trying to achieve, fundamentally breaks down for monitoring tools. Monitoring is just too big a thing. If we drew a Venn diagram of every possible computer-engineering-related pursuit, monitoring would be a huge circle in the middle that intersected everything. You might be monitoring the chlorine levels in the City of Fresno's water supply for all I know. Monitoring IS the "feedback" in the "feedback loop," and there's just no telling what craziness your loop might entail.
We can agree, however, that a book detailing the various configuration options of a single tool isn't what you need if a great monitoring system is what you want. But that's OK because, as I've already implied, that isn't the book I've written. To be fair, it is a Nagios book entitled "Nagios", so I understand how that might have confused you, but if you knew a little bit about Nagios, I think you'd understand how it was possible to write a tool-book that wasn't a book about a tool.
To that end, here's a pithy fictional war story. Once upon a time, a server failed. It was an important server and it was down all night, and everyone was upset, and they all blamed Ted. Ted was a clever man, so he blamed his predecessor and vowed publicly that if it ever happened again, he would be notified so he could fix it before anyone noticed. So he wrote the following shell script and threw it in cron:
ping -c1 server.place.com >/dev/null || echo ZOMG | mail ted@place.com
This worked OK for a while, but Ted eventually hit a few problems. One day, an unrelated problem with the network caused 40 short-lived connectivity failures between Ted's monitoring box and the server, which in turn caused 40 pages for Ted. Later, Ted got a page that the server was down in the middle of the night, and decided to get a few more hours of sleep before fixing it. But then he forgot all about it and got blamed all over again. This caused a few other admins and a manager to demand to be added to the notification script, but the manager didn't want to be notified in the middle of the night, so Ted had to set up (and maintain) a separate script just for him.
Ted has a good functional test that does what he wants, how he wants it. All he needs is a better way to run it—something like a task-specific scheduling and notification engine that will take care of running his test on a schedule and handling the complications involved in notification and escalation.
Nagios is a task-specific scheduling and notification engine that wants to run user-provided test scripts. It gives Ted what he needs to define scheduling, notification details, escalation, and time periods for his tests, and takes care of running them, interpreting their output, and taking actions like notifying people. It has no real idea what the test results mean; it merely takes pre-defined actions based on the exit codes of the test scripts it runs. It tracks dependency relationships, so Ted gets paged once for the network problem instead of 40 times for a non-existent server problem. It provides follow-up notifications, escalation, and auto break-fix, so he can't forget about outages. And it manages users and group memberships, so people get paged at appropriate times without any extra work.
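To make that exit-code contract concrete, here is a minimal sketch of what Ted's one-liner might look like rewritten as a Nagios plugin (the file name and host are hypothetical). Nagios runs the script on whatever schedule Ted defines, reads the exit code, and handles the who-gets-paged-and-when logic itself:

#!/bin/sh
# check_ted_server.sh -- a hypothetical plugin wrapping Ted's ping test.
# Nagios only cares about the exit code: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN.
# The first line of output becomes the status text shown in the UI and in notifications.
HOST="server.place.com"

if ping -c1 "$HOST" >/dev/null 2>&1; then
    echo "PING OK - ${HOST} is answering"
    exit 0
else
    echo "PING CRITICAL - ${HOST} is not answering"
    exit 2
fi

Everything Ted was starting to hand-roll in cron (retries, who gets notified, when, and how often) moves into Nagios object definitions for services, contacts, time periods, and escalations, so the test script itself stays this small.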
Nagios is also the single most successful monitoring system, commercial or otherwise. You might suspect this has something to do with its price tag, but there are quite a few free monitoring systems out there. I think it has been so successful because it does less. Where most other monitoring systems out there are limited to a subset of functionality—some list of things that they can monitor—Nagios just lets Ted be Ted. Further, because it both begs to be built upon and demands to be customized, a mountain of Nagios customization has steadily grown up around it over the years, such that Nagios can be made to suit just about any imaginable need, from turn-key and easy to use with commercial support, to bleeding-edge international multi-site redundant Nagios clusters with state-sharing.
Did I say mountain? A more apt metaphor is that Nagios is the central star of a solar system of interrelated tools. Many of these orbiting tools (a few thousand or so) are asteroids and small planetoids, but a few, like Check_MK, DNX, OMD, Merlin, and mod_Gearman, are planets of their own, and books could be written about them individually. Locked in a sort of extra-solar gravitational field are myriad other stand-alone monitoring tools like RRDTool, Cacti, Graphite, Ganglia, Collectd, and Riemann; they are not specifically created to interoperate with Nagios, but are excellent tools that can feed or be fed by Nagios. Nagios plays very well with other tools.
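The feeding usually happens over a humble convention: anything a plugin prints after a pipe character is treated as performance data, which add-ons like PNP4Nagios or Graphios can collect and forward to RRDtool or Graphite. A sketch of the idea, with a made-up URL and thresholds:

#!/bin/sh
# check_web_latency.sh -- hypothetical plugin emitting Nagios performance data.
# Everything after the '|' follows the label=value[UOM];warn;crit;min;max convention,
# so graphing tools can parse it without caring what was measured.
URL="http://server.place.com/"

RTT=$(curl -s -o /dev/null -w '%{time_total}' "$URL")

# A real plugin would also compare RTT against the thresholds and exit 1 or 2 accordingly.
echo "HTTP OK - ${RTT}s response time | time=${RTT}s;1.000;5.000;0"
exit 0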
Nagios is also a domain-specific language for prototyping systems monitoring solutions. Once you understand Nagios and the various ways it's been extended, you pretty much understand the problem domain, by which I mean you know what humanity knows about how to monitor computational entities. You also have a good mental model of the data structures required in the field: how much and what kind of metric and availability data need to be transmitted, parsed, and stored. So even if you don't use Nagios in your environment, learning about how it works makes you a better engineer—one who is adept at designing and communicating monitoring solutions to other engineers in an implementation-agnostic way—and that is something I cannot say about the preponderance of monitoring systems I've used. Further, vast swaths of Nagios-related software can be "snapped off" and used in other monitoring contexts. Knowing about these pieces WILL help you avoid re-inventing the wheel with whatever monitoring system you're stuck with.
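That data model is small enough to write on a napkin. A passive check result handed to Nagios through its external command file, for instance, is just a timestamp, an identity, a state, and a line of text. A sketch, assuming the conventional source-install path for the command file:

#!/bin/sh
# Submit a passive check result to Nagios. Everything it needs to know about an
# availability event fits on one line of the form:
# [timestamp] PROCESS_SERVICE_CHECK_RESULT;host;service;return_code;plugin_output
CMDFILE="/usr/local/nagios/var/rw/nagios.cmd"   # conventional default; adjust to your install

NOW=$(date +%s)
printf '[%s] PROCESS_SERVICE_CHECK_RESULT;server.place.com;PING;2;PING CRITICAL - host unreachable\n' "$NOW" > "$CMDFILE"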
By now I hope I've given you some idea of why a good Nagios book can't strictly be a book about Nagios, at least not if it sincerely wants you to be successful and happy with your systems monitoring efforts like mine does. If you can imagine a tool-book that arms you with what you need to design and build your own BLT using the specific combination of state-of-the-art monitoring tools that uniquely fit your needs, a book packed with hard-won nuggets of wisdom that covers not just configuration, but also best practices, theory, and integration details for the entire solar system of software orbiting Nagios, then my book is probably not quite what you're imagining, but it's close enough to warrant the 30-some-odd bucks.
Take it easy.