You Keep Using That Word ...
When monitoring doesn't mean what you think it does
I don't like to get all "me too" on a topic, but I really enjoyed the back-and-forth on Richard Bejtlich's blog as Bejtlich and his commenters discussed what "monitoring" means. It's pretty clear that everyone's using the word, but they're differing on the purpose of the monitoring itself.
One of the use cases I see often is "monitoring for risk." Now, risk is a slippery word in and of itself, as the squabbling on risk analysis mailing lists can demonstrate. It should always be accompanied by the word "of" to describe what you mean by it. Is it the risk of not being compliant? Is it the risk of not having your systems configured the way you want them? Or is it the risk of a breach?
When you say you are "monitoring" something, the word needs to be a transitive verb that comes with an object as well as a purpose. You are monitoring the network to look for changes in routing configuration. Or you're monitoring your users to look for activities that appear unusual. You are monitoring your security program to make sure you are spending your budget on the right things.
Some other uses (or perhaps misuses) of the word "monitoring" can refer to checking continuously against a set of requirements to make sure they're still being met. This is where "monitoring systems for compliance" comes in. If you are monitoring business processes to make sure they are being completed, or monitoring a vulnerability list to see whether new entries apply to your software, then that's different from monitoring system performance so that you can respond to latency or crashes.
The one thing that really bugs me is the conflation of "monitoring for compliance" with "monitoring for effectiveness." You may be scanning your systems to see whether they are all configured the way you want them to be (or the way you last left them), and you may scan them to see whether they have new vulnerabilities. But that's not the same as measuring the effectiveness of your controls -- as in, whether they are actually preventing or detecting intrusions. You may be 100 percent compliant with your intended controls -- but if their control levels are set to "stupid," then they're not going to stop an attack. And if your controls are set to be as restrictive as possible, but they're still getting bypassed in another way, then they're ineffective.
This, by the way, is also the difference between an audit and a penetration test, which is why compliance does not equal security. One tests the state of controls, and the other tests the results.
So I can certainly see how it can be frustrating to have a purported monitoring program in place that is checking for some things, but not others. Monitoring your program to see whether you are monitoring for intrusions is not only meta enough to make my teeth hurt; it also says nothing about how well you're doing it or whether you're already compromised. It's the difference between having syslog running (as required by a checklist), collecting those logs (check), collecting enough of them over the right period of time (check), actually reading them often enough to catch something, and understanding what you're reading. I'd even add "being able to act on that understanding," because monitoring without response capability is pretty useless.
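To make that ladder concrete, here's a minimal sketch of the gap between merely collecting logs and actually reading and acting on them. Everything specific in it is an illustrative assumption, not anything from the column or a real product: the /var/log/auth.log path, the "Failed password" pattern, and the notify() stand-in for whatever your response process would be.

```python
# Minimal sketch: collecting logs vs. reading them and acting on what you find.
# Assumptions (illustrative only): a Linux-style auth log at /var/log/auth.log,
# a "Failed password" string as the event worth acting on, and notify() as a
# placeholder for a real response step (page someone, open a ticket, block an IP).
import time

LOG_PATH = "/var/log/auth.log"   # hypothetical path; adjust for your environment
PATTERN = "Failed password"      # hypothetical indicator worth responding to


def notify(line: str) -> None:
    # Placeholder response hook -- the "being able to act" part.
    print(f"ALERT: {line.strip()}")


def follow(path: str):
    # Yield new lines as they are appended, roughly like `tail -f`.
    with open(path, "r") as handle:
        handle.seek(0, 2)  # start at end of file: only new events matter here
        while True:
            line = handle.readline()
            if not line:
                time.sleep(1.0)
                continue
            yield line


if __name__ == "__main__":
    for entry in follow(LOG_PATH):
        if PATTERN in entry:   # "reading" the logs, not just storing them
            notify(entry)      # acting on what you read
```

The point of the sketch is the last two lines: a checklist can confirm the file exists and is growing, but only the read-and-respond loop says anything about whether the monitoring is effective.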
Next time you're tempted to talk about "monitoring risk," try to do so without using either of those words. It may end up taking more sentences to describe what you mean, but it'll probably be clearer in the end. And talk about it in terms of purpose: why you are doing it, and what you are supposed to get out of it. That should differentiate the goals of compliance and effectiveness.
Wendy Nather is Research Director of the Enterprise Security Practice at the independent analyst firm 451 Research. You can find her on Twitter as @451wendy.