Log All The Things
How the growing granularity of computing is going to affect monitoring
Yes, computers are smaller now. From the behemoths of old to systems on a chip, we've seen a change in the form factor such that these days, you could just about lose millions of SSNs if your USB navel-piercing fell down the shower drain. And that's not even counting the sprawl when it comes to virtual machines.
But there's another trend, as well. Mainframes, VAXen, PDP-11s, and so on were multiuser and multipurpose (even if, in some cases, you could only load one card stack at a time). Then came distributed computing, and the computing power was split between the client and the server – notwithstanding diskless workstations and the whole "the network is the computer" thing. This meant that resources were now dedicated: some to the local work, some to the remote, and they weren't easily transferable.
Unix servers would often be centrally used – covering whole departments – but as time went on, enterprises split them for particular purposes, such as storage, processing, routing, mail, and more. Windows servers became even more granular, depending on how much they could handle at one time. Add in high availability requirements, and you suddenly had hundreds of servers to go with hundreds of endpoints.
This is about the time when logging and monitoring got complicated. Keeping clocks in sync and deduping the huge amount of similar data generated by each of those systems are just some of the challenges that gave rise to the industry we now know and love: log management. We're just now starting to figure out how to monitor mobile devices in a scalable and flexible manner.
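The dedup problem is easy to state and fiddly to do well. Here's a minimal, purely illustrative sketch of the idea in Python: strip out the fields that vary between otherwise-identical events (timestamps, process IDs), then fingerprint what's left. The regexes and sample log lines below are assumptions for demonstration, not any particular product's logic.

import hashlib
import re

# Illustrative only: collapse near-duplicate log lines by removing the
# fields that differ between otherwise-identical events, then hashing
# the normalized remainder. Real log-management tools use much richer
# normalization, but the core idea is the same.
TIMESTAMP = re.compile(r"\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}\S*")
PID = re.compile(r"\[\d+\]")

def dedupe(lines):
    counts = {}
    for line in lines:
        normalized = PID.sub("[PID]", TIMESTAMP.sub("<ts>", line))
        key = hashlib.sha256(normalized.encode()).hexdigest()
        counts[key] = counts.get(key, 0) + 1
    return counts  # fingerprint -> number of collapsed duplicates

logs = [
    "2013-05-01T10:00:01Z sshd[1234]: Failed password for root",
    "2013-05-01T10:00:07Z sshd[1301]: Failed password for root",
    "2013-05-01T10:00:09Z kernel: eth0 link up",
]
print(sorted(dedupe(logs).values()))  # [1, 2]: two sshd lines collapse into one event

Multiply that by hundreds of servers whose clocks may not even agree on when those events happened, and the shape of the industry becomes clear.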
Things only got worse when VMs came along. Cheap and easy to spin up and down, like blowing bubbles, these systems not only have to be monitored in and of themselves; their lifecycles and the hypervisors that manage them need coverage, as well. Their great advantage is being dynamic, but a dynamic environment also means more things are happening – and when more things are happening, there's more to monitor. It's easier to monitor one big lumbering elephant than it is a thousand squirrels, but what we've got here, right now, are squirrels.
Now we even have the concept of micro-VMs, where individual processes and tasks are wrapped and managed separately. If you thought VM sprawl was a problem now, wait until you have to track events associated with security policies for, say, each tab of your browser. And don't forget the Internet of Things, in which all our millions, or perhaps undecillions, of smart components might require some sort of monitoring.
We're getting to the point where, as Jonathan Swift once wrote:
"
So, naturalists observe, a flea
Has smaller fleas that on him prey;
And these have smaller still to bite 'em;
And so proceed ad infinitum.
"
Increasing granularity of computing may be good for performance, and it may be good for some aspects of security, but it's not good for everything. If security monitoring is going to have to address the problem of infinite fleas, then we'd better get cracking.
Wendy Nather is Research Director of the Enterprise Security Practice at the independent analyst firm 451 Research. You can find her on Twitter as @451wendy.