Don’t Be Fooled: In Cybersecurity Big Data Is Not The Goal
With any hyped-up topic, there is a disconnect between what people think can be accomplished and what can actually be accomplished. The latest example is “Big Data.” There is something different about big data, though: when executed properly, modern data analysis can seem like nothing short of magic to outsiders. It’s this perception of magic that is so attractive to businesses looking to pull the next rabbit out of a hat.
What’s also unique about “big data” is how the term is perceived and used in the security industry, which is quite different from how data scientists approach it. This blog is an attempt to bridge that divide and look at “big data” behind the scenes, because when it comes down to it, big data is not a goal, it is logistics, and many of security’s big data problems are being solved by small data solutions.
Big data is just hiding the small data
In many instances, “Big Data Analytics” is an embellishment, a little white lie with good intentions. In reality, the primary task done directly with big data is figuring out how to turn it into small data. In many cases, big data will be reduced by counting, comparing, or some other aggregation. Another technique is to pull a small subset or sample from the large data set to get something more manageable. Either way, many big data jobs are done to produce small data.
For example, if you are working with log data from thousands of systems, you may want to produce a small data file where each system is reduced to a single line with counts of sources and login statuses (or whatever is being measured). Don’t be fooled by the use of “small” here; the output may still be in the megabyte or even gigabyte range, which may be just small enough to load into memory on a laptop for analysis.
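As a concrete sketch of that reduction, here is what it might look like in Python. Everything here is hypothetical: the file name and the columns (`system`, `source`, `login_status`) are stand-ins for whatever your logs actually contain.

```python
import csv
from collections import defaultdict

# Aggregate raw log lines into one summary row per system.
# Hypothetical input: a CSV with columns system, source, login_status.
summary = defaultdict(lambda: {"sources": set(), "success": 0, "failure": 0})

with open("auth_logs.csv", newline="") as f:
    for row in csv.DictReader(f):
        s = summary[row["system"]]
        s["sources"].add(row["source"])
        if row["login_status"] == "success":
            s["success"] += 1
        else:
            s["failure"] += 1

# Write the "small data" output: one line per system.
with open("per_system_summary.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["system", "distinct_sources", "logins_ok", "logins_failed"])
    for system, s in summary.items():
        w.writerow([system, len(s["sources"]), s["success"], s["failure"]])
```

The big data machinery’s job ends where this file begins: the analysis itself happens on the per-system summary.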
If that deflated the mystique of big data for you, don’t worry: there is a small set of big data implementations that perform analytics at scale. For example, you may need to build a unique and specific model for each application across all of your servers (this is a technique we use to develop security ratings). While each application may represent small data, doing analysis across all applications represents a challenge. And finally, there is a very small sliver of analytics doing complex computations at scale, but these are rare, and chances are your problems are just not that special.
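To illustrate the per-entity pattern (a toy stand-in, not the actual security-ratings method), the sketch below fits a separate baseline for each application and scores new observations against that application’s own model. The data and the three-sigma threshold are made up for the example; at scale, the same per-group fit would simply be distributed across many machines.

```python
import math
from collections import defaultdict

# Hypothetical records: (application, daily_login_failures).
records = [("app-a", 3), ("app-a", 5), ("app-a", 4), ("app-b", 40),
           ("app-b", 42), ("app-b", 38), ("app-a", 4), ("app-b", 41)]

# "Analytics at scale" here just means repeating a small-data fit
# once per application: each group gets its own baseline model.
groups = defaultdict(list)
for app, value in records:
    groups[app].append(value)

models = {}
for app, values in groups.items():
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    models[app] = (mean, std)

# Score a new observation against its own application's baseline.
def is_anomalous(app: str, value: float, k: float = 3.0) -> bool:
    mean, std = models[app]
    return std > 0 and abs(value - mean) > k * std

print(is_anomalous("app-a", 20))  # True: far outside app-a's baseline
print(is_anomalous("app-b", 39))  # False: normal for app-b
```

Note that 20 failures is an anomaly for app-a but would be unremarkable for app-b, which is exactly why a single global model won’t do.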
Big data enables [over]confidence
While much of the big data analysis is being done on small data, that doesn’t mean it’s the same old small data analysis. Big data is ushering in its own set of challenges, because most of classic statistics was developed with pencil and paper on a few dozen samples, perhaps going into the hundreds. The good news is that most of the techniques actually improve with more data. Analyses can find ever more subtleties as more data is used: smaller differences can be discovered, and more nuanced patterns can be detected. This is where some of that magic comes from: there can be big gain from small advantages. However, this is not without side effects. One of the classic measures of statistical significance, the p-value, is often meaningless on large samples. Because even tiny differences are detectable in big data, differences far too small to matter in practice will still register as statistically significant. Big data can fool those unaware of this effect.
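A quick simulation (plain Python, no real security data) shows the mechanism. A mean difference of 0.02 standard deviations is practically negligible, yet it sails past the usual 0.05 significance threshold once the sample is large enough; the exact numbers vary run to run, but the trend holds.

```python
import math
import random

def two_sample_p(n: int, diff: float = 0.02) -> float:
    """Two-sided z-test p-value for two groups of size n whose true
    means differ by a practically negligible amount (diff)."""
    random.seed(42)
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(diff, 1.0) for _ in range(n)]
    mean_a, mean_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    z = (mean_b - mean_a) / math.sqrt(var_a / n + var_b / n)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# The same trivial difference goes from "not significant" to
# "highly significant" purely because n grew.
for n in (100, 10_000, 1_000_000):
    print(f"n={n:>9}: p = {two_sample_p(n):.4f}")
```

Nothing about the difference changed between the first run and the last; only the sample size did. Statistical significance is not practical significance.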
Big data, small samples
Even though big data still has that new-hype smell about it, the techniques underlying that hype can be traced back centuries, and they all point to a single constant: good data analysis. It’s a hard lesson to learn, but good data analysis is a different skill than domain expertise. In other words, the skills to be a security expert (for example) do not translate into being able to understand and extract meaning from security data. Neither is good data analysis an intuitive skill; it is not picked up by proximity to data day after day. It is only learned through intentional study of statistics and related fields.
As an example, in the age of big data, where a million data points is labeled as small, a sample of a few hundred or even a few thousand seems meager, perhaps pathetic, to the uninitiated. I’ve seen people dismiss research results because they thought the samples were too small (and they were over a thousand). We must keep in mind that data analysis is not intuitive, and even though we have a lot of data, it may not take a lot of data to provide insight or support a business decision. So shake off the notion that big data is a goal and get down to the business of learning from data.
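A back-of-the-envelope check makes the point. Using the standard 95% margin-of-error formula for an estimated proportion (worst case, p = 0.5), a sample around a thousand already pins an estimate down to roughly plus or minus three percentage points, and a million data points only narrows that to about a tenth of a point.

```python
import math

# 95% margin of error for an estimated proportion, worst case p = 0.5:
#   moe = 1.96 * sqrt(p * (1 - p) / n)
for n in (100, 1_000, 10_000, 1_000_000):
    moe = 1.96 * math.sqrt(0.25 / n)
    print(f"n={n:>9}: ±{moe:.1%}")
```

If a three-point margin is good enough to support the decision at hand, the extra 999,000 data points buy very little.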