Security in a Complex World
Innovation and complexity can co-exist; the key is to use innovation to make ever-expanding complexity comprehensible and its effects predictable.
In 1999, security technologist Bruce Schneier published "A Plea for Simplicity." In that essay, he famously wrote, "You can't secure what you don't understand" and "the worst enemy of security is complexity." Schneier explained that analyzing a system's security becomes more difficult as its complexity increases. His goal was to convince the technology sector to "slow down, simplify, and try to add security."
More than 20 years later, Schneier's plea seems naive and even quaint. Innovation has become a force of nature; it will neither stop nor slow down. More innovation means more features, which inherently means more complexity. We all want secure systems, but no one is willing to slow the march of progress to make that happen.
In "We Work the Black Seam," Sting sings, "They build machines that they can't control and bury the waste in a great big hole." Although he was singing about nuclear energy, the lament is true for many modern technologies — especially for computer systems and networks. The modern computer network is almost unbelievably complex. Thousands of nodes connect through millions of potential network paths. Most networks are not designed so much as they evolve. Corporations grow, contract, connect to suppliers, and merge with competitors. As they do, their network expands, shrinks, and morphs like a living entity. At any moment, no one is sure what devices are on it, exactly how they are all connected, or what all the security implications are. It is humanly impossible to keep track of thousands of access controls or fully understand the aggregate effects.
At first, many believed adopting cloud technologies would make security easier. Unlike the operating systems of the '80s and '90s, public cloud platforms were designed with security in mind. Amazon, Microsoft, and Google promise that their infrastructure is secure, provided the customer configures it correctly. So far, that promise seems to be holding true. But innovation breeds complexity, and that immutable law turns out to hold in the public cloud as well.
So public cloud platforms are secure if configured correctly, but that is much easier said than done. Network segmentation rules in the cloud are at least as complex as in traditional networks, and most companies use cloud environments from more than one vendor, each with its own terminology and rules. You can set segmentation boundaries around almost anything, creating a web of access restrictions that, in aggregate, is almost incomprehensible. Consider that AWS alone includes 175 native services, and that number grows monthly. To stay secure, you must follow each provider's specific best practices as you implement those services. Again, it is beyond human capability to keep up and foresee all the implications.
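To make the multivendor problem concrete, here is a minimal sketch. The rule structures below are simplified stand-ins for AWS security group and Google Cloud firewall formats, and the field names are approximations only. The point is that the same intent is expressed in different vocabularies, so any aggregate analysis first has to translate everything into one common form:

```python
# Simplified, illustrative rule shapes; real AWS and GCP rules carry more fields
# and different nesting, but the vocabulary mismatch is the point.
aws_style_rule = {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                  "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}
gcp_style_rule = {"direction": "INGRESS",
                  "allowed": [{"IPProtocol": "tcp", "ports": ["443"]}],
                  "sourceRanges": ["0.0.0.0/0"]}

def normalize_aws(rule: dict) -> list[tuple[str, str, int]]:
    """Flatten an AWS-style ingress rule into (source, protocol, port) tuples."""
    return [(r["CidrIp"], rule["IpProtocol"], rule["FromPort"]) for r in rule["IpRanges"]]

def normalize_gcp(rule: dict) -> list[tuple[str, str, int]]:
    """Flatten a GCP-style firewall rule into the same (source, protocol, port) form."""
    return [(src, allow["IPProtocol"], int(port))
            for src in rule["sourceRanges"]
            for allow in rule["allowed"]
            for port in allow["ports"]]

# Both vendors' rules collapse to the same fact: anyone can reach TCP/443.
print(normalize_aws(aws_style_rule))   # [('0.0.0.0/0', 'tcp', 443)]
print(normalize_gcp(gcp_style_rule))   # [('0.0.0.0/0', 'tcp', 443)]
```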
Fight Fire With Fire
So, all these years later, how do we console the Bruce Schneiers of the world? How do we have innovation and security? There is one possibility. Since innovation won't slow, the key is to use innovation to make ever-expanding complexity comprehensible and its effects predictable. In other words, fight fire with fire.
Computers are very good at modeling complex and dynamic systems. They chart the course of hurricanes, climate change, ocean currents, infectious diseases, and economies. Increasingly, modeling technologies are unwinding the complexity of computer systems and identifying security holes. A human can't read thousands of firewall rules across hundreds of firewalls and predict the aggregate effect, but a computer can. No one can decipher thousands of segmentation rules and access permissions in a large cloud environment, but a computer model can.
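As a rough illustration of what such a model does, consider asking what the internet can ultimately reach by chaining individually reasonable rules. This is a minimal sketch, not any vendor's engine: the zones and rules are invented, and real tools track ports, protocols, and address translation with far more fidelity.

```python
# A toy model of aggregate effect: each rule allows traffic from one zone to
# another, and the question no human can answer at scale is what the internet
# can reach *indirectly* by chaining rules together.
from collections import deque

rules = [
    ("internet", "dmz", 443),
    ("dmz", "app", 8080),
    ("app", "db", 5432),
    ("vpn", "app", 8080),
    # ...thousands more in a real environment
]

def transitively_exposed(start: str) -> set[str]:
    """Zones reachable from `start` by chaining any sequence of allowed rules."""
    edges: dict[str, set[str]] = {}
    for src, dst, _port in rules:
        edges.setdefault(src, set()).add(dst)
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in edges.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

# No single rule exposes the database to the internet, but the chain of rules does.
print(transitively_exposed("internet"))  # {'dmz', 'app', 'db'}
```

No individual rule here looks dangerous; only the model of their combined effect reveals that the database is exposed.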
Why Modeling's Limits Don't Apply to Security
Critics might argue that modeling technologies don't have a perfect track record. Weather models have come a long way but are still accurate only three to four days into the future. The models for COVID-19 transmission have been far from ideal. Why would security modeling be any better?
A model's accuracy is limited by two things: its understanding of the system's internal workings, and the number and quality of its inputs.
The first limitation makes it hard to forecast COVID-19's progression because the disease is new and its internal behaviors are largely unknown. But this is less of an issue in cybersecurity. Security controls' behaviors are readily understandable on an individual basis. It is only when there are thousands of them across hundreds of systems that they become incomprehensible.
The second problem is the bigger obstacle for cybersecurity. To model a system, you have to query every security product for its rules and configurations, which means gathering inputs from a huge array of security devices and applications. That, in turn, means building hundreds of individual connectors. Although this is not technically difficult, it is a huge logistical and economic challenge. With standard sets of interfaces, however, the problem can be overcome.
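A sketch of what such a standard could look like follows. The interface and class names are hypothetical, not an existing specification; the idea is simply that if every product exposed its policy through one small, agreed-upon contract, a modeling engine would need a single integration pattern rather than hundreds of bespoke connectors.

```python
# Hypothetical common interface: each vendor connector normalizes its own policy
# format into the same vendor-neutral Rule objects the model consumes.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Rule:
    source: str       # CIDR, zone, or tag
    destination: str
    port: int
    action: str       # "allow" or "deny"

class ConfigConnector(ABC):
    """Contract every connector would implement under the imagined standard."""

    @abstractmethod
    def fetch_rules(self) -> list[Rule]:
        """Return the device's policy as vendor-neutral Rule objects."""

class ExampleFirewallConnector(ConfigConnector):
    # Invented connector: a real one would call the vendor's management API.
    def fetch_rules(self) -> list[Rule]:
        return [Rule("10.0.0.0/8", "10.1.2.0/24", 22, "allow")]

def build_model(connectors: list[ConfigConnector]) -> list[Rule]:
    """Aggregate every device's rules into one dataset the model can analyze."""
    return [rule for c in connectors for rule in c.fetch_rules()]

print(build_model([ExampleFirewallConnector()]))
```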
In the end, we don't need innovation to slow down. We just need to agree on a few ground rules for how innovation will proceed. That is how humans have always made progress: we break through barriers of complexity by agreeing on basic standards and ground rules, then letting innovation proceed. In effect, we don't curtail or retard innovation; we focus it, and we use technical innovation to understand it.