Automating Ethics for Cybersecurity
Having a code of ethics and enforcing it are two different things.
Doctors, accountants, and lawyers all operate under a code of ethics, but what about security professionals? Shouldn't they, too? While cybersecurity breaches don't necessarily have the life-and-death consequences of, say, brain surgery, the more vicious cyberattacks can and do cripple livelihoods, often en masse. Witness last year's WannaCry ransomware attack, the Equifax breach, and the more recent processor flaws Spectre and Meltdown.
A number of IT security organizations, such as SANS, ISSA, and GIAC, do have codes of ethics. These spell out the dos and don'ts that should already be inscribed in the heart of every security professional: "I will not advance private interests at the expense of end users, colleagues, or my employer"; "I will not abuse my power"; "I will obtain permission before probing systems on a network for vulnerabilities."
But having a code of ethics and enforcing it are two different things. Some organizations may have security pros sign off on such frameworks, but this is little more than a move that allows employers to prosecute the signer if she later abuses her power or simply makes a mistake. And mistakes do happen. A novice or unskilled IT operator, like a novice or unskilled plumber, can screw up. Badly.
Don't Regulate — Automate
This question of enforcement is a tricky one. In a recent op-ed in The New York Times, cybersecurity executive Nathaniel Fick compares cybersecurity today to accounting in the pre-Enron era. Just as the Enron scandal inspired higher standards for corporate disclosure with the Sarbanes-Oxley Act, Fick proposes that cybersecurity breaches like WannaCry and Equifax should spur increased regulation of cybersecurity practices by the federal government.
While I applaud the intent here, government intervention does not solve the enforcement problem. Regulations can be ignored, subverted, or forgotten. Even when they are enforced after the fact, by an army of auditors, for example, the damage has already been done, and victims may never be made whole. On a personal level, I'm no fan of interventionist approaches, and given the choice I'll take non-intervention every time. Instead of regulating, how about automating?
One of the best ways to implement an ethical framework is to automate it. The technology isn't perfect yet, but in a driverless car, automation enforces the rules of the road without giving the driver a chance to make a mistake: it keeps the vehicle in its lane, holds it to the speed limit, and steers around pedestrians, cyclists, and kids darting out from behind ice-cream trucks, regardless of the operator's experience or skill. We already benefit from this kind of automation in a wide variety of scenarios. Imagine what it could do as a means of enforcing "the right thing" in large, complex data centers.
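To make the idea concrete, here is a minimal sketch in Python of what such a guardrail might look like: a hypothetical wrapper that refuses to run destructive commands in production unless an approved change ticket is attached. The command list, function names, and policy here are illustrative assumptions, not any particular product's API.

    # Hypothetical guardrail: block risky ad-hoc commands in production
    # unless they carry an approved change ticket. Illustrative only.
    RISKY_COMMANDS = {"rm", "dd", "mkfs", "iptables", "shutdown"}

    def is_permitted(command, environment, approved_ticket):
        """Apply the (assumed) policy: risky + production requires approval."""
        binary = command.split()[0]
        if environment != "production":
            return True                      # sandboxes run unrestricted
        if binary not in RISKY_COMMANDS:
            return True                      # routine commands pass through
        return approved_ticket is not None   # destructive work needs a ticket

    def run(command, environment, approved_ticket=None):
        if not is_permitted(command, environment, approved_ticket):
            raise PermissionError(f"policy blocked {command!r} in {environment}")
        print(f"executing: {command}")       # a real tool would exec here

    run("ls /var/log", environment="production")   # allowed: not destructive
    run("rm -rf /data", environment="production")  # blocked: PermissionError

The point is that the rule is enforced in code, before the action happens, rather than reconstructed by auditors after the damage is done.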
Instead, we entrust the running of these huge IT environments to system administrators, many of whom have been gaming, hacking, and cracking since their early teens. Their tech smarts are beyond reproach, but how many of them have the ethical foundation needed to handle such a responsibility? In some ways, it's like putting a regular motorist behind the wheel of a Formula One racecar.
Sometimes the enemy of doing the right thing is simply too much data, and here, too, automation can play a role. The Equifax breach is a case in point. The data indicating that something was going wrong was all there, but its sheer volume was so overwhelming that the security teams couldn't separate the signal from the noise. Automation doesn't tire or get distracted the way human analysts do, and it closes the window of confusion that bad actors exploit. With the right investment, automation could have prevented the breach.
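As a toy illustration of that filtering problem, the sketch below (with invented numbers) compares each source's daily failed-login count against a historical baseline and surfaces only the sharp outliers. The three-sigma threshold and the data are assumptions; real security analytics are considerably richer.

    # Toy signal-vs-noise filter: flag event counts that deviate sharply
    # from a historical baseline. The 3-sigma cutoff is an assumption.
    from statistics import mean, stdev

    def flag_anomalies(baseline, today, sigmas=3.0):
        mu, sd = mean(baseline), stdev(baseline)
        return {source: count for source, count in today.items()
                if sd > 0 and abs(count - mu) > sigmas * sd}

    history = [110, 95, 102, 99, 104, 101, 98]            # daily failed logins
    observed = {"web01": 103, "web02": 97, "db01": 5400}  # db01 is the signal
    print(flag_anomalies(history, observed))              # {'db01': 5400}

A human drowning in dashboards sees three numbers; the filter surfaces the one that matters.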
Separate the Signal from the Noise
Many solutions today automate functions such as patching, applying software updates, and determining whether a system is vulnerable before it attaches to the network. This helps keep human error and questionable ethics out of the equation. Take Microsoft Windows. One reason it's so vulnerable to attack is that, philosophically speaking, it was designed around a presumption of trust, whereas UNIX enforced restrictive permissions from the start. Windows' creators worked under the assumption that its operators would have a strong ethical compass and would not be bad actors. As a result, many Windows exploits have stemmed from someone finding a door left open or a system left unpatched. Automation pre-empts bad actors from exploiting these vulnerabilities.
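Here is a minimal sketch of that pre-admission idea, with a hypothetical patch baseline and no real network-access-control integration: a host is checked against the required patch set before it is allowed onto the network, and quarantined otherwise.

    # Hypothetical pre-admission check: quarantine hosts missing required
    # patches before they join the network. Patch IDs are illustrative.
    REQUIRED_PATCHES = {"KB5034441", "KB5034765"}   # assumed baseline

    def admit(hostname, installed_patches):
        missing = REQUIRED_PATCHES - set(installed_patches)
        if missing:
            return f"{hostname}: QUARANTINE (missing {', '.join(sorted(missing))})"
        return f"{hostname}: ADMIT"

    print(admit("laptop-17", {"KB5034441"}))               # quarantined
    print(admit("laptop-42", {"KB5034441", "KB5034765"}))  # admitted

No ethics are required of the operator: an unpatched machine simply never gets on.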
We live in an era in which ethics get scant regard, in which our leaders almost daily eschew the moral high ground. "Compromise your ethics, hold your nose, and cast the vote to hold the party line" seems to be the order of the day. How long before that attitude trickles down into cybersecurity, if it hasn't already?
In the absence of an ingrained ethical framework and assured skill levels, security professionals need tools that enforce good, skilled, and ethical behavior dynamically and in real time. The end state we should seek is a strong ethical culture embedded in system and network administrators, in security practitioners, in database administrators: in short, in everyone who holds the keys to the kingdom.