Innovation, Not Regulation, Will Protect Corporations From Deepfakes
If CEOs want to prevent their firm from being the next victim of a high-profile deepfake scam, they need to double cybersecurity funding immediately.
COMMENTARY
In a recent open letter, high-profile names from across the business, academic, and scientific worlds called for governments to intensify their regulation of deepfakes.
While their aims are admirable, their efforts are misplaced — it's innovation, not regulation, that will shore up our defenses against the deepfake threat.
The letter, titled "Disrupting the Deepfake Supply Chain," was signed by prominent thinkers, including psychologist Steven Pinker, computer scientist Joy Buolamwini, US politician Andrew Yang, and the "godfather" of AI, Yoshua Bengio.
Specifically, the letter called for increased criminalization of deepfakes, the establishment of specific criminal penalties, and liability for software developers and distributors whose products are used to create deepfakes.
The letter correctly identifies the symptoms. Deepfakes are proliferating at a rapid rate — more than 900% annually, according to the World Economic Forum. This is ratcheting up the threat level across society.
Deepfakes are already an established weapon in the hacker arsenal. One finance worker recently paid out $25 million to malicious actors after they used a deepfake of the company's CFO in a video conference call. This should serve as a shot across the bow for the corporate world, and the open letter is correct to raise the alarm about the relative lack of action.
However, delegating responsibility to governments will only leave corporations more exposed. CEOs cannot afford to sit back and rely on regulators to stem the flow of deepfakes. They must take action and build their own defenses as soon as possible — and they are already starting from behind. Catching up will mean doubling their investment in innovative technologies that can counteract deepfakes.
Stone Age Policies
But why aren't governments up to the task themselves?
Most governments' cybersecurity departments and policies are in the Stone Age compared with the hackers they are up against. By the time legislation is drawn up, debated, and rolled out, it's often already antiquated and behind the pace of technological development.
Furthermore, relying on reactive regulation in one government also means trusting all governments to do the same. No matter how punitive the legislation in one country, hackers in another country won't worry about the penalties, so nations will have to work together to negate the threat of deepfakes. But right now, hoping the Chinese or Russian states will prevent hackers from disrupting business in the West is nothing short of naive.
CEOs and senior management must step up to the challenge and take responsibility for ensuring that they aren't the next victim of a deepfake scam.
Fortunately, management has several technologies available that can be rolled out to shore up their deepfake defenses, including advanced authentication, detection AI, and content watermarking.
CEOs can integrate enhanced authentication standards to insulate their firms from deepfake scams. Two-factor or multifactor authentication systems require users to provide additional verification beyond a password, creating a barrier that deepfake scammers must clear before they can reach sensitive information.
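To make the idea concrete, here is a minimal sketch of the time-based one-time password (TOTP) check that sits behind many multifactor systems, written in plain Python against RFC 6238. The base32 secret shown is a placeholder, and a real deployment would rely on a hardened authentication service rather than hand-rolled code.

import base64, hashlib, hmac, struct, time

def totp(secret_b32, interval=30, digits=6):
    # RFC 6238: HMAC-SHA1 over the current 30-second time step, then dynamic truncation
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify_second_factor(secret_b32, submitted_code):
    # Constant-time comparison of the submitted code against the expected one
    return hmac.compare_digest(totp(secret_b32), submitted_code)

# "JBSWY3DPEHPK3PXP" is a placeholder secret used only for this illustration
print(verify_second_factor("JBSWY3DPEHPK3PXP", totp("JBSWY3DPEHPK3PXP")))

The point is less the cryptography than the workflow: even the most convincing deepfake on a video call cannot produce a code derived from a secret it never held.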
Fighting Deepfakes With AI
Management must also deploy AI in the fight against deepfakes. AI tools can analyze media files for anomalies and inconsistencies that are evidence of tampering. They can also analyze the subjects of these files, such as facial expressions and vocal patterns, and flag unnatural inconsistencies that the human eye may miss. They can even compare suspect footage against genuine employee biometric data to identify fake representations of company employees.
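As a simplified illustration of that last check, the Python below compares a face embedding captured from a live call against embeddings enrolled for genuine employees. The embedding vectors are assumed to come from whatever face-encoder model the organization already runs, and the 0.8 threshold is illustrative rather than a recommendation.

import numpy as np

def cosine_similarity(a, b):
    # Similarity of two face-embedding vectors; values near 1.0 indicate a close match
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_possible_impersonation(call_embedding, enrolled_embeddings, threshold=0.8):
    # Flag the participant if no enrolled employee embedding is a close enough match
    best_match = max(cosine_similarity(call_embedding, e) for e in enrolled_embeddings)
    return best_match < threshold

# Toy vectors stand in for real embeddings produced by a face-encoder model
enrolled = [np.array([0.10, 0.90, 0.40]), np.array([0.70, 0.20, 0.60])]
print(flag_possible_impersonation(np.array([0.10, 0.88, 0.41]), enrolled))  # False: matches an enrolled employee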
Another option available to corporations is content watermarking, which embeds invisible, unique identifiers into official company media files. This can be used to verify the authenticity and origin of media content that employees come into contact with.
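To show the principle at its simplest, the sketch below hides an identifier in the least-significant bits of an image's red channel using the Pillow library. Commercial watermarking systems are far more robust, since they are designed to survive compression, cropping, and re-encoding, which this toy example will not; it is only meant to make the mechanism tangible.

from PIL import Image  # Pillow, assumed to be installed

def embed_identifier(src_path, dst_path, identifier: bytes):
    # Hide a length-prefixed identifier in the least-significant bit of each red value, row by row
    img = Image.open(src_path).convert("RGB")
    payload = len(identifier).to_bytes(2, "big") + identifier
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    pixels, (w, h) = img.load(), img.size
    if len(bits) > w * h:
        raise ValueError("image too small for this identifier")
    for idx, bit in enumerate(bits):
        x, y = idx % w, idx // w
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | bit, g, b)
    img.save(dst_path, "PNG")  # lossless format, so the hidden bits survive saving

def extract_identifier(path):
    # Recover the length prefix, then the identifier, from the same least-significant bits
    img = Image.open(path).convert("RGB")
    pixels, (w, h) = img.load(), img.size
    def read(n_bytes, start_bit):
        out = bytearray()
        for b in range(n_bytes):
            val = 0
            for i in range(8):
                idx = start_bit + b * 8 + i
                val = (val << 1) | (pixels[idx % w, idx // w][0] & 1)
            out.append(val)
        return bytes(out)
    length = int.from_bytes(read(2, 0), "big")
    return read(length, 16)

A verification tool could then call extract_identifier on any media claiming to be official and check the recovered identifier against company records.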
These are just some of the many digital tools that management can use to insulate their firms from deepfake scammers and hackers. But they come at a cost.
Currently, cybersecurity funding lags far behind the scale of the threat that deepfakes pose. As with the finance worker above, company funds are at risk. Deepfake scams could also be used to access sensitive IP and trade secrets. Deepfakes of employees can also damage a firm's reputation, harming investor and consumer confidence.
Even in the face of these ominous threats, firms continue to woefully underfund cybersecurity. The recent Cisco Cybersecurity Readiness Index highlights how only 3% of organizations have the "mature" level of readiness needed to be resilient against cyber threats.
So, if CEOs want to prevent their firm from being the next victim of a high-profile deepfake scam, they need to double cybersecurity funding immediately. Relying on the government to do this for them only increases their exposure to the risk that deepfakes pose. With healthier financing, corporations can roll out authentication tools, AI identification systems, and content watermarking to properly insulate themselves from the deepfake threat.