Major Tech Firms Develop 'Tech Accord' to Combat AI Deepfakes

The accord covers initiatives to create more transparency regarding what tech firms like Meta, Microsoft, Google, TikTok, and OpenAI are doing to combat malicious AI, especially around elections.

Dark Reading Staff, Dark Reading

February 16, 2024

[Image: A hand hovering before a screen with selectable options, including one labeled "AI." Source: WrightStudio via Adobe Stock]

In what is being referred to as a "Tech Accord," major technology companies are committing to work together to combat artificial intelligence (AI)-generated content that could threaten democratic elections around the world this year.

A draft of the accord will be presented at the Munich Security Conference, which begins today, where companies including Meta, Microsoft, Google, TikTok, and OpenAI will share details.

The agreement comes as 64 countries plus the European Union are set to hold national elections this year. According to Time Magazine, 2 billion eligible voters globally will head to the polls, representing about 49% of the global population. 

"In a critical year for global elections, technology companies are working on an accord to combat the deceptive use of AI targeted at voters," major tech firms said in a joint statement. "Adobe, Google, Meta, Microsoft, OpenAI, TikTok and others are working jointly toward progress on this shared objective."

The pledges in this draft of the accord include creating tools, such as watermarks and detection techniques, to help identify "deepfake" AI images and audio and debunk them. It also includes commitments to greater transparency about how these technology giants are combating AI-generated disinformation on their various platforms.

Some in the tech community, however, are unsupportive of the initiative, arguing that it draws attention away from regulating these major firms.

Meredith Whittaker, co-founder of the AI Now Institute, reviewed the draft of the pledge and said she doesn't believe these tech companies can be trusted to oversee themselves.

"Deepfake doesn't really matter unless you have a platform you can disseminate it on," she said, noting that the pledge does nothing to combat issues of social media platforms targeting specific demographics of voters. 

Political deepfakes are becoming more prevalent across an array of countries, including the US and the UK. Just recently, an AI-generated deepfake robocall impersonating President Biden urged voters in New Hampshire to abstain from the state's primary election.

About the Author

Dark Reading Staff

Dark Reading

Dark Reading is a leading cybersecurity media site.

