'Unfaking' News: How to Counter Disinformation Campaigns in Global Elections
What cybersecurity professionals around the world can do to defend against the scourge of online disinformation in this year's election cycle.
COMMENTARY
In recent global election cycles, the Internet and social media have facilitated the widespread dissemination of false news, misleading memes, and deepfake content, overwhelming voters. Because it is difficult to directly compromise the election systems used to cast and count votes, adversaries turn instead to age-old psychological manipulation techniques to get the outcomes they want: no hacking needed. With the emergence of generative artificial intelligence (AI) tools, the impact of disinformation campaigns is expected to escalate further. The result is growing uncertainty about what is real, with personal biases increasingly shaping perceptions of truth.
In a sense, disinformation is like a cyber threat: As security leaders, we know that malware, phishing attempts, and other attacks are a given, but we put controls in place to minimize their impact, if not prevent them entirely. We develop defense strategies based on decades of historical knowledge and data to gain the best advantage.
Today's disinformation campaigns, however, are essentially a product of the last decade, and we have not yet designed a mature set of controls to counter them. But we need to. With 83 national elections in 78 countries taking place in 2024 — a volume not expected to be matched until 2048 — the stakes have never been higher. A recent wave of troubling incidents and developments illustrates the many ways adversaries are attempting to sway the hearts and minds of the world's voters:
In Europe, the French Foreign Minister accused Russia of setting up a network of more than 190 websites intended to spread disinformation, "destroy Europe's unity," and "make our democracies exhausted," in part by discouraging support for Ukraine. The network, codenamed "Portal Kombat," has also sought to confuse voters, discredit certain candidates, and disrupt major sporting events like the Paris Olympics.
In Pakistan, voters have been exposed to false COVID-19 claims and anti-vaccination propaganda, online hate speech against religious groups, and attacks on women's movements.
The World Economic Forum ranks the use of misinformation and disinformation by foreign and domestic entities and individuals as the "most severe global risk" for the next two years — over extreme weather events, cyberattacks, armed conflicts, and economic downturns.
Let's be clear about the difference between disinformation and misinformation: The latter is information that is simply wrong, shared without the intent to deceive. The "fake news" distributor may not even be aware of its inaccuracies.
Disinformation, on the other hand, occurs when an entity (such as an adversarial nation-state) knowingly weaponizes false information with the intent of viral distribution.
This psychological manipulation jeopardizes the stability of democratic institutions. Think of a disinformation farm as a large office floor where hundreds or even thousands of people do nothing but fabricate authentic-looking blogs, articles, and videos targeting candidates and positions that contradict their sponsors' agendas. Once unleashed on social media, these falsehoods spread rapidly, reaching millions and masquerading as real events.
How can citizens best protect themselves from these campaigns to maintain a firm grasp on what's real and what isn't? How can cybersecurity leaders help?
Here are four best practices.
1. DYOV: Do Your Own Vetting
A meme or GIF doesn't stand alone as a credible source of information. Not all professional-looking publications are credible or accurate. And not every statement attributed to a trusted figure actually came from them: it is now all too easy to create fake videos using AI-generated images. There are few arbiters of truth on the Internet, so buyer beware. Nor can we depend on social media platforms to monitor and eliminate disinformation, whether or not we would welcome that. In the US, Section 230 of the Communications Decency Act gives online companies broad immunity for publishing third-party content.
It's critical to consult multiple platforms and reconcile what you find against what government websites, established news outlets, and respected organizations such as the National Conference of State Legislatures (NCSL) are reporting. Inconsistencies should serve as a warning sign. To surface a source's biases, always ask: "Why should I believe this? Who is the author? What is their interest in this position?"
2. Avoid Becoming Part of the Problem
Social media makes it too easy to run with a post or video that presents a version of "truth" that is anything but. Architects of disinformation campaigns depend on individual users to spread their messages, on the logic of "It came from my sibling/boss/neighbor, so it must be true." Again, do your own vetting before passing anything along, and be judicious with the "forward" and "like" buttons to avoid becoming an engine of these campaigns.
3. Follow Watchdogs
Organizations like the Netherlands-based Defend Democracy, the University of Pennsylvania-based FactCheck.org, and the Santa Monica, Calif.-based RAND Corp. offer resources that help distinguish fact from fiction. In the academic community, San Diego State University's University Library and Stetson University's duPont-Ball Library maintain lists of watchdog groups, databases, and other resources.
4. Take a Leadership Stand
As cybersecurity professionals, we recognize that threats like brand impersonation and phishing occur beyond our controlled technology environments. We cannot block every malicious email, and our controls won't block, or even detect, impersonations on technology we don't control. Instead, we must actively promote cyber education and awareness so employees can learn about the latest phishing attempts and the dangers of clicking on unfamiliar links.
We should take a similar, education-focused approach to disinformation campaigns. We can create awareness programs so employees understand what to look for, even when the attempts don't involve our technology, and we can promote that knowledge wherever we have a prominent voice, from internal company communications to public-facing blogs and articles. We should also offer credible, contextual resources against which employees and the public can vet information.
Unfortunately, disinformation — especially during political seasons — cannot be avoided, so all relevant "facts" must be run through appropriate vetting. The tools to do that vetting exist, and as cybersecurity leaders we can teach employees and the public to use them. If we do, 2024 may be remembered as the year the global community decided that the truth matters.