We Secured the Election. Now How Do We Secure Trust in Results?
Disinformation campaigns are now designed not only to influence how voters fill out their ballots, but also how confident they are in the entire process. How do legislators, media organizations, security professionals, and voters respond?
It may have been a hotly contested US presidential election, but attacks on vulnerable voting technology, at least, did not flare up this cycle. The integrity of electronic voting equipment appeared to remain intact, and no reports of widespread cyberattacks altering ballots surfaced on US Election Day or after.
And yet the days after Nov. 3 have been clouded by questions, murmurs, claims of counting irregularities, and more.
"When we talk about hacking the actual voting infrastructure, it's just not what keeps me up at night. Disinformation is what keeps me up at night," says Justin Fier, the director of cyber intelligence and analytics at leading cybersecurity firm Darktrace. "It's easier to hack a person's opinion than it is to hack their technical device. And so I truly believe that hacking the voter is a better approach if someone was going to try to swing an election."
As the country (and the world) turns to securing election cycles beyond 2020, simply worrying about locking down electronic voting machines may seem quaint. The discord fomented by disinformation spread rampantly across the Internet during the vote count holds important lessons for election security in the coming years.
As we saw from this year's election aftermath, disinformation campaigns are now designed not only to influence how voters fill out their ballots, but also how confident they are in the entire process.
To that end, Fier explains, many stakeholders in society must step forward to secure not only elections themselves but also the public's confidence in the democratic process.
Social Media and News Organization Responsibilities
News organizations and social media companies -- some of which have already made efforts to combat misinformation -- will need to assume even greater responsibility, Fier explains.
The days after the election saw debunked rumors from both sides of the political spectrum spread like wildfire, shaping people's perception of the counting process. Some spread via memes or tweets, at times amplified by hasty news reports as reporters fell into the trap of feeding the second-by-second news cycle with unvetted claims. One misinformed tweet alleging ballot fraud -- later deleted by its poster and proven false by the Wisconsin Election Commission -- nevertheless continued to worm its way into the hearts and minds of many citizens as the tabulation unfolded.
Social media companies, in particular, must walk a delicate line between completely deleting verifiably false information and allowing for stories to unfold, says Fier. In October, when Twitter prohibited users from posting links to a New York Post story on Hunter Biden on the grounds that it violated Twitter's "hacked materials" policy, it caused a dust-up that prompted Twitter to amend its policy.
"Just taking down posts is not necessarily the right action. I like the approach that Twitter is taking now. [They're] not going to take [a post] down, but [they're] going to put a stamp on it that lets the viewer know that there's some pretty good chance that this is false information," says Fier. "That's putting the onus back on the consumers of the data. I can still consume it if I want, but there's this stamp on it."
Twitter's stamps warning "this claim about election fraud is disputed" have appeared on many of President Trump's tweets since Election Day.
Fier believes that as social media platforms and news aggregators move to vet information, running artificial intelligence (AI) against claims and stories may be necessary to give consumers augmented data on the trustworthiness of any given story.
"And maybe there's even a risk score associated with it, that it is 75% deemed fake news because of X, Y, and Z," he says. "So I think it's a combination of transparency and leaving it up to the consumer."
Government Responsibilities
Improving the transparency of the election process is also the job of lawmakers and political operators, says Fier. Evolving practices like the vote-counting livestreams seen during the 2020 election signal a move toward this ideal. Making chain-of-custody practices for individual votes and batches of ballots more uniformly transparent and easily auditable will also help combat the disinformation that springs up around a disorganized or opaque process.
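To illustrate what "easily auditable" could mean in practice, here is a minimal sketch of a tamper-evident, hash-chained custody log. The event fields and chain format are hypothetical; real election offices use far more rigorous procedures, but the underlying idea is the same: any after-the-fact edit to a record breaks the chain.

```python
# Minimal sketch of a tamper-evident (hash-chained) audit log for
# ballot-batch custody events. Fields and format are hypothetical.

import hashlib
import json

def append_event(chain: list, event: dict) -> None:
    """Link each custody event to the hash of the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"event": event, "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)

def verify(chain: list) -> bool:
    """Recompute every link; any edited entry breaks the chain."""
    prev_hash = "0" * 64
    for record in chain:
        body = {"event": record["event"], "prev_hash": record["prev_hash"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["prev_hash"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

log = []
append_event(log, {"batch": "B-0142", "action": "sealed", "by": "precinct 7"})
append_event(log, {"batch": "B-0142", "action": "received", "by": "county tabulation"})
print(verify(log))  # True -- altering any earlier entry would flip this to False
```

Because each record commits to its predecessor's hash, officials can publish the chain and anyone can re-verify it, the kind of uniform transparency that leaves less room for disinformation about the process.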
Additionally, legislators need to think carefully about how to craft laws governing the spread of political messages that contain verifiably fake news or deepfaked media.
"Why is it OK for a politician to put out a deepfake? There are laws against slander and defamation. And it's beyond me to understand why a public figure is legally allowed to post these things," Fier explains. "We have so many campaign finance laws. I can't spend more than a few dollars on something without documenting that, so why aren't there similar laws on spreading of untruthful information?"
Voter Responsibilities
Finally, voters must also take responsibility for how they consume information, verify claims, and amplify messages they've heard, says Fier. He offers three main suggestions:
Understand the tools of misinformation, and look out for signs of deepfakes, hoax emails, and other tricks of the trade.
Work to diversify your news feed to get a fuller picture of every situation.
Stay on the lookout for false claim warnings and dig in to understand why a story may have been flagged.
It's a caveat-emptor situation for citizens, except what we're buying isn't a product but information. Buyer beware.