Combat Misinformation by Getting Back to Security Basics

One volley of fake news may land, but properly trained AI can shut down similar attempts at their sources.

Dan Spurling, Senior Vice President, Product Engineering, Teradata

December 14, 2021

[Image: Figure pushing a rock labeled "misinformation". Source: GoodIdeas via Alamy Stock Photo]

For generations, technology has generally been viewed as an enabler of positive social change, rarely creating more problems than it solves. But in recent years, there's an emerging realization that technology has a darker side: Connecting everyone through the Internet has enabled widespread and instantaneous distribution of fake news, weaponizing misinformation as an existential threat to democratic society. During the last 18 months, distortions regarding the COVID-19 virus, the pandemic, and vaccines have spread like wildfire, polarizing citizens and unnecessarily multiplying death tolls across the world.

While misinformation is not a new problem, technologies such as social media and artificial intelligence (AI)-generated content have elevated it to new levels. Malicious actors are leveraging technology to strategically spread disinformation to online groups of like-minded people, empowering large masses to unknowingly spread misinformation globally. This vicious cycle reinforces biases and tears communities apart. Human fact checkers working by themselves simply cannot keep pace with the quantity of false content and links that are now being shared every day.

Thankfully, recent advancements in data analytics and AI have emerged as effective solutions to combat problematic content at scale. Unlike human moderators, who cannot hope to block every link or post, data science technologies present tremendous opportunities for curbing fake news before it goes viral, detecting not only inaccurate information but also the accounts and patterns behind misinformation campaigns. Basic versions of the technology are already leveraged to stop spam calls and emails. Going forward, one volley of fake news may land, but properly trained AI will shut down similar attempts at their sources.
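
To make the idea concrete, here is a minimal sketch of the kind of content classifier described above, built with scikit-learn. The handful of labeled posts, the features, and the decision threshold are illustrative assumptions only; a production system would train on large, vetted corpora and also model account behavior and sharing patterns, not just text.

```python
# Toy misinformation classifier: TF-IDF features feeding a linear model.
# The labeled examples below are invented for illustration, not real training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Miracle cure eliminates the virus overnight, doctors hate it",
    "Health agency publishes updated vaccination schedule for adults",
    "Secret memo proves the election results were fabricated",
    "Local hospital reports a decline in weekly COVID-19 admissions",
]
labels = [1, 0, 1, 0]  # 1 = likely misinformation, 0 = likely factual

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

new_post = ["Leaked study shows the vaccine rewrites your DNA"]
score = model.predict_proba(new_post)[0][1]  # probability of misinformation
print(f"misinformation score: {score:.2f}")
```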

Given the variability in data and models, as well as human confirmation or anchoring biases, these beneficial data science tools are not without challenges: Misinformation-squashing systems will only be trustworthy if the integrity of the data, sources, and analysis is guaranteed with full transparency. This means core fundamentals of data science and data management will be essential in driving solutions that help consumers separate fact from fiction.

To ensure the integrity of information and ultimate protection for consumers in the digital age, companies must start by making upfront investments in foundational resources, namely the standards and structures that support modern data governance, integration, and architecture. With that foundation in place, we can develop solutions that intentionally circumvent human bias using scientific methods: validating claim sources, weighing the validity of source data, scrutinizing the supporting data's journey and chain of custody, and using a diverse group for critical peer review.
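
A minimal sketch of how those methods might combine into a single claim score follows. The reliability weights, the custody discount, and the 60/40 blend are hypothetical placeholders, not a prescribed formula.

```python
# Sketch: blend weighted source validity with diverse peer review into one score.
from dataclasses import dataclass, field

@dataclass
class Source:
    name: str
    reliability: float      # assumed prior weight, 0.0-1.0
    custody_verified: bool   # was the data journey / chain of custody checked?

@dataclass
class Claim:
    text: str
    sources: list = field(default_factory=list)
    reviewer_votes: list = field(default_factory=list)  # True = claim supported

def score_claim(claim: Claim) -> float:
    """Combine source validity with peer review; return 0.0 if evidence is missing."""
    if not claim.sources or not claim.reviewer_votes:
        return 0.0
    # Discount sources whose data journey could not be verified.
    source_score = sum(
        s.reliability * (1.0 if s.custody_verified else 0.5) for s in claim.sources
    ) / len(claim.sources)
    review_score = sum(claim.reviewer_votes) / len(claim.reviewer_votes)
    return 0.6 * source_score + 0.4 * review_score  # illustrative blend

claim = Claim(
    text="Vaccine X reduces hospitalization rates",
    sources=[Source("peer-reviewed journal", 0.9, True),
             Source("anonymous blog", 0.2, False)],
    reviewer_votes=[True, True, False],
)
print(f"claim score: {score_claim(claim):.2f}")
```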

While CIOs and internal data scientists may know that their systems' foundations and structures are sound, they won't succeed without the public's trust. For that reason, they must build open-source scoring systems, including traceable lineage of source data and weighting, so that the methods and data used by AI fact checkers are human-readable and understandable. This is the only way to guarantee objective credibility when scoring the accuracy and validity of claims made in any article, op-ed, comment, speech, or legal decision — credibility that may prevent unintentional spreaders of misinformation from echoing whatever they see.
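
One way to make that lineage auditable is to publish each score as a plain, human-readable record. The field names, method identifier, and weights below are illustrative assumptions rather than any established standard.

```python
# Sketch: a traceable, human-readable score record with its source lineage.
import json
from datetime import datetime, timezone

score_record = {
    "claim": "Vaccine X reduces hospitalization rates",
    "score": 0.78,
    "method": "weighted-source-validity-v1",   # hypothetical method identifier
    "lineage": [
        {"source": "peer-reviewed journal", "weight": 0.9, "retrieved": "2021-11-30"},
        {"source": "public health dataset", "weight": 0.7, "retrieved": "2021-12-01"},
    ],
    "generated_at": datetime.now(timezone.utc).isoformat(),
}

# Publishing the record as plain JSON keeps the data and weighting open to
# inspection by anyone, not just the system that produced the score.
print(json.dumps(score_record, indent=2))
```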

The benefits of addressing and mitigating misinformation are not limited to individuals, given the influence and purchasing power of large, mobilized groups. With effective data foundations in place, and a focus on building community trust, modern firms can create or participate in open solutions that encourage consumer confidence, effectively decreasing brand or reputation risk while also minimizing the spread of misinformation. While individual companies will undoubtedly seek to create closed systems, today's leaders should embrace open platforms — both for corroboration and consumption.

From a corroboration perspective, we expect that firms will provide validation capabilities, as well as raw data sets, to combat the global spread of misinformation — for both altruistic and monetary reasons. These validation capabilities will likely follow an open model for programmatically answering “questions” around claims tied to the company, effectively operating as a digital spokesperson for the firm. Additionally, firms likely will offer standard integration capabilities to and from their existing platforms, including streaming models, publish/subscribe models, and even batch data loading, in order to provide authenticated data that enables third-party amalgamated fact-checking systems to provide consumers with trusted answers.
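
A rough sketch of the "digital spokesperson" and publish/subscribe patterns described above appears below, using only the Python standard library. The claim database, topic name, and in-process bus are hypothetical stand-ins; a real deployment would sit behind an authenticated API and a message broker or streaming platform.

```python
# Sketch: a firm answering claim "questions" and publishing answers to subscribers.
from collections import defaultdict

# Authenticated facts a firm might expose for programmatic validation (assumed data).
VERIFIED_CLAIMS = {
    "product-x-recall-2021": {
        "verified": True,
        "statement": "A voluntary recall was issued in March 2021.",
    },
}

def answer_claim(claim_id: str) -> dict:
    """Digital spokesperson: answer questions about claims tied to the firm."""
    return VERIFIED_CLAIMS.get(claim_id, {"verified": False, "statement": "No record of this claim."})

# Toy in-process publish/subscribe bus standing in for a real messaging platform.
subscribers = defaultdict(list)

def subscribe(topic: str, handler) -> None:
    subscribers[topic].append(handler)

def publish(topic: str, message: dict) -> None:
    for handler in subscribers[topic]:
        handler(message)

# A third-party fact-checking system subscribes to validation answers.
subscribe("claim-validations", lambda msg: print("fact-checker received:", msg))
publish("claim-validations", answer_claim("product-x-recall-2021"))
```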

On the consumption side, we expect firms will integrate their existing decisioning systems with external curated data sources to make educated decisions based on global data. We have seen early examples through shifts in supply chain strategies, innovation investments, inventory management, and even travel and vaccine mandate decisions. Firms that leverage and disclose this curated mix of local and global data used in decision-making will build further trust with their employees, partners, and communities.
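
As a small illustration of that consumption pattern, the sketch below blends an internal signal with an externally curated feed to drive a supply chain decision. The feed contents, field names, and thresholds are assumptions made for the example, not a reference design.

```python
# Sketch: combine local data with a curated external source in a decision rule.
def decide_shipping_route(internal_delay_days: float, external_feed: dict) -> str:
    """Blend internal delay data with curated global supply chain data."""
    congestion = external_feed.get("port_congestion", 0.0)  # verified level, 0.0-1.0
    if internal_delay_days > 5 or congestion > 0.7:
        return "reroute via alternate supplier"
    return "keep existing route"

# Disclosing both inputs used in the decision helps build trust with partners.
curated_feed = {"port_congestion": 0.82, "source": "industry consortium dataset"}
print(decide_shipping_route(internal_delay_days=3, external_feed=curated_feed))
```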

Though human moderators alone could not prevent the recent proliferation of fake news across the Internet, AI solutions built on data analytics will grow over the next decade. These AI solutions will transition from helpful tools to primary combatants in the war over truth. Ultimately, trustworthy data science will be the foundation that helps companies reverse the trends of mass misinformation and restore objective facts to their rightful, dominant role in keeping the public informed.

About the Author

Dan Spurling

Senior Vice President, Product Engineering, Teradata

Daniel is responsible for Teradata’s Engineering organization, ensuring the successful architecture, development, and testing of the products that we provide to our customers. Daniel brings over 25 years of experience leading engineering and technology organizations at companies like Slalom, Getty Images, T-Mobile, and JPMorgan Chase. Daniel holds an MBA from the Foster School of Business and multiple other technical and process certifications; however, most of his learnings have come from successes and failures in real life.
