White House Outlines AI's Role in National Security

A national security memorandum on artificial intelligence tasks federal agencies with securing the AI supply chain against potential cyberattacks and disseminating timely threat information to AI developers.

Jennifer Lawinski, Contributing Writer

October 30, 2024


President Joe Biden issued the first national security memorandum on artificial intelligence (AI), recognizing that advances in the field will have significant implications for national security and foreign policy. The memorandum builds on the administration's policies to drive the safe, secure, and trustworthy development of AI. 

The White House directed the US government to create systems that ensure the country will lead in the global race to develop AI technology for national security purposes and to advance international regulations and governance. The memorandum also seeks to ensure that AI adoption reflects democratic values and protects human rights, civil rights, civil liberties, and privacy while encouraging the international community to adhere to the same values. 

"While the memorandum holds broader implications for AI governance, cybersecurity-related measures are particularly noteworthy and essential to advancing AI resilience in national security applications," wrote R Street cybersecurity fellow Haiman Wong in an analysis of the memorandum

The memorandum, issued last week, tasks the National Security Council and the Office of the Director of National Intelligence (ODNI) with reviewing national intelligence priorities to improve the identification and assessment of foreign intelligence threats targeting the US AI ecosystem, Wong noted. A group of agencies, including ODNI, the Department of Defense, and the Department of Justice, is responsible for identifying critical nodes in the AI supply chain that could be disrupted or compromised by foreign actors, ensuring that proactive and coordinated measures are in place to mitigate such risks.

The memorandum tasks the Department of Energy with launching a pilot project to evaluate the performance and efficiency of federated AI and data sources, with the goal of refining AI capabilities that could improve cyber threat detection, response, and offensive operations against potential adversaries, Wong said. It also tasks the Department of Homeland Security, the FBI, the National Security Agency, and the Department of Defense with publishing unclassified guidance on known AI cybersecurity vulnerabilities and threats, along with best practices for avoiding, detecting, and mitigating these risks during AI model training and deployment.

"Our competitors want to upend U.S. AI leadership and have employed economic and technological espionage in efforts to steal U.S. technology," the White House said in a statement. "This NSM makes collection on our competitors' operations against our AI sector a top-tier intelligence priority, and directs relevant U.S. Government entities to provide AI developers with the timely cybersecurity and counterintelligence information necessary to keep their inventions secure." 

These guidelines are an important step in making sure that AI is leveraged in safe, thoughtful ways for both industry and national security, stated Jeffrey Zampieron, distinguished software engineer at defense technology firm Raft, in an email to Dark Reading.

"Fundamentally, this is quality control," he said. "We want to ensure that AI behaves in a manner that is safe and efficacious for the application of interest. Guidelines provide creators with structured consistent ways to evaluate their work and provide consumers with confidence that the AI will work as intended."

The risks of unregulated AI technologies could be severe, he said. 

"Risks lead to hazards and hazards lead to harms," he said. "The primary risk is that we give AI control of some critical behavior and it acts in a way that causes harm: physical, property, financial. It's very application-specific. What's the risk of using AI to tell jokes? Not much. What's the risk of using AI to fire ordinance? Quite high."

About the Author

Jennifer Lawinski

Contributing Writer

Jennifer Lawinski is a writer and editor with more than 20 years' experience in media, covering a wide range of topics including business, news, culture, science, technology, and cybersecurity. After earning a master's degree in journalism from Boston University, she started her career as a beat reporter for The Daily News of Newburyport. She has since written for a variety of publications including CNN, Fox News, TechTarget, CRN, CIO Insight, MSN News, and Live Science. She lives in Brooklyn with her partner and two cats.
