CISO Corner: Operationalizing NIST CSF 2.0; AI Models Run Amok
Dark Reading's roundup of strategic cyber-operations insights for chief information security officers and security leaders. Also this week: SEC enforcement actions, biometrics regulation, and painful encryption changes coming down the pike.
March 1, 2024
Welcome to CISO Corner, Dark Reading's weekly digest of articles tailored specifically to security operations readers and security leaders. Every week, we'll offer articles gleaned from across our news operation, The Edge, DR Technology, DR Global, and our Commentary section. We're committed to bringing you a diverse set of perspectives to support the job of operationalizing cybersecurity strategies, for leaders at organizations of all shapes and sizes.
In this issue:
NIST Cybersecurity Framework 2.0: 4 Steps to Get Started
Apple, Signal Debut Quantum-Resistant Encryption, but Challenges Loom
It's 10 p.m. Do You Know Where Your AI Models Are Tonight?
Orgs Face Major SEC Penalties for Failing to Disclose Breaches
Biometrics Regulation Heats Up, Portending Compliance Headaches
DR Global: 'Illusive' Iranian Hacking Group Ensnares Israeli, UAE Aerospace and Defense Firms
MITRE Rolls Out 4 Brand-New CWEs for Microprocessor Security Bugs
Converging State Privacy Laws & the Emerging AI Challenge
NIST Cybersecurity Framework 2.0: 4 Steps to Get Started
By Robert Lemos, Contributing Writer, Dark Reading
The National Institute of Standards and Technology (NIST) has revised the book on creating a comprehensive cybersecurity program that aims to help organizations of every size be more secure. Here's where to start putting the changes into action.
Operationalizing the latest version of NIST's Cybersecurity Framework (CSF), released this week, could mean significant changes to cybersecurity programs.
For instance, there's a brand-new "Govern" function to incorporate greater executive and board oversight of cybersecurity, and the framework now expands best security practices beyond just critical industries. In all, cybersecurity teams will have their work cut out for them and will have to take a hard look at existing assessments, identified gaps, and remediation activities to determine the impact of the framework changes.
Fortunately, our tips for operationalization of the latest version of the NIST Cybersecurity Framework can help point the way forward. They include using all the NIST resources (the CSF is not just a document but a collection of resources that companies can use to apply the framework to their specific environment and requirements); sitting down with the C-suite to discuss the "Govern" function; wrapping in supply chain security; and confirming that consulting services and cybersecurity posture management products are reevaluated and updated to support the latest CSF.
Read more: NIST Cybersecurity Framework 2.0: 4 Steps to Get Started
Related: US Government Expands Role in Software Security
Apple, Signal Debut Quantum-Resistant Encryption, but Challenges Loom
By Jai Vijayan, Contributing Writer, Dark Reading
Apple's PQ3 for securing iMessage and Signal's PQXDH show how organizations are preparing for a future in which encryption protocols must be exponentially harder to crack.
As quantum computers mature, they will give adversaries a trivially easy way to crack open even the most secure current encryption protocols, so organizations need to move now to protect communications and data.
To that end, Apple's new PQ3 post-quantum cryptographic (PQC) protocol for securing iMessage communications, and a similar encryption protocol that Signal introduced last year called PQXDH, are quantum resistant, meaning they can — theoretically, at least — withstand attacks from quantum computers trying to break them.
But for organizations, the shift to things like PQC will be long, complicated, and likely painful. Current mechanisms heavily reliant on public key infrastructures will require reevaluation and adaptation to integrate quantum-resistant algorithms. And the migration to post-quantum encryption introduces a new set of management challenges for enterprise IT, technology, and security teams that parallels previous migrations, like those from TLS 1.2 to 1.3 and from IPv4 to IPv6, both of which have taken decades.
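For a rough sense of what the shift looks like at the code level, here is a minimal sketch of the hybrid key-derivation pattern that protocols such as PQ3 and PQXDH follow: a classical X25519 exchange is combined with a post-quantum key encapsulation mechanism (KEM), so an attacker must break both. This is an illustration only, not Apple's or Signal's implementation, and the `pq_kem` module and its `encapsulate` call are hypothetical placeholders for an ML-KEM/Kyber library.

```python
# Minimal sketch (not the PQ3 or PQXDH implementation): derive one session key
# from a classical X25519 exchange plus a post-quantum KEM, so an attacker
# would have to break both. `pq_kem` is a hypothetical placeholder module.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def hybrid_session_key(peer_x25519_public_key, peer_pq_public_key, pq_kem):
    # Classical ECDH share: protects against conventional adversaries today.
    ephemeral = X25519PrivateKey.generate()
    classical_secret = ephemeral.exchange(peer_x25519_public_key)

    # Post-quantum encapsulation (placeholder API for an ML-KEM/Kyber library).
    pq_ciphertext, pq_secret = pq_kem.encapsulate(peer_pq_public_key)

    # Feed both secrets into one KDF; compromising either alone is not enough.
    session_key = HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=os.urandom(16),
        info=b"hybrid-pqc-demo",
    ).derive(classical_secret + pq_secret)

    # The peer needs our ephemeral public key and the KEM ciphertext to derive
    # the same session key on its side.
    return session_key, ephemeral.public_key(), pq_ciphertext
```

The point of the hybrid construction is risk management during the transition: if the post-quantum algorithm turns out to be weaker than hoped, security falls back to the classical exchange rather than collapsing outright.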
Read more: Apple, Signal Debut Quantum-Resistant Encryption, but Challenges Loom
Related: Cracking Weak Cryptography Before Quantum Computing Does
It's 10 p.m. Do You Know Where Your AI Models Are Tonight?
By Ericka Chickowski, Contributing Writer, Dark Reading
A lack of AI model visibility and security puts the software supply chain security problem on steroids.
If you thought the software supply chain security problem was difficult enough today, buckle up. The explosive growth in AI use is about to make those supply chain issues exponentially harder to navigate in the years to come.
AI/machine learning models provide the foundation for an AI system's ability to recognize patterns, make predictions, make decisions, trigger actions, or create content. But the truth is that most organizations don't know how to even start gaining visibility into all of the AI models embedded in their software.
To boot, models and the infrastructure around them are built differently from other software components, and traditional security and software tooling isn't built to scan for AI models, to understand how they work, or to find where they're flawed.
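To illustrate why visibility is the first hurdle, here is a minimal sketch of a naive inventory pass: it simply walks a directory tree and flags files with common serialized-model extensions so they can at least be enumerated. The extension list and report format are assumptions for the example; purpose-built model-scanning tools go much further and inspect the artifacts themselves.

```python
# Minimal sketch: walk a codebase and enumerate files that look like serialized
# AI/ML models. The extension list and report format are assumptions for the
# example; real model-scanning tools inspect the artifacts themselves.
from pathlib import Path

MODEL_SUFFIXES = {".pkl", ".pt", ".pth", ".onnx", ".safetensors", ".h5", ".gguf"}


def inventory_models(root: str) -> list[dict]:
    findings = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix.lower() in MODEL_SUFFIXES:
            findings.append({
                "path": str(path),
                "format": path.suffix.lower(),
                "size_bytes": path.stat().st_size,
            })
    return findings


if __name__ == "__main__":
    for item in inventory_models("."):
        print(f"{item['format']:>13}  {item['size_bytes']:>12}  {item['path']}")
```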
"A model, by design, is a self-executing piece of code. It has a certain amount of agency," says Daryan Dehghanpisheh, co-founder of Protect AI. "If I told you that you have assets all over your infrastructure that you can't see, you can't identify, you don't know what they contain, you don't know what the code is, and they self-execute and have outside calls, that sounds suspiciously like a permission virus, doesn't it?"
Read more: It's 10 p.m. Do You Know Where Your AI Models Are Tonight?
Related: Hugging Face AI Platform Riddled With 100 Malicious Code-Execution Models
Orgs Face Major SEC Penalties for Failing to Disclose Breaches
By Robert Lemos, Contributing Writer
In what could be an enforcement nightmare, potentially millions of dollars in fines, reputational damage, shareholder lawsuits, and other penalties await companies that fail to comply with the SEC's new data-breach disclosure rules.
Companies and their CISOs could be facing anywhere from hundreds of thousands to millions of dollars in fines and other penalties from the US Securities and Exchange Commission (SEC) if they don't get their cybersecurity and data-breach disclosure processes in order to comply with the new rules that have now gone into effect.
The SEC regs have teeth: The commission can hand down a permanent injunction ordering the defendant to cease the conduct at the heart of the case, order the payback of ill-gotten gains, or impose three tiers of escalating penalties that can result in astronomical fines.
Perhaps most worrisome for CISOs is the personal liability they now face for many areas of business operations for which they have historically not had responsibility. Just over half of CISOs (54%) are confident in their ability to comply with the SEC's ruling.
All of that is leading to a broad rethinking of the role of the CISO, and additional costs for businesses.
Read more: Orgs Face Major SEC Penalties for Failing to Disclose Breaches
Related: What Companies & CISOs Should Know About Rising Legal Threats
Biometrics Regulation Heats Up, Portending Compliance Headaches
By David Strom, Contributing Writer, Dark Reading
A growing thicket of privacy laws regulating biometrics is aimed at protecting consumers amid increasing cloud breaches and AI-created deepfakes. But for businesses that handle biometric data, staying compliant is easier said than done.
Biometric privacy concerns are heating up, thanks to increasing artificial intelligence (AI)-based deepfake threats, growing biometric usage by businesses, anticipated new state-level privacy legislation, and a new executive order issued by President Biden this week that includes biometric privacy protections.
That means businesses need to be more forward-looking, anticipating and understanding the risks in order to build the appropriate infrastructure to track and use biometric content. And those doing business nationally will have to audit their data protection procedures for compliance with a patchwork of regulations, including understanding how they obtain consumer consent, how they allow consumers to restrict the use of such data, and how they handle the different subtleties in the regulations.
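As a rough illustration of the kind of tracking infrastructure involved, here is a minimal sketch of one possible per-subject consent record. The field names, jurisdiction labels, and revocation logic are assumptions for the example, not drawn from any particular statute.

```python
# Minimal sketch: one possible shape for a per-subject biometric consent record.
# Field names, jurisdiction labels, and statuses are assumptions for the
# example, not drawn from any particular statute.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class BiometricConsentRecord:
    subject_id: str                  # internal identifier, never the biometric itself
    data_types: list[str]            # e.g. ["faceprint", "voiceprint"]
    purpose: str                     # the specific use the consumer consented to
    jurisdiction: str                # e.g. "IL-BIPA", "TX-CUBI", "WA-MHMD"
    granted_at: datetime
    restricted: bool = False         # consumer has limited further use
    revoked_at: datetime | None = None

    def revoke(self) -> None:
        """Record a consumer's withdrawal of consent."""
        self.restricted = True
        self.revoked_at = datetime.now(timezone.utc)
```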
Read more: Biometrics Regulation Heats Up, Portending Compliance Headaches
Related: Choose the Best Biometrics Authentication for Your Use Case
DR Global: 'Illusive' Iranian Hacking Group Ensnares Israeli, UAE Aerospace and Defense Firms
By Robert Lemos, Contributing Writer, Dark Reading
UNC1549, aka Smoke Sandstorm and Tortoiseshell, appears to be the culprit behind a cyberattack campaign customized for each targeted organization.
Iranian threat group UNC1549 — also known as Smoke Sandstorm and Tortoiseshell — is going after aerospace and defense firms in Israel, the United Arab Emirates, and other countries in the greater Middle East.
Notably, between the tailored employment-focused spear-phishing and the use of cloud infrastructure for command-and-control, the attack may be difficult to detect, says Jonathan Leathery, principal analyst for Google Cloud's Mandiant.
"The most notable part is how illusive this threat can be to discover and track — they clearly have access to significant resources and are selective in their targeting," he says. "There is likely more activity from this actor that is not yet discovered, and there is even less information on how they operate once they've compromised a target."
Read more: 'Illusive' Iranian Hacking Group Ensnares Israeli, UAE Aerospace and Defense Firms
Related: China Launches New Cyber-Defense Plan for Industrial Networks
MITRE Rolls Out 4 Brand-New CWEs for Microprocessor Security Bugs
By Jai Vijayan, Contributing Writer, Dark Reading
The goal is to give chip designers and security practitioners in the semiconductor space a better understanding of major microprocessor flaws like Meltdown and Spectre.
With an increasing number of side-channel exploits targeting CPU resources, the MITRE-led Common Weakness Enumeration (CWE) program added four new microprocessor-related weaknesses to its list of common software and hardware vulnerability types.
The CWEs are the result of a collaborative effort among Intel, AMD, Arm, Riscure, and Cycuity and give processor designers and security practitioners in the semiconductor space a common language for discussing weaknesses in modern microprocessor architectures.
The four new CWEs are CWE-1420, CWE-1421, CWE-1422, and CWE-1423.
CWE-1420 concerns exposure of sensitive information during transient or speculative execution — the hardware optimization function associated with Meltdown and Spectre — and is the "parent" of the three other CWEs.
CWE-1421 has to do with sensitive information leaks in shared microarchitectural structures during transient execution; CWE-1422 addresses data leaks tied to incorrect data forwarding during transient execution. CWE-1423 looks at data exposure tied to a specific internal state within a microprocessor.
Read more: MITRE Rolls Out 4 Brand-New CWEs for Microprocessor Security Bugs
Related: MITRE Rolls Out Supply Chain Security Prototype
Converging State Privacy Laws & the Emerging AI Challenge
Commentary by Jason Eddinger, Senior Security Consultant, Data Privacy, GuidePoint Security
It's time for companies to look at what they're processing, what types of risk they have, and how they plan to mitigate that risk.
Eight US states passed data privacy legislation in 2023, and in 2024, laws will come into effect in four of them, so companies need to step back and look deeply at the data they're processing, what types of risk they have, how to manage this risk, and their plans to mitigate the risk they've identified. The adoption of AI will make that tougher.
As businesses map out a strategy to comply with all these new regulations, it's worth noting that while these laws align in many respects, they also exhibit state-specific nuances.
Companies should expect to see many emerging data privacy trends this year, including:
A continuation of states adopting comprehensive privacy laws. We don't know how many will pass this year, but there surely will be much active discussion.
AI will be a significant trend, as businesses see unintended consequences from its use, resulting in breaches and enforcement fines driven by the rapid adoption of AI without any actual legislation or standardized frameworks.
2024 is a presidential election year in the US, which will raise awareness and heighten attention to data privacy. Children's privacy is also gaining prominence, with states such as Connecticut introducing additional requirements.
Businesses should also anticipate seeing data sovereignty trending in 2024. Multinationals must spend more time understanding where their data lives and the obligations that attach to it, so they can meet data residency and sovereignty requirements and comply with international laws.
Read more: Converging State Privacy Laws and the Emerging AI Challenge