Converging State Privacy Laws and the Emerging AI Challenge

It's time for companies to look at what they're processing, what types of risk they have, and how they plan to mitigate that risk.

Jason Eddinger, Senior Security Consultant, Data Privacy, GuidePoint Security

February 28, 2024


Eight US states passed data privacy legislation in 2023, and in 2024 laws will come into effect in four of them: Oregon, Montana, and Texas, each with a comprehensive state privacy law, and Florida, with its far more limited Digital Bill of Rights. Notably, these laws share many similarities and underscore a national trend toward unified data protection standards in the patchwork US privacy landscape.

While these laws align in many respects, such as exempting employee data and lacking a private right of action, they also exhibit state-specific nuances. Montana's lower applicability threshold, Texas' unusual approach to defining small businesses, and Oregon's detailed categorization of personal information all illustrate this diversity.

Because of its small population of roughly a million people, Montana set its applicability threshold much lower than other states did, meaning more companies fall within the law's scope than otherwise would. Montana's privacy law requires companies to conduct data protection assessments to identify high-risk areas where sensitive data is captured and stored, and to maintain processes that hold the organization accountable.

The Texas privacy law stands out as one of the first in the US to eschew financial thresholds for compliance, basing its criteria instead on the Small Business Administration's definitions. This approach widens the law's applicability, holding a broader range of businesses accountable for data privacy.

Oregon's law expands the definition of personal information to include linked devices, illustrating the state's commitment to comprehensive data protection. It covers various digital footprints, from fitness watches to online health records. Oregon also includes specific references to gender and transgender individuals in its definition of sensitive information, showing a nuanced approach to privacy.

The laws demonstrate a compelling need for companies to evaluate their processing activities and ensure data protection addendums are in place. Accountability is a critical aspect of these laws, reflecting the increased rights and awareness of data subjects. Organizations must establish procedures that let individuals exercise their privacy rights effectively, which means investing in management platforms and monitoring processing activities to ensure compliance.

Generative AI and Its Uses Are Receiving Considerable Attention and Scrutiny

The rise of generative artificial intelligence (GenAI) presents unique challenges in the privacy sector. As AI technologies become integral to business, structured policies and processes for managing AI deployment are paramount. The National Institute of Standards and Technology (NIST) has developed the AI Risk Management Framework to manage AI risks, focusing on design and deployment strategies.

In terms of governance, AI is often handed to privacy teams rather than security teams because of the overlap between the two disciplines, but the tactical impacts are considerable. Large language models (LLMs) and other AI technologies often ingest extensive unstructured data, raising critical concerns about data categorization, labeling, and security. The potential for AI to inadvertently leak sensitive information is a pressing issue, necessitating vigilant monitoring and robust governance.

It's also important to remember that these AI systems need training data, and that training data is often your personal information. The recent controversy surrounding Zoom's plan to use personal data for AI training highlights the fine line between legal compliance and public perception.

This year is also pivotal for privacy laws as they intersect with the burgeoning domain of GenAI. The rapid adoption of AI technologies poses fresh challenges for data privacy, particularly in the absence of specific legislation or standardized frameworks. AI's privacy implications vary, from bias in decision-making algorithms to using personal information in AI training. As AI reshapes the landscape, businesses must remain vigilant, ensuring compliance with emerging AI guidelines and evolving state privacy laws.

Companies should expect to see many emerging data privacy trends this year, including:

  • If you've looked at maps of privacy bills in the US, the Northeast in particular is lighting up like a Christmas tree with newly introduced legislation. One trend is the continued adoption of comprehensive state privacy laws. We don't know how many will pass this year, but there will surely be much active discussion.

  • AI will be a significant trend, as businesses see unintended consequences of its rapid adoption in the absence of specific legislation or standardized frameworks, resulting in breaches and enforcement fines. On the enforcement front, expect increased activity from the Federal Trade Commission (FTC), which has made clear that it intends to be very aggressive in following through.

  • 2024 is a presidential election year in the US, which will raise awareness of and heighten attention to data privacy. People remain somewhat unsettled by the privacy concerns around mail-in and online voting from the last election cycle, and that scrutiny may trickle down to business practices. Children's privacy is also gaining prominence, with states such as Connecticut introducing additional requirements.

  • Businesses should also anticipate data sovereignty trending in 2024. While data localization has long been part of the discussion, the conversation now breaks down into questions of data sovereignty: who controls the data, where it resides, and which jurisdiction's rules apply. Multinationals must spend more time understanding where their data lives and what their international obligations require in order to meet data residency and sovereignty requirements.

Overall, this is a time for companies to step back and look deeply at what they're processing, what types of risk they have, how to manage that risk, and how they plan to mitigate the risk they've identified. The first step is identifying the risk; the next is mapping out a strategy to comply with all the new regulations arriving as AI adoption accelerates. Organizations should consider whether they are using AI internally, whether employees are using AI, and how to ensure they are aware of and tracking that usage.

About the Author

Jason Eddinger

Senior Security Consultant, Data Privacy, GuidePoint Security

Jason Eddinger has more than 25 years of consulting experience in both cybersecurity and privacy. A former healthcare CISO, Jason discovered a keen interest in privacy, which he now pursues exclusively. Jason holds both the CIPP/US and HCISPP credentials. A graduate of Harvard University, he traces his passion for privacy to Supreme Court Justice Louis Brandeis, a fellow Harvard alum, and Brandeis' 1890 Harvard Law Review article, “The Right to Privacy”. Jason specializes in US private-sector, Canadian, and Brazilian privacy law. When not navigating the challenging patchwork of privacy laws and regulations, and the complexity of cookies, consent, and consumer rights, Jason can be found pursuing his interests in aviation as a private pilot and the Executive Mansion as an amateur White House historian.

