Calif. Gov. Vetoes AI Safety Bill Aimed at Big Tech Players

Critics viewed the bill as seeking protections against unrealistic "doomsday" fears, but most stakeholders agree that oversight is needed in the GenAI space.

California Gov. Gavin Newsom (D) has vetoed SB-1047, a bill that would have imposed what some perceived as overly broad — and unrealistic — restrictions on developers of advanced artificial intelligence (AI) models.

In doing so, Newsom likely disappointed many others — including leading AI researchers, the Center for AI Safety (CAIS), and the Screen Actors Guild — who perceived the bill as establishing much-needed safety and privacy guardrails around AI model development and use.

Well-Intentioned but Flawed?

"While well-intentioned, SB-1047 does not take into account whether an AI system is deployed in high-risk environments, or involves critical decision-making or the use of sensitive data," Newsom wrote. "Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology."

Newsom's veto announcement referenced 17 other AI-related bills he has signed over the past month governing the use and deployment of generative AI (GenAI) tools in the state, a category that includes chatbots such as ChatGPT, Microsoft Copilot, and Google Gemini.

"We have a responsibility to protect Californians from the potentially catastrophic risks of GenAI deployment," he acknowledged. But he made clear that SB-1047 was not the vehicle for those protections. "We will thoughtfully — and swiftly — work toward a solution that is adaptable to this fast-moving technology and harnesses its potential to advance the public good."

Numerous other proposals at the state level seek similar controls over AI development, even amid concerns about other countries overtaking the US on the AI front.

The Need for Safe & Secure AI Development

California state senators Scott Wiener, Richard Roth, Susan Rubio, and Henry Stern proposed SB-1047 as a measure that would impose some oversight on companies like OpenAI, Meta, and Google, all of which are pouring hundreds of millions of dollars into developing AI technologies.

At the core of the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act were stipulations that would have required companies developing large language models (LLMs) — which can cost more than $100 million to build — to ensure their technologies enable no critical harm. The bill defined "critical harm" as incidents involving the use of AI technologies to create or use chemical, biological, nuclear, or other weapons of mass destruction, or those causing mass casualties, mass damage, death, bodily injury, or other serious harm.

To enable that, SB-1047 would have required covered entities to comply with specific administrative, technical, and physical controls to prevent unauthorized access to, misuse of, or unsafe modification of their models by others. The bill included a particularly controversial clause that would have required the OpenAIs, Googles, and Metas of the world to implement nuclear-like failsafe capabilities to "enact a full shutdown" of their LLMs in certain circumstances.

The bill won broad bipartisan support and easily passed California's state Assembly and Senate earlier this year, heading to Newsom's desk for signing in August. At the time, Wiener cited the support of leading AI researchers such as Geoffrey Hinton (a former AI researcher at Google) and professor Yoshua Bengio, as well as entities such as CAIS.

Even Elon Musk, whose own xAI company would have been subject to SB-1047, came out in support of the bill in a post on X, saying California should probably pass it given the potential existential risks of runaway AI, which he and others have been flagging for many months.

Fear Based on Theoretical Doomsday Scenarios?

Others, however, perceived the bill as based on unproven doomsday scenarios about AI's potential to wreak havoc on society. In an open letter, a coalition including the Bay Area Council, Chamber of Progress, TechFreedom, and the Silicon Valley Leadership Group called the bill fundamentally flawed.

The group claimed that the harms that SB-1047 sought to protect against were completely theoretical, with no basis in fact. "Moreover, the latest independent academic research concludes, large language models like ChatGPT cannot learn independently or acquire new skills, meaning they pose no existential threat to humanity." The coalition also took issue with the fact that the bill would hold developers of large AI models responsible for what others do with their products.

Arlo Gilbert, CEO of data-privacy firm Osano, is among those who view Newsom's decision to veto the bill as a sound one. "I support the governor's decision," Gilbert says. "While I'm a great proponent for AI regulation, the proposed SB-1047 is not the right vehicle to get us there."

As Newsom has identified, there are gaps between policy and technology, and striking a balance between doing the right thing and supporting innovation merits a cautious approach, he says. From a privacy and security perspective, the small startups and smaller companies that would have been exempt from this rule can actually present a greater risk of harm because they have relatively limited resources to protect, monitor, and disgorge data from their systems, Gilbert notes.

In an emailed statement, Melissa Ruzzi, director of artificial intelligence at AppOmni, identified SB-1047 as raising issues that need attention now: "We all know AI is very new and there are challenges in writing laws around it. We cannot expect the first laws to be flawless and perfect — this will most likely be an iterative process, but we have to start somewhere."

She acknowledged that some of the biggest players in the AI space, such as Anthropic and Google, have put a big focus on ensuring their technologies do no harm. "But to make sure all players will follow the rules, laws are needed," she said. "This removes the uncertainty and fear from end users about AI being used in an application."

About the Author

Jai Vijayan, Contributing Writer

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year career at Computerworld, Jai also covered a variety of other technology topics, including big data, Hadoop, Internet of Things, e-voting, and data analytics. Prior to Computerworld, Jai covered technology issues for The Economic Times in Bangalore, India. Jai has a Master's degree in Statistics and lives in Naperville, Ill.
