California Governor Gavin Newsom vetoed a first-of-its-kind bill aimed at regulating large-scale artificial intelligence (AI) models on Sunday, signalling a major setback for efforts to establish early AI safety measures in the U.S. The legislation would have introduced some of the first rules governing large-scale AI models in the country, potentially setting the stage for broader national AI safety laws. Speaking earlier this month at Dreamforce, Governor Newsom said that while California must lead in AI regulation, the proposal “can have a chilling effect on the industry.”
In his veto statement, Newsom said, “While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data.” He further added, “Instead, the bill applies stringent standards to even the most basic functions – so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
Instead of signing the bill, Newsom announced a partnership with AI experts, including renowned AI pioneer Fei-Fei Li, to develop safety guardrails around the technology. Li, who opposed the bill, will help guide the state in crafting a more detailed approach to AI safety.
California Must Lead, But with Caution
The United States is already behind Europe in AI regulation efforts. While California’s proposal wasn’t as comprehensive, it aimed to address growing concerns like job loss, misinformation, privacy invasion, and automation bias. Supporters believed it could have set important guardrails. Last year, several top AI companies agreed to act in accordance with the White House guidelines, including testing and sharing information about their models to mitigate risks.
The California bill would have required AI developers to follow rules similar to those commitments, its supporters said. Critics, including former US House Speaker Nancy Pelosi, countered that the bill would “kill California tech” and stifle innovation, discouraging AI developers from investing in large models or sharing open-source software.
Earlier this summer, the governor highlighted California’s role as a global AI leader, pointing to the fact that 32 of the world’s top 50 AI companies are based in the state. He has promoted the use of generative AI to help ease highway congestion, provide tax guidance, and address homelessness.
Governor Newsom’s decision to veto the bill marks another victory for major tech companies and AI developers in California. For the past year, many of these corporations, alongside the California Chamber of Commerce, have lobbied against advancing similar AI bills. Those measures aimed to require AI developers to label AI-created content and to bar discrimination by AI tools used in employment decisions.
Balancing Innovation and Regulation
Despite the veto, the conversation around AI regulation is far from over. Newsom’s administration is already working on voluntary agreements with companies like Nvidia to help train students, college faculty, developers, and data scientists. Supporters of the vetoed bill, such as Daniel Kokotajlo, a former OpenAI researcher, expressed disappointment, citing the growing power and influence of AI systems. “This is a crazy amount of power to have any private company control unaccountably, and it’s also incredibly risky,” Kokotajlo said.
Governor Newsom recently signed some of the nation’s toughest laws to crack down on election deepfakes and to protect Hollywood workers from unauthorised AI use. Despite his veto of the broader AI safety proposal, these California initiatives are prompting lawmakers in other states to explore similar measures, said Tatiana Rice, deputy director of the Future of Privacy Forum, a non-profit organisation that works with lawmakers on technology and privacy issues.