Poll: 7 in 10 Californians Support SB1047, Will Blame Governor Newsom for AI-Enabled Catastrophe if He Vetoes

As opponents angle for Governor Newsom to veto the bill, 65% of California voters say they would put at least partial responsibility on Governor Newsom for an AI-enabled catastrophe should he strike the bill down

40% of California voters, including 32% of Democrats, say they would be less likely to vote for Governor Newsom in a future presidential primary election should he veto SB1047

As SB1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, moves through the California state legislature and opponents push for Governor Newsom to veto the bill, a new poll from the Artificial Intelligence Policy Institute (AIPI) shows that California voters overwhelmingly support the bill, even after hearing arguments from both proponents and opponents.

The poll shows that California voters are prepared to punish Governor Newsom should he veto the bill, namely by assigning him blame for any future AI-related catastrophe and by becoming less likely to vote for him in future elections for higher office. If the bill is vetoed, California voters want it placed before them as a ballot measure or legislative referendum by a 33-point margin.

70% of California voters support SB1047, a bill that would require companies developing advanced AI to conduct safety tests and that would hold AI model developers liable if their models cause catastrophic harm and the developers failed to take appropriate precautions. Support includes 60% of independents and 67% of Republicans. Only 13% oppose the bill, and 17% are unsure.

The poll informed voters of the arguments made on both sides. Supporters argue that this is a light-touch bill targeting the biggest companies making the most powerful models, setting up basic guardrails to protect the public from catastrophic damage caused by an AI system. Opponents argue that the bill would place unfair burdens on AI companies, harming innovation and driving AI development away from California.

When then asked about a scenario in which Governor Newsom vetoes the bill, 65% of California voters say they would assign him at least partial responsibility if an AI-enabled catastrophe were to impact California within the next ten years. That includes 25% of voters who would assign him full responsibility, 18% who say he would be mostly responsible, and 22% who say he would be partially responsible.

In a similar question about cyber attacks, where respondents were given only ‘yes,’ ‘no,’ or ‘unsure’ as options, 60% of California voters say they would hold Governor Newsom responsible for an AI-enabled cyber attack if he vetoes Senate Bill 1047. 20% say they would not hold him responsible, and 20% are unsure.

40% of California voters, including 32% of Democrats and 36% of independents, say they would be less likely to vote for Governor Newsom in a future presidential primary election should he veto SB1047. Only 13% of voters say such a veto would make them more likely to vote for him in a potential presidential primary.

If the California State Legislature passes SB1047 but Governor Newsom vetoes it, 54% of California voters say state lawmakers should put the proposal before voters as a ballot measure or legislative referendum. Just 21% say they should not put SB1047 on the ballot in that case, and 25% are not sure.

In turn, 58% of California voters say they would vote ‘yes’ on a hypothetical ballot measure resembling SB1047. Just 19% say they would vote ‘no,’ and 24% are unsure. Respondents were shown the following proposed ballot measure:


Proposition 1: Safe and Secure Artificial Intelligence Act

  • Requires developers of large-scale artificial intelligence (AI) models to implement safety and security protocols before training or using such models.
  • Prohibits use of AI models that pose unreasonable risk of causing severe harm.
  • Creates state board and division to oversee AI safety and provide guidance.
  • Requires annual third-party audits and state certifications for large AI model developers.
  • Establishes whistleblower protections for AI company employees reporting safety concerns.
  • Authorizes civil penalties for violations.
  • Fiscal Impact: This measure may result in additional state administrative costs for oversight and enforcement, potentially offset by fees and penalties assessed on AI developers.


“With SB1047, Governor Newsom has an opportunity to lead on this critical issue and make California a model for AI policy that ensures innovation is protected from worst-case scenarios and has the interests of the public in mind,” said Daniel Colson, the Executive Director of AIPI. “If he chooses to cave to special interests and veto the bill, the data shows he may face significant political repercussions, especially as AI risks rise in salience. Californians are watching closely, and they expect action to ensure AI innovations do not come at the cost of safety and security.”


About the Poll 

The poll surveyed 1,038 California voters online from August 25 to August 26. The survey was conducted in English, and its margin of error is ±4.2 percentage points.
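For context, the conventional 95% confidence margin of error for a simple random sample of this size can be checked with the standard formula below; this is a back-of-the-envelope sketch, and the published ±4.2 figure is presumably larger because it accounts for survey weighting (an assumption on our part, as the release does not detail the adjustment):

$$\text{MOE} = z\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{1038}} \approx \pm 3.0\ \text{percentage points}$$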

See full toplines and crosstabs here.


About the Artificial Intelligence Policy Institute

The Artificial Intelligence Policy Institute is an AI policy and research think tank founded by Daniel Colson to advocate for ethical oversight of AI and for mitigating the potential catastrophic risks it poses. AIPI’s core premise is that the general public has valid concerns about the future of AI and is looking to its elected representatives for regulation and guidance. If politicians want to represent the American people effectively, they must act decisively to regulate AI’s next phases of development. AIPI seeks to work with the media and policymakers to inform them of the risks of artificial intelligence and to develop policies that mitigate risks from emerging technology while still capturing its benefits.

While much of the public discussion has centered on AI’s potential to take away jobs, AIPI will focus on policies designed to prevent catastrophic outcomes and mitigate the risk of extinction. Currently, most voices in the AI space either have a vested interest in advancing AI or are academics. The field lacks an organization that can both gauge and shape public opinion on these issues, as well as recommend legislation, and AIPI will fill that role.

Ultimately, policymakers are political actors, so the country needs an institution that can speak the language of public opinion. AIPI’s mission is to channel how Americans feel about artificial intelligence and to pressure lawmakers to take action.

AIPI will build relationships with lawmakers by using polling to establish itself as a subject matter expert on AI safety and policy issues. Politicians are incentivized to support AI slowdown policies because a strong majority of voters favor slowdown and regulation. But AI remains less salient as a political issue than other topics because, so far, emerging AI technology has had only moderate impacts.

AI will continue to advance rapidly, and politicians, the media, and everyday Americans need real-time information to make sense of it. AIPI’s polling will show where people stand on new developments and provide policymakers with timely, crucial recommendations.