Poll: Twice as Many California Voters Prefer Newsom’s Commentary on AI Safety Concerns Over His Innovation Worries

By a 2:1 margin, California voters prefer Gov. Gavin Newsom’s statement about addressing AI safety concerns over his statement expressing concern about creating a chilling effect on AI innovation, a new Artificial Intelligence Policy Institute (AIPI) poll shows. The survey also finds that nearly 70% of voters in the Golden State believe AI safety legislation to prevent catastrophic risks is necessary now, before models get too powerful, while under 20% believe catastrophic risks are currently too speculative to warrant legislation. Additionally, three times as many voters would be less likely to support Newsom in a presidential primary if he vetoes landmark state-level AI legislation SB 1047 than would be more likely to support him. If the California governor were to veto the legislation, more than half of voters want the State Legislature to put it before voters as a ballot measure, and a plurality would assign responsibility to Newsom if an AI-enabled catastrophe were to occur.

“As Newsom publicly weighs the pros and cons of the AI legislation sitting on his desk, this new poll demonstrates that California voters are overwhelmingly in favor of him signing the bill,” said Daniel Colson, Executive Director of AIPI. “And while some elite thinkers say concerns about AI’s risks are overblown, his constituents disagree. It’s abundantly clear what path he should take if the governor wants to stand on the side of the people who elected him.”


Key Polling Data:

  • When informed of Newsom’s quote expressing concern about stifling innovation (“That’s challenging now in this space, particularly with SB 1047, because of the sort of outsized impact that legislation could have and the chilling effect, particularly in the open source community, that legislation could have”) and another quote in which he spoke about the importance of addressing safety concerns (“At the same time, you feel a deep sense of responsibility to address some of those more extreme concerns that I think many of us have, even the biggest and strongest promoters of this technology have”), 56% of voters think Newsom should prioritize addressing safety concerns, while 19% choose avoiding stifling innovation. 
  • 67% of voters believe we should create safety legislation to prevent catastrophic risks now, before models get too powerful, while just 17% think focusing on catastrophic risks is too speculative and legislation shouldn’t address them yet. 
  • 56% of voters agree with those who argue SB 1047 is a strong first step for AI safety, and 19% side with those who worry SB 1047 could lock in bad policy for AI innovation. 
  • On the political impact of Newsom potentially vetoing SB 1047, 33% say a veto would make them less likely to vote for him in a future presidential primary election, while 11% say it would make them more likely to vote for him. 
  • If Governor Newsom were to veto SB 1047, 47% say Newsom would be responsible (18% fully responsible, 17% mostly responsible, 29% partially responsible) for an AI-enabled catastrophe, and just 18% say he would not be responsible (12% minimally responsible, 6% not responsible at all). 


About the Poll 

The poll surveyed 1,015 voters in California on Sept. 23 and 24, 2024, through online web panels. The margin of error is ±4.3 percentage points. The poll was conducted in English and weighted for education, gender, race, survey engagement, and 2020 election results.

See full toplines and crosstabs here.

About the Artificial Intelligence Policy Institute

The Artificial Intelligence Policy Institute is an AI policy and research think tank founded by Daniel Colson to advocate for ethical oversight of AI that mitigates its potential catastrophic risks. AIPI’s core premise is that the general public has valid concerns about the future of AI and is looking for regulation and guidance from their elected representatives. If politicians want to represent the American people effectively, they must act aggressively to regulate AI’s next phases of development. AIPI seeks to work with the media and policymakers to inform them of the risks of artificial intelligence and to develop policies that mitigate risks from emerging technology while still benefiting from it.

While much of the public discussion has been oriented around AI’s potential to take away jobs, AIPI will focus on centering the importance of policies designed to prevent catastrophic outcomes and mitigate the risk of extinction. Currently, most of the AI space comprises those with a vested interest in advancing AI, or academics. The AI space lacks an organization that both gauges and shapes public opinion on the issues and recommends legislation on the matter, and AIPI will fill that role. 

Ultimately, policymakers are political actors, so the country needs an institution that can speak the language of public opinion. AIPI’s mission is to channel how Americans feel about artificial intelligence and pressure lawmakers to take action.

AIPI will build relationships with lawmakers by using polling to establish itself as a subject matter expert on AI safety and policy issues. Politicians are incentivized to support AI slowdown policies because a strong majority of voters supports slowdown and regulation. But AI is currently less salient as a political issue than other topics, since the impacts of emerging AI tech have so far been moderate. 

AI technological advancement is and will continue to be a fast-evolving situation, and politicians, the media, and everyday Americans need real-time information to make sense of it. AIPI’s polling will show where people stand on new developments and provide crucial recommendations for policymakers.