Poll Finds Majority of Californians Back Safeguards on AI Usage, Are Concerned, Not Excited, About AI Growth
A new poll from Encode Justice and the Artificial Intelligence Policy Institute (AIPI) shows that the vast majority of Californians across the political spectrum support multiple provisions of the Safety in Artificial Intelligence Act (SB 294), a state-level framework that includes regulations to ensure that AI models are developed responsibly. The survey also reveals that about three times as many Californians are concerned about AI growth as are excited about it, and only a small minority of the state’s residents have faith in the tech industry regulating itself.
Among the results:
- 74% of Californians—including 76% of Democrats, 76% of Republicans and 68% of independents—believe that AI companies should be required to test and certify powerful AI models while just 7% think they shouldn’t be required to do so
- 68% of Californians believe that California should be a leader in setting rules for AI because it’s a leader in technology, while 32% believe regulation of AI would cause tech companies to leave the state, destroying jobs, and that California should lean toward maintaining a business-friendly environment
- 63% of residents of the state believe AI companies should be held liable for harm their technologies create, while just 14% believe they should not be
- 63% of California adults are somewhat or mostly concerned about growth in artificial intelligence, while 22% are excited about it
- 60% of Californians say that tech executives cannot be trusted to self-regulate the AI industry, while just 12% say they can be trusted to do so
- Preventing catastrophic outcomes from AI, including biological terrorism and cyberattacks, is a top priority for 71% of Californians; two other top priorities were preventing AI from causing human extinction (61%) and requiring audits to make sure AI is safe before release (60%)
“These poll results reveal a sharp consensus: when it comes to AI’s risks, Californians know that industry self-regulation can no longer be the law of the land,” said Sneha Revanur, Founder and President of Encode Justice.
“Given its long history of being ahead of the curve on regulation, California must lead the nation on common sense AI safety measures,” added Sunny Gandhi, Encode Justice’s Vice President of Policy and Strategy.
“The data is crystal clear—Californians of all political leanings support policies that enact guardrails on AI development and use,” said Daniel Colson, Executive Director of the Artificial Intelligence Policy Institute. “Proposals that have this level of bipartisan support do not grow on trees. The California legislature has a responsibility to Californians, and to the world, to be a leader on AI policy and make sure experiments being conducted in the state don’t backfire and cause major harm.”
About the Poll
The findings come from the fourth poll released by the Artificial Intelligence Policy Institute, a think tank dedicated to providing public opinion research and policy expertise on AI regulation. The poll, conducted by AIPI and Encode Justice, was taken on Oct. 25 and Oct. 26, 2023. It surveyed 1,105 people in California using web panel respondents. The poll was conducted in English, was weighted to the demographics of the 2020 general election, and its margin of error is ±4.4 percentage points.
About The Artificial Intelligence Policy Institute
The Artificial Intelligence Policy Institute is an AI policy and research think tank founded by Daniel Colson to advocate for ethical oversight of AI and to mitigate the potential catastrophic risks it poses. AIPI’s core mission is to demonstrate that the general public has valid concerns about the future of AI and is looking for regulation and guidance from their elected representatives. If politicians want to represent the American people effectively, they must act aggressively to regulate AI’s next phases of development. AIPI works with the media and policymakers to inform them of the risks of artificial intelligence and to develop policies that mitigate risks from emerging technology while still benefiting from artificial intelligence.
While much of the public discussion has centered on AI’s potential to take away jobs, AIPI focuses on policies designed to prevent catastrophic outcomes and mitigate the risk of extinction. Currently, the AI space comprises mostly those with a vested interest in advancing AI, along with academics. The field lacks an organization that can both gauge and shape public opinion on these issues and recommend legislation on the matter; AIPI will fill that role.
Ultimately, policymakers are political actors, so the country needs an institution that can speak the language of public opinion sentiment. AIPI’s mission is about being able to channel how Americans feel about artificial intelligence and pressure lawmakers to take action.
AI technological advancement is and will continue to be an evolving situation, and politicians, the media, and everyday Americans need real-time information to make sense of it all. AIPI polling shows where people stand on new developments and provides crucial policy recommendations for policymakers.
About Encode Justice
Encode Justice is the world’s first and largest youth movement working to reimagine the future of artificial intelligence through policy, advocacy, and public awareness efforts. Founded by then-15-year-old Sneha Revanur in 2020, the organization is now powered by nearly 900 high school and college students from every inhabited continent.
In the face of what could be one of the most significant and potentially catastrophic threats to our generation’s shared future, we are calling on governments around the world to institute rules of the road that address the full spectrum of AI risks, from near-term challenges like algorithmic bias and disinformation to possibly existential harms associated with loss of human control over increasingly powerful AI systems.
We are also working to encourage broad public participation in conversations about AI; through intergenerational partnership and meaningful democratic control, we believe we can steer the future of this transformative technology in a direction that uplifts all of humanity.
About Sneha Revanur
Sneha Revanur is the founder and president of Encode Justice, the leading youth movement for human-centered artificial intelligence. Born and raised in the heart of Silicon Valley, she is also a second-year at Williams College studying political science and economics and a former fellow at the Center for AI and Digital Policy.
Sneha’s work has been covered in CNN, the Washington Post, The Guardian, POLITICO, CNBC, Reuters, MIT Technology Review, Teen Vogue, Wired, The Hill, and more. Sneha was most recently the youngest individual named to TIME’s list of the 100 most influential voices in AI; the youngest participant invited to a private White House roundtable on AI with Vice President Harris; and the youngest member of Mozilla’s list of the 25 rising stars shaping our digital future.
Learn more at encodejustice.org or follow Sneha on Twitter at @sneharevanur and Instagram at @sneha.revanur.
About Daniel Colson
Daniel Colson is the founder and executive director of the AI Policy Institute. AIPI is a think tank that researches and advocates for government policies to mitigate extreme risks from frontier artificial intelligence technologies. Daniel’s research focuses on how AI weapons systems will impact military strategy and global political stability. Prior to AIPI, Daniel co-founded Reserve, a company focused on offering financial services in high-inflation currency regions lacking basic financial infrastructure, and the personal assistant company CampusPA.
Daniel has spent the other half of his career as a researcher working at the intersection of theoretical sociology, history, military theory, and catastrophic risk mitigation. He helped Samo Burja write his seminal manuscript on sociology, Great Founder Theory, and has several forthcoming articles on the military strategic implications of Palantir’s AIP targeting and command product, which has been operational in Ukraine for the past year.
MaxDiff is a survey technique in which participants are shown small sets of items (in this case, AI policy goals) and asked to select the one they find most preferable and the one they find least preferable. This is repeated with random combinations of items, allowing pollsters to gauge the relative importance of each item. By analyzing the responses, we can determine the percentage of times an item was chosen as the most preferred when it was presented as an option.
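The count-based scoring described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the poll’s actual instrument or data: the item names, preference weights, and set sizes are invented, and respondents are simulated with a simple noisy-utility model so the script is self-contained.

```python
import random
from collections import defaultdict

# Hypothetical items standing in for the poll's AI policy goals (invented labels).
ITEMS = ["prevent catastrophe", "prevent extinction", "require audits",
         "protect jobs", "limit surveillance"]

# Hidden preference weights used only to simulate respondents; not real poll data.
WEIGHTS = {"prevent catastrophe": 5, "prevent extinction": 4,
           "require audits": 3, "protect jobs": 2, "limit surveillance": 1}

def simulate_task(shown_items, rng):
    """One MaxDiff task: from the shown subset, pick the 'most' and 'least'
    preferred item, using the item's weight plus random noise as its utility."""
    best = max(shown_items, key=lambda i: WEIGHTS[i] + rng.gauss(0, 1))
    worst = min(shown_items, key=lambda i: WEIGHTS[i] + rng.gauss(0, 1))
    return best, worst

def maxdiff_scores(n_respondents=500, tasks_per_respondent=8, set_size=4, seed=0):
    """Return each item's count-based score: the share of its appearances
    in which it was chosen as most preferred."""
    rng = random.Random(seed)
    appearances = defaultdict(int)  # times each item was shown in a set
    chosen_best = defaultdict(int)  # times each item was picked as most preferred
    for _ in range(n_respondents):
        for _ in range(tasks_per_respondent):
            subset = rng.sample(ITEMS, set_size)  # random combination of items
            best, _worst = simulate_task(subset, rng)
            for item in subset:
                appearances[item] += 1
            chosen_best[best] += 1
    return {item: chosen_best[item] / appearances[item] for item in ITEMS}

scores = maxdiff_scores()
```

Because each score is normalized by how often the item was shown, items can be compared even though no respondent ever sees the full list at once; with enough simulated tasks, the ranking of scores recovers the hidden preference ordering.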