Poll Shows Overwhelming Concern About Risks From AI as New Institute Launches to Understand Public Opinion and Advocate for Responsible AI Policies

NEW YORK — A majority of voters don’t trust tech executives to self-regulate their use of AI, 86% of voters believe AI could accidentally cause a catastrophic event, and 72% want to slow down AI development and use, a new survey shows.

Americans’ views on AI policy measures to be released on Friday, August 11.

A new poll from the Artificial Intelligence Policy Institute (AIPI) shows that the vast majority of voters of all political affiliations are concerned about the risks from artificial intelligence and support federal regulation of it. The survey, commissioned by AIPI and conducted by YouGov, demonstrates that policymakers need to legislate on AI risks if they don’t want to fall behind the curve or appear out of touch with Americans.

Among the findings:

  • 72% of voters prefer slowing down the development of AI compared to just 8% who prefer speeding development up
  • 62% of voters are primarily concerned about artificial intelligence while just 21% are primarily excited about it
  • 86% of voters believe AI could accidentally cause a catastrophic event, and 70% agree that mitigating the risk of extinction from AI should be a global priority alongside other risks like pandemics and nuclear war
  • 82% of voters don’t trust tech executives to regulate AI, while voters support a federal agency regulating AI by a more than 3:1 margin, including 2:1 among Republicans
  • 76% of voters believe artificial intelligence could eventually pose a threat to the existence of the human race, including 75% of Democrats and 78% of Republicans

“The data is clear—Americans are wary about the next stages of AI and want policymakers to step in to develop it responsibly,” said Daniel Colson, Executive Director of the Artificial Intelligence Policy Institute. “At a time when nearly every issue is polarized, there’s a broad consensus among Americans that policymakers need to decide what path AI development should take. In the coming years, AI will become increasingly pervasive, transforming various aspects of our daily lives. As a result, the country is in need of an organization to provide the general public and policymakers with polling, information, and research that will play a key part in regulating the technology.”

This poll marks the first of many from the Artificial Intelligence Policy Institute, a think tank dedicated to providing public opinion research and policy expertise on AI regulation. AIPI will conduct frequent surveys to show where Americans stand on artificial intelligence issues, analyze existing proposals coming out of Washington, D.C., including those from Senators Chuck Schumer and Richard Blumenthal and Representatives Ted Lieu and Ken Buck, and offer policymakers policy proposals that reflect and address the public’s concerns. With AI evolving every day and Americans wary of its rapid advancement, it is critical that policymakers meet this moment with urgency and consideration of their constituents.

“Powerful and potentially harmful AI development is not an inevitability; policymakers can make rules on which AI experiments are worth the risks. But unfortunately, not everyone is aware of the role the government has the ability to play,” said Daniel Colson, AIPI’s Executive Director. “Our political leaders, and we as a society more broadly, need to choose what risks we are willing to endure for the sake of the potential of technological progress. As we do not yet understand how the latest AI systems work or what their risks are, AIPI will advocate for a cautious approach, with policymakers setting clear rules of the road.”

AIPI was founded to serve as a go-to resource for gauging American public opinion on AI and to provide policymakers and the media with a fuller understanding of the risks posed by artificial intelligence, along with solutions for reining in the worst-case scenarios. By measuring public opinion, AIPI will show policymakers that being active in AI policy is not only the right path forward on the merits, but also necessary to better represent their constituents.

With years of experience in the field, AIPI has a deep understanding of AI’s dangers and opportunities. The threat to humankind is alarming, and policymakers must understand not only the economic consequences but also the potential for the next phases of AI development to bring the risk of catastrophic events. AIPI proposes a way forward where lawmakers and influencers in Washington can be informed by a common, nonpartisan set of resources so they can speak collaboratively, productively, and urgently about the need to control AI development and regulate it to mitigate the most dire risks. AIPI will also advocate for a broad range of policies that impose guardrails and oversight on AI and the development of superintelligence.

About the Poll

From July 18 to 21, 2023, YouGov conducted an online survey of 1,001 voters nationally. The survey was conducted in English. The margin of error is ±3.3 percentage points.
On Friday, August 11, AIPI will release additional findings from the survey, including how Americans react to various policy proposals for regulating AI.
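For context, the reported margin of error is in line with the standard formula for a proportion at the 95% confidence level. The sketch below assumes simple random sampling and maximum variance (p = 0.5); any difference from this baseline would reflect adjustments such as weighting, which the release does not specify.

MOE ≈ z × √(p(1 − p) / n) = 1.96 × √(0.5 × 0.5 / 1,001) ≈ ±3.1 percentage points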

For more findings on how voters view AI, including job automation, an AI slowdown, automated hacking, AI weapons systems, and other topics, see the full toplines and crosstabs here.

About the Artificial Intelligence Policy Institute

The Artificial Intelligence Policy Institute is an AI policy and research think tank founded by Daniel Colson to advocate for ethical oversight of AI aimed at mitigating its potential catastrophic risks. AIPI’s core mission is to demonstrate that the general public has valid concerns about the future of AI and is looking for regulation and guidance from its elected representatives. If politicians want to represent the American people effectively, they must act aggressively to regulate AI’s next phases of development. AIPI seeks to work with the media and policymakers to inform them of the risks of artificial intelligence and to develop policies that mitigate risks from emerging technology while preserving its benefits.

While much of the public discussion has been oriented around AI’s potential to take away jobs, AIPI will focus on policies designed to prevent catastrophic outcomes and mitigate the risk of extinction. Currently, the AI space is made up largely of those with a vested interest in advancing AI and of academics. It lacks an organization that can both gauge and shape public opinion on these issues and recommend legislation on the matter; AIPI will fill that role.

Ultimately, policymakers are political actors, so the country needs an institution that can speak the language of public opinion. AIPI’s mission is to channel how Americans feel about artificial intelligence and to pressure lawmakers to take action.

AIPI will build relationships with lawmakers by using polling to establish itself as a subject matter expert on AI safety and policy issues. Politicians have an incentive to support AI slowdown policies because a strong majority of voters support slowdown and regulation. But AI is currently less salient as a political issue than other topics because, so far, emerging AI technology has had only moderate impacts.

AI is and will continue to be a rapidly evolving technology, and politicians, the media, and everyday Americans need real-time information to make sense of it. AIPI’s polling will show where people stand on new developments and provide policymakers with crucial policy recommendations.

About Daniel Colson

Daniel Colson is the co-founder and executive director of the AI Policy Institute. AIPI is a think tank that researches and advocates for government policies to mitigate extreme risks from frontier artificial intelligence technologies. Daniel’s research focuses on how AI weapons systems will impact military strategy and global political stability. Prior to AIPI, Daniel co-founded Reserve, a company offering financial services in high-inflation regions that lack basic financial infrastructure, and the personal assistant company CampusPA.

Daniel has spent the other half of his career as a researcher working at the intersection of theoretical sociology, history, military theory, and catastrophic risk mitigation. He helped Samo Burja write his seminal manuscript on sociology, Great Founder Theory, and has several forthcoming articles on the military-strategic implications of Palantir’s AIP targeting and command product, which has been operational in Ukraine for the past year.

Learn more at https://www.theaipi.org/ or follow Daniel on Twitter at @DanielColson6 and Threads at Daniel.J.Colson.