Poll: Voters Support Bringing EU-Style AI Regulations to the US, Prioritizing Safety Over Speed in Research

A new poll conducted by the Artificial Intelligence Policy Institute (AIPI) shows that the American public supports the passage of the European Union’s AI Act by nearly a 4:1 margin, and that 64% support similar regulation in the United States.

The survey showed strong public support for a slowdown of AI research and skepticism of tech companies; respondents decisively back federal regulation that curbs rapid AI research and development by private companies. By a 2:1 margin, respondents agree that it is the role of the government to make sure companies don’t go too fast when developing AI models. 75% say the government should restrict what private companies can do when training AI models. 

AIPI also surveyed public opinion on risky research initiatives across AI development and dangerous virus research, a topic that is particularly relevant as scientists and the federal government look to revise guidelines on potential pandemic pathogens. 83% of the public agrees that the federal government should implement renewed oversight protocols for research experiments using dangerous viruses. When asked about the role of AI in such research, 68% say we should be concerned that bad actors could use AI to create biological weapons.

“Americans understand that there are very real threats inherent in AI, and want policymakers to act accordingly—by throwing up yield and stop signs through public policy interventions—rather than giving private actors the green light to go full speed ahead with unfettered technological innovation,” said Daniel Colson, Executive Director of the Artificial Intelligence Policy Institute. “It’s abundantly clear that the American public isn’t convinced by what the tech companies have been selling, and that they much prefer a slower, more controlled approach to AI than one that entails high levels of risk.”


The poll of 1,222 U.S. adults, fielded on Dec. 13, found:

  • 48% of voters support the passage of the European Union’s AI Act after learning about the tiered approach to AI applications and the testing requirements on “foundational” models that the regulation would impose. Just 13% say the EU should not pass the AI Act. 
  • 64% say the US needs similar regulation to impose testing requirements for powerful “foundational” models, prioritizing safety over speed. Additionally, 73% agree that because the United States is a leader in the technology sector, it should also be a leader in setting the rules for AI. 
  • 53% say Stability AI, the company behind the image generation model Stable Diffusion, should be held liable for its model being used to generate fake non-consensual pornographic images of real people, while 26% say that only the individuals producing the images should be held responsible. 
  • 64% support the government creating an emergency response capacity to shut down the most risky AI research if it is deemed necessary; just 16% oppose doing so. 
  • 68% of respondents are concerned that bad actors could use AI in such research to create bioweapons. 67% support requiring testing and evaluation of all AI models before they are released, to make sure they cannot be used to create biological weapons.
  • More generally, 83% agree that the federal government should make sure research experiments using dangerous viruses are conducted safely by requiring the scientists who conduct them to adhere to certain oversight protocols. 81% agree that entities that fund scientific research should be prevented from funding experiments that make viruses more dangerous.
  • On the recent scandal surrounding Sports Illustrated’s use of AI-generated articles and reporter profiles, 84% of respondents say the practice is unethical, and 80% say it should be illegal. 65% support policy requiring companies to disclose and watermark content created by AI, with 46% strongly supporting it. 


About the Poll 

The poll was fielded on Dec. 13 and surveyed 1,222 voters nationally. The survey was conducted in English. Its margin of error is ±4.2 percentage points.

See full toplines here and crosstabs here.


About the Artificial Intelligence Policy Institute

The Artificial Intelligence Policy Institute is an AI policy and research think tank founded by Daniel Colson to advocate for ethical oversight of AI and the mitigation of its potential catastrophic risks. AIPI’s core mission is to demonstrate that the general public has valid concerns about the future of AI and is looking to its elected representatives for regulation and guidance. If politicians want to represent the American people effectively, they must act aggressively to regulate AI’s next phases of development. AIPI seeks to work with the media and policymakers to inform them about the risks of artificial intelligence and to develop policies that mitigate those risks while still capturing the technology’s benefits.

While much of the public discussion has centered on AI’s potential to take away jobs, AIPI will focus on policies designed to prevent catastrophic outcomes and mitigate the risk of extinction. Currently, the AI space comprises mostly those with a vested interest in advancing AI, along with academics. The field lacks an organization that can both gauge and shape public opinion on these issues, as well as recommend legislation, and AIPI will fill that role. 

Ultimately, policymakers are political actors, so the country needs an institution that can speak the language of public opinion. AIPI’s mission is to channel how Americans feel about artificial intelligence and to pressure lawmakers to take action.

AIPI will build relationships with lawmakers by using polling to establish itself as a subject matter expert on AI safety and policy issues. Politicians are incentivized to support AI slowdown policies because a strong majority of voters supports slowdown and regulation. But AI is currently less salient as a political issue than other topics, since emerging AI technology has so far had only moderate impacts. 

AI will continue to advance rapidly, and politicians, the media, and everyday Americans need real-time information to make sense of it. AIPI’s polling will show where people stand on new developments and provide policymakers with crucial policy recommendations.