Voters Favor Caution with Dual-Use AI, Say Potential Benefits Do Not Make Up For Serious Risks, Poll Shows

Poll Also Shows That Voters Who Think AI is Powerful Want More Restrictions 

AIPI Also Tests Voters’ Reactions to Fake AI Audio — and Whether Respondents Believe Deep Fakes, Even When Told They Aren’t Real

American voters believe the potential benefits of dual-use AI models do not outweigh their risks, a new poll from The Artificial Intelligence Policy Institute finds. The survey found that the majority of voters are concerned about AI’s potential dual-use applications in medical research, cybersecurity, generative art, writing, filmmaking, goods manufacturing, and energy production after being presented with the benefits and potential risks of each.

Overall, 69% of voters support and just 6% oppose holding model developers accountable for preventing their AI from causing harm to innocent bystanders, being misused, or having its code leaked or stolen. 63% support and 10% oppose requiring artificial intelligence labs to have strict cybersecurity measures, have plans to contain dangerous models, share model capability predictions with the government, and have outside experts check their AI systems for dangerous behaviors.

When presented with specific cases of dual-use applications, voters support a cautious, regulated approach. They do not see the benefits as outweighing the potential harms and are against the development of dual-use models when regulation is not presented as an option. 

  • For dual-use AI applications in medical research, voters were presented with a case where an AI can do research so humans can live longer and healthier lives but also do research to create more powerful biological weapons and viruses. 60% of voters say they prefer allowing the technology to be developed, but in a slow, careful, and regulated manner. 33% would instead never develop the technology, and 7% would allow private companies to develop the technology freely.
  • Similarly, 58% prefer a slow, careful, and regulated approach to an AI model with the dual capability of protecting computers from cyber attacks and launching cyber attacks itself. 34% prefer that such a model never be developed, and 8% prefer allowing private companies to freely develop the technology.
  • For an AI model that is able to generate beautiful works of art but has the capability to produce child pornography, 63% prefer that the model never be developed. 32% prefer a regulated approach, and 5% prefer private companies to freely develop the technology.
  • Similarly, 51% would prefer that an AI model with the dual-use ability to produce elegant writing but also send scam emails never be developed. 42% support a slow, regulated approach to such a model, and 7% prefer allowing private companies to freely develop the technology.
  • When presented with a model with the dual-use capability of being able to produce riveting movies but also able to produce fake videos smearing public figures, 53% of voters prefer that technology to never be developed. 40% support the regulated approach, and 7% prefer allowing private companies to freely develop the technology.

“As AI models become increasingly advanced, their development for specific fields will always be a double-edged sword,” said Daniel Colson, the Executive Director of the AI Policy Institute. “We have the option to go down the path of mitigating the risks before they happen or to take a hands-off, wait-and-see approach, and it could not be more clear which one the American public wants to take. Like nuclear technology, if we fail to recognize the potentially catastrophic risks powerful AI models can present, we will not be able to leverage their transformative power for good.”

Key Finding: Preferences Of Those Who See AI as Transformative vs. Just Another Technology

Another key finding from the poll shows that voters who think AI is powerful consistently want more restrictions on the technology than those who think it is weak and will cause only modest changes in society. For example, voters were asked which goal is more important when developing artificial intelligence technology: keeping society safe from the potential harms of artificial intelligence by taking a slower, controlled approach, or giving everyone the freedom to access and benefit from this technology as soon as possible.

Those who see AI as a powerful technology support a slow, controlled approach at 91%, whereas those who see AI as weak support it at just 78%. Further, when presented with the dual-use capabilities of AI models, those who see AI as weak are consistently more likely to support allowing private companies to develop the technology freely.

In all, 59% of voters said AI is a uniquely powerful technology, unlike others that came before it, and will dramatically change society, while 19% said it is a technology like many others that came before it and will produce modest changes in society. 22% are unsure.

“When people are convinced about AI’s potential transformative impact, they overwhelmingly support safeguards and political action,” said Colson. “Our polling shows that voters are likely to become more and more concerned as AI becomes more powerful.” 

Do You Believe Me or Your Lying Ears? Voter Perception of Scandals in the Age of AI 

The poll also asked about scenarios in which content comes out showing someone famous doing something wrong. In each case, the person depicted denies that the material is real. For each scenario, respondents were asked whether they would be more likely to believe the material presented was real or was produced by AI.

  • If audio comes out with Joe Biden’s voice supposedly accepting a bribe, 65% of voters would believe the content is produced by AI, and 35% would believe it is real.
  • If audio comes out with Donald Trump’s voice supposedly accepting a bribe, 58% of voters would believe the content is produced by AI, and 42% would believe it is real.
  • If audio comes out with the voice of a little-known moderate politician supposedly accepting a bribe, 62% of voters would believe the content is produced by AI, and 38% would believe it is real.
  • If a video comes out showing Taylor Swift calling her fans “gullible idiots,” 76% of voters would believe the content is produced by AI, and 24% would believe it is real.

About the Poll 

The poll surveyed 1,114 respondents on March 25 and March 26. The poll was conducted in English, and its margin of error is ±5.1 percentage points.

See full toplines here and crosstabs here.

About The Artificial Intelligence Policy Institute
The Artificial Intelligence Policy Institute is an AI policy and research think tank founded by Daniel Colson to advocate for ethical oversight of AI and to mitigate the potential catastrophic risks it poses. AIPI’s core mission is to demonstrate that the general public has valid concerns about the future of AI and is looking for regulation and guidance from their elected representatives. If politicians want to represent the American people effectively, they must act aggressively to regulate AI’s next phases of development. AIPI seeks to work with the media and policymakers to inform them of the risks of artificial intelligence and to develop policies that mitigate risks from emerging technology while preserving the benefits of artificial intelligence.

While much of the public discussion has been oriented around AI’s potential to take away jobs, AIPI will focus on the importance of policies designed to prevent catastrophic outcomes and mitigate the risk of extinction. Currently, the AI space is made up largely of those with a vested interest in advancing AI and of academics.

The AI space lacks an organization that can both gauge and shape public opinion on these issues and recommend legislation on the matter, and AIPI will fill that role.

Ultimately, policymakers are political actors, so the country needs an institution that can speak the language of public opinion. AIPI’s mission is to channel how Americans feel about artificial intelligence and to pressure lawmakers to take action.

AIPI will build relationships with lawmakers by using polling to establish itself as a subject matter expert on AI safety and policy issues. Politicians are incentivized to support AI slowdown policies because a strong majority of voters supports slowdown and regulation. But AI is currently less salient as a political issue than other topics, since the impacts of emerging AI technology have so far been only moderate.

AI development is, and will continue to be, an evolving situation, and politicians, the media, and everyday Americans need real-time information to make sense of it all. AIPI’s polling will show where people stand on new developments and provide policymakers with crucial policy recommendations.

Learn more at https://www.theaipi.org/ or follow Daniel on Twitter at @DanielColson6