New Poll Finds Preventing Catastrophic Outcomes Is the Top AI Policy Objective for Americans; Majorities Support Regulation of Deepfakes and a Ban on AI-Written News Articles

NEW YORK—A new poll from the Artificial Intelligence Policy Institute (AIPI) reveals that preventing catastrophic outcomes, safety requirements, and holding companies liable for harm are the top policy priorities for the American people. The AIPI-commissioned survey also found that Americans are concerned about AI destroying jobs, and support regulations that will help prevent AI from wreaking havoc on the job market and political sphere.

Among the results: 

  • Americans’ top priority is preventing dangerous and catastrophic outcomes from AI, which respondents selected as more important than other AI policy priorities 77% of the time. Requiring mandatory safety audits was the second most popular policy, selected 65% of the time. Making sure AI companies are liable for harms they cause was the third most popular priority at 60%.
  • 60% of US adults support banning any news outlet from using AI to produce articles, while 20% oppose such a ban.
  • 79% believe the government should create rules around the use of deepfakes, while 21% believe companies should be allowed to self-regulate; 92% are concerned about AI creating ads of politicians saying or doing things they didn’t actually do or say, including 59% who are extremely concerned, while 87% say it is extremely, very, or somewhat important to regulate deepfakes.
  • 69% of American adults support an AI framework that would create an independent licensing and auditing organization for advanced AI models, increase liability on AI companies for harms done by their models, limit computer chip exports to China, and require disclosures of the use of AI, compared to just 18% who oppose it.
  • 49% think AI will destroy jobs, while just 32% believe it will increase economic growth and create new ones.

“The AI revolution is here—and Americans are afraid of what is coming,” said Daniel Colson, Executive Director of the Artificial Intelligence Policy Institute. “Tech executives may be excited about what advancements in AI can bring, but the latest numbers show the American public sees AI as a potentially catastrophically dangerous technology that we need regulation to rein in. On a multitude of AI-related topics, Americans have risks front of mind, not potential benefits: job losses, misinformation-filled robo-journalism, getting duped by deepfakes, and Terminator-style outcomes.”

The findings come from the third poll released by the Artificial Intelligence Policy Institute, a think tank dedicated to providing public opinion research and policy expertise on AI regulation. AIPI will continue to conduct frequent surveys to demonstrate where Americans stand on Artificial Intelligence issues, and provide analysis on existing proposals coming out of Washington, D.C. With AI evolving every day and Americans wary about its rapid advancement, it is critical for policymakers to meet this moment with urgency and consideration of their constituents. 

AIPI was founded to serve as a go-to resource for gauging American public opinion as it relates to AI and provide policymakers and the media with a fuller understanding of the risks posed by artificial intelligence—and solutions for reining in the worst-case scenarios. By measuring public opinion, AIPI will show policymakers that being active in AI policy is not only the right path forward on the merits, but also that action is necessary to better represent their constituents.

With years of experience in the field, AIPI has a deep understanding of AI’s dangers and opportunities. The threat to humankind is alarming, and policymakers must understand not only the economic consequences but also the potential for the next phases of AI development to bring the risk of catastrophic events. AIPI proposes a way forward where lawmakers and influencers in Washington can be informed by a common, nonpartisan set of resources so they can speak collaboratively, productively, and urgently about the need to control AI development and regulate it to mitigate the most dire risks. AIPI will also advocate for a broad range of policies that impose guardrails and oversight on AI and the development of superintelligence.

About the Poll 

The poll, conducted by AIPI using the Lucid platform, surveyed 1,050 people nationally with online respondents from September 29 to October 5, 2023. The survey was conducted in English. The margin of error is ±4.7 percentage points.

See full toplines here and crosstabs here.


About The Artificial Intelligence Policy Institute

The Artificial Intelligence Policy Institute is an AI policy and research think tank founded by Daniel Colson to advocate for ethical oversight of AI and to mitigate its potential catastrophic risks. AIPI’s core mission is to demonstrate that the general public has valid concerns about the future of AI and is looking for regulation and guidance from their elected representatives. If politicians want to represent the American people effectively, they must act aggressively to regulate AI’s next phases of development. AIPI seeks to work with the media and policymakers to inform them of the risks of artificial intelligence and develop policies that mitigate risks from emerging technology while still benefiting from artificial intelligence.

While much of the public discussion has been oriented around AI’s potential to take away jobs, AIPI will focus on the importance of policies designed to prevent catastrophic outcomes and mitigate the risk of extinction. Currently, most of the AI space comprises those who have a vested interest in advancing AI or who are academics. The AI space lacks an organization to both gauge and shape public opinion on the issues, as well as to recommend legislation on the matter, and AIPI will fill that role.

Ultimately, policymakers are political actors, so the country needs an institution that can speak the language of public opinion. AIPI’s mission is to channel how Americans feel about artificial intelligence and pressure lawmakers to take action.

AIPI will build relationships with lawmakers by using polling to establish AIPI as a subject matter expert on AI safety and policy issues. Politicians are incentivized to support AI slowdown policies because a strong majority of voters support slowdown and regulation. But AI is currently less salient as a political issue than other topics, as emerging AI tech has so far had only moderate impacts.

AI is and will continue to be a rapidly evolving technology, and politicians, the media, and everyday Americans need real-time information to make sense of it all. AIPI’s polling will show where people stand on new developments and provide policymakers with crucial policy recommendations.

About Daniel Colson

Daniel Colson is the co-founder and executive director of the AI Policy Institute. AIPI is a think tank that researches and advocates for government policies to mitigate extreme risks from frontier artificial intelligence technologies. Daniel’s research focuses on how AI weapons systems will impact military strategy and global political stability. Prior to AIPI, Daniel co-founded Reserve, a company focused on offering financial services in high-inflation currency regions lacking basic financial infrastructure, and the personal assistant company CampusPA. 

Daniel has spent the other half of his career as a researcher working at the intersection of theoretical sociology, history, military theory, and catastrophic risk mitigation. He helped Samo Burja write his seminal manuscript on sociology, Great Founder Theory, and has several forthcoming articles on the military strategic implications of Palantir’s AIP targeting and command product, which has been operational in Ukraine for the past year.

Follow Daniel on Twitter at @DanielColson6 and Threads at Daniel.J.Colson.


MaxDiff is a survey technique in which participants are shown a set of two items (in this case, AI policy goals) and asked to select the one they prefer. This is repeated with random pairings of items, allowing pollsters to gauge the relative importance or preference of each item. By analyzing the responses, we can determine the percentage of times an item was chosen as the most preferred when it was presented as an option.
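As an illustration, the win-rate scoring described above can be sketched in a few lines of Python. The response records and policy-goal labels below are hypothetical stand-ins, not AIPI's actual data or question wording:

```python
from collections import Counter

# Hypothetical pairwise-choice records: (items shown, item picked as preferred).
responses = [
    (("prevent catastrophes", "safety audits"), "prevent catastrophes"),
    (("prevent catastrophes", "company liability"), "prevent catastrophes"),
    (("safety audits", "company liability"), "safety audits"),
    (("prevent catastrophes", "safety audits"), "safety audits"),
]

shown = Counter()   # how many times each item appeared in a pairing
chosen = Counter()  # how many times each item was picked as preferred

for pair, pick in responses:
    for item in pair:
        shown[item] += 1
    chosen[pick] += 1

# Win rate: the share of appearances in which the item was preferred --
# the "percentage of times an item was chosen" figure reported above.
win_rates = {item: chosen[item] / shown[item] for item in shown}
```

In this sketch, "prevent catastrophes" appears in three pairings and wins two, giving a win rate of about 67%; an item never chosen scores 0. Real MaxDiff analyses often use larger item sets and model-based scoring, but the win-rate intuition is the same.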