Overwhelming Majority of Voters Believe Tech Companies Should Be Liable for Harm Caused by AI Models, Favor Reducing AI Proliferation and a Law Requiring Political Ads to Disclose Use of AI
- 73% of voters believe AI companies should be held liable for harms from technology they create, compared to just 11% who believe they should not
- 77% of voters support a law requiring political ads to disclose their use of AI, including 64% who support it strongly; just 10% oppose such a law
- 64% of voters support the government creating an organization tasked with auditing AI, while just 14% oppose it
- By a more than 2:1 margin, voters prefer focusing on addressing powerful unknown threats (46%) over weaker known near-term threats (22%)
- 65% of voters prioritize keeping AI out of the hands of bad actors, compared to 22% who prioritize providing the benefits of AI to everyone
- 73% of voters want to restrict Chinese companies’ access to cloud compute from US companies while just 9% believe the US should continue to allow them to access it
“American voters are saying loud and clear that they don’t want to see AI fall into the wrong hands and expect tech companies to be responsible for what their products create,” said Daniel Colson, Executive Director of the Artificial Intelligence Policy Institute. “Voters are concerned about AI advancement—but not about the U.S. falling behind China; they are concerned about how powerful it can become, how quickly it can do so, and how many people have access to it.”
This poll is the third set of data released by the Artificial Intelligence Policy Institute, a think tank dedicated to providing public opinion research and policy expertise on AI regulation. AIPI will continue to conduct frequent surveys to demonstrate where Americans stand on Artificial Intelligence issues, and provide analysis on existing proposals coming out of Washington, D.C. With AI evolving every day and Americans wary about its rapid advancement, it is critical for policymakers to meet this moment with urgency and consideration of their constituents.
AIPI was founded to serve as a go-to resource for gauging American public opinion as it relates to AI and provide policymakers and the media with a fuller understanding of the risks posed by artificial intelligence—and solutions for reining in the worst-case scenarios. By measuring public opinion, AIPI will show policymakers that being active in AI policy is not only the right path forward on the merits, but also that action is necessary to better represent their constituents.
With years of experience in the field, AIPI has a deep understanding of AI’s dangers and opportunities. The threat to humankind is alarming, and policymakers must understand not only the economic consequences but also the potential for the next phases of AI development to bring the risk of catastrophic events. AIPI proposes a way forward where lawmakers and influencers in Washington can be informed by a common, nonpartisan set of resources so they can speak collaboratively, productively, and urgently about the need to control AI development and regulate it to mitigate the most dire risks. AIPI will also advocate for a broad range of policies that impose guardrails and oversight on AI and the development of superintelligence.
About the Poll
From September 2 to 6, 2023, YouGov conducted a survey of 1,118 voters nationally using online respondents. The survey was conducted in English. The margin of error is ±3.2 percentage points.
See full toplines here and crosstabs here.
About The Artificial Intelligence Policy Institute
The Artificial Intelligence Policy Institute is an AI policy and research think tank founded by Daniel Colson to advocate for ethical oversight of AI and to mitigate potential catastrophic risks posed by the technology. AIPI’s core mission is to demonstrate that the general public has valid concerns about the future of AI and is looking for regulation and guidance from their elected representatives. If politicians want to represent the American people effectively, they must act aggressively to regulate AI’s next phases of development. AIPI seeks to work with the media and policymakers to inform them of the risks of artificial intelligence and develop policies that mitigate risks from emerging technology while still benefiting from artificial intelligence.
While much of the public discussion has been oriented around AI’s potential to take away jobs, AIPI will focus on centering the importance of policies designed to prevent catastrophic outcomes and mitigate the risk of extinction. Currently, the AI space comprises mostly those who either have a vested interest in advancing AI or are academics. The field lacks an organization that can both gauge and shape public opinion on these issues and recommend legislation on the matter, and AIPI will fill that role.
Ultimately, policymakers are political actors, so the country needs an institution that can speak the language of public sentiment. AIPI’s mission is to channel how Americans feel about artificial intelligence and press lawmakers to take action.
AIPI will build relationships with lawmakers by using polling to establish itself as a subject matter expert on AI safety and policy issues. Politicians are incentivized to support AI slowdown policies because a strong majority of voters supports slowdown and regulation. But AI remains less salient as a political issue than other topics, since emerging AI technology has so far had only moderate impacts.
AI technological advancement is and will continue to be an evolving situation, and politicians, the media, and everyday Americans need real-time information to make sense of it all. AIPI’s polling will show where people stand on new developments and provide crucial policy recommendations for policymakers.
About Daniel Colson
Daniel Colson is the co-founder and executive director of the AI Policy Institute. AIPI is a think tank that researches and advocates for government policies to mitigate extreme risks from frontier artificial intelligence technologies. Daniel’s research focuses on how AI weapons systems will impact military strategy and global political stability. Prior to AIPI, Daniel co-founded Reserve, a company focused on offering financial services in high-inflation currency regions lacking basic financial infrastructure, and the personal assistant company CampusPA.
Daniel has spent the other half of his career as a researcher working at the intersection of theoretical sociology, history, military theory, and catastrophic risk mitigation. He helped Samo Burja write his seminal manuscript on sociology, Great Founder Theory, and has several forthcoming articles on the military-strategic implications of Palantir’s AIP targeting and command product, which has been operational in Ukraine for the past year.
Learn more at https://www.theaipi.org/ or follow Daniel on Twitter at @DanielColson6 and Threads at Daniel.J.Colson.