New Poll Shows Overwhelming Concern About AI’s Use in Global Conflict, Support for International Safeguards, Potential U.S.-China Agreement
A new poll from the Artificial Intelligence Policy Institute (AIPI) surveyed public opinion on AI issues related to global conflict and misinformation and found overwhelming bipartisan support for international cooperation, treaties, and watchdogs to control AI’s usage and slow down its development.
The poll of more than 1,200 voters takes the temperature on timely issues such as AI’s use for propaganda in the Israel-Hamas war, how it has helped develop anti-Semitic content, and the potential agreement between Presidents Joe Biden and Xi Jinping on banning AI use in drone warfare and within the nuclear chain of command.
Americans of all ages, races, genders, and political ideologies share a deep concern about AI’s use in global conflict and in creating misinformation. In turn, they support international agreements that foster cooperation, slow AI’s rapid development, and yield safeguards to keep the technology within human control.
“AI safety policy is national security policy—and on this aspect of keeping Americans safe, there is broad consensus,” said Daniel Colson, Executive Director of the Artificial Intelligence Policy Institute. “Our latest poll shows a voracious appetite for AI safeguards and international agreements and that these issues don’t fall along typical partisan lines. The American public has given elected officials a mandate for public policy intervention, and policymakers and political leaders must take the opportunity to take swift and aggressive action.”
Among the findings:
- As Presidents Biden and Xi prepare to agree to a ban on AI use in drone warfare and the nuclear chain of command, 59% of voters—including 68% of Democrats, 56% of independents, and 53% of Republicans—expressed support for such cooperation between the U.S. and China; only 20% opposed it.
- When asked about the use of artificial intelligence platforms to develop anti-Semitic propaganda, 59% of voters support the government requiring artificial intelligence companies to monitor the use of AI for racist content.
- 64% of voters believe that the government should prosecute both the people who create racist images and the artificial intelligence companies whose tools are used to generate them.
- 58% support the United States advocating for an international agreement regulating the use of artificial intelligence in war, compared to 20% who oppose it.
- 68% of voters agree that AI must be treated as an incredibly powerful and dangerous technology when considering policy goals for the technology. Only 18% disagree.
- 70% agree that preventing AI from quickly reaching superhuman capabilities should be an important goal for AI policymaking. Just 14% disagree with that goal.
- A whopping 74% indicated that another goal for AI policy should be to prevent AI from being used to impersonate a person’s likeness or voice in video, image, or audio form without that person’s consent, illustrating the everyday reality of the issue and the urgent need for action.
- 61% of voters agree that achieving policy that slows the growth of AI capabilities is important, while only 18% disagree.
- When asked about using AI-generated images or voices of real people for political advertisements, 49% supported a ban on such usage, and 34% opposed it. A ban on AI use in political advertisements had net positive support across party lines (+29 net support with Democrats, +5 with independents, and +11 with Republicans).
- 51% of American voters support the introduction of a global watchdog to regulate the use of artificial intelligence, compared to 28% who oppose one.
- 49% support the introduction of an international treaty to ban any ‘smarter-than-human’ artificial intelligence, compared to 29% who oppose one.
These findings come from the fifth poll conducted by the Artificial Intelligence Policy Institute, a think tank dedicated to providing public opinion research and policy expertise on AI regulation. AIPI will continue to conduct frequent surveys to demonstrate where Americans stand on artificial intelligence issues and to provide analysis of existing proposals coming out of Washington, D.C. With AI evolving every day and Americans wary about its rapid advancement, it is critical for policymakers to meet this moment with urgency and consideration of their constituents.
AIPI was founded to serve as a go-to resource for gauging American public opinion as it relates to AI and provide policymakers and the media with a fuller understanding of the risks posed by artificial intelligence—and solutions for reining in the worst-case scenarios. By measuring public opinion, AIPI will show policymakers that being active in AI policy is not only the right path forward on the merits, but also that action is necessary to better represent their constituents.
With years of experience in the field, AIPI has a deep understanding of AI’s dangers and opportunities. The threat to humankind is alarming, and policymakers must understand not only the economic consequences but also the potential for the next phases of AI development to bring the risk of catastrophic events. AIPI proposes a way forward where lawmakers and influencers in Washington can be informed by a common, nonpartisan set of resources so they can speak collaboratively, productively, and urgently about the need to control AI development and regulate it to mitigate the most dire risks. AIPI will also advocate for a broad range of policies that impose guardrails and oversight on AI and the development of superintelligence.
About the Poll
The poll surveyed 1,268 voters nationally from Nov. 20 to Nov. 21, 2023, using online respondents. The survey was conducted in English. Its margin of error is ±4.7 percentage points.
About The Artificial Intelligence Policy Institute
The Artificial Intelligence Policy Institute is an AI policy and research think tank founded by Daniel Colson to advocate for ethical oversight of AI and to mitigate its potential catastrophic risks. AIPI’s core mission is to demonstrate that the general public has valid concerns about the future of AI and is looking for regulation and guidance from their elected representatives. If politicians want to represent the American people effectively, they must act aggressively to regulate AI’s next phases of development. AIPI seeks to work with the media and policymakers to inform them of the risks of artificial intelligence and to develop policies that mitigate risks from emerging technology while still capturing its benefits.
While much of the public discussion has been oriented around AI’s potential to take away jobs, AIPI will focus on the importance of policies designed to prevent catastrophic outcomes and mitigate the risk of extinction. Currently, most of the AI space comprises those who have a vested interest in advancing AI or who are academics. The AI space lacks an organization to both gauge and shape public opinion on these issues and to recommend legislation on the matter; AIPI will fill that role.
Ultimately, policymakers are political actors, so the country needs an institution that can speak the language of public opinion. AIPI’s mission is to channel how Americans feel about artificial intelligence and pressure lawmakers to take action.
AIPI will build relationships with lawmakers by using polling to establish AIPI as a subject matter expert on AI safety and policy issues. Politicians are incentivized to support AI slowdown policies because a strong majority of voters supports slowdown and regulation. But AI currently remains less salient as a political issue than other topics, since emerging AI technology has so far had only moderate impacts.
AI technological advancement is and will continue to be an evolving situation, and politicians, the media, and everyday Americans need real-time information to make sense of it all. AIPI’s polling will show where people stand on new developments and provide crucial policy recommendations for policymakers.
About Daniel Colson
Daniel Colson is the co-founder and executive director of the AI Policy Institute. AIPI is a think tank that researches and advocates for government policies to mitigate extreme risks from frontier artificial intelligence technologies. Daniel’s research focuses on how AI weapons systems will impact military strategy and global political stability. Prior to AIPI, Daniel co-founded Reserve, a company focused on offering financial services in high-inflation currency regions lacking basic financial infrastructure, and the personal assistant company CampusPA.
Daniel has spent the other half of his career as a researcher working at the intersection of theoretical sociology, history, military theory, and catastrophic risk mitigation. He helped Samo Burja write his seminal manuscript on sociology, Great Founder Theory, and has several forthcoming articles on the military strategic implications of Palantir’s AIP targeting and command product, which has been operational in Ukraine for the past year.