Poll: Majority of Voters Want A Potential Harris Administration to Prioritize AI Regulations That Minimize Risk

In a test of Vice President Harris’s existing comments on AI, 63% of voters most prefer a message about ensuring AI is adopted and regulated in a way that fosters cooperation across sectors, minimizes harm and maximizes benefit. 

A new Artificial Intelligence Policy Institute (AIPI) poll shows that voters overwhelmingly want a potential Kamala Harris administration to focus on safety mandates and minimizing the risks of AI. Specifically, more than half of voters (53%) believe that if Vice President Harris becomes president in 2025, she should prioritize minimizing the risks of AI over realizing its promise (22%). Equal shares of Republicans and Democrats (53%) believe a potential Harris administration should work on minimizing the risks of AI.

In all, voters care much more about the possibility of a catastrophic accident or national security threat emerging from AI (55%) than about preventing AI from being concentrated in the hands of just a few companies (25%).

The poll also tested voter preferences among recent quotes from Vice President Harris about AI, given her role as the Biden administration's key leader on the issue, to see which messaging voters most want to hear. The most popular message, chosen 63% of the time, is one about ensuring that AI is adopted and regulated in a way that minimizes harm and maximizes benefit:


  • “I believe that all leaders from government, civil society and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm, while ensuring everyone is able to enjoy its full benefit.” – March 28, 2024


The least preferred messages relate to deepfake exploitation and to AI's potential to solve global issues like disease, food insecurity, and climate change. The top-performing messages are those that balance AI's risks with its potential benefits and emphasize the need for cooperation between the private and public sectors.

When asked which set of risks Vice President Harris should prioritize if elected, 55% of voters (including 55% of Democrats, 49% of independents, and 58% of Republicans) say it is more important to focus on reducing the risk of cyberattacks and biological attacks caused by AI, while 23% want priority placed on reducing the risk of bias and misinformation caused by AI. The remaining 22% of voters are unsure.

Additionally, the poll asked voters about the US AI Safety Institute, a government initiative housed within the National Institute of Standards and Technology (NIST) that aims to advance the science and practice of AI safety and ensure the responsible development and use of artificial intelligence. When told more about the initiative, and that policymakers are considering formally authorizing it through legislation so that it would remain a stable institution regardless of administration, 54% of voters support authorizing the AI Safety Institute. Just 16% oppose its authorization, and 30% of voters are unsure.

Nearly an identical share of voters supports an existing bill that would authorize the US AI Safety Institute as a stable institution, establish testing facilities at government labs, set up prize competitions for AI breakthroughs, make datasets available for research, and promote international collaboration on AI standards and research.


Additional key numbers from the poll:

  • When asked to choose between banning AI and leaving it unregulated, 46% support a ban, 18% support no regulation, and 36% of voters are unsure.
  • 76% of voters support increased safety mandates and regulations, including 78% of Democrats and 74% of Republicans, compared to 7% of voters who support no regulation of AI at all.


About the Poll 

The poll was conducted on August 11 and surveyed 1,080 respondents in the United States. The survey was conducted in English. Its margin of error is ±3.7 percentage points.

The full toplines are available here and crosstabs here.


About the Artificial Intelligence Policy Institute

The Artificial Intelligence Policy Institute is an AI policy and research think tank founded by Daniel Colson to advocate for ethical oversight of AI and to mitigate the potential catastrophic risks it poses. AIPI's core mission is to demonstrate that the general public has valid concerns about the future of AI and is looking for regulation and guidance from their elected representatives. If politicians want to represent the American people effectively, they must act aggressively to regulate AI's next phases of development. AIPI seeks to work with the media and policymakers to inform them of the risks of artificial intelligence and to develop policies that mitigate risks from emerging technology while still capturing its benefits.

While much of the public discussion has been oriented around AI's potential to take away jobs, AIPI will focus on centering the importance of policies designed to prevent catastrophic outcomes and mitigate the risk of extinction. Currently, the AI space comprises mostly those with a vested interest in advancing AI, along with academics. The field lacks an organization that can both gauge and shape public opinion on these issues, as well as recommend legislation on the matter, and AIPI will fill that role.

Ultimately, policymakers are political actors, so the country needs an institution that can speak the language of public opinion. AIPI's mission is to channel how Americans feel about artificial intelligence and to pressure lawmakers to take action.

AIPI will build relationships with lawmakers by using polling to establish itself as a subject matter expert on AI safety and policy issues. Politicians are incentivized to support AI slowdown policies because a strong majority of voters support slowdown and regulation. But AI is currently less salient as a political issue than other topics because, so far, emerging AI technology has had only moderate impacts.

AI technological advancement is and will continue to be an evolving situation, and politicians, the media, and everyday Americans need real-time information to make sense of it all. AIPI's polling will show where people stand on new developments and provide crucial recommendations for policymakers.

Learn more at https://www.theaipi.org/.


About Daniel Colson

Daniel Colson is the co-founder and executive director of the AI Policy Institute. AIPI is a think tank that researches and advocates for government policies to mitigate extreme risks from frontier artificial intelligence technologies. Daniel's research focuses on how AI weapons systems will impact military strategy and global political stability. Prior to AIPI, Daniel co-founded Reserve, a company focused on offering financial services in high-inflation currency regions lacking basic financial infrastructure, and the personal assistant company CampusPA. Daniel has spent the other half of his career as a researcher working at the intersection of theoretical sociology, history, military theory, and catastrophic risk mitigation. He helped Samo Burja write his seminal manuscript on sociology, Great Founder Theory, and has several forthcoming articles on the military strategic implications of Palantir's AIP targeting and command product, which has been operational in Ukraine for the past year. Follow Daniel on Twitter at @DanielColson6.