American voters are worried about risks from AI technology. The AI Policy Institute’s mission is to channel public concern into effective regulation. We engage with policymakers, media, and the public to shape a future where AI is developed responsibly and transparently.
AIPI uses polling to track public perception of AI and its associated risks, with the aim of informing mainstream media and policymaking. From July 18 to 21, 2023, AIPI conducted a national survey of 1,001 voters with our polling partner YouGov. Across the board, voters are more concerned than excited about AI.
- believe AI could accidentally cause a catastrophic event.
- are concerned about artificial intelligence, while just 21% are excited.
- prefer slowing down the development of AI, compared to just 8% who prefer speeding development up.
- believe human-level AI (AGI) will be developed within 5 years.
- don't trust AI tech executives to self-regulate AI.
The AI frontier is being pushed forward rapidly by corporations pouring billions into training powerful AI models. The speed of this advancement has outpaced our understanding of these systems, leaving us in the dark about their capabilities, behavior, and the risks they pose. AI leaders are unanimously sounding the alarm, with lab leaders Sam Altman (CEO, OpenAI), Demis Hassabis (CEO, DeepMind), and Dario Amodei (CEO, Anthropic) all signing an open letter stating that "mitigating the risk of extinction from AI should be a global priority."
The AI Policy Institute seeks to respond to these challenges. By regulating the data centers necessary for developing cutting-edge AI models, and by mandating that an AI model's safety be demonstrated prior to its deployment, government has the opportunity to significantly mitigate approaching threats. Through dialogue and collaboration with lawmakers, journalists, and technologists, the AI Policy Institute is committed to finding a safer path forward through the AI revolution.