AI vs. Public Opinion: Catching you up on the latest from AIPI

Since our launch in August, the Artificial Intelligence Policy Institute (AIPI) has been hard at work conducting and releasing impactful polling on AI-related issues, raising awareness of AI’s catastrophic risks, and acting as a much-needed bridge between the tech world, the American public, and policymakers. Equipped with public opinion data, we have been able to continually show that there is overwhelming concern about the rapid development of AI and an urgent need to mitigate its potentially catastrophic risks. In turn, we have been everywhere: getting our data and insights into well-read publications, galvanizing debate in the tech and political spaces, and taking our analysis to meetings with numerous elected officials.

Our first poll, released in August, found that 83% of voters believe AI could accidentally cause a catastrophic event, and 72% want to slow down AI development and usage. Each of our subsequent polls revealed similar concerns and probed specific issues, from deepfakes and open-sourcing to the export of chips to China and more. We’ve incorporated our polling into projects such as an interactive map that shows state-by-state job risks associated with AI growth and a report card, featured in Axios, that graded prominent AI legislation proposals against public sentiment.

This work has created a stronger narrative to counter the vocal techno-optimists pushing for a full-steam-ahead approach to AI development. As a result, we have captured the attention of numerous congressional offices; dozens of staffers for top senators and House members have been eager to hear where voters stand on AI and which policies should be prioritized.

That’s why we wanted to catch you up on everything we’ve been up to since our launch:


  • Our first poll, released in August, found that a majority of voters don’t trust tech executives to self-regulate the use of AI, 83% of voters believe AI could accidentally cause a catastrophic event, and 72% want to slow down AI development and usage. It was one of the first public opinion polls to show just how concerned the American public is about AI’s rapid development and how strongly it desires government intervention. For example, the poll showed that voters support a federal agency regulating AI by a more than 3:1 margin, including 2:1 among Republicans. See more on our findings here.
  • We dedicated our second round of public opinion research to potential policy interventions and found that, on a wide array of issues, voters want stricter regulations and safeguards against the harmful outcomes AI can bring. 76% want AI-generated images to be marked with proof they were generated by a computer, and 60% believe AI systems used for military purposes should be subject to international regulation in much the same way nuclear weapons are. See more on our findings here.
  • As the debate about open-sourcing AI models heated up, we added crucial public opinion numbers to the conversation, finding in our third poll that voters oppose open-sourcing powerful AI models by a 2:1 margin. Voters overwhelmingly supported erring on the side of caution, with 71% saying the potential risks of AI are greater than its potential benefits. 66% of voters believe AI companies should be required to have regulatory representation on their boards. See more on our findings here.
  • We have also polled on whether tech companies should be liable for harm caused by their AI models. On this question, our poll found that 73% of voters believe AI companies should be held liable, 67% think the power of AI models should be restricted, and 65% believe keeping AI out of the hands of bad actors is more important than providing AI’s benefits to everyone. See more on our findings here.
  • AI and deepfakes have brought about new advertising tactics that are already being deployed in the 2024 election cycle. Our poll found that 77% of voters support a law requiring political ads to disclose their use of AI, including 64% who support it strongly, while just 10% oppose such a law. See more on our findings here.
  • With various AI policies being discussed on Capitol Hill, we drew on the depth of our public opinion research to put these proposals under the microscope and grade each one against voter sentiment. Guided by the public’s most pressing concerns, we evaluated nine separate federal policy proposals on whether they can successfully adapt to AI’s advancements, discourage high-risk AI development, reduce the proliferation of dangerous AI, affect model training, restrict capability escalation, address current vulnerabilities and dangers, and prevent or delay superintelligent AI. View the full report card here.
  • In response to reports about Nvidia’s plans to sell new high-performing AI chips to China, we put a poll into the field that found that more than 70% of US adults disapprove of Nvidia’s actions, and 63% support implementing antitrust measures against the company. After respondents read information about Nvidia, the company’s favorability dropped from 32% favorable and 10% unfavorable to 22% favorable and 44% unfavorable. See more on our findings here.
  • Along with national polling, we also polled proposed AI policies in California. Conducted in partnership with Encode Justice, our survey found that the vast majority of Californians across the political spectrum support multiple provisions of the Safety in Artificial Intelligence Act (SB 294), a state-level framework that includes regulations to ensure that AI models are developed responsibly. The survey revealed that about three times as many Californians are concerned about AI growth as are excited about it, and only a small minority of the state’s residents have faith in the tech industry regulating itself. See more on our findings here.
  • In the immediate aftermath of President Biden’s executive order on AI, we polled on each provision of the policy and found that 69% of American voters—including 64% of Republicans and 65% of Independents—support the executive order and 75% think the government should do more to regulate AI. The most popular provisions of the order are those that establish testing requirements, mandate disclosure of AI use, and prevent companies from training powerful AI models without Americans’ knowledge. See more on our findings here and a chart detailing them below:
[Chart: Biden Executive Order Toplines]
  • In addition to polling, we released an interactive map and dataset to shed light on the near-future effects of AI-driven automation on the job market. We found that 20% of US jobs will be significantly exposed to AI automation in the near future and broke those numbers down state by state and sector by sector. To see which jobs and which states will be most impacted, view the interactive map and dataset here.


Our polling has been featured in numerous leading political and tech news outlets alongside insights from Executive Director Daniel Colson. Some highlights:

“Americans are wary about the next stages of AI and want policymakers to step in to develop it responsibly,” Daniel Colson, executive director of the Artificial Intelligence Policy Institute, which favors a cautious approach to AI deployment, told Axios.

“The recent poll conducted by the Artificial Intelligence Policy Institute (AIPI) paints a clear picture: the American public is not only concerned but demanding a more cautious and regulated approach to AI.”

“The AIPI survey reveals that 72% of voters prefer slowing down the development of AI, compared to just 8% who prefer speeding development up.” 

“The numbers in the AIPI poll are staggering: 86% of voters believe AI could accidentally cause a catastrophic event, and 76% think it could eventually pose a threat to human existence.” 

Colson views it this way: The top scientists at the biggest AI firms believe that they can make artificial intelligence a billion times more powerful than today’s most advanced models, creating “something like a god” within five years.

His proposal to stop them: Prevent AI firms from acquiring the vast supplies of hardware they would need to build super-advanced AI systems by making it illegal to build computing clusters above a certain processing power. Because of the scale of computing systems needed to produce a super-intelligent AI, Colson argues such endeavors would be easy for governments to monitor and regulate.

“I see that science experiment as being too dangerous to run,” he said…

“Powerful and potentially harmful AI development is not an inevitability,” Daniel Colson, the executive director at AI Policy Institute, said in a statement. “Our political leaders, and we as a society more broadly, need to choose what risks we are willing to endure for the sake of the potential of technological progress.”

Crucially, that requires not making the same mistake we made in the social media era: equating tech progress with social progress. “That’s been like a foregone conclusion,” West said. “I’m not convinced that AI is associated with progress in every instance.”

If the survey results are any indication, it seems as though a good portion of the American public may also be outgrowing that naiveté.

“‘It’s a very positive step in terms of Congress starting to seriously consider that AI is going to be an extremely powerful and transformative technology over the coming years,’ Daniel Colson, co-founder of research and advocacy non-profit AI Policy Institute, told The Daily Beast. He added that he was encouraged by both the forum and the framework introduced by Blumenthal and Hawley—though he was concerned that Congress still might not go far enough in order to properly regulate the technology.” 

‘As we’re asking these poll questions and getting such lopsided results, it’s honestly a little bit surprising to me to see how lopsided it is,’ Daniel Colson, the executive director of the AI Policy Institute, told me. ‘There’s actually quite a large disconnect between a lot of the elite discourse or discourse in the labs and what the American public wants.’

And yet, Colson pointed out, “most of the direction of society is set by the technologists and by the technologies that are being released … There’s an important way in which that’s extremely undemocratic.”

The AI Policy Institute advocates “political solutions to potential catastrophic risks” from AI. Its report card judged proposed AI regulation based on various attributes, including how the proposal adapts to advancements, how it discourages high-risk AI deployment, and how it reduces the proliferation of dangerous AI.

Voters “expect tech companies to be responsible for the products they create,” said Daniel Colson, executive director of the AIPI.

“71%. Percentage of respondents polled between late September and early October who disapproved of Nvidia selling high-performance chips to China, according to the nonprofit Artificial Intelligence Policy Institute. The Commerce Department said Tuesday it would tighten restrictions on China’s ability to buy advanced semiconductors.”

“Bot wars: While most Americans say they do not want human bias in their news, they’re not ready to let emotionless machines deliver the news either. A new poll from the Artificial Intelligence Policy Institute shared exclusively with Semafor found that just 18% of Americans said they would feel good about artificial intelligence writing news articles. But the same survey respondents seemed resigned to the fact that AI would likely be producing quality news articles soon: 62% of respondents said that AI will be able to write news articles that are indistinguishable from human-written articles in the next five years.”

“‘This is a larger step in the right direction than we’ve seen so far,’ Colson said. ‘The American public is really supportive, both of this executive order, but also of the government doing more to regulate AI.’

‘This executive order is really building some of the initial infrastructure necessary to allow the government to be able to track what’s going on in the AI industry, what the tech companies are doing, and what the models are,’ Colson said.

However, Colson believes that it’s a start—albeit a very small one—towards a future with truly safe and trustworthy AI.

‘A lot of AI safety people want to slow down the development of AI due to fears that near-term models will be very, very powerful,’ Colson explained. ‘This executive order definitely doesn’t do anything close to that. But at least the government is attempting to learn what models are being developed and how they’re being developed. That’s definitely the necessary first step in order for more substantial regulations to come later.’”

“Daniel Colson, co-founder and executive director of the AI Policy Institute think tank, said call-center workers are likely to be more exposed if the technology is able to get better at mimicking human traits.

‘I think [that] once you get the personality interface worked out for these chatbots, I think that’s going to really change the game to the customer service stuff,’ Colson said.

With millions of people employed in call centers or other customer service–related professions, any job losses could have a big impact. While Colson said that, historically, the public has ‘generally been willing to tolerate’ job losses around technology when the benefits are apparent, a significant percentage of workers being displaced by AI could create ‘a real political hurricane.’”