NEW POLL: Voters Support NYT’s Lawsuit vs. OpenAI, Want Congressional Action on AI in 2024, Disclosures on AI Use in Political Ads, Slowdown on AI Development

Voters support NYT’s lawsuit against OpenAI: 59% of voters agree, and just 18% disagree, that AI companies should not be allowed to use news outlets’ content to train models, and 70% agree that AI companies should compensate news outlets if they want to use their articles to train their models.

By a +34 margin, voters support the passage of comprehensive legislation regarding artificial intelligence in 2024, and AIPI’s latest survey dives into specific policy proposals being discussed in Congress.

As the barrage of 2024 election advertisements ramps up, 71% of voters support requiring disclosures for any political ads created by AI.

56% of voters want AI progress stopped or significantly slowed, while 27% disagree.

The newest poll conducted by the Artificial Intelligence Policy Institute (AIPI) shows that voters consider AI legislation to be an important priority in 2024 and continue to be concerned about the technology’s rapid development and deployment into daily life.

Voters Support The New York Times Lawsuit Against OpenAI

The survey asked about The New York Times’ recent lawsuit against OpenAI for copyright infringement:

  • When informed about the details of the lawsuit and the allegations made by The New York Times, 59% of voters agreed that AI companies should not be allowed to use the Times’ content to train models, and just 18% disagreed
  • 70% agreed with the statement that AI companies should compensate news outlets like The New York Times if they want to use their articles to train their models
  • As a potential policy solution to the issue, 68% of voters support federal legislation clarifying that copyright law requires AI companies to form licensing agreements with news outlets before using their articles to train models

“This is a landmark case in what tech companies are allowed to do with the data they collect and extract,” said Daniel Colson, Executive Director of the Artificial Intelligence Policy Institute. “Companies are starting to realize that AI models are a huge threat to the value of their intellectual property, and support restrictions on how AI can be trained. The New York Times is taking the lead and making sure the deployment of generative AI doesn’t repeat the ‘move fast and break things’ approach of Facebook and social media platforms.”

Voters Are Ready for Congress to Act in 2024

65% of voters said Congress should pass legislation regarding artificial intelligence in 2024, agreeing with the statement that “the effects of AI are already being seen in society and it’s time to act.” In the same vein, 68% agreed that tech company executives can’t be trusted to self-regulate the AI industry. As Congress considers legislation on AI, the survey found that:

  • 82% would like Congress to consider policy on the use of realistic images generated with AI (deepfakes), and another 71% of voters support requiring that any political ads disclose and watermark content created by AI
  • 90% of voters would like to see policy address liability regarding the use of AI for criminal activity
  • 91% of voters would like legislation to include protections for consumers from fraud committed using AI
  • 91% would like to see requirements for companies to test and certify models before they are released publicly
  • 83% would like Congress to consider legislation that includes limits on the capabilities of AI models
  • 79% would like Congress to consider legislation that includes limits on how powerful the training data can be for models
  • 72% support policy requiring companies to disclose and watermark content created by AI

“2024 will be a defining year for AI and how our political leaders choose to deal with it,” said Daniel Colson, Executive Director of the Artificial Intelligence Policy Institute. “The data is clear: policymakers cannot turn a blind eye to AI, and their voters are concerned about how they are going to grapple with the rapid changes in their daily lives from this technology. As this remains an issue without partisan battle lines, and as the 2024 election brings AI even more into the public discourse, the time is ripe for action.”

Voters Think AI Development Should Slow Down

The public’s strong desire for political action is underlined by a deep concern about AI’s rapid development:

  • 56% of voters agreed that it would be a good thing if AI progress were stopped or significantly slowed, while just 27% disagreed
  • Similarly, when presented with the choice, 77% of voters said that we should go slowly and deliberately with AI development, as opposed to 8% who said we should speed up development
  • On the current track, 65% of voters believe that the next two years of progress in AI will be faster than the last two years, while 10% think it will be slower

About the Poll 

The poll was conducted on January 1 and surveyed 1,264 voters nationally. The survey was conducted in English and has a margin of error of ±4.3 percentage points.

See full toplines here and crosstabs here.


About the Artificial Intelligence Policy Institute

The Artificial Intelligence Policy Institute is an AI policy and research think tank founded by Daniel Colson to advocate for ethical oversight of AI and to mitigate the technology’s potential catastrophic risks. AIPI’s core mission is to demonstrate that the general public has valid concerns about the future of AI and is looking for regulation and guidance from its elected representatives. If politicians want to represent the American people effectively, they must act aggressively to regulate AI’s next phases of development. AIPI seeks to work with the media and policymakers to inform them of the risks of artificial intelligence and to develop policies that mitigate risks from emerging technology while still benefiting from artificial intelligence.

While much of the public discussion has been oriented around AI’s potential to take away jobs, AIPI will focus on policies designed to prevent catastrophic outcomes and mitigate the risk of extinction. Currently, most voices in the AI space either have a vested interest in advancing AI or are academics. The field lacks an organization that can both gauge and shape public opinion on these issues and recommend legislation on the matter, and AIPI will fill that role.

Ultimately, policymakers are political actors, so the country needs an institution that can speak the language of public opinion. AIPI’s mission is to channel how Americans feel about artificial intelligence and to pressure lawmakers to take action.

AIPI will build relationships with lawmakers by using polling to establish itself as a subject matter expert on AI safety and policy issues. Politicians are incentivized to support AI slowdown policies because a strong majority of voters support slowdown and regulation. But AI is currently less salient as a political issue than other topics because, so far, emerging AI technology has had only moderate impacts.

AI technology is evolving and will continue to evolve, and politicians, the media, and everyday Americans need real-time information to make sense of it all. AIPI’s polling will show where people stand on new developments and provide crucial policy recommendations for policymakers.