New Poll Reveals Overwhelming Support for Reining In AI’s Use of Public Data, Concern Over AI Job Displacement and Energy Consumption

Concern over job displacement jumped 14 percentage points after respondents heard AI-generated music, with nearly 50 percent saying the song was more advanced than they thought AI was capable of

55 percent support a policy proposal to add $47.7 million in research funding to create standards, tests, and guidelines for ensuring AI systems are developed and used responsibly

As public scrutiny rises about companies using publicly available data on the internet to train their AI models, a poll from The Artificial Intelligence Policy Institute finds that three in four (74%) American voters say that companies should be mandated to compensate the data’s creators. Additionally, 60% of voters say AI companies should not be able to use public data on the internet to train their models; just 19% say AI companies should be able to do this, and 21% are unsure. 78% of voters think there should be government regulations on how AI companies can use publicly available data for training their models. That includes 83% of Democrats, 72% of independents, and 78% of Republicans.

The poll also tested voter concerns about the risk of AI taking their jobs, asking the question before and after respondents listened to an AI-generated song. Before hearing the song, just 31% of voters indicated that they were worried AI would soon be able to do their job. After hearing it, 45% said they worried AI would soon be able to do their job, with 48% saying the song was more advanced than they thought AI was capable of. In turn, 52% say they are more nervous about the implications of AI-generated music.

The survey also showed that a majority, 55%, supports a current policy proposal to add $47.7 million in funding for research to help create standards, tests, and guidelines for ensuring AI systems are developed and used responsibly; 24% oppose it. On another policy proposal, 61% support a lawmaker-proposed tax on the electricity AI companies use for their computational power, with the revenue going toward upgrading the electrical grid to handle growing demand. A majority of independents and Republicans favor this tax, and just 20% of all voters are opposed. Support is in large part attributable to energy concerns: 72% of voters are concerned about the increasing energy consumption of AI data centers.

“The general public is decisively opposed to the Gold Rush-like approach that AI companies take to training their models on publicly available data, and they support protections and compensation for the creators of this content,” said AI Policy Institute Executive Director Daniel Colson. “And the more information voters have about AI’s capabilities, the more they are worried about it. This new poll is consistent with previous findings; people want sensible regulation and caution when it comes to AI, not a hands-off approach that gives tech companies free rein.”   

About the Poll 

The poll surveyed 1,039 respondents on April 12 and April 13. The poll was conducted in English, and its margin of error is ±4.5 percentage points.

See full toplines here.

About the Artificial Intelligence Policy Institute

The Artificial Intelligence Policy Institute is an AI policy and research think tank founded by Daniel Colson to advocate for ethical oversight of AI and for mitigating its potential catastrophic risks. AIPI’s core mission is to demonstrate that the general public has valid concerns about the future of AI and is looking for regulation and guidance from their elected representatives. If politicians want to represent the American people effectively, they must act aggressively to regulate AI’s next phases of development. AIPI seeks to work with the media and policymakers to inform them of the risks of artificial intelligence and to develop policies that mitigate risks from emerging technology while still capturing its benefits.

While much of the public discussion has been oriented around AI’s potential to take away jobs, AIPI will focus on policies designed to prevent catastrophic outcomes and mitigate the risk of extinction. Currently, most of the AI space comprises those who have a vested interest in advancing AI or who are academics. The space lacks an organization that can both gauge and shape public opinion on these issues, as well as recommend legislation on the matter, and AIPI will fill that role.

Ultimately, policymakers are political actors, so the country needs an institution that can speak the language of public opinion. AIPI’s mission is to channel how Americans feel about artificial intelligence and to press lawmakers to take action.

AIPI will build relationships with lawmakers by using polling to establish itself as a subject matter expert on AI safety and policy issues. Politicians are incentivized to support AI slowdown policies because a strong majority of voters supports slowdown and regulation. But AI remains less salient as a political issue than other topics, since emerging AI tech has so far had only moderate impacts.

AI technology will continue to evolve, and politicians, the media, and everyday Americans need real-time information to make sense of it. AIPI’s polling will show where people stand on new developments and provide crucial recommendations for policymakers.

About Daniel Colson

Daniel Colson is the co-founder and executive director of the AI Policy Institute. AIPI is a think tank that researches and advocates for government policies to mitigate extreme risks from frontier artificial intelligence technologies. Daniel’s research focuses on how AI weapons systems will impact military strategy and global political stability. Prior to AIPI, Daniel co-founded Reserve, a company offering financial services in high-inflation currency regions that lack basic financial infrastructure, and the personal assistant company CampusPA. Daniel has spent the other half of his career as a researcher working at the intersection of theoretical sociology, history, military theory, and catastrophic risk mitigation. He helped Samo Burja write his seminal manuscript on sociology, Great Founder Theory, and has several forthcoming articles on the military strategic implications of Palantir’s AIP targeting and command product, which has been operational in Ukraine for the past year. Follow Daniel on Twitter at @DanielColson6.