Poll: American Voters Support Liability for Companies That Produce Political Deepfakes, Back FCC Ban on AI Robocalls
As deceptive uses of artificial intelligence skyrocket during the 2024 election cycle, a new poll from the Artificial Intelligence Policy Institute (AIPI) shows that the overwhelming majority of Americans support holding AI companies, not just individuals, liable for misleading AI-generated content. More than 80% of voters say companies whose AI models are used to generate fake political content, such as the recent deepfake robocall of an AI-generated President Biden telling New Hampshire residents not to vote in the Democratic primary, should be held liable; a similar share supports the recent Federal Communications Commission (FCC) vote to make the use of AI-generated voices in robocalls illegal.
Additionally, the survey reveals that Americans are wary in light of OpenAI’s recent biosecurity tests: 66% say powerful AI models present a bioterrorism risk and should be tested and contained for such uses. Americans support OpenAI conducting this kind of transparent testing, but 62% say the study’s results increase their concern about AI being used for bioterrorism. In line with their views on political deepfakes, 67% of Americans say that if an individual uses an artificial intelligence model to assist in creating a virus, the company that made the model should be held liable, compared to 13% who say only the individual should be.
Some key numbers from the poll:
- 84% of respondents say companies that create AI models used to generate fake political content should be held liable, compared to 4% who say they should not be held liable.
- 70% support and just 13% oppose legislation that would make artificial intelligence companies liable if their models were used to create deepfake political content. That includes 71% of Republicans, 74% of Democrats, and 63% of independents.
- 75% of Americans—including 77% of Democrats, 74% of independents, and 74% of Republicans—say that the use of deepfake technology to attempt to influence elections, such as in the New Hampshire case, should be illegal.
- 75% support holding the developers of AI models liable when AI-generated content is used for scams and deceptive political tactics. 66% support holding Elevenlabs liable for the deceptive use of its model in the New Hampshire primary, compared to just 14% who oppose holding the company liable.
- 81% of voters approve of the FCC’s recent vote to make the use of AI-generated voices in robocalls illegal. The vote has overwhelming support among groups particularly vulnerable to robocall scams: 86% of people aged 45 to 64 support the FCC’s vote, and 88% of people over 65 support it.
- After being informed about OpenAI’s recent study to evaluate how much their AI models increase the risk of bioterrorism, 66% of respondents say powerful AI models present a bioterrorism risk and should be tested and contained; just 6% say these models are not risky and should be widely available.
- 62% of Americans say the results of OpenAI’s study increase their concern about AI being used for bioterrorism; 8% say the results decrease it. 79% support requiring companies to test models before they are released to ensure they can’t be used to create viruses. 67% agree that if an individual uses an artificial intelligence model to assist in creating a virus, the company that made the model should be held liable; just 13% say the company should not be.
“Across the gamut of AI issues, from elections to biosecurity, Americans are overwhelmingly concerned about the proliferation of powerful AI models and want companies to be held liable when their models are used in such dangerous cases,” said Daniel Colson, Executive Director of the Artificial Intelligence Policy Institute. “Voters are saying loud and clear that we need to hold model developers accountable when their technology is used to undermine their safety; otherwise, there’s no putting the genie back in the bottle.”
About the Poll
The poll surveyed 1,103 voters in the United States on Feb. 12. The survey was conducted in English, and its margin of error is ±4.6 percentage points.
See full toplines here and crosstabs here.
About The Artificial Intelligence Policy Institute
The Artificial Intelligence Policy Institute is an AI policy and research think tank founded by Daniel Colson to advocate for ethical oversight of AI and to mitigate the potential catastrophic risks it poses. AIPI’s core mission is to demonstrate that the general public has valid concerns about the future of AI and is looking for regulation and guidance from its elected representatives. If politicians want to represent the American people effectively, they must act aggressively to regulate AI’s next phases of development. AIPI seeks to work with the media and policymakers to inform them of the risks of artificial intelligence and to develop policies that mitigate risks from emerging technology while still capturing its benefits.
While much of the public discussion has focused on AI’s potential to take away jobs, AIPI will center policies designed to prevent catastrophic outcomes and mitigate the risk of extinction. Currently, the AI space comprises mostly those with a vested interest in advancing AI, along with academics. The space lacks an organization that can both gauge and shape public opinion on these issues and recommend legislation on the matter; AIPI will fill that role.
Ultimately, policymakers are political actors, so the country needs an institution that can speak the language of public opinion. AIPI’s mission is to channel how Americans feel about artificial intelligence and to press lawmakers to take action.
AIPI will build relationships with lawmakers, using polling to establish itself as a subject matter expert on AI safety and policy issues. Politicians have an incentive to support AI slowdown policies because a strong majority of voters favors slowdown and regulation. But AI is currently less salient as a political issue than other topics, since emerging AI technology has so far had only moderate impacts.
AI is and will continue to be a rapidly evolving field, and politicians, the media, and everyday Americans need real-time information to make sense of it. AIPI’s polling will show where people stand on new developments and provide policymakers with crucial policy recommendations.
Learn more at https://www.theaipi.org/ or follow Daniel on Twitter at @DanielColson6.