As the 2024 White House race faces the prospect of a firehose of AI-enabled disinformation, a robocall impersonating US President Joe Biden has already sparked particular alarm about audio deepfakes.
“What a bunch of malarkey,” said the phone message, digitally spoofing Biden's voice and echoing one of his signature phrases.
The robocall urged New Hampshire residents not to cast ballots in last month's Democratic primary, prompting state officials to launch an investigation into possible voter suppression.
This has prompted demands from campaigners for stricter policing around artificial intelligence tools or for an outright ban on robocalls.
Disinformation researchers fear that the proliferation of cheap, easy-to-use and difficult-to-trace voice cloning tools could lead to rampant misuse of AI-powered applications in a crucial election year.
“This is definitely the tip of the iceberg,” Vijay Balasubramanian, chief executive and co-founder of cybersecurity firm Pindrop, told AFP.
“We could see more deepfakes this election cycle.”
In a detailed analysis, Pindrop said a text-to-speech system developed by AI voice cloning startup ElevenLabs was used to create the Biden robocall.
The scandal comes as campaigners on both sides of the US political aisle use sophisticated AI tools for effective campaign messaging and tech investors pump millions of dollars into voice-cloning startups.
Balasubramanian declined to say whether Pindrop shared its findings with ElevenLabs, which last month announced a financing round from investors that valued the company at $1.1 billion, according to Bloomberg News.
ElevenLabs did not respond to repeated AFP requests for comment. Its website directs users to a free text-to-speech generator to “instantly create natural AI voices in any language.”
According to its security guidelines, the company says users are allowed to create voice clones of political figures such as Donald Trump without their permission, if they express “humor or mockery” and “it is clear to the listener that what they are hearing is an imitation, and not authentic content.”
US regulators are considering making AI-generated robocalls illegal, with the fake Biden call giving new impetus to the effort.
“The political deepfake moment is here,” said Robert Weissman, president of the advocacy group Public Citizen.
“Policymakers must rush to put in place protections or we face electoral chaos. The New Hampshire deepfake is a reminder of the many ways that deepfakes can sow confusion.”
Researchers worry about the impact of AI tools that create videos and text so convincing that voters struggle to separate truth from fiction, undermining trust in the electoral process.
But audio deepfakes, already used to impersonate or smear celebrities and politicians around the world, have caused the most concern.
“Of all the surfaces where AI can be used for voter suppression — video, image, audio — audio is the biggest vulnerability,” Tim Harper, senior policy analyst at the Center for Democracy & Technology, told AFP.
“Cloning a voice using AI is very easy and difficult to detect.”
The ease of creating and spreading fake audio content complicates an already hyperpolarized political landscape, undermines trust in the media and allows anyone to claim that fact-based evidence “is fabricated,” Wasim Khaled, chief executive of Blackbird.AI, told AFP.
Such concerns are heightened as the proliferation of AI audio tools outpaces detection software.
China's ByteDance, the owner of the wildly popular platform TikTok, recently unveiled StreamVoice, an AI tool for real-time conversion of a user's voice to any desired alternative.
“Even if the attackers used ElevenLabs this time, it is likely to be a different generative AI system in future attacks,” Balasubramanian said.
“It is imperative that these tools have adequate safeguards in place.”
Balasubramanian and other researchers recommend embedding audio watermarks, or digital signatures, into such tools, along with safeguards such as making them available only to verified users.
“Even with those measures, it can be difficult and really expensive to detect when these tools are being used to create malicious content that violates your terms of service,” Harper said.
“(It) requires an investment in trust and security and a commitment to focus on electoral integrity as a risk.”