EDITORIAL

Deepfaking the hustings

“AI-generated deepfakes of Taylor Swift are an example of AI’s power to deceive and defraud voters. The potential harm to society resulting from such disinformation is wide-reaching and immensely damaging.”

TDT

This year, nearly 2 billion people in over 50 countries will be casting their votes in elections, and the one question on the minds of tech experts is: how will artificial intelligence (AI)-generated disinformation affect the results of these crucial political exercises?

As it is, AI has already been making waves in politics worldwide. In January, a spam campaign launched to discredit Taiwan’s president was traced to an actor associated with the Chinese Communist Party, while in Bangladesh, deepfake videos showed candidates withdrawing from the race on election day.

Then-UK Prime Minister Rishi Sunak was impersonated in a range of video ads on Facebook, while in the US, President Joe Biden was the subject of an AI-generated audio message attempting to dissuade people from voting in the New Hampshire primary.

In April, a video of Muhammad Basharat Raja, a candidate in the Pakistan elections, showed him telling voters to boycott the vote. India’s Prime Minister Narendra Modi was also reproduced through AI, for satirical purposes. In Ukraine, President Volodymyr Zelensky was cloned in a video that simulated him asking his troops to lay down their arms a few days into Russia’s full-scale attack.

Likewise, in Slovakia, deepfakes defaming a political party leader were circulated to swing the election in favor of his pro-Russia rival.

And in Indonesia, ex-military general Prabowo Subianto, who had been accused of war crimes and human rights abuses during the Suharto regime in the 1990s, sold himself in the last presidential election through, among other things, videos showing him as a heart sign-flashing grandfather. It worked, as Prabowo is now Indonesia’s president. Not bad for someone once banned from entering the US and Australia for his dismal human rights record.

More recently, while Democrats were gathered at their Chicago convention, GOP presidential candidate Donald Trump plugged away on social media, posting an image of someone resembling his rival, Vice President Kamala Harris, addressing what looked like an extreme leftist demonstration, with the Communist red banner prominently displayed.

On his Truth Social account, Trump also posted AI-generated images of women wearing “Swifties for Trump” shirts, along with a fake image of ultra-popular artist Taylor Swift garbed in an Uncle Sam top, captioned “Taylor wants YOU to VOTE for DONALD TRUMP,” to which Trump exclaimed, “I accept!” In reality, Taylor Swift is, of course, a staunch supporter of President Biden and VP Kamala Harris.

Said Public Citizen co-president Lisa Gilbert, “The AI-generated deepfakes of Taylor Swift are yet another example of AI’s power to deceive and defraud voters.”

Gilbert, whose group advocates for legislation to regulate AI, added, “The potential harm to society resulting from such disinformation is wide-reaching and immensely damaging.”

Someone identified only as “Sergey,” from the Spanish collective United Unknown, described as a group of “visual guerrilla video and image creators,” uses Stable Diffusion to generate realistic satirical deepfake images. So far, this Stability AI tool is the only one among the major AI image generators that is open-source, meaning its models can be downloaded and modified, giving users free rein over whom and what they want to depict.
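To illustrate what “open-source” means in practice here, the sketch below shows how an openly released Stable Diffusion model can be downloaded and run locally with the widely used diffusers library; the model ID and prompt are illustrative assumptions, not a depiction of United Unknown’s actual workflow.

```python
# A minimal, illustrative sketch (not United Unknown's actual workflow) of
# running an open-source Stable Diffusion model locally with Hugging Face's
# diffusers library. The model ID and prompt are assumptions for demonstration;
# the point is that the weights are downloaded and run on the user's own
# hardware, with no platform gatekeeper deciding what may be depicted.
import torch
from diffusers import StableDiffusionPipeline

# Download the openly released weights (cached locally after the first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # runs entirely on local hardware

# Generate an image from a plain-text instruction and save it to disk.
image = pipe("a photorealistic satirical portrait of a politician as a circus ringmaster").images[0]
image.save("satire.png")
```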

“It is already very easy, and will be even easier in the near future, to use AI tools to generate fake images realistic enough to fool people. Generators have gone from producing poor quality images to achieving photo-realistic results,” says Sergey.

He emphasized, however, that technology is neutral. “The responsibility always lies with the creator, not with the technology used. AI generates images from our instructions; the results are mainly based on human requests, but the knowledge the models acquire is from us, the generators. They replicate and reproduce our values, prejudices and biases, offering an image that reflects us and our ideas,” stressed Sergey.

This is a timely reminder, especially as we face an all-important midterm election in 2025 and, beyond that, the 2028 presidential elections. How much will these crucial contests be shaped by the use of technology?

Indeed, the ultimate concern is not so much AI in itself but how it is intended to be used. Still, as deepfake tools become more sophisticated and more widely accessible, the threat they pose to society as a whole is alarming.

Responsible media’s all-important role, specifically its vigilance in fact-checking information online in an era of rampant disinformation, cannot be overstressed.

Policymakers and legislators likewise must act with urgency, prioritizing legislative measures designed to ensure that voters and the general public are not deceived, and that the electoral, nay, the entire democratic, process does not crumble under the onslaught of disinformation.