
Fake is all the rage

Making deep fakes gets cheaper and easier, thanks to AI

Stuart A. Thompson Published 03.04.23, 05:47 AM

NYTNS

It wouldn’t be completely out of character for Joe Rogan, the comedian-turned-podcaster, to endorse a “libido-boosting” coffee brand for men.

But when a video circulating on TikTok showed him and his guest, Andrew Huberman, hawking the coffee, some viewers were shocked — including Huberman.

“Yep that’s fake,” Huberman wrote on Twitter after seeing the ad, in which he appears to praise the coffee’s testosterone-boosting potential. Experts said Rogan’s voice appeared to have been synthesised using AI tools. Huberman’s comments were ripped from an unrelated interview.

Making deep fakes once required elaborate software to put one person’s face onto another’s. But now, many of the tools to create them are easily available — even as smartphone apps.

The content, called “cheap fakes” by researchers, works by cloning celebrity voices, altering mouth movements to match other audio and writing persuasive dialogue.

The videos have raised fresh concerns over whether social media companies are prepared to moderate the growing digital fakery. “What’s different is that everybody can do it now,” said Britt Paris of Rutgers University, US, who helped coin the term “cheap fakes”. “It’s not just people with sophisticated computational technology and fairly sophisticated computational know-how. Instead, it’s a free app.”

In one video on TikTok, US Vice-President Kamala Harris appeared to say everyone hospitalised for Covid-19 was vaccinated. In fact, she said the patients were unvaccinated.

Graphika, a research firm that studies disinformation, spotted deep fakes of fictional news anchors that pro-China bot accounts distributed last year, in the first known example of the tech being used for state-aligned influence campaigns.

But several new tools offer similar technology to everyday Internet users, giving comedians and partisans the chance to make their own convincing spoofs.

Last month, a video circulated showing President Biden declaring a national draft for the war between Russia and Ukraine. The video was produced by the team behind Human Events Daily, a podcast and livestream run by Jack Posobiec, a Right-wing influencer known for spreading conspiracy theories. A tweet about the video from The Patriot Oasis, a conservative account, used a breaking news label without indicating the video was fake. It was viewed over 8 million times.

Many of the video clips featuring synthesised voices appeared to use technology from ElevenLabs, a US startup co-founded by a former Google engineer. In November, the company debuted a speech-cloning tool that can be trained to replicate voices in seconds.

ElevenLabs attracted attention last month after 4chan, a message board known for racist and conspiratorial content, used the tool to share hateful messages. In one example, 4chan users created an audio recording of an antisemitic text using a computer-generated voice that mimicked actor Emma Watson. Motherboard reported earlier on 4chan’s use of the audio technology.

ElevenLabs said on Twitter that it would introduce new safeguards, like limiting voice cloning to paid accounts and providing a new AI-detecting tool. But 4chan users said they would create their own version of the voice-cloning technology using open-source code, posting demos similar to audio produced by ElevenLabs.

Experts who study deep fake technology suggested the fake ad featuring Rogan and Huberman had most likely been created with a voice-cloning program, though the exact tool used was not clear. The audio of Rogan was spliced into a real interview with Huberman discussing testosterone.

Federal regulators have been slow to respond. One federal law from 2019 requested a report on the weaponisation of deep fakes by foreigners, required government agencies to notify Congress if deep fakes targeted elections in the US and created a prize to encourage research on tools that can detect deep fakes.

“We can’t wait two years...” said Ravit Dotan, a researcher who runs the Collaborative AI Responsibility Lab at the University of Pittsburgh, US. “By then, the damage could be too much.”
