Deepfake video of actor Rashmika Mandanna shocks, raises concern over misuse of AI tools

While Rashmika expressed shock at the fake video, Bollywood icon Amitabh Bachchan commented on X: 'Yes this is a strong case for legal'

K.M. Rakesh, Bangalore | Published 08.11.23, 06:05 AM
Rashmika Mandanna. File picture

A viral deepfake video of actor Rashmika Mandanna has sparked serious concern about the misuse of easy-to-use tools available even on smartphones to create mischievous and offensive images.

In the short video clip still circulating on social media platforms, a woman portrayed as Rashmika and clad in a revealing outfit enters an elevator. Except that she isn’t Rashmika. Later on Tuesday, a deepfake still image of actor Katrina Kaif emerged, morphing a popular scene from the trailer of her upcoming film Tiger 3.

While Rashmika expressed shock at the fake video, Bollywood icon Amitabh Bachchan commented on X: “Yes this is a strong case for legal.”

The Centre on Tuesday urged digital media intermediaries to identify misinformation and deepfakes and remove any reported content within 36 hours, or face action under the IT Act.

The original video features British-Indian Zara Patel, who has a following of 4.5 lakh on Instagram. The video shared by Zara, whose profile describes her as a “full-time engineer” and “mental health advocate”, has so far received over 18,000 likes. Unidentified miscreants superimposed Rashmika’s face on Zara’s.

This comes barely five months after a social media user from Kerala amused everyone with a deepfake Godfather sequence that replaced Al Pacino, Alex Rocco and John Cazale with Mohanlal, Mammootty and Fahadh Faasil, respectively.

Rashmika described the video purporting to feature her as “scary” and said she was hurt.

“I feel really hurt to share this and have to talk about the deepfake video of me being spread online. Something like this is honestly, extremely scary not only for me, but also for each one of us who today is vulnerable to so much harm because of how technology is being misused,” the actor said on X on Monday.

“Today, as a woman and as an actor, I am thankful for my family, friends and well-wishers who are my protection and support system. But if this happened to me when I was in school or college, I genuinely can’t imagine how could I ever tackle this. We need to address this as a community and with urgency before more of us are affected by such identity theft,” Rashmika added.

Zara, in her Instagram story, clarified that she had no role in the deepfake video and that she, too, was upset.

“It has come to my attention that someone created a deepfake video using my body and a popular Bollywood actresses face. I had no involvement with the deepfake video, and I’m deeply disturbed and upset by what is happening,” Zara said.

“I worry about the future of women and girls who now have to fear even more about putting themselves on social media. Please take a step back and fact-check what you see on the internet. Not everything on the internet is real,” she cautioned.

Shweta Mohandas, an AI researcher at the Centre for Internet and Society, a Bangalore-based research organisation, said such AI-based tools are proliferating rapidly. “This technology is getting simpler and faster. That is the real danger. It can be used for gender-based violence, intimate partner violence. That is extremely worrying,” she told The Telegraph.

“It’s easy to create such deepfake videos of anyone whose photographs are easily accessible on social media. And it is becoming really easy since these applications do much of the heavy-lifting unlike in the past when one needed to be skilful in handling software like Photoshop to even do anything remotely so,” she pointed out.

While Shweta doesn’t think many safeguards can be put in place to stop those misusing this kind of technology, she said existing civil and criminal laws already provide for punitive action. “Existing laws can handle these offences to a large extent. Criminal and civil laws can be used to penalise those who misuse these technologies rather than wait for new legislation.”

South filmmaker Pawan Kumar said the deepfake video was scary. “It is very scary. I was discussing yesterday. In a year or two we won’t know what is real anymore,” he told this newspaper.

“That Godfather deepfake was artistic. But this is completely different,” he said, expressing fear at the scope of misuse.

Kumar doubted whether the spread of the technology could be controlled. “Is it really possible to regulate this technology? I think this is something that we should understand how to deal with, although there are also positive elements in the technology.”

Trinamul Congress Rajya Sabha member Saket Gokhale on Tuesday wrote to minister of state for electronics and IT Rajeev Chandrasekhar mentioning that the “proliferation of deepfake video technology can have serious consequences which include shaming & targeting of women, implicating innocent people into videos of crime, fake targeting of individuals on social media, as well as manipulation during elections & in domestic politics”.

Gokhale sought to know the steps being taken by the Union government to deal with the “danger and menace of deepfake videos and AI-generated fake images” that could even harm the sovereignty and integrity of the country and had the potential to cause law and order problems.

In response to an X user’s message seeking a legal and regulatory framework to deal with deepfakes, Chandrasekhar said the “government is committed to ensuring Safety and Trust of all DigitalNagriks using Internet”.

“Under the IT rules notified in April, 2023 — it is a legal obligation for platforms to ensure no misinformation is posted by any user AND ensure that when reported by any user or govt, misinformation is removed in 36 hrs.”

“If platforms do not comply with this, rule 7 will apply and platforms can be taken to court by aggrieved person under provisions of IPC. Deep fakes are latest and even more dangerous and damaging form of misinformation and needs to be dealt with by platforms.”
