Saturday, 25 January 2025

‘As a woman, it’s scary’: With easily available AI apps, anyone can now kiss you or worse

Cybercrime experts and activists against online harassment express shock over new artificial intelligence-aided apps that can generate explicit videos and images from mere photographs

Samrat Sardar Published 24.01.25, 12:59 PM
AI generated videos have emerged on social media lately, including of Elon Musk and Italian Prime Minister Giorgia Meloni

Indian Prime Minister Narendra Modi kisses Italian PM Giorgia Meloni, Amitabh Bachchan kisses his daughter-in-law Aishwarya Rai, Elon Musk kisses Meloni. Have you come across such videos on social media lately?

That is because morphing and generation of fake intimate images/videos are now child’s play with applications like Filmora, AI Video, PixVerse and AI Vidu. 

With access to nothing more than your photos, a stranger can now create videos of themselves being physically intimate with you in any way they want.

All these apps are available on both Google Play (Android) and the Apple App Store (iOS), and they have already been downloaded by lakhs of people amid a rise in technology-facilitated cyberviolence, especially against women.

Besides numerous advertisements by the developers, there are even tutorials on YouTube on how to use these applications. 

And the number of such apps is growing. 

Dr Dhanya Menon, known as India’s first female cybercrime investigator, said that such videos have already been used for blackmail and extortion. 

Such AI tools intensify technology-facilitated gender-based violence against women, said Dr Debarati Halder, a professor of law at Parul University.

“They also facilitate gratification for revenge porn that stands without any legal protection in India,” said Halder, who is also the founder and honorary managing director of the Centre for Cyber Victim Counselling.

An AI-generated image of Pope Francis that surfaced on social media recently. The Pope has called upon the leaders at the World Economic Forum in Davos to keep close oversight of the development of AI

Deepfakes and illegal porn 

An app like AI Video by HubX unabashedly promotes its sales on the promise that it will fulfil your wish to “kiss your celebrity crush”. 

“Just upload a photo of yourself and your favourite star, and watch as AI Video makes it happen. The passion is real,” proclaims the advertisement for the app on YouTube. 

Menon said she was worried that children and young adults might misuse such tools “to find themselves on the criminal side rather than on the victim side”.

Halder said such apps definitely enhanced “the illegal business of generating non-consensual sexually explicit images for adult entertainment”. 

Halder added: “But here comes the legal twist. It now goes to the porn sites – even if they are hosted outside India – to take down the contents after the victim or a public-spirited person has reported the matter as violating privacy rights and creating illegal porn content.” 

Halder, who is also a cybercrime victim counsellor, recalled cases in which some porn websites refused to take down reported content without legal documents like a court order. 

“This further escalates the victimisation of women,” she said. Apart from shock and lifelong trauma, the woman is forced to endure social backlash, victim-blaming and, in some cases, “ignorance by the police, especially if the victim is coming from a semi urban-rural background,” Halder said. 


Impact on criminal cases

With the availability of these apps, can a real crime with evidence – maybe sexual abuse that was filmed – be argued to be fake when reported? 

“Sadly, such issues are also rampant. I'm sure this will also happen just like terminologies like morphing and hacking have been and continue to be misused,” said Menon.

Halder said trial court judges were being trained to consider cases involving online sexual abuse content very carefully. She said the prosecution must prove whether it was “a genuine case” or an “AI-generated case”.

“In both cases, the court can hold the perpetrator – either real life or the one who has used AI for creating such illegal sexually explicit content – responsible,” Halder said. 

Halder also said that defendants had “the right to defend their contribution to the causation of crime”, and that cyber forensic tests could be admissible.

Responsibility of the government

Although the threats these technologies pose to society are manifold, a complete ban on them also raises questions about the right to freedom of speech. Meta’s recent decision – after Donald Trump’s election as US President – to stop fact-checking content adds to the complexity.

Asked about steps the government should take, Ria Banerjee, a faculty member at Prafulla Chandra College whose area of interest is gender studies, demanded “digital awareness” and spaces that allowed victims to “speak up without the fear of stigmatization”. She wanted such apps banned.

Menon felt that “a complete denial is never a solution”. 

Halder said the government can only impose a ban on these apps if they fail the “clear and present danger test” that determines when freedom of speech should be curtailed.

If these applications do not comply with the IT Rules, 2021 on grievance redress and are deficient in “monitoring policy for misuse of the apps”, they can be banned, she added.

Referring to the TikTok controversy in the US – where it was banned amid concerns over user data collection and Chinese government influence, but now Trump is trying to revive the app –  Halder said: “We also have to see if the apps are processing data of the users and can share the same with the government.”

Menon said the government of India has “already started working on this domain”. 

But even if the government goes ahead with an immediate ban on such apps, Halder feared that the people who misuse them “may still be at large to use other apps/platforms to continue their harassment”.

Union minister for electronics & information technology Ashwini Vaishnaw addresses a news conference after meeting social media platforms on the deepfake issue, in New Delhi, November 23, 2023

Legal help: What to do if you are victimised

Halder said that women must not be cowed down. 

All women and girls subjected to technology-facilitated abuse must be encouraged to report such offences to the police and the courts, she said.

“Look for the reporting mechanism of the intermediary. If the intermediary is not replying to your report within 24 hours, approach the local police and lodge a complaint not only on the issue of image violation, but also on the responsibility of the intermediary concerned,” Halder advised.

One should also immediately ask their trusted friends to report the images that “may surface in unknown/unauthorised” platforms. A “pool of reports” will “enable the intermediaries to take action and prevent further circulation” of the content reported, she said.

“It has been made mandatory for all tech companies, social media companies and intermediaries to follow the Information Technology (Intermediary guidelines and digital media ethics) Code, 2021,” she pointed out.

Apart from Chapters IX and XI of the IT Act, 2000, a victim can also look to Section 79 of the Bharatiya Nyaya Sanhita, as it addresses offences related to word, gesture or act intended to insult the modesty of a woman, Halder pointed out.

A victim of unauthorised image-based violence/harassment may also seek legal help under the following provisions:

Under the IT Act, 2000 (amended in 2008): Sections 43, 65 and 66 (offences against computers, computer systems and unauthorised access to computers and data), Section 66C (identity theft), Section 66D (cheating by personation using a computer resource) and Section 67A (publishing or transmitting sexually explicit content).

Under the Bharatiya Nyaya Sanhita: Section 75 (sexual harassment), Section 77 (voyeurism) and Section 79 (insulting the modesty of a woman by word, gesture or act). The Indecent Representation of Women (Prohibition) Act may also apply. For underage victims, one should look to the POCSO Act.

Menon advised that the crime should also be reported on cybercrime.gov.in.


‘As a woman, it’s scary’

According to UN data from last year, 16 to 58 per cent of women and girls across the world have faced online gender-based violence.

For their 2019 collaborative research paper ‘Born Digital, Born Free? A Socio-Legal Study on Young Women’s Experiences of Online Violence in South India’, Anita Gurumurthy, Amrita Vasudevan and Sarada Mahesh surveyed 881 college-going women aged 19-23 years in six cities and small towns in Karnataka, Kerala and Tamil Nadu.

They found that 83 per cent of those women who had faced online harassment experienced sexual harassment “such as abusers manipulating their images to appear sexual, sharing their sexual images without consent and making relentless unwanted requests for sexual contact”. 

In the Union Budget of 2024-25, the Narendra Modi government allocated Rs 1,550 crore to boost AI research and tackle cybercrimes. Amid such government initiatives, features like Filmora AI Kiss and AI Video emerge as new challenges. 

When the capabilities of the new apps were pointed out, Halder said: “Even if you are using intermediary-provided safety guidelines, your images and photo data are not safe. This indeed creates a sense of insecurity and vulnerability in the minds of women and girls.”

Banerjee also found the new apps “alarming and intimidating”. 

“This can aggravate obnoxious forms of harassment by coercing the victim to comply and succumb to the perpetrator's agenda,” Banerjee said.

Menon said: “As a woman, it's scary.” 
