First Rashmika Mandanna and then Katrina Kaif. AI-generated morphed videos of the two actors created a flutter last week, highlighting the urgent need to stop the misuse of deepfake technology and prompting calls for better ways to identify it. While Delhi Police on Friday registered an FIR against unidentified people in connection with the deepfake video of Mandanna, Amitabh Bachchan and Union minister Rajeev Chandrasekhar, among others, have expressed concern. The government stepped in with an advisory to major social media companies to identify misinformation, deepfakes and other content that violates their rules and to remove it within 36 hours of being reported.
The debate, set against the backdrop of the Israel-Hamas conflict, which has seen a surge in the use of deepfake videos to spread disinformation and manipulate public opinion, has also raised many questions.
What are deepfake videos?
Deepfake videos are synthetic media in which a person in an existing image or video is replaced with someone else's likeness. While the technology has been around for several years, it has become increasingly sophisticated and accessible recently, raising concerns about its potential misuse.
What can we do?
One way to combat the spread of deepfakes is to educate the public about the technology and how to identify fakes.
"Far more than technical expertise or abilities, there is a mindset we need to encourage. People need to be aware that the creation of fakes is rampant and becoming easier all the time.,” Eoghan Sweeney, an open-source investigation (OSINT) specialist and trainer, told PTI.
“That is why, in a fraught atmosphere such as exists around a scenario like the current one, it's crucial to be aware that a huge amount of the information and content that finds its way to your attention is inauthentic," he added.
Some tips:
Several tools and techniques can be used to detect deepfakes, such as looking for inconsistencies in facial expressions, skin texture, and lighting. However, deepfakes are becoming increasingly sophisticated, making it more difficult to spot them.
Look out for signs that photos and videos being shared on social media could be AI-generated fakes.
- AI-generated text can sometimes be grammatically incorrect or have odd phrasing. This is because AI systems are trained on large datasets of text, which may not always contain perfect grammar or natural language usage.
- AI-generated text can sometimes go off on tangents or introduce new information irrelevant to the main topic. This is because AI systems may not always understand the context of the text they generate.
- AI-generated photos and videos can sometimes have peculiar lighting, facial gestures, or backgrounds. This is because AI systems may not always render realistic images and videos accurately.
- AI-generated videos are often created by stitching together different clips, so there may be inconsistencies in the lighting, shadows, or background. The subject's skin tone may change from one shot to the next, or the shadows may be in different directions.
- AI-generated videos can have difficulty accurately rendering human movements, so there may be odd or unnatural movements in the video. The face of the person or people in the video may contort, or their limbs may move strangely.
- AI-generated videos are often low quality, especially if created using a free or low-cost AI video generator. Look for pixelation, blurring, or other video artifacts.
- AI video generation technology is constantly improving, so it's important to know the latest techniques. You can do this by reading articles and blogs about AI video generation or by following experts on social media.
Identify the source of the information: Where did the disinformation come from? Who posted it? What are their credentials? Then verify the information: check the facts and see whether there is any evidence to support the claim. If you cannot find any evidence to support it, the claim is likely false.
Once you have established that the claim is false, clearly and concisely explain why it is incorrect, and provide evidence to support your position. It is necessary to break the chain of disinformation at your end.
"The way that social media algorithms and human psychology work, it isn't likely to be the most credible material that forces its way most readily into your eye line, but rather that which attracts the most attention, often because it is dramatic and outrage-inducing. Needless to say, dedicated purveyors of disinformation are practised in techniques that elicit such reactions. So in terms of winning the battle for minds and eyeballs, the deck is stacked," Sweeney told PTI.
The Berlin-based OSINT specialist suggested that people step back and ask themselves some questions before trying to evaluate photographs and videos forensically.
* Why might it be that this is being shared right now?
* What is the response it is trying to provoke in me and others?
* How susceptible am I to taking it on board, given my existing sympathies, and how does it play on those?
(You can contact PTI Fact Check on WhatsApp number +91-8130503759 for any claim or social media post that needs to be fact-checked or verified.)