“That woman looks gorgeous in a pink dress. I will check if it’s there in my size. Maybe I will even look for a similar shade of lipstick.” Sound familiar? For many, scrolling endlessly through shopping websites is a pastime. And at times, what helps customers make up their minds is the model who appears on the screen.
Here’s the deal: the models you see on some websites — and we are not naming names — are not real. No, they don’t exist, except in the AI world. Several artificial intelligence start-ups are selling computer-generated images of faces, which lets small companies skip hiring models for photoshoots altogether.
In fact, the level of realism is so high that 99 per cent of people won’t think something weird is going on. The technology is helping many companies fill their databases quickly while keeping costs low.
For example, take a look at Generated.photos, which offers “unique, worry-free model photos”. You can narrow your search of the massive database by “head pose”, “eye colour”, “hair length”, “emotion”, “sex” and “age”, to name a few.
AI-generated images are not illegal, but how they are used matters. Batting on behalf of such start-ups, one may argue that a background check on the “real” models featured on certain websites could turn up something dubious and embarrassing for the site. AI-generated photos also allow for more diversity.
The pictures generated by these AI start-ups are created by programs trained on photographs of real people. The computer programme needs something to fall back on to generate anything at all, right? In other words, it may unknowingly produce faces that bear some likeness to people in the real world.
Another company working in this space is Rosebud.AI, which can even make some of these AI-generated faces talk. You must have come across this on some website or the other. The speech is still not quite up to the mark, but the faces are. For a company on a tight budget, access to such technology matters.
In case you want to see a stream of fake photos, visit thispersondoesnotexist.com, or its feline counterpart thiscatdoesnotexist.com, built on a companion AI system trained on images of cats. Keep hitting refresh and the websites will throw up new images.
All this brings us to the big downside or, some would say, flipside — deepfakes. According to Wired, the term deepfakes comes from the Reddit username of the person or persons who in 2017 released a series of pornographic clips modified using machine learning to include the faces of Hollywood actresses. Deepfakes are improving: for some, they are just a fun tool, while for others they can be a way to spread disinformation. Since the technology to identify key facial features is constantly improving, the quality of deepfakes is also on the rise.
Face recognition has, in fact, come a long way. The photo app on your phone can easily find pictures from parties, or those of your child, without you having to go through the entire collection. So be careful about the apps you use, as a lot of personal information is at stake.
Recently, The New York Times carried a must-read piece on a company called Clearview AI, which “scraped the web of billions of public photos — casually shared online by everyday users — to create an app capable of recognising a stranger from just one photo. The technology promises superpowers: the ability to organise and process the world in a way that wasn’t possible before.” The little-known start-up helps law enforcement match photos of unknown people to their online images — and “might lead to a dystopian future or something”, NYT has reported.
Where do we stand? Agreed, artificial intelligence can make our lives better, but there is a price to be paid if we are not careful.