
How to spot a liar

As artificial intelligence systems improve, it will be easier to doctor images, videos and social interactions

Cade Metz and NYTNS | Published 21.04.19, 10:09 AM

Fake news, Deepfakes and machine learning. iStock

During the summer before the 2016 presidential election, John Seymour and Philip Tully, two researchers with ZeroFOX, a security company in Baltimore, US, unveiled a new kind of Twitter bot. By analysing patterns of activity on the social network, the bot learned to fool users into clicking on links in tweets that led to potentially hazardous sites.

The bot, called SNAP_R, was an automated “phishing” system, capable of homing in on the whims of specific individuals and coaxing them toward that moment when they would inadvertently download spyware onto their machines. “Archaeologists believe they’ve found the tomb of Alexander the Great is in the US for the first time: goo.gl/KjdQYT,” the bot tweeted at one unsuspecting user.


Even with the odd grammatical misstep, SNAP_R succeeded in eliciting a click as often as 66 per cent of the time, on par with human hackers who craft phishing messages by hand.

The bot was harmless, merely a proof of concept. But in the wake of the election and the wave of concern over political hacking, fake news and the dark side of social networking, it illustrated why the landscape of fakery will only darken further. The two researchers built what is called a neural network, a complex mathematical system that can learn tasks by analysing vast amounts of data. A neural network can learn to recognise a dog by gleaning patterns from thousands of dog photos. It can learn to identify spoken words by sifting through old tech-support calls.

And, as the two researchers showed, a neural network can learn to write phishing messages by inspecting tweets, Reddit posts and previous online hacks.
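
To make the idea concrete, here is a hypothetical sketch, not the ZeroFOX code, of a tiny neural network learning to flag phishing-style messages from a handful of labelled examples. The messages, labels and library choices below are illustrative assumptions only.

```python
# A minimal sketch (not the ZeroFOX system): a small neural network learns to
# flag phishing-style messages from labelled examples. Assumes scikit-learn is
# installed; the example data is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier

# Toy training data (hypothetical): 1 = phishing-style, 0 = benign.
messages = [
    "Click here to claim your prize: bit.ly/xxxx",
    "Your account is locked, verify now at this link",
    "Archaeologists found an amazing tomb, see photos: goo.gl/xxxx",
    "Lunch at noon tomorrow?",
    "Here are the meeting notes from today",
    "Happy birthday! Hope you have a great day",
]
labels = [1, 1, 1, 0, 0, 0]

# Turn each message into a vector of word counts: the "patterns" the network learns from.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)

# A small neural network (one hidden layer) trained on those examples.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(features, labels)

# Score a new, unseen message.
test = vectorizer.transform(["Verify your password here: goo.gl/yyyy"])
print(model.predict(test))  # [1] -> flagged as phishing-style
```

A real system would train on far more data, but the loop is the same: show the network examples, let it find the patterns, then point it at new text.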

Today, the same mathematical technique is infusing machines with a wide range of humanlike powers, from speech recognition to language translation. In many cases, this new breed of artificial intelligence (AI) is also an ideal means of deceiving large numbers of people over the Internet. Mass manipulation is about to get a lot easier.

“It would be very surprising if things don’t go this way,” said Shahar Avin, a researcher at the Centre for the Study of Existential Risk at the University of Cambridge, UK. “All the trends point in that direction.”

Many technology observers have expressed concern about the rise of AI that generates Deepfakes — fake images that look like the real thing. What began as a way of putting anyone’s head onto the shoulders of a porn star has evolved into a tool for seamlessly putting any image or audio into any video.

In April 2018, BuzzFeed and comedian Jordan Peele released a video that put words, including “We need to be more vigilant with what we trust from the Internet,” into the mouth of former US President Barack Obama.

The threat will only expand as researchers develop systems that can metabolise and learn from increasingly large collections of data. Neural networks can generate believable sounds as well as images. This is what enables digital assistants such as Apple’s Siri to sound more human than they did in years past.

Google has built a system called Duplex that can phone a local restaurant, make reservations, and fool the person on the other end of the line into thinking the caller is a real person. The service is expected to reach smartphones before the end of the year.

Experts have long had the power to doctor audio and video. But as these AI systems improve, it will become easier and cheaper for anyone to generate items of digital content — images, videos, social interactions — that look and sound like the real thing.

Inspired by the culture of academia, the top AI labs and even giant public companies such as Google openly publish their research and, in many cases, their software code.

With these techniques, machines are also learning to read and write. For years, experts questioned whether neural networks could crack the code of natural language. But the tide has shifted in recent months.

Organisations such as Google and OpenAI, an independent lab in San Francisco, have built systems that learn the vagaries of language at the broadest scales — analysing everything from Wikipedia articles to self-published romance novels — before applying the knowledge to specific tasks. The systems can read a paragraph and answer questions about it. They can judge whether a movie review is positive or negative.
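
Neither Google’s nor OpenAI’s internal systems are public in full, but a rough flavour of these tasks can be had from openly available pretrained models. The sketch below assumes the Hugging Face “transformers” library and uses invented example text; it is an illustration, not the systems described above.

```python
# A minimal sketch using off-the-shelf pretrained language models, not the
# specific Google or OpenAI systems in the article. Assumes the Hugging Face
# "transformers" library is installed; the example text is invented.
from transformers import pipeline

# Judge whether a movie review is positive or negative.
sentiment = pipeline("sentiment-analysis")
print(sentiment("The plot dragged, but the performances were extraordinary."))

# Read a short paragraph and answer a question about it.
qa = pipeline("question-answering")
print(qa(question="Where is ZeroFOX based?",
         context="ZeroFOX is a security company in Baltimore that studies social-media threats."))
```

Both models were pretrained on large collections of text and are merely pointed at a specific task afterwards, which is the pattern the article describes.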

This technology could improve phishing bots such as SNAP_R. Today, most Twitter bots seem like bots, especially when you start replying to them. In the future, they will respond in kind.

The technology also could lead to the creation of voice bots that can carry on a decent conversation — and, no doubt one day, call and persuade you to divulge your credit card information.

These new language systems are driven by a new wave of computing power. Google engineers have designed computer chips specifically for training neural networks. Other companies are building similar chips, and as these arrive, they will accelerate AI research even further.

Jack Clark, head of policy at OpenAI, can see a not-too-distant future in which governments create machine-learning systems that attempt to radicalise populations in other countries, or force views onto their own people.

“This is a new kind of societal control or propaganda,” he said. “Governments can start to create campaigns that target individuals, but at the same time operate across many people in parallel, with a larger objective.”

Ideally, artificial intelligence could also provide ways of identifying and stopping this kind of mass manipulation. Mark Zuckerberg of Facebook likes to talk about the possibilities. But for the foreseeable future, we face a machine-learning arms race.

Consider generative adversarial networks, or GANs. A GAN is a pair of neural networks that can automatically generate convincing images or manipulate existing ones.

They do this by playing a kind of cat-and-mouse game: the first network makes millions of tiny changes to an image — snow gets added to summery street scenes, grizzlies transform into pandas, fake faces look so convincing that viewers mistake them for celebrities — in an effort to fool the second network.

The second network does its best not to be fooled. As the pair battle, the image only gets more convincing — the AI trying to detect fakery always loses.
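
Stripped of the imagery, the cat-and-mouse game looks like this: one network generates, the other judges, and each trains against the other. The following is a minimal illustrative sketch on toy one-dimensional data, assuming PyTorch; it is not a Deepfake system, just the bare adversarial loop, with network sizes and the target distribution chosen purely for illustration.

```python
# A minimal sketch of the generator-versus-discriminator game on toy 1-D data,
# not an image Deepfake system. Assumes PyTorch is installed; sizes and the
# target distribution are illustrative choices.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data the generator tries to imitate: samples from a normal distribution.
def real_batch(n=64):
    return torch.randn(n, 1) * 1.5 + 4.0  # mean 4, std 1.5

# Generator: turns random noise into fake samples.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: tries to tell real samples from fakes.
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(3000):
    # Train the discriminator: label real data 1, fake data 0.
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + loss_fn(D(fake), torch.zeros(64, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Train the generator: try to make the discriminator call its fakes real.
    fake = G(torch.randn(64, 8))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

# After training, the generator's samples should cluster near the real mean (about 4).
print(G(torch.randn(1000, 8)).mean().item())
```

Swap the one-dimensional numbers for images and the same loop produces the fabricated faces and altered scenes the article describes.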

Detecting fake news is even harder. Humans can barely agree on what counts as fake news; how can we expect a machine to do so? And if it could, would we want it to?

Perhaps the only way to stop misinformation is to somehow teach people to view what they see online with extreme distrust. But that may be the hardest fix of them all.

“We can deploy technology that patches our computer systems,” Avin said. “But we cannot deploy patches to people’s heads.”

And therein lies the problem.
