It will disappoint fans of The Terminator, but the AI revolution is coming not in the form of killer robots or dystopian autocracies, but of chatbots. We were told it would mean the apocalypse. So far it looks a lot like customer service, albeit much better than usual.
The latest revolution in public-facing artificial intelligence is ChatGPT, a piece of software designed by OpenAI, a California-based research company. GPT is short for Generative Pretrained Transformer. In the simplest terms, it works by scouring its dataset, which is most of what is written on the Internet, finding the answers that best fit a given prompt, and rendering them in clear, if wooden, English. It’s a bit like the autocomplete function on your phone or email, except on a much grander scale.
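To make the autocomplete analogy concrete, here is a toy sketch in Python of statistical next-word prediction: count which word tends to follow which in a small sample of text, then extend a prompt by repeatedly picking the likeliest continuation. This is an illustration only, and nothing like OpenAI’s actual code; a real model such as GPT learns vastly richer patterns across billions of words, but the underlying task of predicting what comes next is the same.

```python
# Toy illustration of next-word prediction: count which word follows which
# in a tiny corpus, then "autocomplete" a prompt by repeatedly choosing the
# most frequent continuation seen so far.
from collections import Counter, defaultdict

corpus = (
    "the daily telegraph reported the story . "
    "the daily telegraph published a limerick . "
    "the reporters wrote the story quickly ."
).split()

# Build bigram counts: for each word, how often each next word follows it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def complete(prompt: str, length: int = 5) -> str:
    """Extend the prompt by repeatedly picking the most likely next word."""
    words = prompt.split()
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:            # no continuation seen in the corpus
            break
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

print(complete("the daily"))      # autocompletes the prompt word by word
```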
The latest capabilities are mind-boggling. You type in a request and it generates a response. It can write scripts, poems, even newspaper articles. As a test, I asked it for a limerick about The Daily Telegraph:

There once was a newspaper called The Daily Telegraph
Whose reporters were known for their cheek
They’d write about royal affairs
With a very particular flair
But their puns were often quite weak.
What it lacks in scansion, it makes up for in insolence. It isn’t only journalists and copywriters at risk of being made redundant. Even more impressively, ChatGPT has been able to write credible computer code. It has been suggested that the software could soon replace Google.
Recently, Paul Buchheit, the engineer who created Google’s email service, Gmail, tweeted that Google might only be a year or two away from “total disruption”. Where Google simply provides search results, GPT can turn out a comprehensive answer.
"A lot of the creative industries have been caught flat-footed by this,” says Dr Daniel Susskind, a professor at Oxford and author of a book, The Future of the Professions, about the effect AI will have on employment. “They think there’s something special about a faculty like creativity, but it turns out these systems can solve problems that might require creativity from us, but do it in different ways. It’s a cautionary tale for many people who think what they do is too complex or subtle for these systems to do.”
While ChatGPT is impressive to the layman, he says, it is not a surprise to those studying the field. Automated writing software has been used in certain kinds of journalism for years.
The software is not infallible. While its doggerel is impressive, the poetry lacks heart. It sometimes comes up with outright nonsense. As it is trained on data only going up to last year, it is not up to date on current affairs. Patterns quickly emerge. Asking it similar questions will produce formulaic responses. It has been likened to a confident 11-year-old winging its answers without real understanding. But as Susskind observes, this is just the start.
“It’s impressive, but it raises a lot of ethical concerns,” says Carissa Veliz, a professor of ethics and philosophy at the University of Oxford. School essays, for example, may need a complete rethink when any student can generate a passable essay for free in seconds. Perhaps we will see a return to handwriting, or exam conditions.
“You can ask it to create a conspiracy theory about Covid, say, and it can do it quite well,” Veliz adds. “It makes it cheaper and easier to spread fake news. It’s inherently deceptive in its design. It’s designed to sound like a thinking being but it’s simply statistical inference. It doesn’t have any understanding. It just mimics discourse.”
It may even be that this technology shouldn’t be made available to the public, she says.
ChatGPT is only the most high-profile of a wave of impressive AI developments in recent weeks. For a small fee, you can spruce up your photos with Lensa. Cicero, a bot developed by Meta to play the classic board game Diplomacy, finished in the top 10 per cent of an online competition. Diplomacy is a step on from chess or Go, because the key gameplay dynamic is that you must make deals with your fellow players and then betray them. In the chat boxes, the bots were happily plotting with, and reneging on, their human counterparts. Nobody suspected they weren’t playing against a human.
Gary Marcus, an AI entrepreneur and writer, is more worried by another Meta programme, Galactica, which writes scientific papers. “Galactica makes the publication of misinformation really easy,” he says. “It is indifferent to the truth. We should really worry about having bots that can generate really plausible information that is hard to distinguish from reality.” The next generation of GPT will be even better. “I’m concerned that GPT-4 will make the cost of misinformation basically zero, and really difficult to detect. That’s a real problem. The genie is out of the bottle.”
THE DAILY TELEGRAPH