The H Factor: New book on AI that deals with human intelligence in the era of new technology

In Literary Theory for Robots, Dennis Yi Tenen’s playful new book on artificial intelligence and how computers learned to write, one of his most potent examples arrives in the form of a tiny mistake

Jennifer Szalai Published 19.02.24, 07:29 AM
Somnath Bhatt/NYTNS

In Literary Theory for Robots, Dennis Yi Tenen’s playful new book on artificial intelligence and how computers learned to write, one of his most potent examples arrives in the form of a tiny mistake.

Tenen draws links among modern-day chatbots, pulp-fiction plot generators, old-fashioned dictionaries and medieval prophecy wheels. Both the utopians (the robots will save us!) and the doomsayers (the robots will destroy us!) have it wrong, he argues. There will always be an irreducibly human aspect to language and learning — a crucial core of meaning that emerges not just from syntax but from experience. Without it, you just get the chatter of parrots, who, “according to Descartes in his Mediations, merely repeated without understanding”, Tenen writes.

But Descartes didn’t write Mediations; Tenen must have meant Meditations — the missing “t” will slip past any spell-checker programme because both words are perfectly legitimate. (The book’s index lists the title correctly.) This minuscule typo doesn’t have any bearing on Tenen’s argument; if anything, it bolsters the case he wants to make. Machines are becoming stronger and smarter, but we still decide what is meaningful. A human wrote this book. And, despite the robots in the title, it is meant for other humans to read.

Tenen, now a professor of English and comparative literature at Columbia, US, used to be a software engineer at Microsoft. He puts his disparate skill sets to use in a book that is surprising, funny and resolutely unintimidating, even as he smuggles in big questions about art, intelligence, technology and the future of labour. I suspect that the book’s small size — it’s under 160 pages — is part of the point. People are not indefatigable machines, relentlessly ingesting enormous volumes on enormous subjects. Tenen has figured out how to present a web of complex ideas at a human scale.

To that end, he tells stories, starting with 14th-century Arab scholar Ibn Khaldun, who chronicled the use of the prophecy wheel, and ending with a chapter on 20th-century Russian mathematician Andrey Markov, whose probability analysis of letter sequences in Alexander Pushkin's Eugene Onegin constituted a fundamental building block of generative AI. (Regular players of the game Wordle intuit such probabilities all the time.) Tenen writes knowledgeably about the technological roadblocks that stymied earlier models of computer learning, before "the brute force required to process most everything published in the English language" was so readily available. He urges us to be alert. He also urges us not to panic.
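To make the Markov idea concrete, here is a minimal illustrative sketch (mine, not Tenen's) of a character-level Markov chain in Python. It counts which letters tend to follow which in a sample string, then generates new text by sampling from those counts: the same statistical intuition that, scaled up enormously, became a building block of generative AI.

    from collections import defaultdict, Counter
    import random

    def build_letter_model(text):
        # For each letter, count which letters follow it in the text.
        counts = defaultdict(Counter)
        for current, following in zip(text, text[1:]):
            counts[current][following] += 1
        return counts

    def generate(model, start, length=40):
        # Walk the chain, choosing each next letter in proportion to
        # how often it followed the previous one in the sample.
        out = [start]
        for _ in range(length):
            followers = model.get(out[-1])
            if not followers:
                break
            letters, weights = zip(*followers.items())
            out.append(random.choices(letters, weights=weights)[0])
        return "".join(out)

    sample = "the rain in spain stays mainly in the plain"
    model = build_letter_model(sample)
    print(generate(model, "t"))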

“Intelligence evolves on a spectrum, ranging from ‘partial assistance’ to ‘full automation’,” Tenen writes, offering the example of an automatic transmission in a car. Driving an automatic in the 1960s must have been mind-blowing for people used to manual transmissions. An automatic worked by automating key decisions, downshifting on hills and sending less power to the wheels in bad weather. It removed the option to stall or grind your gears. It was “artificially intelligent”, even if nobody used those words for it. American drivers now take its magic for granted. It has been demystified.

As for the current debates over AI, this book tries to demystify those, too. Instead of talking about AI as if it has a mind of its own, Tenen talks about the collaborative work that went into building it. “We employ a cognitive-linguistic shortcut by condensing and ascribing agency to the technology itself,” he writes. “It’s easier to say, ‘The phone completes my messages’ instead of ‘The engineering team behind the autocompletion tool writing software based on the following dozen research papers completes my messages.’”

Tenen doesn’t deny that AI threatens much of what we call “knowledge work”. Nor does he deny that automating something also devalues it. But he also puts this another way: “Automation reduces barriers of entry, increasing the supply of goods for all.” Learning is cheaper now, and so having a big vocabulary or repertoire of memorised facts is no longer the competitive advantage it once was. “Today’s scribes and scholars can challenge themselves with more creative tasks,” he suggests. “Tasks that are tedious have been outsourced to the machines.”

I take his point, even if this prospect still seems bad to me, with an ever-shrinking sliver of the populace getting to do challenging, creative work while a once-flourishing ecosystem collapses. But Tenen also argues that we, as social beings, have agency, if only we allow ourselves to accept the responsibility that comes with it. “Individual AIs do pose a real danger, given the ability to aggregate power in the pursuit of a goal,” he concedes. But the real danger comes “from our inability to hold technology makers responsible for their actions.” What if someone wanted to strap a jet engine to a car and see how it fared on the streets of a crowded city? Tenen says the answer is obvious: “Don’t do that.”

Why “Don’t do that” can seem easy in one realm but not another requires more thinking, more precision, more scrutiny — all qualities that fall by the wayside when we cower before AI, treating the technology like a singular god instead of a multiplicity of machines built by a multiplicity of humans. Tenen leads by example, bringing his human intelligence to bear on artificial intelligence. By thinking through our collective habits of thought, he offers a meditation all his own.

NYTNS
