Many experts who have been deeply involved in developing the related technologies of Artificial Intelligence, machine learning and robotics are beginning to warn us about possible consequences that could have disruptive effects on social living. The warnings have ranged from calls for adequate regulatory safeguards to alarms about civilisational and even existential threats to humanity. Side by side, there are the champions of AI who claim that it will usher in the newest revolution in enabling technologies, one that will improve human well-being beyond imagination. There are many layers in this debate. On the positive side, there is little doubt that AI would bring many benefits in the areas of health and medicine, education, public policy, defence and business strategy. These benefits are well known and widely publicised. On the negative side, serious questions are being raised about disruptions in labour markets. Many jobs will be lost over the next few years. New jobs will also be created over time, though most analysts feel that the net aggregate effect is likely to be negative. The existential threat of a takeover of the world by inorganic beings may not be impossible, but it does not appear imminent, at least not in the next couple of decades. Machines are still not sentient beings, and their motor skills in navigating the terrain of the planet remain limited. Yet these two features are essential if machines are indeed to take over the human world.
What, then, is the chief worry keeping awake the experts who wish to warn us about the imminent dangers of AI? The primary worry stems from two aspects of the emerging technology that are already visible: the ability of machines to learn by themselves, beyond what is ‘taught’ or ‘programmed’ into them, and their mastery of human language. Indeed, these are exactly the two features that set Homo sapiens apart from other species — intelligence and communication. The extent of these machines’ capacity to learn by themselves is not yet fully known, even to the people who develop them. Language is critically important in influencing our thoughts, feelings and actions. We use words, images and sounds to communicate. Everywhere, from hospitals to markets, from religious rituals to the uttering of sweet nothings, we use words to communicate and learn. In the beginning, they say, there was the Word.
Once language is mastered, a machine can, even without being sentient, influence human thoughts and actions. Machines can already create text, images and music, write code, and manipulate language with astonishing speed. This package of current capabilities implies, as one historian has suggested, a hacking of the human operating system. Hence, AI can communicate with human beings and develop a deep and intimate relationship with them. Can it cause harm in any way? It depends on a number of things. An example from AI in daily life might help one understand the possibilities. At the moment, when one uses the internet, say YouTube, AI gauges what one looks at most frequently. Suppose one is looking at pictures of pretty women. The AI will then try to bring to the viewer’s notice more and more pictures of pretty women. However, YouTube cannot create new content. It merely curates, efficiently, what is already there and digs it out. The latest AI can create new content. It might, after a point, start showing pictures of prettier and prettier women who are ‘deep fakes’ created by the alien intelligence. The viewer will not be able to distinguish between real and fake. Reality and illusion merge into an indistinguishable blur.
Consider the people who develop and use AI. They can use it to do mischief. A person who dislikes me and is vindictive might send me a false message that is extremely credible. I act on the fake news only to find myself embarrassed or in danger. A business house may push AI to maximise its profits. The ‘marketing efforts’ of that AI will use deep fakes to influence the minds of humans, luring them to buy the product. AI can also influence people on how to vote and choose a government; it can mass-produce false manifestos and spread them with remarkable speed. Within the next year or two, AI will be able to influence major economic decisions and political choices, and even create cultural artefacts. For instance, AI can easily and completely assimilate the cultural products of all societies. Then it can create its own output — music, paintings, novels, recipes. AI may not have any feelings yet. But it will be capable of inspiring all kinds of feelings in humans.
A printing press cannot create a book; it can only print an existing one. Artificial Intelligence can. A gun can be made more efficient and deadlier, but a gun cannot create another, superior gun. AI can create superior AI. In communicating intimately with humans, AI learns new things, and these machines evolve together, simultaneously. More importantly, they begin to know the individuals they interact with — each person’s beliefs, preferences, likes and dislikes, personal history, and total psychological makeup. In short, AI acquires complete control over an individual’s mind. It is in this way that AI, unlike any other technology of the past, begins to acquire autonomy in being able to decide what to do, and agency in doing what it decides. Even without being sentient and fully territorially mobile, AI can disrupt and dominate human life and culture. In the initial stages, over the next five years, it is likely to imitate human behaviour and culture. It will keep on learning endlessly. What happens next is anybody’s guess.
In the ever-growing arabesque of intertwined lies and falsehoods, human beings are losing control over what to believe in and how to react. In the ensuing chaos, democracy begins to crack. Democracy is about debate, differences of opinion, dissent — a tortuous process of consensus building. These are impossible in an accelerating state of chaos. Insecurity increases. People then seek authoritarian solutions — solace in a strongman as leader. In the United States of America, which has access to the best technology and knowledge base, a significant percentage of people hold strong archaic beliefs, do not know whether climate change is a real threat, and cannot even agree on who won an open presidential election.
Unlike the looming ecological crisis, the dangers of AI are not entirely out of our hands, yet. We must engage with this new technology and urge governments to regulate it until it is considered safe. Some governments may not want to do so. Similarly, the organisations that own AI technology may resist regulation too. Indeed, there could be new conflicts between corporations and States over who will ultimately own the fertile new digital soil. If that happens, it could well turn out to be the ‘Oppenheimer Moment’ of the 21st century.
Anup Sinha is former Professor of Economics, IIM Calcutta