
Rising rogue: Editorial on the need to regulate Artificial Intelligence

The Indian government, however, told Parliament in April that it was not planning any restrictions on AI, which it described as an engine for economic growth and entrepreneurship

The Editorial Board | Published 27.05.23, 04:26 AM

The OpenAI chief executive officer, Sam Altman, whose interactive chatbot, ChatGPT, has taken the world by storm, warned last week that he might have unleashed something he cannot fully control. In testimony to the United States Congress, he supported regulations on artificial intelligence firms like his. Generative AI of the kind used by ChatGPT and several similar platforms, including Google’s Bard, can mimic humans in producing and analysing text, cracking tough examinations, answering questions, and performing other language and speech-related tasks. The technology promises to transform the world: Goldman Sachs warned in March that it could replace 300 million jobs. This is only a part of the broader AI revolution, which is poised to disrupt multiple industries and shape human perception of what reality means. Multiple countries are investing in AI-driven weapons systems that will choose, aim and strike targets without any human intervention, raising legal, technological and ethical questions. This week, an AI-generated deepfake claiming to capture an explosion at the Pentagon went viral, demonstrating the dangers of this technology.

In March, several technology titans, researchers and founders, including Elon Musk, the owner of Tesla, SpaceX and Twitter, issued an open letter calling for a pause in the development of AI systems more powerful than GPT-4, the latest model underpinning ChatGPT. Meanwhile, governments are scrambling to step in before it is too late. So far, China and the European Union have been the most proactive in moving towards systems and laws aimed at guarding against technology that could go rogue. China has set up an algorithm registry where AI firms are required to share the source code of their products. It has introduced regulations against deepfake technology and has legally put the onus on tech companies for any mishaps caused by their AI. While China is developing almost bespoke rules for each new advance in AI technology, the EU has taken a different approach. Its proposed legislation is a blanket law that divides AI into three categories: acceptable, risky but acceptable with regulations, and so dangerous that it must be banned.

The Indian government, however, told Parliament in April that it was not planning any restrictions on AI, which it described as an engine for economic growth and entrepreneurship. This complacency might prove to be short-sighted. Innovation must indeed be encouraged. Yet without guardrails in place, the scope for misuse is immense in a country like India, with a large technological gap, deep polarisation, and a media landscape that, for the most part, no longer holds power to account. It is not too difficult to envisage a situation where AI, either by design or because it slips out of control, creates an economic crisis, sparks riots or enables political fraud. Indian democracy is already under severe stress. It must not wait to test whether it can withstand the risks of an unpredictable technology.
