
Skewed focus: EU on artificial intelligence

Transformative technology like AI should not be stifled by undue restrictions

The Editorial Board Published 26.04.21, 12:58 AM
Representational image. Shutterstock

Trust is imperative; this is the principle underlying the new regulations unveiled by the European Union to govern the use of artificial intelligence. Trust can only stem from transparency. Therefore, according to the draft rules, companies providing AI services and those using them would have to be able to explain exactly how the AI makes its decisions, be open to risk assessments, and ensure human oversight in how these systems are created and used. This is a lofty aim; experts claim that as AI systems get more complex and feed on more data, it may become impossible to put a finger on why the machine is making a particular decision. Regulations to control AI — an ecosystem with no settled definition yet, either in law or in the industry — will thus have to be fluid and evolve with the times. More important, a one-size-fits-all approach cannot be used to govern a system that feeds on data from across the world and performs a vast range of functions, from making self-driving cars work to taking hiring and lending decisions in banks and scoring examinations. The draft rules not only set limits on the use of AI in these fields but also place checks and balances on “high-risk” applications of AI by law enforcement and the courts to safeguard people’s fundamental rights. While some uses, such as live facial recognition in public places, may be banned altogether, several exemptions in the name of national security leave room for encroachments on privacy and fundamental rights. It is dangerous that while governments demand accountability from tech firms, they keep loopholes open to mine data and exploit the invasive reach of AI.

The world was looking to the EU — its General Data Protection Regulation of 2018 has become the framework for similar legislation worldwide — for a way forward on regulating AI. While its emphasis on transparency is significant, the draft places the burden of accountability on those developing AI. Moreover, it skirts the issues of racial and gender bias that have plagued new technologies since their inception. These prejudices can make the use of AI in the interest of “national security” antithetical to democratic principles. In India, where regulation of the internet and its services tends to be overarching and to weigh in favour of State control, such imbalances can mean the difference between democracy and a surveillance State. Transformative technology like AI should not be stifled by undue restrictions. But it is citizens’ interests, and not those of the State, that must be at the centre of legislation regulating AI.
