Chief Justice of India D.Y. Chandrachud on Saturday cautioned that the integration of AI in modern processes, including court proceedings, raises complex ethical, legal and practical issues that demand a thorough examination.
Delivering his inaugural address at the two-day conference on “Technology and Dialogue” between the Supreme Court of India and the Supreme Court of Singapore, the CJI stressed the need for careful deliberation and a regulatory framework to prevent the misuse of AI, which could affect citizens’ fundamental rights, including the right to privacy.
“Amid the excitement surrounding AI’s capabilities, there are also concerns regarding potential errors and misinterpretations. Without robust auditing mechanisms in place, instances of ‘hallucinations’ — where AI generates false or misleading responses — may occur, leading to improper advice and, in extreme cases, miscarriages of justice,” CJI Chandrachud said. He cited an instance in the US where a New York lawyer submitted a legal brief containing fabricated judicial decisions.
“The impact of bias in AI systems presents a complex challenge, particularly when it comes to indirect discrimination. This form of discrimination occurs when seemingly neutral policies or algorithms disproportionately affect certain groups, thereby undermining their rights and protections,” the CJI said.
The CJI said indirect discrimination could manifest in two crucial stages in the realm of AI. First, incomplete or inaccurate data may lead to biased outcomes during the training phase. Second, discrimination may occur during data processing, often within opaque “black-box” algorithms that obscure the decision-making process from human developers.
A “black box” refers to algorithms or systems whose internal workings are hidden from users or developers, making it difficult to understand how decisions are made or why certain outcomes occur.
“A classic example of this is in the case of automated hiring systems, where algorithms may inadvertently favour certain demographics over others, without the developers fully understanding how or why these biases are being perpetuated. This lack of transparency raises concerns about accountability and the potential for discriminatory outcomes,” CJI Chandrachud said.
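The kind of indirect discrimination described above can be made concrete with a small sketch. Everything below (the applicant data, the tenure rule, and the four-fifths threshold used as a rough flag for disparate impact) is hypothetical, invented only to illustrate how a facially neutral criterion can disproportionately screen out one group:

```python
# Hypothetical sketch: a "neutral" screening rule producing indirect
# discrimination. Data and thresholds are invented for illustration.

def selection_rates(candidates, passes):
    """Compute the selection rate of each group under a screening rule."""
    rates = {}
    for group in {c["group"] for c in candidates}:
        members = [c for c in candidates if c["group"] == group]
        selected = [c for c in members if passes(c)]
        rates[group] = len(selected) / len(members)
    return rates

# Invented applicant pool: years of continuous employment happens to
# correlate with group membership for reasons unrelated to job fitness.
candidates = (
    [{"group": "A", "years_continuous": y} for y in [6, 7, 8, 5, 9, 6]]
    + [{"group": "B", "years_continuous": y} for y in [2, 3, 6, 1, 7, 2]]
)

# Facially neutral rule: require at least 5 years of continuous employment.
def rule(c):
    return c["years_continuous"] >= 5

rates = selection_rates(candidates, rule)

# Four-fifths (80%) rule of thumb: flag possible disparate impact if any
# group's selection rate falls below 80% of the highest group's rate.
flagged = min(rates.values()) < 0.8 * max(rates.values())
print(rates, "possible disparate impact:", flagged)
```

Here the rule never mentions group membership, yet group A is selected at a 100% rate and group B at roughly 33%, tripping the disparate-impact flag; a developer auditing only the rule's text, and not its outcomes, would see nothing discriminatory.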
The CJI recalled that the European Commission’s proposal for an EU regulation on AI highlights the risks associated with AI in judicial settings, categorising certain algorithms as high risk due to their “black box” nature.
He said facial recognition technology (FRT) is a prime example of high-risk AI, given its inherently intrusive nature and potential for misuse. FRT has been defined as a “probabilistic technology that can automatically recognise individuals based on their face to authenticate or identify them”, and is the most widely used type of biometric identification.
“The full realisation of AI’s potential thus hinges on global collaboration and cooperation. While AI presents unprecedented opportunities, it also raises complex challenges, particularly concerning ethics, accountability and bias. Addressing these challenges requires a concerted effort from stakeholders worldwide, transcending geographical and institutional boundaries,” Justice Chandrachud told the gathering.