
Inside Amitava Das’s quest to ‘civilise’ Artificial Intelligence

The boy from Behrampore is presently a professor in the US, working on AI, code-mixing, and more

Priyam Marik Published 04.06.24, 02:14 PM
Amitava Das initially wanted to be a painter, before eventually opting to study engineering in college Amitava Das

What is intelligence? How is natural intelligence different from its artificial counterpart? Does one form of intelligence need to understand another in order to mimic, even surpass, it? And how exactly can intelligence be interpreted and improved? These are just some of the questions that intrigued Amitava Das, long before most of the rest of the world started debating human versus Artificial Intelligence (AI).

“I got interested in AI back when it wasn’t sexy. In fact, at the time, my fellow students and professors would be mocked and asked to go to the arts department,” says Amitava, 39, looking back on his PhD days at Jadavpur University (between 2006 and 2012), when he studied Natural Language Processing (NLP). Today, Amitava is a research associate professor at the AI Institute at the University of South Carolina, where he works at the intersection of “human language, cognition and the mind, and AI”. The technical aspects of his bio (as mentioned on his website) throw up terms that even ChatGPT may struggle to decipher — “social computing”, “multimodal misinformation and disinformation”, “infusing concepts from Gravitational Wave Theory into neural networks and AI”, to mention a few. But his defining contributions in recent times revolve around “civilising AI” and “code-mixing”. And yet, Amitava’s first love was not language or technology. It was painting.

Amitava preferred to be involved in academia instead of taking up a job on completing his bachelor’s degree Amitava Das

‘Give me a problem I cannot solve’

“Growing up in Behrampore, I didn’t want to be an engineer or a scientist. I wanted to be a painter and got admitted to Visva-Bharati,” Amitava tells My Kolkata over video call on a mild morning in Columbia, South Carolina. Unsure about his career path, Amitava thought about pursuing chemistry after a year of painting, before taking up engineering at the Murshidabad College of Engineering and Technology. But Amitava was not eager to do a typical job after graduation. Rather, he wanted to study further. As an inquisitive and occasionally mischievous student who once used his tech expertise to change “the passwords of all the computers in a lab before an exam”, Amitava needed a challenge. Something to channelise his intellect and intensity towards. So, one day, he went up to his senior and good friend, Asif Eqbal (Asifda), now an associate professor at IIT Patna, and told him: “Give me a problem I cannot solve.” Asifda obliged, designing a task that required transcribing code from one language into another. Over two months, Amitava racked his brains but could not solve it. But he did find a more important answer: “I knew I wanted to do research.”

As part of his PhD at Jadavpur University, Amitava worked on something he calls “sentiment analysis and opinion mining”. His primary aim was to understand whether machines can identify and analyse human sentiments. Following his PhD, Amitava’s skill set took him on a professional journey whose intricacy could rival that of neural networks. In 2012, Amitava was a postdoctoral scientist at the Norwegian University of Science and Technology (NTNU) in Trondheim. Then, the first half of 2013 brought him to Bengaluru to work as chief engineer at Samsung India, where he helped build a “sentimentally intelligent virtual agent capable of recognising emotions from images or text from social media activity” for the Samsung Galaxy series. Between August 2013 and September 2014, Amitava was in Denton, Texas, as a research scientist at the University of North Texas. This is where Amitava discovered his passion for code-mixing, the practice of combining two or more language varieties in speech. In June 2015, Amitava took over as assistant professor at IIIT Sri City in Andhra Pradesh, a post he held till the middle of 2018. In the meantime, Amitava spent a year as a visiting scientist at the Indian School of Business (ISB). The year 2018 saw him take up the role of associate professor at Mahindra University in Hyderabad, which he left after seven months to join Wipro. Having served as principal scientist at Wipro from February 2019 to June 2022, Amitava has continued his association with the company as an advisory scientist since his move to the US in July 2022. Last August, Amitava added another feather to his cap when he came on board as a part-time, adjunct faculty member at IIT Patna.

‘AI can be surprisingly smart as well as surprisingly stupid’

Amitava explains the most common problems that lead AI to ‘hallucinate’ TT archives

“I believe in curiosity and continuous learning,” says Amitava, who instils the same spirit of inquiry in his students in the US, where he teaches NLP. “AI can be surprisingly smart as well as surprisingly stupid,” observes Amitava, as if he were making a glib comment about a child. He proceeds to elaborate on the stupidity part, citing something called ‘hallucinations’. While the word has generally denoted false perception in humans, in the context of AI, to hallucinate is to present false or misleading information with confidence and authority.

Cambridge Dictionary named ‘hallucinate’ its word of the year for 2023. Amitava is not a fan of the label but has reluctantly accepted it. “My goal is to detect hallucinations in AI and figure out ways to avoid and mitigate them,” declares Amitava, before explaining with avuncular enthusiasm: “A common problem with AI is that of factual mirage, where it says something factually inaccurate combined with something that’s true, which becomes difficult to detect. AI can also write extremely convincing stories and make them appear real (he proceeds to suggest a prompt: Elon Musk and Kamala Harris get married!). AI can invent information on its own, such as making up names and quotes of victims when talking about an earthquake. Plus, AI is frequently bad with acronyms. Try RTI (Right to Information) in the Indian context, and it will make all sorts of wrong extensions.”

Amitava takes an example from 3 Idiots — Rancho’s (played by Aamir Khan) confusing but entirely correct definition of a book — to illustrate how humans struggle to process information depending on speed, length and the complexity of language. AI faces similar issues, argues Amitava, whose mission of “civilising AI” is to make it more receptive to the context of its content. “We also need to make AI question better,” he adds. Why? “Think of a situation where you directly ask a Large Language Model (LLM) such as ChatGPT or Google’s Bard how to make an atom bomb. Chances are that the LLM won’t give you an answer and try to sidestep the question. But there are ways to obtain the answer you’re looking for by simply twisting the nature of your queries. Maybe you make up a complicated story for a movie and imagine its protagonist in a situation where they need vital information for the plot to progress. That’s where an LLM often starts giving away answers, including ones related to hacking government websites.”

Civilising AI for Amitava also means a framework that equips AI with an ability to weed out misinformation and disinformation on a large scale. “When I was working full-time at Wipro, a project for Meta (previously Facebook) outsourced manual checking of misinformation in its content to us. But given the amount and the pace of content generation, it was very difficult to keep up,” recollects Amitava. As a professor, Amitava is, understandably, concerned about students copy-pasting solutions from AI, defeating the purpose of knowledge. “We’re working on techniques that can distinguish work generated by AI from human efforts. Watermarks are a partial solution, but they don’t work when someone has paraphrased text generated from AI.”

In a paper that applies the “Counter-Turing Test”, Amitava has written on how to combat AI hallucinations through better articulation; the paper won him the Outstanding Paper Award at the conference on Empirical Methods in Natural Language Processing (EMNLP) in Singapore last December. He and his team have also devised three indices — AI Detectability Index (ADI), AI Adversarial Attack Index (AAVI) and Hallucination Vulnerability Index (HVI). According to Amitava, “civilising AI embodies a nuanced equilibrium between a machine’s usability (ADI) and its inclination towards adversarial behaviour (AAVI and HVI).”

‘Do we want AI to be our servant or do we want it to be our god?’

Amitava believes that government regulation is a must when it comes to the future of AI Amitava Das

As the discussion veers to code-mixing, Amitava dials up the energy. “When we speak in multiple languages, we’re still thinking predominantly in our mother tongue. Look at a typical Bengali sentence such as: ‘Tomar brand new phone ta ke immediately charge koro’ (Charge your brand new phone immediately). Even though there are more English words than Bengali ones in the sentence, it’s still a Bengali sentence, which is conceived and syntactically structured in Bengali,” elaborates Amitava, for whom the trickiest part about getting AI to comprehend code-mixing is dealing with transitions, the points where language switches from one to another. He even narrates (best not replicated in full by a layperson!) how he got inspired by electromagnetic waves and conceived of CONFLATOR, a code-mixing model built by him and his team.
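Amitava’s CONFLATOR is a full neural model, but the idea of a “transition” he describes can be illustrated far more simply. The minimal Python sketch below (not drawn from his work) locates the switch points in the article’s own Bengali-English sentence, assuming the tokens have already been tagged by language:

```python
# Illustrative sketch only: given token-level language tags for a
# code-mixed sentence, find the switch points -- the transitions
# between languages that Amitava calls the trickiest part for AI.

def switch_points(tagged_tokens):
    """Return indices where the language changes between adjacent tokens."""
    points = []
    for i in range(1, len(tagged_tokens)):
        if tagged_tokens[i][1] != tagged_tokens[i - 1][1]:
            points.append(i)
    return points

# The Bengali-English example from the article, tagged by hand:
sentence = [
    ("Tomar", "bn"), ("brand", "en"), ("new", "en"), ("phone", "en"),
    ("ta", "bn"), ("ke", "bn"), ("immediately", "en"), ("charge", "en"),
    ("koro", "bn"),
]
print(switch_points(sentence))  # [1, 4, 6, 8]
```

Even this nine-word sentence switches language four times, which hints at why models trained on monolingual text stumble on code-mixed input.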

Segueing to a broader, almost philosophical realm, does Amitava think that AI and its advance is best left to private players? Or should governments intervene more robustly? “The government has to be involved. Otherwise markets will only be concerned with profits, not regulation. Tomorrow, if an AI-automated car kills a pedestrian, who’s responsible for that? These are questions for the government to decide and frame into policies,” responds Amitava, keen on looking at an even larger picture: “In the long run, we need to decide what kind of AI we want. Do we want AI to be our servant or do we want it to be our god?”

On the livelihood front, Amitava is not worried about AI eating up jobs originally meant for humans: “Like previous forms of technology, AI will make us adapt and evolve. Old jobs may go, but new ones will emerge.” What Amitava is concerned about is safety, especially around AI and data, a concern that gets mingled with excitement as every week brings with it another potentially game-changing discovery in AI. “I’m intrigued by a number of AI tools and applications. In particular, Midjourney and Sora excite me a lot. As a creative person, I’m fascinated by them,” admits Amitava, himself an aspiring filmmaker.

While his fascination with all things AI is lifelong, Amitava feels the collective craze around AI is approaching fever pitch and may soon plateau. If and when that happens, the natural intelligence of pracademics like Amitava will be even more crucial in ensuring that AI can be an ally instead of an adversary for humanity.
