
Need to understand tech and its impact on world, academic on 'AI in everyday life'

There is a sense of AI nationalism or the idea that a country must develop its own tech to serve its own interests

Mathures Paul, Calcutta | Published 09.12.24, 06:18 AM
Sana Khareghani speaking at Infocom 2024 on Friday at ITC Sonar. Picture by Gautam Bose

Nations and people are worried about being left behind as artificial intelligence (AI) improves rapidly, with the potential to reshape the global economy and turbocharge scientific research.

Where does a 22-year-old fit in, someone whose career is about to take off?


“Be a medical doctor, if you want to be, but understand technology and what it means for you. If you want to be a lawyer, go be a lawyer, but understand the tools and the technology and what’s happening to the sector. No longer can you go into these vertical sectors and not understand the technology and how it’s impacting the world both from a national and a geopolitical perspective,” said Sana Khareghani, professor of practice in AI at King’s College London.

The former head of the UK government’s office for AI is in Calcutta to talk about “AI in everyday life” at Infocom 2024, the ABP Group’s flagship event.

During a conversation on the sidelines of the event, she emphasised the need for an interdisciplinary approach.

“When I was heading the UK’s office for AI, we described the world in terms of a Venn diagram. We have people who develop AI, people who work with AI and people who live with AI. Sometimes you can be all of those things. Interdisciplinarity is the key to solving real challenges across sectors.”

There is a sense of AI nationalism, or the idea that a country must develop its own technology to serve its own interests. Laws and regulations are being enacted and alliances forged. The US is in a strong position in the AI race for now, using trade policy to cut China off from key microchips. In the UK and EU, responsible AI features prominently in the conversation.

“We need the rest of the world involved and their voices heard at the table. Unless the world is involved in these conversations, none of the solutions will be representative,” she said.

At the moment, Europe has the EU AI Act and the UK has set out its own approach to AI regulation. China has its own security-focused approach and policies around AI, and while the US has an executive order issued by President Biden, a new President takes office there in a few weeks.

“On top of these policies, there are now three safety institutes that are looking at the safety of AI algorithms at the frontier of AI research. And you also have standards.

“These allow interoperability of the technology across borders which everyone is waiting for. Although countries have agreed to work together to reap the benefits of these technologies and mitigate the existential safety risks, this is far from countries agreeing on universal laws and regulations around AI technologies, so standards are key. In my opinion, unless countries like India get involved in this conversation, approaches are not going to be right, because we will not be able to represent the rest of the world looking at these questions from an insular and country-specific perspective,” said Khareghani.

Another worry is how few women are involved in the AI sector. Khareghani cites figures from the Silicon Valley-headquartered non-profit Technovation, which works to bring women and girls from underrepresented countries into AI.

Its recent report mentions an untapped market of $200 billion “that’s left on the table because we don’t get more women to understand how to use these technologies and we need to train them”.

The report says that of the 17 million people developing or working in AI globally, 14 million are men.

Khareghani also serves as an advisor to the Centre for Doctoral Training run out of Southampton University.

“It’s called SustAI and it is looking at AI for sustainability. That’s looking across the board at how we can make algorithms more sustainable, how we can make them more efficient and make better use of resources. It’s also looking at other sustainability impacts of AI.”

Among her many AI-sized worries is one that involves edtech. “One of my biggest worries in something like edtech is the lack of governance and regulation around it. We haven’t seen any country completely change its national curriculum. There have been tweaks around the edges, but nobody has changed the national curriculum, because changing the national curriculum means that you have to wait 30 years until you figure out whether or not you got it right. That’s a risk nobody has really taken yet. So this is something to watch.”
