
Here’s how the AI flag will keep flying and fluttering in 2024

Companies like OpenAI, Google and Inflection AI are currently training models that will be far more powerful than GPT-4

Mathures Paul Published 01.01.24, 11:33 AM
Dave Bowman confronts HAL in 2001: A Space Odyssey, with the goal of disabling the latter’s anti-social impulses

Companies like OpenAI, Google and Inflection AI are currently training models that will be far more powerful than GPT-4. Of course, ChatGPT can already do a lot, so what will these more powerful models do? They will obviously do everything the current chatbots do, but they will also showcase new behaviours, what researchers call emergent behaviour, such as answering questions on topics they have had little or no explicit training in.

We are talking about AI as if it’s a new trick from PC Sorcar Jr. The truth is, we have been using AI for years. What’s changed? AI is now sexy in the eyes of investors. In the last few years, we have been told that millionaire status awaits those who invest in crypto, then NFTs and now, AI.


Many are going to sleep only to wake up in the middle of the night to images of HAL 9000 from Arthur C. Clarke’s Space Odyssey series. But no, we haven’t reached Skynet level yet. You know, the AI network of the Terminator films.

Early stages of the hype cycle

For years, tech companies have mostly depended on spec sheets to market smartphones, laptops and whatnot. A few companies with mediocre products now have a chance to jack up their marketing game as well as their stock price. It’s not enough for a company to be profitable because investors demand top-of-the-line, disruptive technology. Enter AI.

If you ask an investor of repute, they will probably say that AI has been on their radar since at least 2018-19. What we are seeing now is the excitement around generative AI models, especially since ChatGPT launched in late 2022. We are simply in the early stage of the AI hype cycle, and a lot of funding is pouring in. If you are the founder of a startup with some clever implementation of AI, investments will come your way.

Look at a company like Nvidia, which became a part of the trillion-dollar club in 2023. If you are not in tech circles, chances are you have heard of it only in the last few months. It produces the computer chips used to train and build AI models. So more and more people will buy into the hype and invest, hoping to make sackfuls of money.

The AI Act in the EU can assess risks involved in an AI system, assign responsibility and establish governance Illustration: iStock

In 2024, almost every tech company will stuff earnings calls with the word ‘AI’. All they have to do is bring out the AI flag.

Global policies

GPT, or generative pre-trained transformer, is a large language model trained on troves of data gathered from all over the Internet to form a neural network with billions of parameters. These models predict the most plausible sequence of words in response to a query. No wonder the New York Times has filed a case against OpenAI and Microsoft for using its articles to train their models.
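
How does that prediction work in practice? Here is a minimal sketch of next-word prediction, assuming the small, open-source GPT-2 model and the Hugging Face transformers library as an illustrative stand-in; commercial models like GPT-4 work on the same principle at a vastly larger scale.

```python
# A minimal sketch of next-word prediction, the mechanism described above.
# Assumes the small, open-source GPT-2 model via the Hugging Face
# "transformers" library; larger commercial models work on the same
# principle at a vastly bigger scale.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The sky is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model outputs a score for every token in its vocabulary.
    logits = model(**inputs).logits

# Turn the scores for the next position into probabilities and show
# the five most likely continuations. Generating a full reply simply
# repeats this step, one token at a time.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {p:.3f}")
```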

OpenAI CEO Sam Altman and Google CEO Sundar Pichai are playing devil’s advocate of sorts, saying regulation of their own industry is required. If we look at how generative AI produces content, there need to be guardrails: large language models can make up facts, and investors wouldn’t have any of that.

The EU took a critical step in December by coming up with the world’s first comprehensive laws to regulate AI. The AI Act can assess the risks involved in an AI system, assign responsibility and establish governance. At the same time, the European Parliament managed to get a ban on the use of real-time surveillance and biometric technologies, including emotion recognition, with a few exceptions.

Meanwhile, many governments are less than transparent about how they are deploying AI (if at all) in decision-making processes. Expect more clarity around this in 2024.

Bye-bye homework; hello policing in classrooms

The march of AI will affect everybody, from teachers to lawyers, singers to doctors. Take the case of educators who are in a bind. After three years of the pandemic, schools and colleges managed to bring students back, only to be faced with another mountain called AI.

How should teachers respond? One solution involves banning AI from school networks and computers. This is possible only among a certain category of educational institutions in India, namely the privileged ones.

Let’s consider the situation where school networks don’t allow any kind of access to generative AI. Further, schools can always deploy AI detection software to catch generated text. All this translates into more class hours and zero homework: to discourage students from using AI, this kind of policing may become normal in schools. If teachers demand that everything be written in class, it may mean students have more free time at home.

But here’s a snag: more and more tech companies are inserting AI into tools that students use, like Google Docs and Grammarly. How do you stop students from using these?

So, the other option is to allow AI because an outright ban is impossible. Teachers may look at ways AI can supplement other teaching tools. The International Baccalaureate programme says it will “not ban the use of AI software” but “will work with schools to help them support their students on how to use these tools ethically”.

In India, one may argue that calculators are allowed because they free up time spent on tedious calculations, and students are allowed to use them after a certain standard. So why not AI tools? It’s another matter that calculators cannot spit out sentences, carry bias or throw misinformation at you. To a scientist, energy (E) will always be mass (m) multiplied by the speed of light (c) squared, but a chatbot may have a different take on it, depending on the sources it has been fed.

ChatGPT and Google’s Search Generative Experience (who comes up with these names!) work by predicting a reasonable sequence of words that are grammatically correct. If you are writing an essay about the sky, these chatbots can offer broad strokes that students can get away with. But when it comes to research and fact-oriented stuff, things can go wrong.

What I am trying to get at is that ChatGPT will be acceptable as long as its usage is along the lines of how we currently use resources like Wikipedia. Not that Wikipedia is always a trustworthy source. But as chatbots get more powerful, teachers will have a tough time coming up with policies.

AI detection software

Nobody knows how to get students, writers, musicians, scriptwriters or office workers to disclose that they have used AI in their projects. AI detection software is imperfect, often giving false positives. That makes it difficult for teachers to confront students with accusations of having used AI.

Since ChatGPT arrived, new AI detectors have been showing up every other week, all claiming to spot writing that has taken the help of AI models. According to OpenAI, AI detectors don’t work; it says that ChatGPT “has no ‘knowledge’ of what content could be AI-generated”. The truth is, detectors work at times but not always. Users keep refining their prompts to the point that the output becomes somewhat untraceable. If AI-generated text hasn’t been edited at all, the chances of a detector spotting it are higher. For the moment, you need to find out how transparent these detectors are and how often they are wrong. The best solution at the moment is certifying human writing.
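
Why do these detectors misfire? Many of them lean on a statistic called perplexity, a measure of how “surprised” a language model is by a piece of text, with highly predictable text treated as a sign of machine authorship. Here is a minimal sketch of that idea, again assuming GPT-2 and the Hugging Face transformers library; real detectors are more elaborate, but the underlying signal is just as shaky.

```python
# A rough sketch of one heuristic behind AI-text detectors: perplexity.
# Text that a language model finds highly predictable (low perplexity)
# is sometimes flagged as machine-generated. This is a weak signal,
# not a reliable classifier, which is why detectors misfire so often.
# Assumes GPT-2 via the Hugging Face "transformers" library.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def perplexity(text: str) -> float:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels makes the model return the
        # average cross-entropy loss over the sequence.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

# Lower scores mean the model found the text more predictable.
print(perplexity("The sky is blue and the grass is green."))
print(perplexity("Purple arithmetic negotiates with marmalade tomorrow."))
```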

Fake, fake, fake

This year, there will be three very important elections: in India, the US and Taiwan (given the relationship between the US and China, all eyes are on this one). The number of fake AI-generated videos and pictures circulating on the Internet and social media is already high, and we don’t want things to go from bad to worse. By the time an AI video or picture is called out, it may already be too late.

Chip diplomacy

Be it OpenAI, Google or Inflection AI, these companies are going to deliver solutions for different industries. Say, a company in healthcare can take a solution and tune it to its needs, like drug design. At the same time, the chips required to power these AI models are under government scrutiny; for example, the US government wouldn’t like China to receive the fastest chips. It means some countries will fall behind on the research curve. Then again, it will encourage those countries to invest more in chip development.
