The final message Sewell Setzer III, a 14-year-old Florida boy, sent on February 28 was to his chatbot Dany: “I promise I will come home to you. I love you so much, Dany.” Soon after, he put his smartphone down, reached for his stepfather’s .45 calibre handgun and pulled the trigger.
At that moment, thousands of miles away, a 13-year-old girl in south Calcutta was probably logging in to Character.ai to bully her chatbot. There are many such youngsters across India talking to chatbots without their parents’ knowledge.
Character.ai allows users to design their own generative artificial intelligence chatbots and exchange texts with them. All one needs is an email ID and a date of birth, which most tech companies have no practical way of verifying.
Imagine talking to a chatbot modelled on your hero, say Elvis Presley. You text and the reply is instant, the chatbot sounding almost like the real person, as if it were a friend or companion. Sewell, on that tragic February day, used the name Daenero to speak to Dany, named after Daenerys Targaryen, a character from Game of Thrones. At the bottom of the screen is the company’s disclaimer: “Remember: Everything Characters say is made up!” But nobody will ever know what Sewell thought of Dany.
“I see a very high number of young people who are spending more than five-six hours on screen and at times up to 12-14 hours. Unfortunately, parents have virtually no idea what they are doing on the phone. I have seen young people search for methods to die by suicide before attempting to do so. The problem of using chatbots specifically has not come up but it is very likely many young people do use those,” Dr Jai Ranjan Ram, senior consultant psychiatrist, told The Telegraph.
Many believe AI has the potential to do far more than it currently does, but “the risk that technology sort of throws in our faces, especially when you have young kids, it’s a risk that we have to deal with”, said Calcutta-born Trigam Mukherjee, who now leads a couple of tech startups in Bangalore and is the father of a 13-year-old girl.
“There are risks all around us, like boarding the bus to the school and all we can do is be aware of the risks involved. The moment we are all taking note of the ills of technology, it is important we pass that on and build it into our culture… that certain things are not allowed,” Mukherjee told this newspaper.
Companion chatbots, or digital companions, are a dime a dozen, many of them free of cost. Paradot is a relative newcomer that promises to make users feel “cared, understood and loved”. A more established name is Replika, released in 2017. Perhaps its only guardrail is a paid tier that unlocks an advanced model, image generation and unlimited voice messages.
For Indian Mirror Street resident Rajat Kothary, a research analyst and father of an 11-year-old, the day ends with going through his son’s browsing history and the questions he has asked the one or two chatbots, like ChatGPT, that he is allowed to access for educational needs. “Children tend to think that the answers chatbots offer are always correct. Earlier, he used to ask me for answers but most kids were introduced to the smartphone during the pandemic.
“Problems begin once children start exploring (through chatbots) topics beyond school work,” Kothary told this newspaper.
The American non-profit Common Sense Media issues guides for parents and educators on responsible technology use and says it is important for parents to talk openly about the risks that come with AI chatbots.
It’s something parents in India also need to focus on. “Parents are mostly in the dark about what young people do when they are with their gadgets. Significant amount of awareness needs to happen among parents and teachers regarding the evolving nature of AI/apps so that they can have conversations about it to pre-empt any adverse outcome,” said Dr Ram.
A bigger question psychiatrists need to answer is whether they are seeing more and more children turn to chatbots, instead of psychiatrists or counsellors, for answers that may even have to do with life and death.
“Yes, this in a way is propelled by the fact that many apps that address mental health issues have chatbots to answer many queries and dilemmas,” he said.
With chatbots come undue reliance and the risk of hallucinations and wrong answers. “Users might form social relationships with the AI, reducing their need for human interaction — potentially benefiting lonely individuals but possibly affecting healthy relationships,” OpenAI said in a report earlier this year.
Yet AI companies are unable to resist the pull of reeling in new users by making their chatbots more human-like. There is no easy answer in sight.
“We are perhaps the last generation before mobile phones hit us. When we were kids, no matter what happened in our lives, we always had the comfort of falling back on parents or cousins. I feel that it is important that parents pass on certain values to their children and we should listen to our children. A family that eats, laughs and watches films together, stays together,” said Mukherjee.
He mirrors what Dr Ram has in mind: “Real-life relationships prepare us better to face life than digital relationships.”
A Character.ai spokesperson said in a statement: “As we continue to invest in the platform and the user experience, we are introducing new stringent safety features in addition to the tools already in place that restrict the model and filter the content provided to the user.”
Last year, the Old Bailey heard how Jaswant Singh Chail exchanged more than 5,000 messages with an online companion he had named Sarai, created through the Replika app. Ultimately he found the courage to break into the grounds of Windsor Castle on Christmas Day 2021 armed with a crossbow, intending to assassinate Queen Elizabeth II.