A Florida mother has sued artificial intelligence chatbot start-up Character.AI, accusing it of causing her 14-year-old son’s suicide in February and saying he became addicted to the company’s service and deeply attached to a chatbot it created.
In a lawsuit filed on Tuesday in Orlando, Florida federal court, Megan Garcia said Character.AI targeted her son, Sewell Setzer, with “anthropomorphic, hypersexualised, and frighteningly realistic experiences”.
She said the company programmed its chatbot to “misrepresent itself as a real person, a licensed psychotherapist, and an adult lover, ultimately resulting in Sewell’s desire to no longer live outside” of the world created by the service.
“We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family,” Character.AI said.
It said it had introduced new safety features, including pop-ups that direct users to the National Suicide Prevention Lifeline if they express thoughts of self-harm, and that it would make changes to “reduce the likelihood of encountering sensitive or suggestive content” for users under 18.
The lawsuit also targets Google, where Character.AI’s founders worked before launching their product. Google re-hired the founders in August as part of a deal granting it a non-exclusive licence to Character.AI’s technology. Garcia said Google had contributed to the development of Character.AI’s technology so extensively that it could be considered a “co-creator”.