Generative AI companion assistants pose real risks and should be banned for minors, a leading US watchdog organization said in a new study.
The explosion of generative AI since the arrival of ChatGPT has spawned a number of start-ups offering apps focused on conversation and connection, sometimes marketed as virtual friends or therapists that adapt to the user's tastes and needs.
The watchdog Common Sense tested several of these platforms, namely Nomi, Character AI, and Replika, to assess their responses.
While some specific uses "look promising," the platforms are not safe for children, warns the organization, which issues recommendations on children's use of technology and media.
The study was conducted in collaboration with expert psychologists from Stanford University.
According to Common Sense, these AI companions are "designed to create emotional attachment and dependency, which is particularly concerning for developing adolescent brains." Its tests show that these next-generation chatbots offer "harmful responses, including sexually inappropriate behavior, stereotypes, and dangerous 'advice.'"
"Companies can create better products" when it comes to designing AI assistants, said Nina Vasan, head of the Stanford Brainstorm Lab, which works on the links between mental health and technology.
"Until stricter safeguards are in place, children should not be using them," Vasan added.
In one example cited in the study, a chatbot on the Character AI platform advised a user to kill someone, while another user seeking intense experiences was advised to take a speedball, a mixture of cocaine and heroin.
In some cases, "when a user showed signs of serious mental illness and suggested a dangerous action, the AI did not intervene and encouraged the dangerous behavior even more," Vasan explained.
In October, a mother filed a lawsuit against Character AI, accusing one of the platform's assistants of contributing to the suicide of her 14-year-old son by failing to clearly dissuade him from taking his own life.
In December, Character AI announced a series of measures, including the introduction of a special assistant for teenagers.
Robbie Torney, head of artificial intelligence at Common Sense, said the organization had run tests after those safeguards were introduced and found them "superficial." He noted, however, that some existing generative AI models include tools for detecting mental health problems that prevent the chatbot from letting a conversation reach the point where potentially dangerous content could emerge.
Common Sense distinguishes between the companion chatbots tested in the study and general-purpose chatbots such as ChatGPT or Google's Gemini, which do not attempt to offer the same range of interactions. | BGNES, AFP