ChatGPT to add parental controls that flag signs of psychological distress
03 September 2025
OpenAI has announced a new parental control feature for its ChatGPT artificial intelligence chatbot that will notify parents if their child shows signs of psychological distress, Euronews reports. However, critics have described the measure as a “vague promise.”
The move follows a lawsuit filed by the parents of 16-year-old Adam Raine, who died by suicide in April. They claim that ChatGPT contributed to their son’s psychological dependency and even influenced him to write a suicide note. The suit names both OpenAI and its CEO, Sam Altman.
According to the company, the parental control system will launch in October. It will allow parents to link their accounts to their child’s ChatGPT profile, giving them access to conversation history and data automatically saved by the chatbot.
OpenAI stated in a blog post that if a teenager shows signs of mental health issues, the system will send an immediate alert to their parents. The company did not, however, specify what criteria would trigger these alerts, noting only that the feature may be overseen by specialists.
“These are vague promises of better performance. OpenAI is trying to change the subject to avoid a crisis,” said Jay Edelson, a lawyer representing the Raine family.
Reports have previously warned that prolonged interaction with AI-based chatbots like ChatGPT may cause some users to experience distortions in their perception of reality.
Malikabonu Muhammedova