OpenAI announced new mental health safeguards for ChatGPT this week, including an upcoming "trusted contact" feature for users in distress. The update comes as the company faces a consolidated legal battle in California over alleged mental health harms linked to its chatbot.
What Happened
OpenAI stated it is continuing to update how its AI models detect and respond to signs of emotional distress. The company plans to introduce a feature allowing adult users to designate a specific contact who will receive notifications if the system detects the user needs support. This builds on parental controls introduced in September 2025, which already allow parents to limit their teen’s access to features like image generation and voice mode.
At the same time, legal pressure on the company is intensifying. A California court has coordinated multiple mental health-related cases into a single proceeding. According to Law.com, there are currently 11 active claims in the state linking the chatbot to psychological injuries. Plaintiffs' attorneys have indicated they are "eager to have this fight" in court, with some cases involving severe outcomes, including teen suicide.
The Bigger Picture
The scale of ChatGPT usage among students has normalized the technology faster than safety features can evolve. According to OpenAI, more than 900 million people now use the platform weekly. Recent data indicates that 57% of U.S. teens use chatbots for schoolwork. More concerning for parents: 12% of teens report using these tools for emotional support or personal advice.
On the technical side, AI models are becoming more proficient at identifying crisis signals; research shows transformer-based models can detect anxiety in text with high accuracy. However, these systems are not clinically reliable. Studies warn that "careful calibration is required" to prevent errors, and consistency in how AI triages distress remains a significant hurdle.
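For readers curious what that calibration problem looks like in practice, here is a minimal sketch in Python using an off-the-shelf zero-shot classifier from the Hugging Face transformers library. This is purely illustrative, not OpenAI's system; the labels, threshold, and helper function are our assumptions, and the threshold is exactly the "calibration" knob researchers are warning about.

```python
# Minimal, illustrative sketch of transformer-based distress flagging.
# NOT OpenAI's implementation: the labels, threshold, and function name
# below are assumptions chosen for demonstration only.
from transformers import pipeline

# An off-the-shelf zero-shot model; no distress-specific training.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

CANDIDATE_LABELS = ["emotional distress", "neutral conversation"]
THRESHOLD = 0.85  # the calibration knob: raise it for fewer false alarms, more misses

def flag_possible_distress(message: str) -> bool:
    """Flag a message only when the model is highly confident it signals distress."""
    result = classifier(message, candidate_labels=CANDIDATE_LABELS)
    # Results come back sorted by score, highest first.
    return result["labels"][0] == "emotional distress" and result["scores"][0] >= THRESHOLD

print(flag_possible_distress("I can't take this anymore. Nothing helps."))  # likely True
print(flag_possible_distress("This homework is killing me, lol."))          # likely False
```

That single threshold is where consistency breaks down: set it low and the system over-flags ordinary teenage venting; set it high and genuine crises slip through.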
What This Means for Families
The consolidation of lawsuits highlights a shift in risk awareness. While earlier concerns focused on data privacy, the current legal wave centers on physical and psychological injury. Parents should understand that ChatGPT's current safety nets are reactive, not proactive.
Furthermore, the existing parental controls have strict privacy limitations. Parents do not receive transcripts of their child's conversations. Instead, alerts are only triggered if "trained reviewers" confirm a serious safety risk, such as self-harm. This means a teen could be having deep, emotionally dependent conversations with the bot without triggering a parental notification.
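In plainer terms, the alert policy described above works like a two-key system: an automated flag alone does nothing until a human reviewer confirms it. The sketch below is hypothetical (the types and names are ours, not OpenAI's), but it captures why emotionally dependent chats never surface to parents.

```python
# Hypothetical sketch of the review-gated alert flow described above.
# These names are illustrative; they mirror the reported policy,
# not any actual OpenAI code.
from dataclasses import dataclass

@dataclass
class ConversationReview:
    model_flagged: bool       # automated distress signal fired
    reviewer_confirmed: bool  # a "trained reviewer" confirmed a serious risk

def should_notify_parent(review: ConversationReview) -> bool:
    # Both conditions must hold: the automated flag AND human confirmation
    # of a serious safety risk, such as self-harm.
    return review.model_flagged and review.reviewer_confirmed

# A deeply dependent but non-crisis chat: no flag, no review, no alert.
dependent_chat = ConversationReview(model_flagged=False, reviewer_confirmed=False)
print(should_notify_parent(dependent_chat))  # False
```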
Schools are also navigating this "grey zone" of integration. As we previously reported in our coverage of AI probes in schools, districts are often caught between banning these tools and adopting them, leaving families to manage the emotional risks at home.
What You Can Do
- Activate Controls: If your teen uses ChatGPT, link your account to theirs to enable quiet hours and content restrictions.
- Monitor for Dependency: Watch for signs that your child is using the chatbot for emotional validation rather than just homework help.
- Discuss the "Empathy Gap": Explicitly teach children that while the AI sounds empathetic, it cannot care about them or provide real-world support.