An update on our mental health-related work
Source: OpenAI Blog
Each week, more than 900 million people use ChatGPT to improve their daily lives—whether learning new skills or navigating complex healthcare systems. Our safety work continues to play an important role in delivering these benefits to everyday people, as well as in supporting scientific research and discovery.
Since introducing parental controls in September 2025, we’ve seen encouraging engagement from families and will continue building on these protections. Working closely with experts from our Council on Well‑Being and AI and our Global Physicians Network, we will soon introduce a trusted‑contact feature that allows adult users to designate someone to receive notifications when they may need additional support. As a reminder, parents also receive safety notifications about their teens’ use of ChatGPT through the same parental‑controls system. We’ll share more as these updates roll out in ChatGPT.
We are also continuing to advance how our models detect and respond to signs of emotional distress. This includes new evaluation methods that simulate extended mental‑health‑related conversations, helping us better identify potential risks and improve how ChatGPT responds in sensitive moments. We’ll share more about this work in the coming weeks as we continue strengthening ChatGPT’s safeguards.
Litigation updates
Separately, the Court recently coordinated a number of mental‑health‑related cases involving ChatGPT into a single proceeding in California. In the coming days, the Court will assign the coordination judge for this proceeding. As part of this consolidation process, plaintiffs’ attorneys have informed the Court that they intend to file a number of new cases.
As with the earlier-filed mental‑health‑related litigation, OpenAI will continue to handle any additional cases with care, transparency, and respect for the people involved, in line with the following principles:
- We start with the facts and put genuine effort into understanding them.
- We will respectfully make our case in a way that is cognizant of the complexity and nuances of situations involving real people and real lives.
- We recognize that these cases inherently involve certain types of private information that require sensitivity when presented in a public setting like a court.
- Independent of any litigation, we’ll remain focused on improving our technology in line with our mission.
We recognize that court processes can be lengthy and, at times, opaque due to strict legal rules. It can also take time to collect the relevant facts, understand them, and present them to the court in line with its evidence procedures. We work in good faith to understand the details and seek only information that is relevant to the case and the specific allegations that have been made.
It’s important to reserve judgment and allow the facts to emerge appropriately through the court process, as these are complex and nuanced cases with many factors and circumstances that are often not reflected in the initial filings.
Our thoughts are with all those impacted by these heartbreaking situations. We continue to improve ChatGPT’s training to recognize and respond to signs of distress, de‑escalate conversations in sensitive moments, and guide people toward real‑world support, working closely with mental‑health clinicians and experts.
More information about our safety work can be found here.