OpenAI is rolling out new updates to make ChatGPT safer and more supportive, especially in sensitive conversations and for younger users. Over the next 120 days, the company is focusing on improvements in four key areas: crisis intervention, access to expert help, trusted contact connections, and stronger protections for teens.
## Partnering with Mental Health Experts
To guide this effort, OpenAI has brought together an Expert Council on Well-Being and AI—specialists in youth development, mental health, and human-computer interaction. Alongside this, the Global Physician Network, a pool of more than 250 doctors across 60 countries, is providing real-world insights on mental health contexts, including expertise in adolescent health, eating disorders, and substance use.
## Smarter Responses Through Reasoning Models
OpenAI is also leaning on its reasoning models, such as GPT-5-thinking and o3, which are designed to spend more time analyzing context before responding. A new real-time router will soon direct sensitive conversations, such as those showing signs of acute distress, to these advanced models to ensure safer, more thoughtful responses.
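OpenAI hasn't published how the router works, but the idea can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the keyword check stands in for whatever classifier OpenAI actually uses, and the model names are placeholders.

```python
# Hypothetical sketch of a real-time safety router. The keyword check is a
# stand-in for a real distress classifier; model names are illustrative.

DISTRESS_SIGNALS = {"hopeless", "self-harm", "can't go on"}

def shows_acute_distress(message: str) -> bool:
    """Naive keyword match standing in for a trained classifier."""
    text = message.lower()
    return any(signal in text for signal in DISTRESS_SIGNALS)

def route_model(message: str) -> str:
    """Send sensitive conversations to a slower, more deliberate model."""
    if shows_acute_distress(message):
        return "gpt-5-thinking"  # reasoning model for sensitive contexts
    return "default-chat-model"  # fast model for everyday queries

print(route_model("I feel hopeless lately"))
print(route_model("What's a good pasta recipe?"))
```

The key design point is that routing happens per message in real time, so a conversation that turns sensitive mid-session can be escalated immediately rather than at session start.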
## Strengthening Protections for Teens with Parental Controls
Recognizing that teens are growing up as “AI natives,” OpenAI is rolling out Parental Controls within the next month. These include:
- **Linked accounts** between parents and teens (13+) via email invitation.
- **Age-appropriate response settings**, enabled by default.
- **Feature management**, letting parents disable chat history and memory.
- **Distress notifications**, alerting parents when signs of acute emotional distress are detected.
These parental tools build on existing safety measures like in-app reminders encouraging users to take breaks during long sessions.
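The controls described above can be pictured as a per-teen settings object. This is purely an illustrative sketch of that shape; the field names and defaults are assumptions, not OpenAI's actual API.

```python
from dataclasses import dataclass

# Hypothetical model of the parental-control settings described in the
# article. Field names and defaults are illustrative assumptions.

@dataclass
class TeenAccountSettings:
    parent_email: str                    # linked via email invitation
    age_appropriate_mode: bool = True    # enabled by default per the article
    chat_history_enabled: bool = True    # parents may disable
    memory_enabled: bool = True          # parents may disable
    distress_notifications: bool = True  # alert parent on acute distress

# A parent links an account, then opts the teen out of memory.
settings = TeenAccountSettings(parent_email="parent@example.com")
settings.memory_enabled = False
print(settings)
```

Note that age-appropriate responses and distress notifications default to on, matching the article's point that protections apply without requiring parents to configure anything first.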
## Looking Ahead
OpenAI emphasizes that this is only the beginning. The company will continue refining ChatGPT’s safety features, guided by experts, with the goal of making the AI more helpful, trustworthy, and supportive for everyone.