While teenagers have been freely chatting with AI for years, OpenAI is finally stepping in with some adult supervision. Starting September 2025, ChatGPT is rolling out parental controls aimed at teens 13 and up, the same kids who've been using the system without safeguards this whole time. The four-month rollout isn't exactly rushing to the rescue, but it's something.
Let's be real: these controls aren't appearing out of nowhere. Lawsuits alleging links between ChatGPT use and teen suicides have OpenAI scrambling, and the parental controls are clearly damage control, arriving only after those suits started piling up. Too little, too late? Maybe. But the company is at least doing something.
The system isn't exactly Big Brother. Parents link their account to their teen's via an email invitation and can adjust settings to match their child's maturity and development: managing chat history storage, turning memory functions on or off, and restricting certain features. But they won't see everything, just the genuinely concerning stuff.
That's where the "acute distress" detection comes in. If ChatGPT spots signs of depression, suicidal ideation, or other mental health red flags, parents get notified. Not for everyday teen drama—just the serious stuff that might require a real-world check-in.
The company's even developing fancier models like GPT-5-Thinking specifically for handling crisis situations more effectively. These new tools include an intelligent request router that assesses stress levels in user messages and flags severe cases.
OpenAI didn't just wing this. They've got psychologists, pediatricians, psychiatrists, and child safety specialists on board. A whole council of experts, actually.
They're designing age-appropriate content guidelines and focusing on specific risks like eating disorders and substance abuse. Those guidelines will be baked into the platform's default settings, so interactions are safer from the start.
Future plans get even more ambitious: ChatGPT might one day directly contact emergency services or a trusted contact when it detects urgent risk, drawing on OpenAI's Global Physician Network of 250+ doctors worldwide.
Will it work? Who knows. But after years of letting teens roam freely in AI conversations, OpenAI is finally acknowledging that maybe, just maybe, kids need some guardrails in their digital playground.

