Technophile News
News

OpenAI’s Teen Safety Features Will Walk a Thin Line

By News Room · 16 September 2025 · 3 Mins Read

OpenAI announced new teen safety features for ChatGPT on Tuesday as part of an ongoing effort to respond to concerns about how minors engage with chatbots. The company is building an age-prediction system that identifies whether a user is under 18 and routes minors to an “age-appropriate” experience that blocks graphic sexual content. If the system detects that a user is considering suicide or self-harm, it will contact the user’s parents; in cases of imminent danger, if the parents are unreachable, it may contact the authorities.

In a blog post about the announcement, CEO Sam Altman wrote that the company is attempting to balance freedom, privacy, and teen safety.

“We realize that these principles are in conflict, and not everyone will agree with how we are resolving that conflict,” Altman wrote. “These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions.”

While OpenAI tends to prioritize privacy and freedom for adult users, for teens the company says it puts safety first. By the end of September, the company will roll out parental controls so that parents can link their child’s account to their own, allowing them to manage the conversations and disable features. Parents can also receive notifications when “the system detects their teen is in a moment of acute distress,” according to the company’s blog post, and set limits on the times of day their children can use ChatGPT.

The moves come as deeply troubling headlines continue to surface about people dying by suicide or committing violence against family members after engaging in lengthy conversations with AI chatbots. Lawmakers have taken notice, and both Meta and OpenAI are under scrutiny. Earlier this month, the Federal Trade Commission asked Meta, OpenAI, Google, and other AI firms to hand over information about how their technologies impact kids, according to Bloomberg.

At the same time, OpenAI is still under a court order mandating that it preserve consumer chats indefinitely—a fact that the company is extremely unhappy about, according to sources I’ve spoken to. Today’s news is both an important step toward protecting minors and a savvy PR move to reinforce the idea that conversations with chatbots are so personal that consumer privacy should only be breached in the most extreme circumstances.

“A Sexbot Avatar in ChatGPT”

From the sources I’ve spoken to at OpenAI, the burden of protecting users weighs heavily on many researchers. They want to create a user experience that is fun and engaging, but that experience can quickly veer into disastrous sycophancy. It’s positive that companies like OpenAI are taking steps to protect minors. At the same time, in the absence of federal regulation, there’s still nothing forcing these firms to do the right thing.

In a recent interview, Tucker Carlson pushed Altman to answer exactly who is making these decisions that impact the rest of us. The OpenAI chief pointed to the model behavior team, which is responsible for tuning the model for certain attributes. “The person I think you should hold accountable for those calls is me,” Altman added. “Like, I’m a public face. Eventually, like, I’m the one that can overrule one of those decisions or our board.”

© 2025 Technophile News. All Rights Reserved.