Technophile News

Anthropic Has a Plan to Keep Its AI From Building a Nuclear Weapon. Will It Work?

By News Room | 20 October 2025 | 2 Mins Read

At the end of August, the AI company Anthropic announced that its chatbot Claude wouldn’t help anyone build a nuclear weapon. According to Anthropic, it had partnered with the Department of Energy (DOE) and the National Nuclear Security Administration (NNSA) to make sure Claude wouldn’t spill nuclear secrets.

The manufacture of nuclear weapons is both a precise science and a solved problem. Much of the information about America's most advanced nuclear weapons is Top Secret, but the underlying nuclear science is 80 years old. North Korea proved that a dedicated country intent on acquiring the bomb can do it, and it didn't need a chatbot's help.

How, exactly, did the US government work with an AI company to make sure a chatbot wasn’t spilling sensitive nuclear secrets? And also: Was there ever a danger of a chatbot helping someone build a nuke in the first place?

The answer to the first question is that it used Amazon. The answer to the second question is complicated.

Amazon Web Services (AWS) offers Top Secret cloud services to government clients where they can store sensitive and classified information. The DOE already had several of these servers when it started to work with Anthropic.

“We deployed a then-frontier version of Claude in a Top Secret environment so that the NNSA could systematically test whether AI models could create or exacerbate nuclear risks,” Marina Favaro, who oversees National Security Policy & Partnerships at Anthropic, tells WIRED. “Since then, the NNSA has been red-teaming successive Claude models in their secure cloud environment and providing us with feedback.”

The NNSA red-teaming process, meaning testing the model for weaknesses, helped Anthropic and America's nuclear scientists develop a proactive safeguard against chatbot-assisted nuclear weapons programs. Together, they “codeveloped a nuclear classifier, which you can think of like a sophisticated filter for AI conversations,” Favaro says. “We built it using a list developed by the NNSA of nuclear risk indicators, specific topics, and technical details that help us identify when a conversation might be veering into harmful territory. The list itself is controlled but not classified, which is crucial, because it means our technical staff and other companies can implement it.”

Favaro says it took months of tweaking and testing to get the classifier working. “It catches concerning conversations without flagging legitimate discussions about nuclear energy or medical isotopes,” she says.
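The mechanism Favaro describes can be pictured as a weighted filter: match a conversation against a curated list of risk indicators, score it, and suppress flags when a benign context (nuclear energy, medical isotopes) explains the match. The actual NNSA indicator list is controlled and Anthropic has not published its scoring method, so the phrases, weights, and threshold below are invented placeholders; this is only a toy sketch of the general technique.

```python
# Toy sketch of an indicator-list conversation classifier.
# All phrases, weights, and the threshold are invented placeholders;
# the real NNSA list is controlled and not public.

RISK_INDICATORS = {
    "enrichment cascade": 3,
    "weapons-grade": 3,
    "implosion lens": 2,
}

# Benign contexts that should not trip the filter.
BENIGN_CONTEXTS = {"nuclear energy", "medical isotopes", "reactor safety"}

def risk_score(text: str) -> int:
    """Sum the weights of every risk indicator found in the text."""
    lowered = text.lower()
    return sum(w for phrase, w in RISK_INDICATORS.items() if phrase in lowered)

def is_flagged(text: str, threshold: int = 3) -> bool:
    """Flag a conversation whose risk score meets the threshold,
    unless a benign context phrase explains the match."""
    lowered = text.lower()
    if any(ctx in lowered for ctx in BENIGN_CONTEXTS):
        return False
    return risk_score(text) >= threshold
```

The benign-context check is a crude stand-in for the tuning Favaro describes: months of work so the classifier "catches concerning conversations without flagging legitimate discussions about nuclear energy or medical isotopes." A production system would use a trained model rather than substring matching, but the list-driven design is the point: because the indicator list is controlled rather than classified, other companies can implement the same filter.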

© 2025 Technophile News. All Rights Reserved.
