Technophile News
News

Anthropic has new rules for a more dangerous AI landscape

By News Room | 15 August 2025 | 2 Mins Read

Anthropic has updated the usage policy for its Claude AI chatbot in response to growing concerns about safety. In addition to introducing stricter cybersecurity rules, Anthropic now specifies some of the most dangerous weapons that people should not develop using Claude.

Anthropic doesn’t highlight the tweaks made to its weapons policy in the post summarizing its changes, but a comparison between the company’s old usage policy and its new one reveals a notable difference. Though Anthropic previously prohibited the use of Claude to “produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life,” the updated version expands on this by specifically prohibiting the development of high-yield explosives, along with chemical, biological, radiological, and nuclear (CBRN) weapons.

In May, Anthropic implemented “AI Safety Level 3” protection alongside the launch of its new Claude Opus 4 model. The safeguards are designed to make the model more difficult to jailbreak, as well as to help prevent it from assisting with the development of CBRN weapons.

In its post, Anthropic also acknowledges the risks posed by agentic AI tools, including Computer Use, which lets Claude take control of a user’s computer, as well as Claude Code, a tool that embeds Claude directly into a developer’s terminal. “These powerful capabilities introduce new risks, including potential for scaled abuse, malware creation, and cyber attacks,” Anthropic writes.
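
For developers wondering what that capability looks like in practice, the sketch below shows roughly how the computer-use tool is requested through Anthropic’s Python SDK. It is a minimal illustration rather than Anthropic’s reference code: the model ID, beta flag, and tool-type string are assumptions drawn from Anthropic’s public documentation and may have changed, and the actual clicking and typing is carried out by the caller’s own sandboxed agent loop, not by the API itself.

# Minimal sketch (assumed identifiers) of requesting Claude's agentic
# "computer use" tool via the Anthropic Python SDK.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

response = client.beta.messages.create(
    model="claude-opus-4-20250514",        # assumed model identifier
    max_tokens=1024,
    betas=["computer-use-2025-01-24"],     # assumed beta flag
    tools=[
        {
            "type": "computer_20250124",   # assumed computer-use tool type
            "name": "computer",
            "display_width_px": 1280,      # size of the virtual display the agent controls
            "display_height_px": 800,
        }
    ],
    messages=[
        {"role": "user", "content": "Open the calculator and add 2 and 2."}
    ],
)

# Claude responds with tool_use blocks (screenshots, clicks, keystrokes) that the
# caller's own sandboxed loop must execute and report back; the API never touches
# the user's machine directly.
print(response.content)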

The AI startup is responding to these potential risks by folding a new “Do Not Compromise Computer or Network Systems” section into its usage policy. This section includes rules against using Claude to discover or exploit vulnerabilities, create or distribute malware, develop tools for denial-of-service attacks, and more.

Additionally, Anthropic is loosening its policy around political content. Instead of banning the creation of all kinds of content related to political campaigns and lobbying, Anthropic will now only prohibit people from using Claude for “use cases that are deceptive or disruptive to democratic processes, or involve voter and campaign targeting.” The company also clarified that its requirements for “high-risk” use cases, which come into play when people use Claude to make recommendations to individuals or customers, apply only to consumer-facing scenarios, not to business uses.
