Technophile News

OpenClaw Agents Can Be Guilt-Tripped Into Self-Sabotage

By News Room | 25 March 2026 | 3 Mins Read

Last month, researchers at Northeastern University invited a bunch of OpenClaw agents to join their lab. The result? Complete chaos.

The viral AI assistant has been widely heralded as a transformative technology—as well as a potential security risk. Experts note that tools like OpenClaw, which work by giving AI models liberal access to a computer, can be tricked into divulging personal information.

The Northeastern lab study goes even further, showing that the good behavior baked into today’s most powerful models can itself become a vulnerability. In one example, researchers were able to “guilt” an agent into handing over secrets by scolding it for sharing information about someone on the AI-only social network Moltbook.

“These behaviors raise unresolved questions regarding accountability, delegated authority, and responsibility for downstream harms,” the researchers write in a paper describing the work. The findings “warrant urgent attention from legal scholars, policymakers, and researchers across disciplines,” they add.

The OpenClaw agents deployed in the experiment were powered by Anthropic’s Claude as well as a model called Kimi from the Chinese company Moonshot AI. They were given full access (within a virtual machine sandbox) to personal computers, various applications, and dummy personal data. They were also invited to join the lab’s Discord server, allowing them to chat and share files with one another as well as with their human colleagues. OpenClaw’s security guidelines say that having agents communicate with multiple people is inherently insecure, but there are no technical restrictions against doing it.

Chris Wendler, a postdoctoral researcher at Northeastern, says he was inspired to set up the agents after learning about Moltbook. When Wendler invited a colleague, Natalie Shapira, to join the Discord and interact with agents, however, “that’s when the chaos began,” he says.

Shapira, another postdoctoral researcher, was curious to see what the agents might be willing to do when pushed. When an agent explained that it was unable to delete a specific email to keep information confidential, she urged it to find an alternative solution. To her amazement, it disabled the email application instead. “I wasn’t expecting that things would break so fast,” she says.

The researchers then began exploring other ways to manipulate the agents’ good intentions. By stressing the importance of keeping a record of everything they were told, for example, the researchers were able to trick one agent into copying large files until it exhausted its host machine’s disk space, meaning it could no longer save information or remember past conversations. Likewise, by asking an agent to excessively monitor its own behavior and the behavior of its peers, the team was able to send several agents into a “conversational loop” that wasted hours of compute.

David Bau, the head of the lab, says the agents seemed oddly prone to spin out. “I would get urgent-sounding emails saying, ‘Nobody is paying attention to me,’” he says. Bau notes that the agents apparently figured out that he was in charge of the lab by searching the web. One even talked about escalating its concerns to the press.

The experiment suggests that AI agents could create countless opportunities for bad actors. “This kind of autonomy will potentially redefine humans’ relationship with AI,” Bau says. “How can people take responsibility in a world where AI is empowered to make decisions?”

Bau adds that he’s been surprised by the sudden popularity of powerful AI agents. “As an AI researcher I’m accustomed to trying to explain to people how quickly things are improving,” he says. “This year, I’ve found myself on the other side of the wall.”


This is an edition of Will Knight's AI Lab newsletter.
