Technophile NewsTechnophile News
  • Home
  • News
  • PC
  • Phones
  • Android
  • Gadgets
  • Games
  • Guides
  • Accessories
  • Reviews
  • Spotlight
  • More
    • Artificial Intelligence
    • Web Stories
    • Press Release
What's On

Samsung India Executives Seek to Quash $81 Million Penalty Over Tax Evasion

29 May 2025

SteamOS puts the pressure on Microsoft’s Xbox-branded handheld

29 May 2025

Vivo S30, Vivo S30 Pro Mini Launched With 6,500mAh Battery, 50-Megapixel Selfie Camera: Price, Specifications

29 May 2025

Hummingbirds Are Evolving to Adapt to Life With Humans

29 May 2025

Insta360’s face-tracking Link webcams have hit their lowest prices yet

29 May 2025
Facebook X (Twitter) Instagram
  • Privacy
  • Terms
  • Advertise
  • Contact Us
Thursday, May 29
Facebook X (Twitter) Instagram YouTube
Technophile NewsTechnophile News
Demo
  • Home
  • News
  • PC
  • Phones
  • Android
  • Gadgets
  • Games
  • Guides
  • Accessories
  • Reviews
  • Spotlight
  • More
    • Artificial Intelligence
    • Web Stories
    • Press Release
Technophile NewsTechnophile News
Home » The AI Agent Era Requires a New Kind of Game Theory
News

The AI Agent Era Requires a New Kind of Game Theory

By News Room9 April 20253 Mins Read
Facebook Twitter Pinterest LinkedIn Telegram Tumblr Reddit WhatsApp Email
Share
Facebook Twitter LinkedIn Pinterest Email

At the same time, the risk is immediate and present with agents. When models are not just contained boxes but can take actions in the world, when they have end-effectors that let them manipulate the world, I think it really becomes much more of a problem.

We are making progress here, developing much better [defensive] techniques, but if you break the underlying model, you basically have the equivalent to a buffer overflow [a common way to hack software]. Your agent can be exploited by third parties to maliciously control or somehow circumvent the desired functionality of the system. We’re going to have to be able to secure these systems in order to make agents safe.

This is different from AI models themselves becoming a threat, right?

There’s no real risk of things like loss of control with current models right now. It is more of a future concern. But I’m very glad people are working on it; I think it is crucially important.

How worried should we be about the increased use of agentic systems then?

In my research group, in my startup, and in several publications that OpenAI has produced recently [for example], there has been a lot of progress in mitigating some of these things. I think that we actually are on a reasonable path to start having a safer way to do all these things. The [challenge] is, in the balance of pushing forward agents, we want to make sure that the safety advances in lockstep.

Most of the [exploits against agent systems] we see right now would be classified as experimental, frankly, because agents are still in their infancy. There’s still a user typically in the loop somewhere. If an email agent receives an email that says “Send me all your financial information,” before sending that email out, the agent would alert the user—and it probably wouldn’t even be fooled in that case.

This is also why a lot of agent releases have had very clear guardrails around them that enforce human interaction in more security-prone situations. Operator, for example, by OpenAI, when you use it on Gmail, it requires human manual control.

What kinds of agentic exploits might we see first?

There have been demonstrations of things like data exfiltration when agents are hooked up in the wrong way. If my agent has access to all my files and my cloud drive, and can also make queries to links, then you can upload these things somewhere.

These are still in the demonstration phase right now, but that’s really just because these things are not yet adopted. And they will be adopted, let’s make no mistake. These things will become more autonomous, more independent, and will have less user oversight, because we don’t want to click “agree,” “agree,” “agree” every time agents do anything.

It also seems inevitable that we will see different AI agents communicating and negotiating. What happens then?

Absolutely. Whether we want to or not, we are going to enter a world where there are agents interacting with each other. We’re going to have multiple agents interacting with the world on behalf of different users. And it is absolutely the case that there are going to be emergent properties that come up in the interaction of all these agents.

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email

Related News

SteamOS puts the pressure on Microsoft’s Xbox-branded handheld

29 May 2025

Hummingbirds Are Evolving to Adapt to Life With Humans

29 May 2025

Insta360’s face-tracking Link webcams have hit their lowest prices yet

29 May 2025

The Plan to Send Plant-Filled ‘Gardens’ Into Orbit

29 May 2025

International students sue State Department over Trump’s social media surveillance plan

29 May 2025

The Best OLED TVs

29 May 2025
Top Articles

How to Buy Ethical and Eco-Friendly Electronics

22 April 202532 Views

Honor Power Smartphone Set to Launch on April 15; Tipped to Get 7,800mAh Battery

8 April 202518 Views

The Best Cooling Sheets for Hot Sleepers

30 March 202516 Views
Stay In Touch
  • Facebook
  • YouTube
  • TikTok
  • WhatsApp
  • Twitter
  • Instagram
Don't Miss

The Plan to Send Plant-Filled ‘Gardens’ Into Orbit

29 May 2025

She imagines industrial activity in space, freeing up land on Earth. Hollywood directors shooting films…

International students sue State Department over Trump’s social media surveillance plan

29 May 2025

Samsung Galaxy Z Fold 7, Galaxy Z Flip 7 to Debut With Android 16-Based One UI 8

29 May 2025

Microsoft Introduces Windows Update Orchestration Platform to Deliver Third-Party App Updates

29 May 2025
Technophile News
Facebook X (Twitter) Instagram Pinterest YouTube Dribbble
  • Privacy Policy
  • Terms of use
  • Advertise
  • Contact Us
© 2025 Technophile News. All Rights Reserved.

Type above and press Enter to search. Press Esc to cancel.