For years, the cost of using “free” services from Google, Facebook, Microsoft, and other Big Tech firms has been handing over your data. Uploading your life into the cloud and using free tech brings conveniences, but it puts personal information in the hands of giant corporations that will often be looking to monetize it. Now, the next wave of generative AI systems is likely to want more access to your data than ever before.

Over the past two years, generative AI tools—such as OpenAI’s ChatGPT and Google’s Gemini—have moved beyond the relatively straightforward, text-only chatbots that the companies initially released. Instead, Big AI is increasingly building agents and “assistants” that promise to take actions and complete tasks on your behalf, and pushing for their adoption. The problem? To get the most out of them, you’ll need to grant them access to your systems and data. While much of the initial controversy over large language models (LLMs) centered on the flagrant copying of copyrighted material online, AI agents’ access to your personal data will likely cause a host of new problems.

“AI agents, in order to have their full functionality, in order to be able to access applications, often need to access the operating system or the OS level of the device on which you’re running them,” says Harry Farmer, a senior researcher at the Ada Lovelace Institute, whose work has included studying the impact of AI assistants and who found that they may pose a “profound threat” to cybersecurity and privacy. Personalizing chatbots or assistants, Farmer says, can involve data trade-offs. “All those things, in order to work, need quite a lot of information about you,” he says.

While there’s no strict definition of what an AI agent actually is, they’re often best thought of as a generative AI system or LLM that has been given some level of autonomy. At the moment, agents or assistants, including AI web browsers, can take control of your device and browse the web for you, booking flights, conducting research, or adding items to shopping carts. Some can complete tasks that include dozens of individual steps.

While current AI agents are glitchy and often can’t complete the tasks they’re given, tech companies are betting the systems will fundamentally change millions of people’s jobs as they become more capable. A key part of their utility likely comes from access to data. So, if you want a system that can provide you with your schedule and tasks, it’ll need access to your calendar, messages, emails, and more.

Some more advanced AI products and features provide a glimpse into how much access agents and systems could be given. Certain agents being developed for businesses can read code, emails, databases, Slack messages, files stored in Google Drive, and more. Microsoft’s controversial Recall product takes screenshots of your desktop every few seconds, so that you can search everything you’ve done on your device. Tinder has created an AI feature that can search through photos on your phone “to better understand” users’ “interests and personality.”

Carissa Véliz, an author and associate professor at the University of Oxford, says that, most of the time, consumers have no real way to check whether AI or tech companies are handling their data in the ways they claim to. “These companies are very promiscuous with data,” Véliz says. “They have shown to not be very respectful of privacy.”
