“We are moving into a new phase of informational warfare on social media platforms where technological advancements have made the classic bot approach outdated,” says Jonas Kunst, a professor of communication at BI Norwegian Business School and one of the co-authors of the report.
For experts who have spent years tracking and combating disinformation campaigns, the paper presents a terrifying future.
“What if AI wasn’t just hallucinating information, but thousands of AI chatbots were working together to give the guise of grassroots support where there was none? That’s the future this paper imagines—Russian troll farms on steroids,” says Nina Jankowicz, the former Biden administration disinformation czar who is now CEO of the American Sunlight Project.
The researchers say it’s unclear whether this tactic is already being used, because the systems currently in place to track and identify coordinated inauthentic behaviour are not capable of detecting AI swarms.
“Because of their elusive features to mimic humans, it’s very hard to actually detect them and to assess to what extent they are present,” says Kunst. “We lack access to most [social media] platforms because platforms have become increasingly restrictive, so it’s difficult to get an insight there. Technically, it’s definitely possible. We are pretty sure that it’s being tested.”
Kunst adds that these systems are likely to still have some human oversight as they are being developed, and predicts that while they may not have a massive impact on the 2026 US midterms in November, they will very likely be deployed to disrupt the 2028 presidential election.
Accounts indistinguishable from humans on social media platforms are only one issue. The ability to map social networks at scale will, the researchers say, allow those coordinating disinformation campaigns to aim their agents at specific communities, ensuring the biggest possible impact.
“Equipped with such capabilities, swarms can position for maximum impact and tailor messages to the beliefs and cultural cues of each community, enabling more precise targeting than that with previous botnets,” they write.
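To picture what that kind of network mapping involves, here is a minimal, hypothetical sketch that uses the open source networkx library to split an invented follower graph into communities. The accounts and connections are made up for illustration; nothing here comes from the paper itself.

# Hypothetical sketch: grouping a follower graph into communities,
# the kind of audience segmentation the researchers warn swarms could automate.
# The edge list below is invented for illustration.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Build an undirected graph from (account, account) follow relationships.
edges = [
    ("alice", "bob"), ("bob", "carol"), ("carol", "alice"),   # one cluster
    ("dave", "erin"), ("erin", "frank"), ("frank", "dave"),   # another cluster
    ("carol", "dave"),                                        # weak bridge between them
]
graph = nx.Graph(edges)

# Greedy modularity maximization groups densely connected accounts together.
communities = greedy_modularity_communities(graph)

for i, community in enumerate(communities):
    print(f"community {i}: {sorted(community)}")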
Such systems could be essentially self-improving, using the responses to their posts as feedback to refine how they deliver a message. “With sufficient signals, they may run millions of micro A/B tests, propagate the winning variants at machine speed, and iterate far faster than humans,” the researchers write.
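The feedback loop described in that quote amounts to a bandit-style experiment run continuously. The sketch below is a hypothetical Python illustration of the idea, with invented message variants and a simulated engagement signal standing in for real platform feedback; it is not code from the paper.

# Minimal sketch of a "micro A/B test" loop: post variants, observe engagement,
# and keep promoting whichever performs best.
# The variants and the engagement signal are simulated, not real platform data.
import random

variants = ["message A", "message B", "message C"]
posts = {v: 0 for v in variants}         # how often each variant has been posted
engagement = {v: 0.0 for v in variants}  # total simulated engagement per variant

def simulated_engagement(variant: str) -> float:
    """Stand-in for real feedback (likes, replies); here a biased random draw."""
    base = {"message A": 0.2, "message B": 0.5, "message C": 0.3}[variant]
    return float(random.random() < base)

for step in range(1000):
    # Epsilon-greedy: mostly exploit the best-performing variant, occasionally explore.
    if step < len(variants) or random.random() < 0.1:
        choice = random.choice(variants)
    else:
        choice = max(variants, key=lambda v: engagement[v] / max(posts[v], 1))
    posts[choice] += 1
    engagement[choice] += simulated_engagement(choice)

best = max(variants, key=lambda v: engagement[v] / max(posts[v], 1))
print("winning variant:", best)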
To combat the threat posed by AI swarms, the researchers suggest establishing an “AI Influence Observatory,” made up of people from academic groups and nongovernmental organizations working to “standardize evidence, improve situational awareness, and enable faster collective response rather than impose top-down reputational penalties.”
One group not included is executives from the social media platforms themselves, primarily because the researchers believe those companies prioritize engagement above all else and therefore have little incentive to identify these swarms.
“Let’s say AI swarms become so frequent that you can’t trust anybody and people leave the platform,” says Kunst. “Of course, then it threatens the model. If they just increase engagement, for a platform it’s better to not reveal this, because it seems like there’s more engagement, more ads being seen, that would be positive for the valuation of a certain company.”
Beyond the lack of action from the platforms, experts believe there is little incentive for governments to get involved. “The current geopolitical landscape might not be friendly for ‘Observatories’ essentially monitoring online discussions,” Olejnik says, a concern Jankowicz shares: “What’s scariest about this future is that there’s very little political will to address the harms AI creates, meaning [AI swarms] may soon be reality.”