Even now that the data is secured, Margolis and Thacker argue that the incident raises questions about how many people inside companies that make AI toys have access to the data they collect, how their access is monitored, and how well their credentials are protected. “There are cascading privacy implications from this,” says Margolis. “All it takes is one employee to have a bad password, and then we’re back to the same place we started, where it’s all exposed to the public internet.”

Margolis adds that this sort of sensitive information about a child’s thoughts and feelings could be used for horrific forms of child abuse or manipulation. “To be blunt, this is a kidnapper’s dream,” he says. “We’re talking about information that would let someone lure a child into a really dangerous situation, and it was essentially accessible to anybody.”

Margolis and Thacker point out that, beyond its accidental data exposure, Bondu also appears—based on what they saw inside its admin console—to use Google’s Gemini and OpenAI’s GPT-5, and as a result may share information about kids’ conversations with those companies. Bondu’s Anam Rafid responded to that point in an email, stating that the company does use “third-party enterprise AI services to generate responses and run certain safety checks, which involves securely transmitting relevant conversation content for processing.” But he added that the company takes precautions to “minimize what’s sent, use contractual and technical controls, and operate under enterprise configurations where providers state prompts/outputs aren’t used to train their models.”

The two researchers also warn that part of the risk posed by AI toy companies may be that they’re more likely to use AI to code their products, tools, and web infrastructure. They say they suspect that the unsecured Bondu console they discovered was itself “vibe-coded”—created with generative AI programming tools that often lead to security flaws. Bondu didn’t respond to WIRED’s question about whether the console was programmed with AI tools.

Warnings about the risks of AI toys for kids have grown in recent months, but they have largely focused on the threat that a toy’s conversations will raise inappropriate topics or even lead children to dangerous behavior or self-harm. NBC News, for instance, reported last month that AI toys its reporters chatted with offered detailed explanations of sexual terms, gave tips on how to sharpen knives, and even seemed to echo Chinese government propaganda, claiming, for example, that Taiwan was part of China.

Bondu, by contrast, appears to have at least attempted to build safeguards into the AI chatbot it gives children access to. The company even offers a $500 bounty for reports of “an inappropriate response” from the toy. “We’ve had this program for over a year and no one has been able to make it say anything inappropriate,” a line on the company’s website reads.

Yet at the same time, Thacker and Margolis found that Bondu was leaving all of its users’ sensitive data entirely exposed. “This is a perfect conflation of safety with security,” says Thacker. “Does ‘AI safety’ even matter when all the data is exposed?”

Thacker says that prior to looking into Bondu’s security, he’d considered giving AI-enabled toys to his own kids, just as his neighbor had. Seeing Bondu’s data exposure firsthand changed his mind.

“Do I really want this in my house? No, I don’t,” he says. “It’s kind of just a privacy nightmare.”
