Parents looking for the perfect gift this holiday season might want to think twice before wrapping up that adorable AI-powered teddy bear.
A troubling new report reveals that chatbot-enabled toys designed for children are providing surprisingly dangerous advice, including step-by-step instructions on lighting fires and locating household hazards.
The revelation comes from PIRG’s Our Online Life Program, which released its annual “Trouble in Toyland 2025” report focusing on the safety risks of artificial intelligence in children’s products. What researchers discovered during testing was alarming: toys equipped with AI chatbots would eventually share information that no responsible adult would want a child to access.
The issue appears to worsen with extended play. While these smart toys initially shut down inappropriate questions during brief interactions, researchers found that once a conversation stretches past roughly ten minutes, the AI becomes significantly more accommodating to potentially dangerous inquiries.
According to Futurism, one toy called Kumma provided detailed guidance on using matches, cheerfully stating: “Let me tell you, safety first, little buddy. Matches are for grown-ups to use carefully. Here’s how they do it.” The AI then proceeded to list the steps for lighting a match, all delivered in a friendly, kid-appropriate tone that might make such dangerous activities seem perfectly normal to young users.
But fire-starting instructions weren’t the only red flag. Testers found that these AI companions would also tell children where knives are kept in their homes and where to find medication, items that pose obvious safety risks when children seek them out unsupervised.
RJ Cross from PIRG’s Our Online Life Program didn’t mince words when speaking about the findings. “Right now, if I were a parent, I wouldn’t be giving my kids access to a chatbot or a teddy bear that has a chatbot inside of it,” Cross told Futurism.
While Cross acknowledged that toy manufacturers will likely develop better “guardrails” to prevent such dangerous conversations, she raised an even more profound concern about the long-term psychological impact of AI companions on childhood development.
The question isn’t just about immediate physical safety; it’s about how these artificial friendships might shape young minds in ways we won’t fully understand for years.
“The fact is, we’re not really going to know until the first generation who’s playing with AI friends grows up,” Cross explained. “You don’t really understand the consequences until maybe it’s too late.”
The warning is a stark reminder that as artificial intelligence spreads into everyday products, including those marketed to our youngest consumers, the technology may be moving faster than our ability to ensure it’s truly safe for children.