Report calls for AI toy safety standards to protect young children
The University of Cambridge is conducting a major research project to understand how AI-powered “smart” toys affect young children’s development, play, and social relationships. These toys use generative AI to hold conversations and are increasingly marketed as tools that support learning, emotional development, and companionship.
Focus on children aged 0–5
The study focuses especially on children aged 0–5 and includes observations of children playing with AI toys, as well as interviews and surveys with parents, teachers, and early-years professionals. A key aim is to explore how children form emotional connections with these toys and how adults perceive their benefits and risks.
Researchers highlight that there is currently very little independent evidence about the actual impact of these toys. While companies promote them as educational and supportive, it remains unclear whether they truly benefit children or could pose risks—particularly regarding emotional development, data privacy, and social behavior.
Concerns about inequality
A central concern is whether AI toys might widen or reduce social inequalities. The project specifically examines children from disadvantaged backgrounds to see if access to such technology creates new opportunities or instead deepens existing gaps.
The researchers also emphasize that children are highly adaptable and may use AI toys in unexpected ways, potentially creating new forms of play. However, because so much is still unknown, the study aims to provide evidence-based recommendations on how these toys should be designed and used to best support children’s development and well-being.
The full report is available for download here.