Amid a flood of toys boasting AI features this holiday season, a consumer advocacy group warns that more must be done to ensure the gadgets are safe for children.
The Public Interest Research Group (Pirg), which pushes for corporations and government bodies to prioritise health, safety and well-being, said in a new report that many of the new AI-enabled toys being advertised come with risks.
One of four AI toys examined by Pirg, an AI-powered plush bear named Kumma, had very few guardrails, and, according to the consumer advocacy group, "gave detailed instructions on how to light a match", and in some instances "discussed a range of sexually explicit topics in depth in conversations lasting more than 10 minutes".
Pirg said the company behind the toy, FoloToy, later made changes to the device after an internal safety audit, but bigger concerns linger for the toy industry and its adoption of AI as a whole.
Another company mentioned in the report, the AI-toy robot maker Miko, included a disclaimer with its device warning that the company "may share some data with third-party game developers and advertising partners".
Based on testing various products, Pirg said that there was room for considerable improvement when it comes to making AI toys safer and easier for parents to control.
"Regulators should enforce existing consumer protection and privacy laws that do already apply to AI products," the report's conclusion read in part. It also went as far as pushing to limit how toys with AI features are advertised: "AI toys should be neither designed nor marketed as emotional companions for children," Pirg's analysis said.
If neither regulators nor companies act on the group's recommendations, the report said, the slack will ultimately need to be picked up elsewhere.