Video creators are increasingly turning to AI tools to rapidly produce low-quality videos targeting children on YouTube, raising concerns about the impact of such content on kids.
The world of children’s entertainment on YouTube is facing a new challenge, Wired reports: a growing influx of AI-generated videos aimed at the youngest viewers. According to the investigation, a rising number of YouTube channels appear to be leveraging AI tools like ChatGPT, text-to-speech services, and generative AI features to automate the production of animated videos for kids.
These channels, often promoting themselves as educational or relying on familiar aesthetics like those of the popular Cocomelon series, are churning out videos at an alarming rate. One channel called Yes! Neo, with over 970,000 subscribers, has published a new video every few days since its launch in November 2023, with titles like “Ouch! Baby Got a Boo Boo” and “Poo Poo Song.”
Ben Colman, CEO of deepfake detection startup Reality Defender, analyzed several of these channels and found evidence of AI-generated scripts, synthetic voices, or a combination of both. “Generative text-to-speech is increasingly more commonplace in YouTube videos now — even for children, apparently,” Colman said.
The current trend of AI videos seems focused on capitalizing on YouTube’s recommendation algorithms and monetization opportunities. Tutorials promising riches from AI-generated kids’ videos have flooded the platform, with titles like “$1.2 Million With AI Generated Videos for Kids?” and “$50,000 a MONTH!”
Experts warn that this wave of hastily assembled, AI-driven content could have detrimental effects on young viewers. David Bickham, research director at Boston Children’s Hospital’s Digital Wellness Lab, expressed skepticism about the educational value of such videos, stating, “Something that’s generated entirely to capture eyeballs — you wouldn’t expect it to have any educational or positive beneficial impact.”
The rapid pace of AI video production raises concerns about the lack of human oversight and quality control. Tracy Pizzo Frey, senior adviser of AI for Common Sense Media, emphasized the importance of “meaningful human oversight, especially of generative AI,” suggesting that the responsibility shouldn’t fall entirely on families.
As YouTube prepares to introduce content labels and disclosure requirements for AI-generated content, questions remain about the platform’s ability to effectively monitor and regulate this emerging trend. The potential impact on children’s cognitive and emotional development, coupled with the risks of exposure to inappropriate or harmful content, has sparked calls for greater accountability and ethical considerations in the use of AI for kids’ entertainment.
Read more at Wired here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.