Children’s cartoons get a free pass under YouTube’s deepfake disclosure rules

YouTube has updated its rulebook for the age of deepfakes. Starting today, anyone uploading a video to the platform will have to disclose certain uses of synthetic media, including generative AI, to let viewers know what they’re watching isn’t real. YouTube says this applies to “realistic” altered media such as “making it appear as if a real building caught fire” or “replacing one person’s face with another’s.”

With the new policy, YouTube is taking steps that could help prevent the spread of AI-generated misinformation as the US presidential election approaches. It’s also surprising for what it allows: AI-generated animations aimed at children are not subject to the new synthetic content disclosure rules.

YouTube’s new policies exclude animated content from the disclosure requirement altogether. This means that the emerging scene of get-rich-quick, AI-generated content hustlers can continue making videos for kids without disclosing their methods. Parents concerned about the quality of hastily produced nursery-rhyme videos will be left to identify AI-generated cartoons on their own.

YouTube’s new policy also says creators are not required to flag the use of AI for “minor” edits that are “primarily aesthetic” such as beauty filters or cleaning up video and audio. The use of AI to “generate or improve” a script or caption is also permitted without disclosure.

There’s no shortage of low-quality content created without AI on YouTube, but generative AI tools lower the barriers to video creation in a way that accelerates its production. YouTube’s parent company Google recently said it was tweaking its search algorithms to suppress the recent flood of AI-generated clickbait, made possible by tools like ChatGPT. Video-generation technology is less mature but is improving rapidly.

A moderation problem

YouTube is the juggernaut of children’s entertainment, dwarfing competitors like Netflix and Disney. The platform has struggled in the past to moderate the vast amount of content aimed at children. It has come under criticism for hosting content that appears superficially appropriate or attractive to children but upon closer inspection contains distasteful topics.

WIRED recently reported on the rise of YouTube channels targeting children that use AI video-generation tools to create cheesy videos featuring generic 3D animations and off-kilter iterations of popular nursery rhymes.

The exemption for animation in YouTube’s new policy may mean that parents can’t easily filter such videos from search results, even if they would prefer to stick to popular and thoroughly vetted channels like PBS Kids or Ms. Rachel. Nor can they stop YouTube’s recommendation algorithm from autoplaying AI-generated cartoons after setting a child up with a video.

Some problematic AI-generated content targeted at children would need to be flagged under the new rules. In 2023, the BBC investigated a wave of videos targeting older children that used AI tools to push pseudoscience and conspiracy theories, including climate change denial. These videos mimicked traditional live-action educational videos – for example, showing the actual Pyramids of Giza – so unsuspecting viewers might mistake them for factually accurate educational content. (The pyramid video went on to suggest that the structures could generate electricity.) The new policy would require disclosure on that type of video.

“We require kids content creators to disclose content that is meaningfully altered or synthetically generated when it seems realistic,” says YouTube spokesperson Elena Hernandez. “We don’t require disclosure of content that is clearly unrealistic and isn’t misleading the viewer into thinking it’s real.”

YouTube Kids, a dedicated app for kids, is designed using a combination of automated filters, human review, and user feedback to find well-made content for kids. But many parents use the main YouTube app to curate content for their kids, relying on video titles, listings and thumbnail images to decide what’s appropriate.

So far, most of the apparently AI-generated children’s content WIRED found on YouTube has been poorly made, much like more traditional low-effort children’s animations. It has ugly visuals, incoherent plots, and zero educational value – but AI is not uniquely capable of producing ugly, incoherent, or educationally worthless content.

AI tools do make it easier to produce such content in large quantities. WIRED found channels that uploaded lengthy videos, some more than an hour long. Requiring labels on AI-generated children’s content could help parents filter out cartoons published with minimal or no human vetting.
