AI slop goes mainstream as YouTube cashes in

There’s a prevailing wisdom that AI-generated content, or “slop” as it’s colloquially known, should make our skin crawl. AI models tend to generate uncanny faces, mangled hands and fantastical scenarios.

Take this YouTube Short video of a baby that finds itself being shimmied up a baggage loader onto a jumbo jet, before donning an aviation headset and flying the plane. It has racked up more than 103 million views.

So, too, have other AI-generated videos, which are starting to dominate the platform in much the same way they've proliferated across Facebook, Pinterest and Instagram. Several of YouTube's most popular channels now feature AI-generated content heavily.

I'd originally thought this would be a problem for YouTube as it grappled with what looked like a new form of spam. But the general lack of complaint from advertisers, coupled with the gangbusters growth of AI content and even appreciative comments from viewers, changed my view. It seems the public is happy to gorge on slop, and that's not a problem for Alphabet's most valuable asset after Google Search. Quite the opposite.

Earlier this month, YouTube — which could surpass The Walt Disney Co this year as the world's largest media company by revenue — updated its policies to strike a balance: letting AI-generated videos flourish on its platform without letting them swamp it with spam.

The new rules cut ad revenue from low-effort, repetitive content: think of channels, often run by the same person, uploading dozens of videos a day. Their creators might exploit AI tools like ElevenLabs to create a synthetic voice that reads out a script, scraped from Reddit, over a slideshow of stock images. Some of these videos get hundreds of thousands of views.

Case-by-case basis

The video platform’s overall approach, however, is that AI-generated content is fine, so long as it’s original, provides value to viewers and includes some human input. For now, it seems to be measuring that on a case-by-case basis, which is as good an approach as any with new tech. YouTube is also no stranger to fighting spam.

Indeed, the policy update seems to have put advertisers at ease, even as 92% of creators on the site use generative AI tools, according to the company. Advertisers have a tacit understanding that more AI on YouTube means more content, and more revenue. It helps that the industry has years of experience trying to monitor icky content — from racism to conspiracy theories — shown next to their brands online. They've learnt it's a yearslong game of Whac-A-Mole.


YouTube clearly wants AI content to thrive. Sister company Google has said that later this year it will bring its video-generation tool Veo 3 to YouTube Shorts, making it even easier to create lifelike AI videos of Stormtrooper vloggers or biblical characters as influencers. The company says AI will "unlock creativity" for its creators.

But unlocking new forms of profit is more straightforward for Alphabet than it is for creators. Take Ahmet Yiğit, the Istanbul-based creator behind the viral pilot-baby video. Though his channel has racked up hundreds of millions of views, he’s only received an estimated US$2 600 for his most viral post, with the bulk of his audience coming from countries like India, where ad rates are low.

Yiğit says he spends hours on a single scene and juggles a dozen tools, suggesting that even this new generation of AI creators could end up working harder for less, while Alphabet reaps ad revenue from their output. As long as the content machine runs, it doesn't matter whether AI videos are quick and easy or gruelling to make — only that they drive views and ads.

That’s why YouTube is leaning harder into welcoming slop than policing it. While the company does require creators to say if their videos contain AI, the resulting disclaimer is listed in a small-text description that viewers must click through to read, making it tough to spot. That does little to address the growing confusion around what’s real and what’s synthetic as more YouTubers race to capitalise on AI content.


The risk is that as slop floods our feeds and juices YouTube's recommendation algorithms, it'll drown out more thoughtful, human-made work. The earliest big YouTube hits were slices of life, like the infamous "Charlie Bit My Finger". What happens when the next wave of viral hits has no bearing on reality, instead offering bizarre, dreamlike sequences of babies dressed as Stormtroopers, or Donald Trump beating up bullies in an alleyway?

Perhaps they will both reflect and deepen our sense of disconnection from real life. AI content might turn out to be a boon for YouTube, but it offers an unsettling future for the rest of us.  – © 2025 NewsCentral Media

