By Derk Westermeyer
A little over four years ago, comedian Ethan Klein uploaded the first video to his YouTube channel, h3h3productions. Its premise: how people use toilet paper. While this brand of comedy may not be for everyone, Ethan's channel has largely been a success. Since that first video, Ethan has uploaded hundreds more, many of which generate millions of views each.
Ethan monetizes those views through YouTube's advertising revenue policy, which grants creators like him a small share of the advertising fee whenever an ad plays on one of their videos. A video that garners millions of views can bring in thousands of dollars in advertising revenue for its creator. This policy has allowed many YouTube creators to make a living uploading videos, and it has helped build a platform where people from all over can come together to discuss the news, review movies, and show off funny videos of their cats.
Over the past few months, however, this landscape has been rapidly changing. Earlier this year, the Wall Street Journal reported that ads from major retailers were being played on YouTube videos displaying objectionable content, such as racism. The story prompted many of those retailers, including PepsiCo, Wal-Mart, and Starbucks, to pull their advertisements from YouTube. In response, YouTube assured advertisers their ads would not appear alongside content that clashed with their values, gave advertisers more control over which content their ads run against, and ramped up its artificial intelligence system to crack down on videos with objectionable content. The new system's goal was to prevent such videos from displaying any advertisements, and thus to prevent their creators from making any money.
In theory, this new system sounds like a great solution. Advertisers could sleep soundly knowing their ads would not run on videos at odds with their brands, and creators could no longer profit from hateful speech. In practice, however, the system has proven far from perfect, due in large part to YouTube's artificial intelligence.
YouTube's AI is preventing a large number of videos from earning money, yet many of those videos do not contain the offensive material described in the Wall Street Journal article. Videos can be demonetized simply for discussing sensitive subjects, regardless of the creator's viewpoint on them. As a result, creators who engage in critical discussion of topics like political events can be barred from earning any advertising money.
The impact of this new system on content creators is massive. Videos on Ethan's channel now earn only 16% of what they did a few months ago. Back then, Ethan felt free to speak his mind; now he fears he will have to leave YouTube for good unless he caters to advertisers' desires. Many other creators fear the same.
Improving artificial intelligence's ability to detect hate speech could help bring balance to this mess. Alphabet Inc. is one company attempting just that, though it still has its work cut out for it. Back in February, MIT Technology Review posted a blog about one of Alphabet's latest anti-hate-speech products, dubbed "Perspective." While the review found that Perspective was good at flagging hate speech based on style, it noted that the program struggled to account for the speaker's intent and the message's context. Thus, the program may miss content that contains hate speech while blocking other messages free of any hateful elements.
With around 400 hours of video uploaded to YouTube every minute, YouTube needs a program that can efficiently filter harmful content from the rest. Such a program would not only benefit YouTube creators like Ethan, but also help crack down on hate speech in other arenas, such as cyberbullying. Until those improvements arrive, user-created content platforms like YouTube could be in serious trouble.