“Adpocalypse”

By: Carl Rustad

“Youtube Hate Preachers Share Screen With Household Names.” “Google’s Youtube has Continued Showing Brands’ Ads With Racist and Other Objectionable Videos.” These are the headlines Google faced in March 2017, as ads for Google’s advertising partners allegedly appeared alongside hateful or inappropriate Youtube videos. Within days, high-profile advertisers including Wal-Mart, PepsiCo, General Motors, AT&T, Dish, and Starbucks all pulled their ads from the platform.

Google responded to these allegations by “implementing broader demonetization policies around videos that are perceived to be hateful or inflammatory” and “strengthen[ing] advertiser controls for video and display ads.” Using algorithms, Youtube “automatically weed[s] out inappropriate content,” sorting each uploaded video into categories purportedly reflecting its desirability to advertisers. Advertisers can exclude videos from categories like “tragedy and conflict,” “sensitive social issues,” “sexually suggestive content,” “sensational and shocking,” and “profanity and rough language.” These options clearly reach far more content than the hate speech originally complained of. Videos deemed inappropriate for advertisers are “demonetized”: ads will not appear on them, they are deprioritized in search results, and content creators receive no ad revenue from them. The resulting drop in ad revenue is referred to as “Adpocalypse.”
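
Mechanically, this works like a category filter applied to every upload. The sketch below is only illustrative: the category names are taken from the advertiser controls quoted above, but the classifier stand-in, data structures, and function names are assumptions for the sake of example, not Youtube’s actual systems or API.

```python
# Hypothetical sketch of category-based demonetization. The category names
# mirror Youtube's advertiser exclusion options; everything else (the
# classifier stand-in and function names) is an illustrative assumption.

EXCLUDABLE_CATEGORIES = {
    "tragedy and conflict",
    "sensitive social issues",
    "sexually suggestive content",
    "sensational and shocking",
    "profanity and rough language",
}

def classify_video(video: dict) -> set[str]:
    """Stand-in for an automated content classifier.

    In practice this would be a machine-learning model scoring the video's
    audio, transcript, thumbnails, and metadata; here we just read labels.
    """
    return set(video.get("labels", []))

def is_monetizable(video: dict, advertiser_exclusions: set[str]) -> bool:
    """A video earns ad revenue only if it hits none of the advertiser's
    excluded categories."""
    return not (classify_video(video) & advertiser_exclusions)

# Example: a war-zone news report labeled "tragedy and conflict" is
# demonetized for an advertiser excluding that category, even though it
# is not hate speech.
video = {"title": "War zone report", "labels": ["tragedy and conflict"]}
print(is_monetizable(video, {"tragedy and conflict"}))  # False
```

Note how coarse the filter is: anything sharing a label with excluded content loses its ad revenue, regardless of whether it is the hate speech advertisers originally objected to.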

As a result of these efforts, Youtube claimed “many advertisers have resumed their media campaigns on Youtube,” but also acknowledged that content creators faced “revenue fluctuations” due to demonetization and promised to provide “more detail around advertiser-friendly guidelines.” Meanwhile, some content creators on the platform claimed to see an initial 80 percent drop in ad revenue due to demonetization, leveling off to a “40, 50, 60 percent drop” as videos were deemed not suitable for all advertisers. The prominent vlogging channel Vlogbrothers opined, “[demonetization] has really squeezed creators who are making content that’s maybe good, but not, like, super-happy-family-fun-time stuff.”

Private Platforms Provide Strong Extralegal IP Protections

Adpocalypse demonstrates both the interest and the power that companies have in protecting their brands on private platforms. Brands are already entitled to certain legal protections. A trademark holder is protected against damaging associations in several scenarios, including when unauthorized use of their trademark causes confusion as to the source or sponsorship of a product, or tarnishes the brand by association with “unsavory” ideas. See AMF, Inc. v. Sleekcraft Boats; Louis Vuitton Malletier S.A. v. Haute Diggity Dog, LLC. On the other hand, there is no trademark infringement when the trademark is being used to describe a product or talk about a competitor’s product. See KP Permanent Make-Up Inc. v. Lasting Impression I, Inc. These are considered “fair uses” of a trademark.

But the companies advertising on Youtube, of course, do not have to point to their carefully balanced intellectual property rights in order to control their representation on the platform. They can simply refuse to advertise on a platform if it tends to associate their brand with any less-than-ideal content. This is not a new phenomenon. Media has long catered to advertisers, with media scholar C. Edwin Baker claiming “the greatest threat of censorship in this country comes not from the government, but from advertisers . . . .” As online platforms mediate an ever-greater share of our time, advertiser censorship may become correspondingly more pervasive. With algorithmic and computing advances, such censorship can be systematically extended to hosted individual speech, as seen in Adpocalypse.

Real-Time Content Moderation: The Future of Advertising?

Adpocalypse concerned advertisers’ association with undesirable uploaded videos, which are scanned for content and, if deemed unsuitable for advertisers, demonetized and deprioritized in search. This hawkish breed of moderation is enabled by advances in automated decision-making. Over 500 hours of content are uploaded to Youtube every minute, and each video must be scanned and categorized as safe or unsafe for advertisers.
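
A back-of-the-envelope calculation shows why that volume forces automation. The 500-hours-per-minute figure comes from the paragraph above; the assumption about human reviewer throughput is purely illustrative.

```python
# Rough scale of the moderation problem. The 500 hours/minute figure is from
# the text; the assumption that a human reviewer could screen footage at 1x
# speed for 8 hours a day is an illustrative assumption, not a real estimate
# of Youtube's review workforce.

hours_uploaded_per_minute = 500
hours_uploaded_per_day = hours_uploaded_per_minute * 60 * 24  # 720,000 hours
reviewer_hours_per_day = 8

reviewers_needed = hours_uploaded_per_day / reviewer_hours_per_day
print(f"{hours_uploaded_per_day:,} hours of video uploaded per day")          # 720,000
print(f"~{reviewers_needed:,.0f} full-time reviewers just to watch it once")  # ~90,000
```

At that scale, human review of every upload is implausible, which is why categorization is left to algorithms in the first place.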

Platforms are now facing pressure to provide real-time moderation to prevent violations of their terms of service by censoring disinformation, incitements to violence, and other abuses. Facebook Horizons already includes real-time moderation features allowing it to instantly deplatform or censor virtual reality users deemed abusive, with Facebook alone making that determination. The advantages of such a system are obvious: hate speech, harassment, and other universally condemned behavior can be stopped before it spreads. Unfortunately, the concerns real-time moderation raises are just as obvious.

Platforms will continue to compete for ad revenue. As Adpocalypse demonstrates, online platforms are not simply censoring hate speech; they are beginning to censor anything that is not “advertiser-friendly.” Allowing advertisers fine control over the spaces in which their products appear, not just how their ads appear, is a profitable course of action. One easily foreseeable use of real-time moderation is to limit the visibility of advertiser-unfriendly speech in VR chat. But there is no reason to believe the technology will be confined to such transparent and simplistic uses. Facebook already sells sophisticated, hyper-targeted ads, and US advertisers are willing to pay roughly $250 billion a year to control what consumers associate with their products. The market is there.

Given that online platforms will soon have both the capability and the incentive to moderate speech, and the environments in which speech occurs, in real time, it is time to take a hard look at the role of advertisers in platform censorship. While the First Amendment does not apply to private platforms, consumers should demand transparency from platforms about how speech is moderated and hold them accountable when moderation technology is abused to accommodate advertisers.
