Shallowfakes and Their Potential for Fake News

Image: Generated Photos (Gallery of deepfake, AI generated photos)

By: Ashley Stoll

By now you have probably heard of “deepfakes” (media in which a person’s image is replaced with a fabricated one) and the growing concern that surrounds them. But what you probably don’t know is that most of the deepfakes you see are actually “shallowfakes,” and the laws currently being written to stop fraudulent videos will be ineffective against many of the videos being made. We need a solution that targets all types of disinformation, and for that we need to enlist the help of Internet platforms.

The Increasing Prevalence of Shallowfakes

Shallowfakes are videos that have been manually altered or selectively edited. Shallowfakes first gained widespread attention in early 2018, when videos surfaced, mainly on Reddit, that appeared to show celebrities engaging in sexual activities. These videos were actually pornographic films that had been manually altered so that the heads of celebrities were placed on the bodies of the adult film actors originally in the videos. Software to create videos like this can be downloaded and used by anyone with access to a computer. While shallowfake pornography creates a huge problem both for celebrities and when used as revenge porn, the same technology is also used to spread misinformation. For example, after CNN reporter Jim Acosta had his White House press badge revoked in November 2018, a shallowfake video of the incident in question quickly spread online. The selectively edited video showed Mr. Acosta initiating an assault on a young White House intern, when in reality the intern had simply grabbed a microphone out of Mr. Acosta’s hands. This fake version of the video spread so rapidly that even the White House shared it on its social media accounts.

Another example of a shallowfake circulated on social media in the summer of 2019: a video that appeared to show Speaker of the House Nancy Pelosi slurring her words during an interview as if she were intoxicated. In actuality, the video had been slowed down and manipulated to make her appear drunk. A second shallowfake, purporting to show Speaker Pelosi at a news conference, was selectively edited to make it look like she was stammering. The video was of high quality, and prominent figures, including President Trump, believed it was authentic.

Deepfakes, on the other hand, are videos created by artificial intelligence using generative adversarial networks (GANs), in which two machine learning (ML) models work against each other. One model trains on a data set and creates video forgeries, while the other attempts to detect them. The forger keeps producing fakes until the detecting model can no longer tell the forgeries from the real thing.
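For readers curious about the mechanics, the adversarial loop behind a GAN can be sketched in a few lines of Python. The sketch below is purely illustrative: it assumes tiny networks and random stand-in data rather than the far larger models and video data sets real deepfake tools rely on, it uses the PyTorch library, and every name and number in it is a placeholder rather than the code behind any particular tool.

    # Minimal sketch of the adversarial training loop described above.
    # All sizes, data, and hyperparameters are illustrative assumptions.
    import torch
    import torch.nn as nn

    latent_dim, image_dim = 64, 784  # e.g. a flattened 28x28 frame

    generator = nn.Sequential(          # the "forger": noise -> fake image
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, image_dim), nn.Tanh())

    discriminator = nn.Sequential(      # the "detector": image -> real/fake score
        nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid())

    loss_fn = nn.BCELoss()
    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    real_images = torch.rand(32, image_dim) * 2 - 1  # stand-in for a real data set

    for step in range(1000):
        # 1. Train the detector to tell real frames from forgeries.
        noise = torch.randn(32, latent_dim)
        fakes = generator(noise).detach()
        d_loss = loss_fn(discriminator(real_images), torch.ones(32, 1)) + \
                 loss_fn(discriminator(fakes), torch.zeros(32, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # 2. Train the forger to produce images the detector labels "real".
        noise = torch.randn(32, latent_dim)
        g_loss = loss_fn(discriminator(generator(noise)), torch.ones(32, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

The point of the sketch is simply the alternation: the detector is rewarded for catching forgeries, the forger for slipping past it, and each round of that contest makes the forgeries harder to spot.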

For example, in May 2019 a video of podcast host Joe Rogan surfaced showing what appeared to be Mr. Rogan making comments he had never actually made. The video was extremely convincing, and most people were unable to identify it as a deepfake, a version of Mr. Rogan created by artificial intelligence. This technology is much more advanced than the technology behind shallowfakes. Deepfake technology does not alter an existing video; it creates an entirely new one. As a result, deepfakes are much harder to spot than shallowfakes, and they may soon be impossible to spot without the help of the AI that created them. Though these videos are not yet widespread, as their use increases they have enormous potential to spread disinformation.

Combatting the Rise of Shallowfakes and Deepfakes

Sources are conflicted as to what legal consequences creators of shallowfakes and deepfakes will face. On one side, some fear that the First Amendment will protect shallowfakes and deepfakes as creative works or parody, especially when they involve political figures. Other sources think that shallowfake and deepfake creators could face defamation claims, anti-harassment charges, or even copyright infringement liability.

Existing laws have the potential to bring hefty penalties against creators, but some jurisdictions have passed narrower laws aimed specifically at the problem. For example, Texas was the first state to pass such a law, SB 751, which criminalizes the creation and distribution of deepfake videos that are published and distributed within 30 days of an election with the intent to injure a candidate or influence an election. Similarly, California has passed two laws aimed at the issue. First, California passed AB 730, which prohibits the use of materially deceptive audio or visual media of a candidate within 60 days of an election with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate, unless the media includes a disclosure stating that it has been manipulated. Second, California passed AB 602, which gives citizens a private cause of action to sue if their image is used in sexually explicit content without their permission.

While many have praised AB 602, some believe AB 730 and Texas’ SB 751 will not be effective because of the strong First Amendment protections that surround political speech, especially online. The U.S. Senate has passed a bill that requires the Department of Homeland Security to monitor and report on the technology used to create deepfakes. There is also a bill making its way through the U.S. House of Representatives that is similar to California’s two new laws and would criminalize the knowing distribution of shallowfakes and deepfakes of politicians near elections. The bill would also give a private cause of action to citizens whose images are used to create pornographic material without their consent. Of course, none of these laws would reach the Jim Acosta and Nancy Pelosi shallowfakes or the Joe Rogan deepfake, because those videos do not depict candidates near an election and are not pornographic.

Social Media Platforms to the Rescue

Fortunately, some social media and video/image hosting platforms have voluntarily vowed to remove shallowfake and deepfake videos from their sites. However, Section 230 of the Communications Decency Act shields Internet platforms from liability for what their users post, which means that without repeal or significant modification of Section 230 it is unlikely that social media platforms will be the ones forced to regulate deepfakes and shallowfakes. Yet these platforms may be in the best position to combat the threat that fake videos pose. In many cases these videos spread rapidly, and even if they are later deemed fake, the damage to the person depicted in the video is already done. Internet platforms already monitor their own content and can therefore remove or flag a fake video much more quickly; removing a video would take less time than it would for a federal prosecutor to bring charges or for an individual to sue. Quick detection of fake videos is key. Incentivizing Internet platforms, perhaps through tax breaks, write-offs, or legislation, to take responsibility for detecting shallowfakes and deepfakes may be an efficient way to stop these videos from spreading.
