The First Amendment Needs an Update

By: Katherine Czubakowski

Recent news about Twitter and Facebook banning former President Trump has many wondering about the legality of such action.  Although the consensus is that Twitter’s action does not violate the First Amendment, it raises questions about whether the First Amendment should apply to at least some forums on the internet.  The Supreme Court’s precedent has focused largely on free speech rights in physical spaces, but with more communication happening over the web, particularly in today’s quarantined and socially distanced world, it is time for the Court to address when and how speech on the internet can be regulated.

Lower courts have recently begun questioning the assumption that speech on social media cannot be protected by the First Amendment.  Under current law, social media sites are allowed to ban whomever they want.  The Constitution, and by extension the Bill of Rights, applies only to government action, so Twitter, Facebook, and other private social media platforms may regulate speech in any way they choose through their Terms of Service.  This view, however, has been challenged, with varying success, by those wishing to comment on public officials’ Facebook pages, those who have been banned from Twitter for their own speech, and those who have been blocked by other Twitter users.  Although some courts have held that social media platforms are strictly private companies hosting private speech, others have concluded that some aspects of social media platforms fall under First Amendment protection.  When an account is run by a government officer speaking in their official capacity, for example, the interactive features open to the public change the nature of the account from private to public.

One of these cases, Trump v. Knight First Amendment Institute, is currently pending petition at the Supreme Court, giving the Court the ideal opportunity to announce a new policy for how the First Amendment should apply to internet forums.  The plaintiffs in this case argue that by blocking them from his personal Twitter account, former President Trump violated the First Amendment by preventing them from speaking in a public forum based on the viewpoints they would have expressed.  Ensuring that citizens can disagree with politicians and engage in public debate, if only in the comments section, is one of the most important ways to counter the effects of confirmation bias, which is prevalent on social media.  As politicians and public officials take increasing advantage of social media and leverage it as a tool to promote their own policies, citizens’ rights should be expanded to ensure a fair and open public debate.

Now is the time for the Court to extend First Amendment protections to any website which encourages public discussion.  While expanding First Amendment protections to internet forums would likely allow former President Trump back on Twitter, it would also keep him from blocking those with whom he disagrees from replying to and challenging his opinions.  The Founding Fathers thought a fair, open, and representative public debate was the best way to protect our democracy – we just need to move their ideas into the digital age.

Catfish Bait: Too Few State Laws Protect the “Faces” of Catfishers

By: Cameron Cantrell

Most people on the Internet have heard of “catfishing,” or the act of deceiving someone by creating a false personal profile online, and its namesake 2010 documentary and subsequent MTV show. While the movie and show mostly document standard catfishing—where the perpetrator poses as a fictional person for romantic gain rather than to further a scam or fraud—the cases of malicious catfishing get more media attention. The difference between standard and malicious catfishing is not some legal term of art, but rather a reflection of the intent of the perpetrator. Malicious catfishers deceive their victims to further an immoral end, often monetary exploitation or other long-con “romance” scams (in season 5 of the MTV show, one catfisher said the charade was a “game” to him, after his victim sent him over $500). The direct victims of malicious catfishing can usually find some legal ground to sue, but what about the unknowing third party whose pictures or likeness the catfisher relied on? Unfortunately, states almost universally leave the “catfish bait”—the person whose likeness a catfisher appropriates—without legal remedy.

Existing laws rarely protect these third-party victims of malicious catfishing

The regular lineup of privacy torts, such as false light, defamation, and right of publicity, does not protect the catfish bait if the catfish’s persona is fictitious. For catfish bait to take advantage of these remedies, the catfisher would have to use the catfish bait’s face and name, or some other combination of identifiers that makes it clear they are pretending to be that specific person. The same is true for the twelve or so states that criminalize impersonating someone online, because they require that the catfisher impersonate an “actual person.” Noting this statutory void, some legal articles have tried sewing together a patchwork of civil and criminal offenses (intentional infliction of emotional distress and cyberbullying, for example) to provide some relief. Even these hopeful academics, however, acknowledge that the mosaic of existing laws comes up short because current laws can’t be read to allow catfish bait to sue catfishers for using their likeness in “completely fictitious profile[s].” This is frustrating for prospective plaintiffs given that most situations people would describe as “catfishing” involve fictitious personas, not outright impersonation.

Say that you are the catfish bait, and a catfisher uses your picture as their own on social media. Some websites like Twitter and Tinder let you report the profile on those grounds, but a federal law known as “section 230” prevents guaranteed action. Even if the platform does remove the profile, the clever catfisher may move to another website or remake their account, and resume using your likeness. In the 49 states without a legal framework for this type of “fictitious” catfishing, there is no meaningful deterrent to keep this catfisher from habitually using your likeness again, and again, and again.

Only one state protects “faces” of fictitious personas

Oklahoma’s Catfishing Liability Act of 2016 creates a private right of action for “catfish bait” (those whose names, images, or voices are being used to “create a false identity” online). It is the only state law of its kind. Through it, plaintiffs in Oklahoma can obtain preliminary injunctions against an alleged catfisher, forcing the catfisher to stop using their likeness. If they win, plaintiffs can recover actual damages as well as punitive damages of at least $500 plus court costs. This law is well tailored to the harms a catfisher can cause to “catfish bait” because it recognizes that some catfishing—and some catfishers—wreak more havoc than others (for example, the person who uses another’s likeness to talk to 400 people may have caused more harm than the person catfishing an acquaintance out of insecurity or ignorant boredom). Accordingly, it awards punitive damages as a general deterrent for all catfishing and actual damages as an additional deterrent for especially undesirable catfishing, such as that done by high-profile or prolific catfishers.

More states should follow Oklahoma’s lead

While many population-dense, torts-focused states like California, Texas, and New York make it illegal for a catfisher to use another’s picture to impersonate that specific person, Oklahoma is the only state that outlaws using another’s picture to become someone new. Curiously, even with catfishing’s notoriety in popular culture, there doesn’t seem to be any public opposition to laws like Oklahoma’s, nor any demonstrated political support. Yet catfishing has become increasingly common over the last decade and is likely to become even more frequent in the years to come. With plenty of evidence that catfishing presents a growing problem, do legislatures really view redress for catfish bait as that low of a priority?

It seems like the only possible answer is yes; to date, only one other state has introduced substantially similar legislation, and it didn’t receive lively discussion. That state, Wisconsin, easily got its comparable bill out of committee in 2017 but never scheduled it for floor discussion. The bill was reintroduced in the next legislative session, where it again died from neglect, this time in committee.

The question of “why not” remains, with the argument to legislate buttressed by airtight reasoning. The problem is growing, the harms are undue, the legislation is simple, the statute does not require funding, the policy it embodies is reasonable, and the penalties it carries are moderate at best. What else could other states’ legislatures possibly be waiting for?

Facial Deletion: The Everalbum “Course Correction” Should Scare Privacy Violators

By: Tallman Trask

For years, privacy rights violators have paid fines and been the subject of consent decrees. They’ve entered into settlement agreements with enforcers, agreed to change behaviors, and sometimes even been required to notify users or consumers of the privacy violations. The Federal Trade Commission’s (FTC) recent Everalbum settlement, however, offers regulators a new tool, and it is one companies ought to be very wary of: unlike previous settlements where companies simply paid a penalty and were able to keep hold of the products and algorithms developed through their privacy violations, the settlement here requires Everalbum to delete not only data they collected as part of a violation, but also to delete any “affected work product” created using that data.

Everalbum operated Ever, a free photo storage app, until August of 2020, when the company shut down the app. At its core, Ever served a simple purpose: users could upload, store, and organize photos in the company’s cloud storage. Over time, however, the app developed a hidden secondary purpose: Everalbum used it to develop and train a commercial facial recognition product the company designed “specifically for mission-critical applications” and offered to military and law enforcement buyers. The company’s software apparently worked well and achieved a high degree of accuracy in testing. According to the FTC’s complaint, Everalbum utilized user photos without first obtaining express consent. Problematically, Everalbum’s use of facial recognition may have even conflicted with the company’s own policies. The American Civil Liberties Union (ACLU) of Northern California (where Everalbum is based) called the use of user photos “an egregious violation of people’s privacy” and users described Everalbum’s use of their photos to create facial recognition products as “a huge invasion of privacy.”

The Everalbum settlement requires the company to provide notice and obtain affirmative consent before availing itself of users’ biometric information in the future. Similarly, it prohibits future misrepresentations about the company’s data collection practices and imposes consent monitoring and recordkeeping requirements. While the settlement does not impose a fine against Everalbum, it requires the company to delete “affected work product” created through the alleged privacy violations highlighted in the complaint. Here, that means Everalbum must destroy years of work on facial recognition software built from photos uploaded by users who were unaware their photos were being used to develop facial recognition products, including the results of the company’s efforts to train its facial recognition software.

The deletion requirements in the Everalbum settlement are novel; prior settlements have imposed fines without requiring deletion. For example, the FTC’s 2019 settlement with Facebook imposed a record $5 billion fine, but did not require Facebook to delete or remove any portion of its software built using information gathered through the improper treatment of user data. Similarly, the FTC’s 2019 settlement with Google allowed the company to keep products it had built using data improperly collected in violation of children’s privacy rules. Praising a portion of the Everalbum settlement, former FTC Commissioner Rohit Chopra described the deletion requirement as a “course correction” from the FTC’s prior privacy settlements, which allowed violators to keep products developed through privacy violations.

Beyond a simple “course correction,” however, the Everalbum settlement is a new tool in the regulatory toolbox. While fines can serve as a powerful deterrent, they ought not be the only tool available to regulators. Where fines are the only available means, the system becomes little more than a pay-to-violate model. At least one commentator has suggested that the Everalbum settlement provides a roadmap for encouraging Big Tech to care more about privacy, and that very well may be true. However, the larger impact may be felt by smaller technology companies and others who collect data, particularly where companies are subject to a wide range of state-level and international privacy regimes. An earlier stage company with a smaller user base may, for example, likely face only small fines if caught violating user privacy under the framework of the Facebook and Google settlements, but the Everalbum framework could result in deletion requirements which impact a smaller company’s entire range of product offerings. That is, while the earlier settlements may have wiped out bank accounts, the algorithm deletion framework of the Everalbum settlement could wipe out companies. And that is a tool companies and privacy officers should be very afraid of seeing pointed their way.

Streamer or Infringer? Copyright Law in the Video Game World

By: Joanna Mrsich

Fortnite, League of Legends, Minecraft, World of Warcraft—what do all of these games have in common? Each one of the above was in the top ten “Most Watched Games on Twitch” in December 2020. The video game industry generates billions of dollars in revenue each year. In 2018 alone, the industry projected $137.9 billion in revenues. In today’s video game scene, however, it is not enough to just own and play a video game. The goal for many is to find popularity as a streamer. Twitch is the world’s leading live stream platform for gamers, allowing gamers to create free accounts to follow other streamers or stream their own content. However, when the stream uploads and the fun is done, is there a copyright infringement suit just waiting to happen?

Copyright protection, as codified in 17 U.S.C. §102, exists in original works of authorship fixed in any tangible medium of expression, such as motion pictures and other audiovisual works. Under 17 U.S.C. §106 of the Copyright Act, copyright owners have six exclusive rights that they may exercise or authorize others to exercise with respect to their work. This list includes rights to reproduce the copyrighted work, prepare derivative works based upon the copyrighted work, distribute copies of the copyrighted work to the public, and more. Therefore, under copyright law, game developers and publishers legally own exclusive rights to the use, images, and videos of their games when in a fixed form. The issue is likely not with streaming video game play alone—this arguably does not satisfy the “fixation” requirement within copyright law—but rather the moment a user uploads their recorded stream.

However, today’s streamers arguably are not required to gain permission from game developers and publishers to record and upload their gameplay online on platforms like Twitch. Hours upon hours of copyright-protected gameplay are uploaded to Twitch, YouTube, TikTok, and countless other platforms. While these platforms typically protect themselves through Digital Millennium Copyright Act of 1998 (DMCA) safe harbors and community guidelines, they still more or less allow and perpetuate an environment of infringement, in large part because game publishers and developers have simply not enforced their copyrights. Ultimately, while there is little to no precedent for the enforcement of copyright within the video game streaming and uploading realm, users should be aware of the law and possible legal implications should the day arise when a copyright owner chooses to enforce their exclusive rights.

Applicable law

As stated above, 17 U.S.C. §106 awards video game developers and publishers the rights to authorize, limit, and control who can reproduce, publicly distribute, create derivative works, publicly perform, publicly display, and/or digitally perform a sound recording from their copyrighted works. This means that developers are able to file takedown notices for infringing material and refuse to allow streamers to stream their game. However, popular social media platforms typically find protection under the DMCA. The DMCA amended and updated U.S. copyright law in three main ways: (1) established protections for online service providers in certain situations if their users engaged in copyright infringement; (2) encouraged copyright owners to give greater access to their works in digital formats by providing legal protections against unauthorized access to their works; and (3) made it unlawful to provide false copyright management information or to remove or alter that type of information in certain circumstances.

Moreover, DMCA §512 shields online service providers from liability for infringement through what are known as “safe harbors,” in exchange for implementing the notice-and-takedown system and meeting other conditions. Essentially, the DMCA treats online service providers as “innocent middle-men” in disputes between the owners of a copyrighted work and a user who posted the infringing content. This means that sites like Twitch will not be held liable for any streamer who posts infringing content under the DMCA’s “safe harbor”—found in 17 U.S.C. §512—provided that the platform promptly removes or blocks access to infringing materials after being appropriately alerted. Therefore, so long as Twitch itself—the online service provider—is not engaging in infringing conduct or enabling end-users to infringe, it will likely be protected under a safe harbor.

Most recently, Nintendo issued a mass DMCA takedown in which 379 fan-made games were removed from Game Jolt, a gaming website and hosting service. Nintendo’s DMCA notices explained that all of these games infringed on trademarks owned by Nintendo, such as images of Nintendo’s video game characters, music, and other features of their video games. Nintendo’s legal team also forced a popular TikTok user and Twitch streamer formerly known as “Pokeprincxss” to rebrand and pay them for infringing on the Pokémon franchise via merchandise based around her branding and her username. While all of this happened within the last year, it is possible for more developers and publishers to follow in Nintendo’s footsteps. Furthermore, while these current examples are claims of copying popular protected works, uploaded video game streams are a direct reproduction of a protected work and thus susceptible to DMCA takedowns and further legal action should a copyright owner choose that route.

What copyright laws are flagged in video game streaming?

In the status quo, platforms like Twitch clearly flag issues around copyright and channel content for their users. Within the site’s “Learn the Basics” page, Twitch clearly explains that creator content should be respected and describes the process for requesting a takedown notification. Moreover, while they do not require proof of permission to post content, the platform does state “…the rights that you need to secure for copyrighted material in your live broadcast may be different than the rights needed for the same material in your recorded content…”. This is followed by a suggestion to read Twitch’s DMCA Guidelines, Community Guidelines, Music Guidelines, and Terms of Service. YouTube and TikTok’s rules, copyright complaint systems, and policies are very similar. Therefore, under status quo rules, so long as Twitch itself—the online service provider—is not engaging in infringing conduct or enabling end-users to infringe, it is likely protected under DMCA safe harbors.

However, just because these platforms are protected from legal liability does not mean that the users who uploaded the infringing content are also protected. Without a license, streamers do not have any legal right to upload streams of a copyright-protected video game. Current copyright law clearly allows video game publishers and developers to pursue legal courses of action against streamers who upload recordings of their gameplay—it just has not occurred yet, and there appears to be some kind of unspoken, informal agreement between the two groups. Some people within the gaming community also believe there is an incentive to allow streamers to upload infringing content on platforms like Twitch because it is a free and advanced form of social transmission that allows their games to grow in popularity at rates much higher than normal advertising or word of mouth. The question is: will this always be the case?

Moving forward: what’s the plan?

The status quo seems to be a happy agreement between streamers and video game copyright owners. However, is this what the Copyright Act—and more specifically the DMCA—envisioned as the responsibility of these platforms to their users and to owners of copyrighted works? As society grows increasingly technological and the role of these platforms becomes more interactive with users, should their responsibilities be as easy to fulfill when it comes to the DMCA safe harbors? As it stands right now, platforms do not have a duty to take down infringing content unless copyright holders give “appropriate notice.” Tune in for the next post in this series!

Will 2021 Mark the Beginning of the End of Big Tech?

By: Emily Lewis

“All roads lead to Rome”… or these days it seems like Big Tech. Like the fall of the Roman Empire, 2021 may mark the fall of Silicon Valley giants like Apple, Facebook, and Google.

Towards the end of 2020, the federal government filed massive lawsuits against Google and Facebook alleging violations of federal antitrust law. The Biden administration has signaled that it has no plans to curtail this wave of enforcement, confirming that it will continue to investigate potential antitrust violations by U.S. technology companies. Amazon and Apple may be the next tech giants to see antitrust action. With increasing bipartisan pressure calling for the breakup of tech giants, 2021 may prove to be a watershed year for antitrust action in the technology industry.

Apple’s Antitrust Threat

Apple kicked off the new year by discreetly signaling to investors that federal antitrust action may be imminent. On January 5, 2021, Apple issued its annual proxy statement, but this time it included a new section for the company, a section specifically addressing antitrust concerns. The section states:

The Audit Committee and Board regularly review and discuss with management Apple’s antitrust risks. Apple’s Antitrust Compliance Officer is responsible for the development, review, and execution of Apple’s Antitrust Compliance Program and regularly reports to the Audit Committee. These reports cover, among other matters, the alignment of the program with Apple’s potential antitrust risks, and the effectiveness of the program’s design in detecting and preventing antitrust issues and promoting compliance with laws and Apple policies.

This addition to its proxy statement should not be too much of a surprise to investors. In October 2020, the House Judiciary Subcommittee on Antitrust published a 450-page report examining the competitive practices of tech giants such as Amazon, Apple, Facebook, and Google. Mere days after the House Judiciary Subcommittee issued its report, the Department of Justice sued Google. In late December, the FTC, along with 40 state attorneys general, filed suit against Facebook alleging that the company engaged in anticompetitive behavior and calling for Facebook to be broken up by divesting Instagram and WhatsApp. Notably, Apple and Amazon have yet to see complaints from federal enforcement agencies.

While Apple has yet to see a complaint from a federal agency alleging antitrust violations, Apple is no stranger to private antitrust action. Currently, Apple is facing a lawsuit from Epic Games. Epic’s complaint alleges that the Apple App Store is a monopoly and violates antitrust laws by forcing app developers to pay steep royalty fees and use products and services that are tied together, such as the in-app payment system. The trial has been set for May 3, 2021. The Department of Justice is also reportedly investigating the App Store’s competitive practices. The consequences of Epic’s lawsuit and the DOJ investigation have the potential to permanently disrupt a key source of income for Apple. In 2020 alone, the App Store generated $72.3 billion in revenue for Apple.

Epic’s lawsuit has opened the floodgates for similar lawsuits. In December 2020, Cydia, a once-popular app store for the iPhone launched in 2007, sued Apple, alleging Apple used anticompetitive means to nearly destroy Cydia, clearing the way for the App Store. Further, the complaint contends Apple has an illegal monopoly over software distribution on iOS.

Antitrust liability for Apple does not stop with the App Store. Its practices related to its streaming service, Apple Music, could potentially create antitrust liability as well. In September 2020, Spotify, an Apple Music competitor, publicly criticized Apple Music. In its public statement, Spotify claimed that Apple’s bundling of the iPhone and an Apple Music subscription constituted unfair competitive practices and called for federal authorities to act. This type of public comment has carried a lot of weight in the past. In June 2020, the European Union Commission announced it had opened formal antitrust investigations into the App Store rules’ impact on competition. The press release credited a complaint filed by Spotify in March 2019.

Amazon May Escape Unscathed

While last year’s House Subcommittee report concluded that Amazon has a monopoly over third-party sellers, Amazon still may escape enforcement action. As outlined in Lina Khan’s acclaimed Yale Law Journal note, Amazon’s Antitrust Paradox, Amazon poses a unique problem for antitrust enforcement compared to other Big Tech companies. Modern antitrust jurisprudence ties the perceived threat to competition to consumer welfare. Typically, the inquiry into the effect on consumer welfare looks at the impact on prices for consumers. According to David Balto, a Maryland lawyer who has worked both in the Bureau of Competition of the US Federal Trade Commission and in the US Department of Justice antitrust division, courts tend to be reluctant to challenge practices that appear to lead to lower, not higher, prices. Thus, to prevail in an action challenging Amazon’s monopoly over third-party sellers, there must be evidence that Amazon engaged in a predatory pricing scheme, not just that it provided lower prices to consumers.

Looking Ahead

The Biden administration is working with Sarah Miller, leader of the antimonopoly group the American Economic Liberties Project. The group recently issued a report calling on the DOJ to expand the scope of its suit against Google and on the FTC to sue Amazon for its alleged monopoly practices. Only time will tell how the Biden administration will decide to take on Big Tech.