Navigating Piracy in the Streaming Era

By: Jack Dorsey

The arrival of internet streaming revolutionized consumers’ access to their favorite shows, movies, and music. It also significantly impacted physical media, media rentals, and traditional cable subscriptions. In response, traditional broadcasters and nontraditional media companies like Amazon entered the streaming space. Each offered an affordable, typically ad-free experience that made a consumer’s preferred content accessible and convenient. Whether it was movies, TV shows, music, or live sports, there was something out there for most people. These streaming services also reversed the trend of internet piracy. 

In the early 2000s, internet piracy was facilitated by services like Napster and LimeWire, which enabled internet users to conveniently and illegally download music for free, causing a sharp decline in music sales. Music sales peaked at over $25 billion in 1999 but were then halved over the next fourteen years due to piracy. However, the emergence of streaming services like Spotify and Apple Music helped stabilize this decline. Between 2022 and 2023, recorded music revenue rose 10.2% year over year, bringing the industry total to $28 billion, with streaming subscriptions now exceeding 500 million globally. Despite this, music piracy remains a problem, as traffic to music piracy websites increased by 13% in 2023.

While the TV and movie industries have experienced less dramatic shifts in revenue, streaming options also had an initially positive impact in combating online piracy. A report released by the European Intellectual Property Office indicated that between 2017 and March of 2021, the European Union and its member states saw an overall decrease in internet piracy across the board. However, that trend reversed in 2022, primarily driven by illegal streaming and downloading of TV shows, which rose 14% between 2017 and 2022. While there is no clear explanation for this trend, theories range from the Covid-19 pandemic (studios debuting new shows and movies directly on their respective platforms, enabling easy upload to the wider internet) to inflation to the number of subscriptions required to access relevant content.

Online sentiment seems to indicate that people’s willingness to pirate content is driven in part by an increasingly crowded and expensive streaming space that has begun to deliver a lower quality product and user experience. For example, many people find themselves needing multiple subscriptions to keep up with the content they want to see. A Doctor Who fan living in the United States would need several subscriptions to watch the entirety of the series. Similarly, a fan of the NFL who wants to watch every game would need six different subscriptions to do so. Moreover, platforms like Disney+, Netflix, and Paramount+ have all incrementally increased their subscription prices in recent years and introduced cheaper subscription tiers with advertisements.

As the tide of piracy continues to erode the entertainment industry’s revenue, some businesses have pursued legal action against internet service providers (ISPs). In February 2024, fifty music labels sued the ISP Cox Communications in one of the largest intellectual property lawsuits ever. The labels argue that Cox should be held liable for online piracy committed by its customers, claiming that by continuing to provide service to repeat infringers, Cox enabled and profited from the illegal downloading of music. The Fourth Circuit Court of Appeals agreed that Cox had aided the infringement by not addressing piracy by its customers, but ordered a new trial to reassess the damages awarded. In response, Cox has petitioned the Supreme Court, arguing that the case is part of a broader trend of lawsuits targeting ISPs. Cox claims that without intervention, such legal actions could threaten internet access for all users. While the outcome remains uncertain, if piracy continues to harm media companies’ profits, legal pressure on ISPs is likely to grow, leading to more lawsuits in the future.

Stolen Threads: Intellectual Property and Cultural Appropriation in the Fashion Industry

By: Nayomi Mendez Andrade 

Fashion designers are creators of apparel, footwear, and accessories. Historically, designers have used style cues, designs, and patterns from cultures that are not their own to create their work. This use has often been masked and justified by being labeled as inspiration. However, this practice is “more than simply drawing inspiration; designers . . . have long mined from minority groups, adopting their underrepresented craftwork or techniques before passing them off as their own.” Minority and Indigenous communities remain vulnerable to the exploitation of their cultural designs without proper acknowledgment or compensation because there are limited legal protections for them. 

What is Cultural Appropriation?

Patti T. Lenard and Peter Balint define cultural appropriation as a dominant group taking a valuable element from another culture for personal use, without consent and with a reasonable expectation that such taking will be objectionable. Sally E. Merry specifies that a valuable element of another culture includes “artistic, musical, and knowledge productions.”

The Legal Implications of Cultural Appropriation

Intellectual Property (IP) law is a common legal framework utilized to challenge cultural appropriation within the fashion industry. IP encompasses creations of the mind, which include symbols, names, and images used in commerce, literary and artistic works, and designs. These creations and inventions are legally protected by trademarks, copyrights, and patents. IP protections offer individuals the opportunity to be recognized for their work and to benefit financially from their inventions or creations. 

IP laws fail to protect against cultural appropriation because they typically “exclude traditional cultural expressions from protection.” A central reason is that IP laws accord “exclusive rights to the creators or inventors” of a work, while the argument against cultural appropriation rests on the ownership rights of an entire group, most of whom did not directly contribute to the creation of the work. Since no specific creator can be identified, these cultural expressions fall outside IP protections. 

For most, obtaining copyright protections is not difficult. However, it is nearly impossible for cultural groups to obtain copyright protections for their expressions. To obtain a copyright, there must be (1) a work of authorship, (2) originality, and (3) the work “must be fixed in a tangible medium of expression.” Originality requires independent creation and at least a minimal degree of creativity; a copy of an earlier work is not original and is therefore ineligible for copyright protection. Since cultural works are generally passed down through generations and replicated, they rarely meet the originality requirement. 

Patent law specifically fails to protect groups from cultural appropriation because an invention must be novel to qualify for a patent. To qualify as novel, an invention must not have been known or used by others in the US, nor should it have been patented or described in a publication either in the US or internationally. Cultural designs, however, are often passed down from generation to generation. This makes it difficult for cultural works to meet the novelty requirement for patentability.

While trademark law provides protections for certain symbols or designs, it also falls short of protecting against cultural appropriation. For a symbol or design to qualify as a trademark, it must be used in commerce at the time of application, or the applicant must make a good-faith showing that it will be used in the stream of commerce at a future point in time. The issue here is that cultural designs and symbols are not created with the intent to be used for commerce, but to express spiritual or cultural significance.

A Global Attempt to Mitigate the Issue 

Some countries have created heritage laws to protect cultural symbols and to mitigate cultural appropriation in the fashion industry. Mexico has been proactive in addressing cultural appropriation in the fashion industry, specifically with regard to the misuse of Indigenous designs. 

In 2021, Mexico’s Minister of Culture, Alejandra Frausto, rightfully accused international fashion brands such as Zara, Patowl, and Anthropologie of cultural appropriation. Ms. Frausto claimed that these three brands had benefited from using Indigenous patterns in their designs without compensating the communities. Ms. Frausto went on to demand an explanation from the three companies for using the Indigenous designs, claiming that the cultural elements were considered “collective property” of the communities. Ms. Frausto added that any commercial use should involve compensation and collaboration with the communities. 

In 2022, Mexico passed a law prohibiting and criminalizing the unauthorized use of Indigenous and Afro-Mexican cultural expressions. Mexico’s law is designed to protect the intellectual property rights of its people. The issue is that the law has no reach beyond Mexico’s borders: a country’s laws generally apply only to conduct within its own territory. 

Mexico has set a strong example of how governments can empower communities by ensuring they maintain control over their heritage. Similar protections could be introduced in the U.S. by establishing legislation that recognizes cultural symbols and traditional designs as collective IP. U.S. IP law falls short of safeguarding cultural groups from appropriation. This gap in U.S. IP law leaves minority and Indigenous communities vulnerable to the uncredited and uncompensated use of their cultural heritage by the fashion industry.

#culturalappropriation #intellectualproperty #Mexico #fashionlaw 

Reel Dilemma: Copyright and Movie Content on TikTok

By: Teagan Raffenbeul


Why purchase streaming service accounts when you can watch your favorite movies and TV shows on TikTok for free? It’s no secret TikTok users can effortlessly find clips from a wide array of movies and series as they scroll their For You Page. Often, movies appear in multiple parts, prompting users to navigate to account profiles to find additional clips.

It’s unclear exactly why people are drawn to watching movies in this format. Perhaps viewers are captivated by an attention-grabbing scene that hooks their interest, with the “best” part of the movie appearing on their For You Page. Maybe there is a sense of community created by shared observations and opinions posted in the comments section. Or is it the ability to easily skip parts of a movie that don’t resonate with viewers, or that they find boring? Whatever the reason, this phenomenon is happening, and it’s widespread, with users having access to countless videos on TikTok under searches like “full-length movies.” Further, tips found under searches like “how to upload movies on TikTok without copyright” are easily accessible.

TikTok’s limited upload length led to movies being separated into 10 to 100 parts, requiring viewers to sift through accounts and comment sections to find the next clip. Some argue watching a movie in short clips fails to capture a movie’s essence, leaving viewers to miss the movie’s deeper meaning and fail to connect with the film in the same way they would while watching on a bigger screen. However, with TikTok now allowing users to upload and watch hour-long videos, users can watch nearly an entire movie simply by scrolling through their For You Page. Often, people aren’t going to TikTok to seek out these movies, but they watch simply because the clips happen to appear on their screens.

Despite this phenomenon becoming normalized, providing easy access to pirated movies openly violates U.S. copyright law. The U.S. Copyright Act protects “original works of authorship fixed in any tangible medium of expression from which they can be perceived, reproduced, or otherwise communicated.” Motion pictures and other audiovisual works are specifically covered under Section 102(a)(6). Additionally, owning a copyright in a work of authorship grants an exclusive right to reproduce, publish, sell, or distribute the work and is supposed to provide protection against online piracy.

Movies, clearly covered under the Copyright Act, generally obtain copyright protection the moment the work becomes “fixed.” A work is “fixed” when it is captured in a permanent medium so it may be perceived or reproduced. Copyright protection generally lasts for 70 years after the author’s death. However, movies are often considered a “work made for hire,” resulting in copyright protection lasting 95 years from the publication date or 120 years from creation, whichever expires first. This means that any movies users find on TikTok are most likely still under copyright protection. 
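The term rules described above can be sketched as a small calculation. This is an illustrative simplification only: it assumes a work created after 1977 and ignores the many special cases in 17 U.S.C. § 302.

```python
def copyright_term_end(is_work_for_hire, publication_year=None,
                       creation_year=None, author_death_year=None):
    """Illustrative sketch of U.S. copyright term rules for works
    created after 1977; ignores the statute's many special cases."""
    if is_work_for_hire:
        # Work made for hire: 95 years from publication or 120 years
        # from creation, whichever expires first.
        candidates = []
        if publication_year is not None:
            candidates.append(publication_year + 95)
        if creation_year is not None:
            candidates.append(creation_year + 120)
        return min(candidates)
    # Otherwise: life of the author plus 70 years.
    return author_death_year + 70

# A studio film created in 1998 and published in 1999 as a work
# made for hire stays protected through 2094 (1999 + 95).
print(copyright_term_end(True, publication_year=1999, creation_year=1998))  # 2094
```

On these numbers, even decades-old films circulating on TikTok remain far from the public domain.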

Not only are users engaging in copyright infringement when uploading movie content, but they may also be profiting from it. To expand revenue sharing for creators, TikTok created an initiative allowing longer videos to be posted; the longer the video, the more a user may get paid. Despite the risk of copyright liability, hour-long uploads attract hundreds of thousands of views, and in turn, the increased revenue creates a strong incentive for individuals to post long-form videos. And with the only perceivable repercussion being potential account removal, users continue to upload entire movies.

TikTok clearly lays out in its policy that it does not tolerate posting, sharing, or sending content that violates another’s copyright or intellectual property rights without proper authorization or a legally valid reason. However, this is not a “zero-tolerance” policy. TikTok allows users to post up to three infringing videos per intellectual property type under its Repeat Infringer Policy before account deactivation. Under this policy, copyright and trademark strikes are counted separately, giving users additional opportunities to upload infringing content before deactivation. Further, an account may not be directly linked to its owner’s identity, allowing users to easily create a new account after being deactivated.

The question is, how can copyright holders combat this? Will studios create positions with titles like “Social Media Copyright Enforcer” where their only job will be to find and report infringing content on social media platforms? In reality, the only real recourse currently appears to be provided in the reporting mechanisms under the Digital Millennium Copyright Act. 

Under the Digital Millennium Copyright Act, there is a “safe harbor” provision for online service providers, like TikTok, to protect them from copyright infringement liability. If an online service provider creates a “notice-and-takedown” system that allows copyright holders to report and request the removal of infringing content, the online service provider will not be liable for infringing content that is uploaded to their platform.

TikTok does provide a “notice-and-takedown” system that allows copyright holders to submit reports of alleged copyright infringement to prompt removal of the content. TikTok, therefore, is not responsible for proactively monitoring for copyright infringement and, so long as it complies with the safe harbor’s requirements, generally cannot be held liable for infringing content uploaded to its platform. 

As a result of TikTok’s policy, the responsibility for removing infringing content from TikTok falls on the copyright holders. However, finding and reporting the countless movie uploads may be a challenge, especially with new videos surfacing constantly. Last year alone, over 8 million content removals occurred, and 80,000 accounts reportedly faced copyright violations. These numbers do not include the numerous uploaded movie clips that remain available on the platform, indicating the extent of this phenomenon is even larger. TikTok’s policy and the proliferation of this trend make it nearly impossible for copyright holders to find and report all infringing content.

Now, with TikTok expanding upload lengths to an hour long, users can upload almost an entire movie in one clip. Not only would this eliminate the negatives of viewing movies in short clips, but it might also encourage more individuals to engage with movies in this format. Additionally, if TikTok continues its trend of permitting longer video uploads, it’s possible that in the future, we will begin to see one-and-a-half to two-hour-long videos. This would easily cover countless movies where the entire work is at people’s fingertips for free. Extending upload lengths like this can further complicate the already challenging landscape of copyright enforcement. There’s a risk that surges in infringing content may occur as more viewers seek access to full movies, making it more difficult for copyright holders to monitor and report infringing content and for TikTok to manage takedown requests. As a result, more unauthorized content might go unnoticed, granting millions of people easy access to pirated movies.

#WJLTA #tiktok #copyrightlaw #movies 

Loan Sharks and Minnows: The Changing Face of Peer-to-Peer Lending

By: Alexander Okun

The financial technology (“Fintech”) industry has created a tidal wave of novel opportunities to borrow, save, and invest; one of these is “Peer-to-Peer” (“P2P”) lending. P2P platforms act as intermediaries between borrowers (usually individuals or small businesses) and lenders. Unlike traditional personal loans, the lenders are individuals who only provide the funds. The intermediary platform then handles the application process, debt collection, and repayment. This new model created hope that credit would become more accessible for underserved communities and enable investors to reap greater rewards, while still serving socially conscious goals. The dream of greater inclusivity was rooted in the expectation that credit risk assessment would be fairer and lead to more reasonable loan terms offered to applicants. Unfortunately, this dream never materialized.

NEW MODELS EMERGE

The first P2P platforms in the US, Prosper and LendingClub, emerged in 2006. Both companies promoted new models for evaluating loan applicants that were more comprehensive than those used by banks. This became a key competitive advantage after the 2008 financial crisis, as traditional banks imposed more stringent lending standards for both individuals and small businesses. The P2P platforms’ risk assessment models rely on a variety of alternative data sources, and many include so-called “soft” information like social media activity, the age of the email address used to open the account, and even the amount of time spent on the platform’s website. Most platforms use this information alongside credit scores when screening applicants, offering access to borrowers who would be rejected by banks on traditional credit indicators alone. However, expanding access did not prevent discrimination in the lending process. In fact, an early study found that lenders on Prosper’s platform were offering substantially higher interest rates to Black borrowers than to their White counterparts. It also found that profiles with photographs of Black borrowers were approximately 25-35% less likely to receive funding than White profiles with similar objective credit indicators. Due to this and general data privacy concerns, many P2P platforms have chosen to make borrower profiles anonymous, even omitting their cities of residence. However, the increasing use of artificial intelligence in screening appears to replicate human biases, contributing to the general fear of “algorithmic discrimination” in the marketplace. Seeing as these algorithms are one of the key selling points of P2P platforms, it is unclear how they will assuage public concerns about discrimination going forward.
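The funding-rate disparity the study describes boils down to a simple approval-rate comparison. The numbers below are hypothetical, not Prosper’s data; the sketch only shows how such a gap is measured between two pools of applicants with comparable credit profiles.

```python
def funding_rate(outcomes):
    """Share of loan applications that were funded (True = funded)."""
    return sum(outcomes) / len(outcomes)

# Hypothetical applicant pools with similar objective credit indicators.
group_a = [True, True, False, True, True, False, True, True, True, False]   # 7 of 10 funded
group_b = [True, False, False, True, False, False, True, False, False, False]  # 3 of 10 funded

# A large funding-rate gap between groups with comparable credit
# profiles is a red flag for disparate treatment.
gap = funding_rate(group_a) - funding_rate(group_b)
print(f"Funding-rate gap: {gap:.0%}")  # Funding-rate gap: 40%
```

Audits of algorithmic lending use exactly this kind of outcome comparison, holding objective indicators constant and looking for residual differences between groups.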

BANKS GAIN INTEREST

Another key reason driving the hope for financial inclusion was the expectation that P2P platforms would circumvent the dominance of institutional lenders and investors. However, the data show that this has hardly been the case. In the United States, P2P platforms are unable to “originate” loans – that is, they cannot make loans directly to the borrower. What usually occurs instead is that a partner bank “originates” the loan and then sells it to the P2P platform for the remainder of its duration. Although this has proven lucrative for banks and expedient for the lending platforms, its legality is unclear. Some of the largest platforms have thus chosen to formally charter as banks (or purchase a preexisting one). As a result, the largest platforms have begun offering their own loans and other common banking services such as savings accounts, credit cards, and even financial consulting. Even for those platforms that have remained true to the original P2P model, the share of lenders who are banks or institutional investors has grown substantially. What was originally termed “P2P” lending is now but one part of the larger “Marketplace Lending” (“MPL”) sector. Data from the last decade show that this trend has led to lending as selective as that of traditional banks, even though institutional lenders continue to use the platforms’ “fairer” algorithms to assess applicants. Some researchers now see the few remaining individual lenders on P2P platforms as passive investors, while institutional lenders act as gatekeepers who evaluate borrowers and set terms. Users are forced to choose between platforms that hide conventional practices behind the guise of innovation and companies that have morphed into actual banks.

GOING FORWARD

Although P2P lending has thus far failed to deliver inclusivity and equitable lending, some public policy changes could mitigate discriminatory aspects of the lending process. One key issue is that the most advanced AI models are nearly impossible to deconstruct and examine, making it difficult to establish that a model’s developers intended to discriminate against applicants. The Consumer Financial Protection Bureau (CFPB) has attempted to fill these gaps in our civil rights laws by issuing guidance to lenders regarding their duties under the Fair Credit Reporting Act. However, a more reliable approach would be comprehensive legislation regulating the use of AI to ensure that its discriminatory effects do not spread to other sectors of the economy. The unchecked development of AI models has generated bipartisan concern and created an opportunity to adopt legislation that better reflects the current state of technology. Until that occurs, we will continue to rely on a patchwork of laws that are ill-suited to the era of AI.

Could AI Change FOIA for the Better?

By: Lindsey Vickers

“You can’t FOIA without AI,” said no one, even though on the letters alone, it’s technically true. 

FOIA, or the Freedom of Information Act, is the federal framework that governs the public’s access to government agency records and information. It’s used by people across the country to request information in hopes of learning more about what our government is up to. 

Want to know about the TSA’s seizures of fireworks at airports in July? As journalists would say, you can FOIA that. Interested in State Department agreements, memos, and treaties with foreign states pertaining to science and technology cooperation? You can FOIA that. Curious about the cost of a president’s golf excursions? You can FOIA that.

However, as with many government processes, FOIA is not without its pitfalls. Agencies have a hard time processing the volume of FOIA requests they regularly receive, resulting in significant backlogs and delays. Now, agencies and FOIA officers are considering how AI might help with the process. 

What problems with FOIA could AI mitigate? 

It’s no secret that the federal government struggles to keep up with the volume of FOIA requests it receives. After all, anyone can submit a FOIA request, from private individuals to business owners to journalists. According to the Government Accountability Office, nearly 25% of FOIA requests are swept up in backlogs, resulting in huge delays in fulfillment or even in acknowledgment of receipt. While just shy of a quarter of requests might sound like a relatively small fraction, the actual numbers paint a shocking picture: a whopping 200,000 requests got caught up in the backlog.

And that figure does not specifically address complex records, which include records spanning multiple subjects, sometimes “on different topics and in different formats.” In 2022, 86% of complex requests had not been processed within 20 days of receipt. 

However, potential agency solutions are not ideal for requesters or the government. One solution is negotiating with the requester to narrow the scope of their request. This often includes shrinking the date range or the type of communication—such as narrowing a request to “emails” rather than “all communications.”

That’s where AI might come in. 

How could AI change the FOIA process? 

Government agencies and committees are looking into ways that AI could potentially be used in processing and responding to FOIA requests. 

A government working group that studies the use of AI in government and private industry hopes AI can be used to help officials who fulfill federal FOIA requests separate records into groups by concept or relationship. This technology could make records requests for things like “emails from employees of the Library of Congress complaining about the John Adams Building cafeteria food being cold” easier to fulfill. AI would likely be able to quickly differentiate these from other complaints about the food, such as qualms about the food being unhealthy, as well as from complaints about other cafeterias. Instead of a person combing through each document by hand to identify that it is about the John Adams Building cafeteria and the food being too cold, AI could quickly and accurately group pertinent documents together.
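The concept grouping described above can be approximated even with crude keyword matching. Real FOIA tooling would use embeddings or topic models rather than exact strings, but a toy sketch (with invented documents) conveys the idea:

```python
# Toy sketch of concept-based grouping for records review.
# Production systems would use embeddings or topic models; this
# illustrates the idea with simple keyword rules on invented documents.
docs = [
    "The John Adams Building cafeteria soup was cold again today.",
    "Cafeteria menu in the Madison Building is too unhealthy.",
    "Cold entrees at the John Adams cafeteria are a recurring problem.",
    "Parking at the Jefferson Building is a nightmare.",
]

def matches(doc, required_terms):
    """True if the document mentions every required concept term."""
    text = doc.lower()
    return all(term in text for term in required_terms)

# Group documents that concern cold food at the John Adams cafeteria.
relevant = [d for d in docs if matches(d, ["john adams", "cafeteria", "cold"])]
print(len(relevant))  # 2 responsive documents
```

The value of the AI approach is doing this at scale across millions of records, where concepts are phrased inconsistently and exact keyword rules break down.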

What agencies have already applied AI to FOIA? 

A few agencies have already put machine learning to work at similar tasks. 

The State Department first applied AI to the task of declassifying cables – confidential typed messages sent and received by the government – which are normally assessed individually by humans to determine whether the information can be safely declassified when the cables turn 25 years old, in accordance with an executive order. The AI program was trained on two years’ worth of human decisions made on official documents. It performed the same as human reviewers 97%-99% of the time. 
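The agreement figure reported for the pilot is simply the share of documents on which the model’s call matched the human reviewer’s. A minimal sketch, using invented decisions rather than the pilot’s actual data:

```python
def agreement_rate(model_decisions, human_decisions):
    """Fraction of documents where the model matched the human reviewer."""
    matches = sum(m == h for m, h in zip(model_decisions, human_decisions))
    return matches / len(human_decisions)

# Hypothetical declassification calls (True = release, False = withhold).
human = [True, True, False, True, False, True, True, False, True, True]
model = [True, True, False, True, False, True, True, False, True, False]

print(f"Agreement: {agreement_rate(model, human):.0%}")  # Agreement: 90%
```

Note that a raw agreement rate says nothing about which way the disagreements cut: a model that matches reviewers 97% of the time could still systematically withhold releasable records in the remaining 3%.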

These impressive results led the State Department to consider other applications of the technology, including FOIA. AI was then used in another pilot to address FOIA requests the department received. As with the cable declassification, the machine learning technology was highly accurate, performing as FOIA professionals would 97%-99% of the time. The State Department has further broadened its potential applications of AI since these results. 

The Justice Department and the Centers for Disease Control and Prevention are also testing AI. These agencies are using the technology to manage record-breaking numbers of new requests and the continual backlog of existing ones. 

The implementation of AI into FOIA processes comes with other considerations. Would agencies use human professionals to check the technology’s work for each request? While the technology is highly accurate, its results diverge from those of human professionals 1%-3% of the time. This could have devastating effects on journalists, for example. AI misjudgments that result in records being mistakenly withheld or excluded from FOIA responses would damage journalists’ credibility. But, more significantly, they would jeopardize the public’s right to know and its ability to understand what the government is up to – a central tenet of the FOIA framework. 

#FOIA #AI #governmenttransparency