Is Code Killing Copyright?

By: Katherine Czubakowski

Early last month, the Supreme Court released its long-awaited decision in Google LLC v. Oracle America Inc. The Court found that Google’s unauthorized copying of 11,500 lines of code from Oracle’s Java SE API was fair use because Google took only as much code as it needed to create a new and transformative program. While some argue that this outcome protects fundamental aspects of how code is created and protects the technology industry, others see the decision as a significant blow to copyright protections. This disagreement comes down to a fundamental question the Supreme Court seems to have sidestepped in this case: whether code should be protected under copyright at all.

An API, or Application Programming Interface, is a list of actions one can take regarding specific software and how one would take those actions. For example, if gardening were software, you could choose the action you want to perform (digging, for example) and how you want to perform it (with a shovel, a hoe, a pickaxe, your hands, etc.). The Java API in question contains a basic list of common actions (sorting a list, for example) and how those actions are accomplished (alphabetically, numerically, etc.). When Google began developing the Android software used in smartphones, it wrote its own code to tell the program what to do and how to do it, but copied the declaring code—the part of the program which matches the name assigned to each task with the code necessary to perform the task—from 37 of the Java API’s packages. By doing so, the programmers working on the Android software were able to keep using the commands with which they were familiar, such as println() (which tells the program to print the specified text on the user’s screen) and LocalDate.now() (which gives the user’s current date), in their own code, but these commands relied on Google’s newly written code to perform each task.
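To make that distinction concrete, here is a minimal, hypothetical Java sketch (it is not Oracle’s or Google’s actual source; the class and method names are invented for illustration). The declaration is the part programmers memorize and call from their own code; the body beneath it can be rewritten from scratch without changing how the method is used.

import java.time.LocalDate;

public class GreetingUtils {

    // Declaring code: the name, parameters, and return type that programmers
    // already know and call from their own programs.
    public static String capitalize(String text) {
        // Implementing code: the instructions that actually perform the task.
        // A reimplementation keeps the declaration above but writes this part
        // from scratch.
        if (text == null || text.isEmpty()) {
            return text;
        }
        return Character.toUpperCase(text.charAt(0)) + text.substring(1);
    }

    public static void main(String[] args) {
        // Familiar calls keep working because the declarations are unchanged,
        // even if the code behind them has been rewritten.
        System.out.println(capitalize("hello, android"));
        System.out.println(LocalDate.now()); // prints today's date
    }
}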

In determining that Google was legally allowed to copy this code, the Court relied on the doctrine of fair use. Although copyright owners generally hold exclusive rights to create derivative works, which are new works based on their own pre-existing work, fair use is a legal exemption that allows someone to use copyright-protected work without the author’s permission in certain circumstances. Courts consider fair use on a case-by-case basis and analyze four different aspects of the otherwise-infringing use: its purpose and character, the nature of the copyrighted work, the amount and substantiality of the portion used, and the effect on the potential market for the copyrighted work. In its recent cases regarding fair use, the Court has created a sub-factor that it considers under the purpose and character of the use: transformativeness. A work is considered transformative when it uses the original copyrighted work in an unexpected way or in a way which alters the original meaning or message. Transformativeness weighs strongly in favor of fair use because it encourages creativity and the advancement of the arts. This sub-factor frequently affects all four factors in the fair use analysis and can sometimes even outweigh the importance of the other three factors. It can often be difficult to tell whether a work used in a transformative way is a derivative work or whether it falls under the fair use exception.

In Google LLC v. Oracle America Inc., the Court’s decision hinged on its finding that Google’s use was transformative. The Court first analyzed the nature of the work and found that APIs are fundamentally different from other types of code. Because the declaring code fuses the uncopyrightable idea of how the code is organized with the copyrightable code which tells the computer how to perform a function, the Court saw the copied code as valuable only as a result of programmers’ investment in learning it. Since the copied code did not hold independent value, the Court felt that applying fair use in this circumstance would not undermine general copyright protection for other programs. The Court then turned to the purpose and character of the use, which is where it discussed the work’s transformative nature. It found that Google’s purpose in using the copied code was “to create a different task-related system for a different computing environment” than the creators of Java had originally intended. Google’s use of the code was part of the “creative progress” the Court saw as copyright law’s objective, so it found that the use was transformative. The Court further found that, although Google copied “virtually all of the declaring code needed to call up hundreds of different tasks,” it copied a relatively small amount of the total API in question. Because this relatively small portion of the API was tied to a valid and transformative purpose, the Court felt that the third factor weighed in favor of fair use as well. Finally, the Court found that Android was not a market substitute for Java SE because the two products were substantially different. Weighing all these factors together, the Court concluded that because Google took only as much as was necessary to allow its programmers to use “accrued talents to work in a new and transformative program,” Google’s “reimplementation of a user interface” was protected by the fair use doctrine.

The Court’s analysis and reliance on transformation in this case presents a danger to those seeking to copyright their code. This is because code is fundamentally different from many other works protected by copyright; it combines functionality with creative expression. Unlike traditionally copyrightable works, programs are usually created in a way which relies on previously created code to function. When writing new code, very few programmers actually write code which can interact directly with the computer. Instead, they use one of a number of programs which translate a more readable language, such as Java, into instructions the computer can understand. Without being able to copy some fundamental aspects of the language, programmers would have to create a new language every time they wanted to write new code. In practice, this means that many different programs with different purposes all rely on the same underlying program(s) to translate their code into a form the computer can understand.
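As a rough illustration of that shared reliance, the trivial program below (a hypothetical example, not drawn from any real project) does nothing by itself; it depends on the same widely distributed Java compiler and runtime that countless unrelated programs also use to turn readable source into something the computer can execute.

// Hello.java — relies entirely on the shared Java toolchain to run.
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello from the shared Java platform");
    }
}
// Typical use from a command line:
//   javac Hello.java   (the compiler translates the readable source into bytecode)
//   java Hello         (the shared Java runtime executes the translated form)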

Although the Court likely reached the correct outcome in this case, the repercussions of its decision in other fields damage traditional copyright holders’ rights. The Court’s transformative analysis fails when applied in the context of programming because a program’s reliance on other code is a necessary aspect of its creation. Thousands of substantially different programs rely on the same underlying code in order to function. However, purely creative expression does not have this same reliance on preexisting works—as evidenced by Congress’s grant of derivative works rights to copyright holders. By trying to fit both pure creative expression and functional creative expression under the same body of law, the Court has blurred the line between what is transformative and what is derivative and has put at risk the exclusive rights guaranteed to copyright owners of traditionally copyrightable works.

Peeved with Your Pre-Order? Part Two: Cyberpunk 2077, Federal Securities Laws, and the Lanham Act

By: Moses Merakov

In addition to Section 5 of the FTC Act and parallel state-level false advertising statutes (as discussed in part one), gamers can potentially pursue false advertising litigation under the Lanham Act and through federal securities laws.

The Lanham Act

The Lanham Act, also known as the Trademark Act of 1946, is the federal statute that governs trademark infringement and dilution, false advertising, and related unfair competition. To win a false-advertising claim under Section 43(a) of the Lanham Act, a plaintiff must prove that the defendant made (1) a false or misleading statement of fact; that was (2) used in a commercial advertisement or promotion; that (3) deceives or is likely to deceive in a material way; (4) in interstate commerce; and (5) has caused or is likely to cause competitive or commercial injury to the plaintiff. However, this method of attack is generally unavailable to the ordinary consumer. Only commercial competitors of the defendant, not typical consumers of the defendant’s product, can “allege an injury to a commercial interest in reputation or sales,” as is necessary to secure standing to sue. See Lexmark v. Static Control. Nevertheless, consumers can typically still pursue a false advertising lawsuit under comparable state laws. As mentioned in part one, Washington State’s Consumer Protection Act covers false advertising claims and functions similarly, in effect, to the Lanham Act.

Federal Securities Laws

In 2020, Cyberpunk 2077 instantaneously turned from one of the most anticipated games of the year into one of the year’s biggest flops. The game arrived on store shelves with such an intense pandemonium of game-breaking bugs that both Sony and Microsoft offered refunds to distraught purchasers of the game, and Sony removed the game from its online digital store. Almost immediately, two different law firms, the LA-based Schall Law Firm and the NYC-based Rosen Law Firm, filed class-action lawsuits against CD Projekt Red, the game’s developer, alleging that the company misled its investors. According to both firms, there were violations of §§ 10(b) and 20(a) of the Securities Exchange Act of 1934 and Rule 10b-5 promulgated thereunder by the U.S. Securities and Exchange Commission.

The Securities Exchange Act is complex, but essentially it is a composite of regulations that prevent “manipulative and deceptive” practices in securities trading.  Section 10(b) of the Securities Exchange Act of 1934 [15 USC § 78j(b)] provides that:

“It shall be unlawful for any person, directly or indirectly, by the use of any means or instrumentality of interstate commerce or of the mails, or of any facility of any national securities exchange… [to] use or employ, in connection with the purchase or sale of any security registered on a national securities exchange or any security not so registered, or any securities-based swap agreement… any manipulative or deceptive device or contrivance in contravention of such rules and regulations as the Commission may prescribe as necessary or appropriate in the public interest or for the protection of investors.”

To recover damages in a private securities-fraud action under § 10(b), a plaintiff must prove “(1) a material misrepresentation or omission by the defendant; (2) scienter (a mental state embracing intent to deceive, manipulate, or defraud); (3) a connection between the misrepresentation or omission and the purchase or sale of a security; (4) reliance upon the misrepresentation or omission; (5) economic loss; and (6) loss causation.” See Matrixx Initiatives, Inc. v. Siracusano.

In short, Rosen Law Firm and Schall Law Firm have an incredible burden in establishing that CD Projekt Red defrauded investors. The firms allege that CD Projekt Red lied to investors about the state of the game and failed to disclose that the game launch would be a financial catastrophe. To win, the firms would have to prove that CD Projekt Red made false or misleading statements, that it knew the statements were false or misleading and intentionally meant to mislead investors at the time the statements were made, and that those misleading statements would cause the company to be overvalued. Only litigation and proper discovery will truly tell whether there are enough facts for the firms to be successful, but some legal analysts say there is likely no case.

Conclusion

            Unless you are a “competitor” of the videogame developer or an investor, the Lanham Act and federal securities laws are likely not your best avenue for recovering for that falsely advertised video game you bought. 

Electric Soothsayers: The Ethics of Brain-Machine Interfaces

By: Mason Hudon

“Over one’s mind and over one’s body the individual is sovereign.” – John Stuart Mill

            In mid-April of this year, a company called Neuralink released a video of a male macaque monkey playing a version of the Atari classic game, “Pong”. At first glance, the video appears to be nothing more than a cute gimmick… that is until the viewer realizes that the joystick the monkey is using isn’t even plugged in—the program is being controlled by the creature’s brain by way of a complex, proprietary microchip.

Neuralink, the brainchild of billionaire tech tycoon Elon Musk (better known as the CEO of Tesla, Inc. and founder of SpaceX), develops “breakthrough technology for the brain” known as brain-machine (or brain-computer) interfaces (BCIs). Essentially, Neuralink and other companies like it are seeking to blur the line between human and machine, introducing computer hardware into human brains to do anything from making the world more accessible for disabled communities, to enhancing the video game experience, to “achiev[ing] a symbiosis with artificial intelligence,” as Mr. Musk puts it. According to Limor Shmerlin Magazanik, Director of the Israel Tech Policy Institute, “a BCI decodes direct brain signals—colloquially known as the firing of neurons—into commands a machine can understand. Using either an invasive method—a chip implanted directly in the brain—or non-invasive neuroimaging tools, letting the machine pull raw data from the brain and translate it to action in the outside world.” While the technological singularity (the merging of human and machine into an inseparable existence) may still be quite far off for the human race, the introduction of BCI technologies that implicate the human brain raises very serious legal and ethical concerns regarding personal autonomy, privacy, and the rights and identities of humans as we currently perceive them.

            It’s true, “[b]asic neurotechnologies have been around for a while—including technologies like cochlear implants and deep brain stimulation and more complicated brain-computer interfaces,” but technologies of the kind that Neuralink and other companies involved in advanced BCI development are seeking to introduce are wholly unprecedented. In fact, Maja Larson, general counsel for the Seattle-based Allen Institute, has expressed that this “commercialization” of formerly purely medical applications for BCIs has never been seen and risks turning “benign research politicized”. When profit margins and the “bottom line” are introduced into an equation that previously sought to solve relatively narrow issues (typically divorced from the idea of revenue generation and solidly situated within the clinical environment), all bets might be off.

Legal regimes and regulations have not been crafted to deal with many of the dilemmas that these technologies will pose: for example, how college admissions should be handled for students that have brain implants that aid them in their school work or allow them to access the internet, how brain data should be protected when a BCI is communicating with a public WiFi network, how advertising will be implicated if companies can detect your needs (like hunger), or, even more complexly, how the regime of intellectual property will be impacted as a whole. Additionally, Scientific American writes “[o]ne tricky aspect is that most of the neurodata generated by the nervous systems is unconscious. It means it is very possible to unknowingly or unintentionally provide neurotech with information that one otherwise wouldn’t. So, in some applications of neurotech, the presumption of privacy within one’s own mind may simply no longer be a certainty.” The legal community will ultimately be tasked with addressing these deep concerns, and efforts should begin sooner rather than later to develop new laws that preemptively protect against abuses of this technology before it is too late.

Robert Gomulkiewicz, Charles I. Stone Professor of Law at the University of Washington School of Law, discusses in his Legal Protections of Software class that intellectual property protections for software don’t always work extremely well because lawmakers in the mid-20th century chose to conform existing IP regimes like copyright, patent and trademark to novel technologies far different from the items and ideas that they protected in years prior to the advent of the computer. Instead of creating sui generis laws that might account for all of the nuances and complexities of software, lawmakers opted for the “easy option” by retrofitting copyright, patent and trademark laws to fit contemporary needs. Such an “easy option” may work adequately for protecting software when financial concerns are the only issues implicated, but when it comes to the human mind and the privacy of one’s own thoughts and emotions, a retrofitted system leaves much to be desired because the stakes are so high. Sui generis laws are thus both a legal and moral imperative for lawmakers seeking to tackle BCI technologies moving forward. While new statutory regimes may and should draw important aspects from intellectual property and existing privacy regimes into their language, it remains clear that crafting brand new policy cannot and should not be avoided.

            Given the complexity of the issues inherent in BCI technologies, it will be critical to involve stakeholders from different backgrounds and paradigms including lawyers, engineers, bioethicists, doctors, and perhaps even philosophers in coalescing competing ideologies of the role of BCIs into workable legal doctrine. Particular focus should be directed towards ensuring privacy, equity, autonomy, and safety for those wishing to partake in BCI technologies. Specifically, discussions should concern: (1) securing the protection of the fundamental autonomy of the human mind, (2) securing the protection of the fundamental autonomy of the human body, (3) allowing for the ability of BCI users to control third-party access to their data, (4) ensuring accuracy in the interpretive methods used by software that attempts to translate the data from people’s brains, (5) ensuring disclosure of the use of performance enhancing BCIs in academic and competitive settings, (6) mitigating the effects of hacking and malware on BCIs, (7) elucidating the role and risks of allowing artificial intelligence a role in BCIs as Elon Musk has discussed, and (8) ensuring that people remain psychologically sound after installation. This list is not exhaustive, but these should cover some of the central issues that will underpin the legal framework for BCIs in the future. As with many technical innovations, things move pretty fast, and this means that legal entities need to act now to protect the qualities of human existence that we currently hold dear.

All Bark, No Bite: Washington’s 2021 Facial Recognition Regulation Lacks Enforcement Mechanism

By: Alex Coplan

Taking effect on July 1, 2021, Washington’s new facial recognition (FR) law will regulate state and local government use of FR technologies.  But will the new law be effective enough to protect your identity? The law serves as a middle ground between privacy advocates and government officials in favor of using FR programs. However, due to vague wording and lack of oversight, the bill’s intent may not always produce the desired results.

On its face, SB 6280 provides proper safeguards to prevent government use and abuse of FR technology. The law includes significant provisions requiring accountability and limitations on use, and signals the Washington legislature’s belief that FR use creates serious policy issues.

First, SB 6280 will apply to all state and local government agencies. This means that agencies operating in Washington must comply with the new law and be subject to its oversight. Some exceptions, however, include the Department of Licensing (DOL) and the Transportation Security Administration (TSA). While these agencies are not subject to SB 6280, they are required to disclose the use of FR technology if located in Washington.

Second, the law requires agencies to provide a notice of intent and produce an accountability report. In other words, government agencies must file notice to obtain and implement FR services. If approved, those agencies must provide accountability reports every two years. These reports disclose the capabilities of the FR system, the data types the system collects, the training procedures and security for protecting data, and how the system can benefit the community.

Third, and perhaps most importantly, SB 6280 requires “meaningful human review” when FR programs create “legal effects concerning individuals or similarly significant effects concerning individuals.” Meaningful human review requires review or oversight by one or more individuals who are trained in accordance with the act. Training includes coverage of the capabilities and limitations of the FR program, and how to interpret the program’s output. The bill considers “legal” or “similarly significant effects” to be decisions that result in the provision or denial of criminal justice, financial services, housing, education, employment opportunities, and other basic civil rights. Accordingly, if an individual faces a significant outcome following government use of FR, they have the right to meaningful human review.

Fourth, agencies must receive a warrant in order to use FR programs during real-time surveillance. This means government actors may not run FR programs on a live video feed without a warrant. Specifically, SB 6280 prohibits the use of FR programs on body-worn cameras used by law enforcement, meaning police may not use FR programs in the field without judicial authorization.

Companies like Microsoft—which create FR technology—favored and lobbied for the passage of SB 6280. Why, in a bill intended to limit FR use, would Microsoft approve of this legislation? Arguably, SB 6280 may be all bark but no bite. While this law sounds effective, it lacks enforcement procedures.

For example, the accountability reports may not provide much accountability. At this point, the reports are not required to be approved by any regulatory or legislative body. As a result, there is no enforcement mechanism for this provision of the bill. If an agency chooses not to follow the procedure, who will stop them?

Further, the “meaningful human review” provision lacks substantial definition. The subsection defining this phrase is a single sentence, which fails to provide any direction for decision making. Moreover, the provision does not require review by any third party, allowing agencies to review any potential misconduct themselves, with their own employees.

Additionally, SB 6280’s warrant requirement only covers real-time identification, so agencies may freely use FR technology on previously shot footage without obtaining a warrant first. By allowing agencies to engage in this practice without a warrant, SB 6280 subjects the public to unchecked surveillance by law enforcement.

Washington enacted SB 6280 to provide compromise and regulation to an emerging field. However, the law lacks sufficient procedures for enforcing violations. Without those procedures, or a moratorium on FR use, government agencies can abuse these technologies to the detriment of every Washingtonian.

Is Discrimination Fair? The FTC’s Failure to Regulate Big Tech

By: Gabrielle Ayala-Montgomery

In the age of technological innovation, minorities fight to end discrimination both offline and online.  Biased outcomes produced by technology have material consequences for minorities as advancements in technologies replicate social inequalities and discrimination. Vulnerable populations are denied fair housing, education, loans, and employment because of biased data; however, the commercial use of biased data raises concerns for all consumers.

The FTC should consider using its authority to deliver effective deterrence against the economic harms that arise when companies unfairly and disproportionately impact minorities. First, the FTC must bring enforcement actions against companies whose “unfair practices” disproportionately affect minority consumers. Second, the FTC should conduct rulemaking to expand its definition of “unfair practices” to encapsulate business practices unfairly affecting minorities.

The FTC Breakdown

The FTC protects consumers from deceptive or unfair business practices and investigates alleged violations of federal laws or FTC regulations. Section 5 of the FTC Act prohibits “unfair or deceptive business practices in or affecting commerce.” The Act broadly vests the FTC with authority to bring enforcement actions against businesses to protect consumers against unfair or deceptive practices. Section 18 enables the FTC to promulgate regulations to prevent unfair or deceptive practices. After the FTC issues a rule, it may seek penalties for unfair practices constituting violations.

The FTC explicitly defined its standards for determining whether a practice is deceptive but left the question of whether a practice is unfair to be interpreted by courts. In response, courts have developed a three-factor test to identify an unfair practice.  The test asks: (1) if the practice injures consumers; (2) if it violates established public policy; and (3) if it is unethical or unscrupulous.

Biased Practices in High Tech

An algorithm is a process or set of rules, often involving mathematical calculations, that produces outputs to help humans make decisions. Algorithmic bias—also called machine learning bias or AI bias—is a systematic error in the coding, collection, or selection of data that produces unintended or unanticipated discriminatory results. Algorithmic bias is perpetuated when programmers train algorithms on patterns found in historical data. Humans then use these biased results to make decisions that are systematically prejudiced against minorities.

Data and surveillance are now big businesses. However, some companies have failed to ensure AI products are fair to consumers and free from impermissible bias. For example, Amazon had to scrap a recruiting tool because the system rated job candidates in a gender-biased manner. The AI models trained themselves on resume data compiled over the previous ten years, which came primarily from white men. Thus, Amazon’s recruiting tool taught itself that male candidates were preferable.
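To illustrate the mechanism, here is a deliberately simplified, hypothetical Java sketch (it is not Amazon’s system or any real product; the groups, numbers, and scoring rule are invented). A scoring rule “trained” only on who was hired in the past ranks new applicants by how much they resemble those past hires, reproducing the historical imbalance even though no protected trait is named in the code.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BiasedScreenerSketch {
    public static void main(String[] args) {
        // Hypothetical historical data: the demographic group of each past hire.
        List<String> pastHireGroups = List.of("A", "A", "A", "A", "B");

        // "Training": count how often each group appears among past hires.
        Map<String, Integer> counts = new HashMap<>();
        for (String group : pastHireGroups) {
            counts.merge(group, 1, Integer::sum);
        }

        // "Scoring": rank new applicants by how closely they resemble past hires.
        // No protected trait is referenced by name, yet the historical imbalance
        // becomes the ranking signal.
        Map<String, String> applicants = Map.of("Applicant 1", "A", "Applicant 2", "B");
        applicants.forEach((name, group) -> {
            double score = counts.getOrDefault(group, 0) / (double) pastHireGroups.size();
            System.out.printf("%s (group %s): score %.2f%n", name, group, score);
        });
    }
}

Real systems are far more complex, but the failure mode is the same: skewed training data quietly becomes a skewed decision rule.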

The National Institute of Standards and Technology (NIST) examined the tech industry’s leading facial recognition technology (FRT) algorithms, including 189 algorithms voluntarily submitted by 99 companies, academic institutions, and other developers. The algorithms came from tech companies and surveillance contractors, including Idemia, Intel, Microsoft, Panasonic, SenseTime, and Vigilant Solutions. NIST found “empirical evidence” that many FRT algorithms exhibited “demographic differentials” that can worsen their accuracy based on a person’s age, gender, or race. Some algorithms produced no errors, while other software was up to 100 times more likely to return an error for a person of color than for a white individual. Overall, middle-aged white men generally benefited from the highest FRT accuracy rates, or the fewest errors. Such bias in algorithms can emanate from unrepresentative or incomplete training data or the reliance on flawed information that reflects historical inequalities. If left unchecked, biased algorithms can lead to decisions that can have a collective, disparate impact on specific groups of people even without the programmer’s intention to discriminate.

Companies’ targeted advertising systems have been used to exclude people of color from seeing ads for homes based on their “ethnic affinity.” For example, Facebook and other tech companies settled with civil rights groups over illegal discrimination in their advertising of housing, employment, and loans. Targeted marketing policies and practices by tech companies permitted users to exclude marginalized groups from seeing specific ads. As a condition of the settlement, Facebook agreed to establish a separate advertising portal for creating housing, employment, and credit (“HEC”) ads on Facebook, Instagram, and Messenger that will not allow advertisers to exclude consumers based on gender, age, or multicultural affinity. However, research demonstrates that Facebook’s ad system can still unintentionally alter ad delivery based on demographics. These advertising practices contribute to the systematic inequality minorities face in income, housing, and wealth.

Expanding “Unfair Practices”

What if Facebook had not reexamined its practices, or Amazon had kept its biased recruiting tech? No single piece of data protection legislation exists in the United States to prevent biased data and surveillance technology. Instead, the country has a patchwork of laws at the federal, state, and municipal levels. Sen. Ron Wyden, D-Ore., plans to update and reintroduce his Algorithmic Accountability Act of 2019, a bill designed to fight AI bias and require tech companies to audit their AI systems for discrimination. The Act, if passed, would direct the FTC to require companies to study and fix flawed algorithms that result in inaccurate, unfair, biased, or discriminatory decisions. The passage of an Algorithmic Accountability Act would reduce decisions based on biased algorithms. However, instead of waiting on Congress, the FTC may use existing laws and policy options addressing such violations and apply them to unfair and discriminatory practices in the digital world.

In the absence of federal legislation, the FTC Act could be used to protect consumers against unfair practices that are biased against minorities. For the purposes of Section 5 of the FTC Act, purposeful or negligent practices that disproportionately impact minorities should be considered (1) unethical or unscrupulous, (2) a violation of established public policy, and (3) a cause of actual injury to consumers. The FTC should prioritize enforcement of valid claims of unfair commercial practices that disproportionately impact minorities.

New Rulemaking Group May Provide Hope for Prevention

Following criticism that the FTC has failed to adequately use its authorities to address consumer protection harms, the agency announced a new rulemaking group. This rulemaking group will allow the FTC to take a strategic and harmonized approach to rulemaking across its different authorities and mission areas. With this new group in place, the FTC is poised to strengthen existing rules by undertaking new rulemakings that would further interpret unfair practices. Chairwoman Rebecca Kelly Slaughter stated, “I believe that we can and must use our rulemaking authority to deliver effective deterrence for the novel harms of the digital economy.” Perhaps under this new rulemaking group, the FTC can meaningfully protect and educate consumers about the harms of commercial practices that unfairly impact minorities by expanding its interpretation of “unfair” to include commercial practices biased against minorities.

The FTC needs to protect consumers by protecting minorities. Current laws do not adequately address biased data or practices that produce discriminatory outcomes for minorities. Without legislation or federal administrative action, high-tech companies will continue developing systems that are, intentionally or unintentionally, biased against minorities like people of color, women, immigrants, the incarcerated and formerly incarcerated, activists, and others. The FTC has the authority to effectively deter business practices that unfairly and disproportionately impact minorities by bringing enforcement actions against such practices. Further, the FTC should explicitly expand its interpretation of “unfair practices” to encapsulate practices disproportionately affecting minorities.