Remote Test Scans Expose Larger Privacy Failures

By: James Ostrowski

In a major challenge to pandemic remote learning practices, the court in Ogletree v. Cleveland State University ruled that scanning a student’s room before a remote exam violated the Fourth Amendment’s prohibition against unreasonable searches. While this decision is a definitive rebuke of a widely used practice, the case also reveals systemic flaws in university privacy practices. This post builds on Ogletree to propose a balance between test integrity and privacy rights.

Covid Acceleration 

For technology companies, the coronavirus pandemic was an accelerant. Startups rushed out messaging apps, video platforms, and e-commerce sites to thaw a populace frozen by a blizzard of lockdowns. Perhaps no market capture was greater than in education. Colleges moved entirely online, deploying existing but relatively new technologies, such as Zoom, on an unprecedented scale. Legions of students attended class from their kitchen tables and bedrooms. Professors, intent on maintaining their in-person standards in a remote world, relied on proctoring tools, many of which required room scans from students who had little choice but to comply. Now, two years later, hundreds of programs still record students throughout remote tests.

Remote Test Scans Ruled Unconstitutional 

In February 2021, Aaron Ogletree, a student at Cleveland State University, was sitting for a remote chemistry exam when his proctor told him to scan his bedroom. He was surprised; he had assumed the room-scan policy had been abolished until, two hours before the test, Cleveland State emailed him that he would have to scan his room. Ogletree responded that he had sensitive tax documents exposed and could not remove them. Like many students, he had to stay home for health reasons, and the only place he could take exams was his bedroom. Faced with the false choice of complying with the search or failing the test, he panned his laptop’s webcam around his bedroom for the proctor and the other test takers to see.

Ogletree sued Cleveland State for violating his Fourth Amendment rights. The Fourth Amendment protects “[t]he right of the people to be secure in their persons, houses, papers, and effects against unreasonable searches and seizures.” 

Judge J. Philip Calabrese of the U.S. District Court for the Northern District of Ohio ruled in favor of the student because of the heightened Fourth Amendment protection afforded to the home, the lack of alternatives available to Ogletree, and the short notice. Calabrese conceded that the intrusion may have been minor, but cited Boyd v. United States to support the slippery-slope argument that “unconstitutional practices get their first footing…by silent approaches and slight deviations.”

The facts of this case are a symptom of a larger problem: the university failed its students and its professors when it did not apply its online education technology consistently.

Arbitrary Application and Lack of Policies 

Cleveland State provides professors with an arsenal of services to administer online classes. These tools include a plagiarism detection system that faculty can use to see students’ IP addresses, a proctoring service that records students and uses artificial intelligence to flag suspicious behavior, and, of course, pre-test room scans.

The school leaves it entirely to the discretion of faculty members—many of whom are not experts in student privacy—to choose which tools or combinations of tools to use. Cleveland State’s existing policies offer no guidance on the tradeoffs of using any one method. This is tantamount to JetBlue asking its pilots to fly through a whiteout without radar.

Toward a Unified Policy

What may have been an understandable oversight in the early pandemic whirlwind cannot be excused now. The tension between privacy and security is well known. Only by carefully balancing students’ privacy rights against the university’s interest in test integrity will we find a workable solution. Schools across the country should take heed of the Ogletree ruling. University leadership is responsible for balancing those interests and imparting clear guidance to test administrators. To that end, we offer two recommendations:

  1. Cost-Benefit Guidance: The university should score each tool on the privacy interests involved and the expected benefit of its application, including guidance on whether a method can be easily circumvented. Because individual instructors are not necessarily versed in the legal implications of remote test policies, the university must provide clear analysis and guidance. An example entry might read, “Blackboard provides student location data. Though location tracking is a relatively common practice, students must be made aware of it. This tool can verify that students are where they say they are, which is not usually relevant to test integrity. If students wished, they could easily evade it using a low-cost VPN.”
  2. Test Policy Clearly Outlined in Syllabi: Professors should explain in their course descriptions which technologies and methods they use to administer tests, and students could sign an acknowledgment form. For example, a professor would list the applications used to administer exams, whether exams are proctored, and the consequences of not following the policy. Students could then make affirmative decisions about their privacy exposure by choosing a course that aligns with their interests rather than being blindsided by a heavy-handed policy in the final weeks of a semester, and professors would not have to worry about future disagreements because their students knowingly consented to the course’s policies.

The university must balance policy considerations around security and privacy rights. A failure to balance these conflicting pursuits can cause student anxiety, unnecessary privacy violations, and poor test integrity.

Closing the Loop: Solving the Impossibility of Data Deletion

By: Josephine Laing

Personal information is the newest and shiniest coin of the realm. The more personal the data, the more valuable it may be. While most consumers are aware that their data is worth its weight in gold, it is not always clear who is mining this data and what can be done to protect it. Luckily, efforts have been made to create consumer protections that shine a light on the notorious data broker industry. 

Data brokers collect personal information about consumers. This information is typically not gathered directly from consumers; rather, it is collected from commercial entities, government records, and other sources, often unbeknownst to the consumer. The data is then sold and resold continuously. For a consumer to track down their personal information, they would have to follow an ever-winding trail of sales between data brokers. As a result, the industry is commonly critiqued for its lack of transparency. While public awareness of the industry is crucial, the key issue is what deletion rights consumers have to combat the collection. If consumers’ deletion rights do not reach data brokers, those rights become meaningless, and meaningless deletion rights leave consumers unable to exert control over their personal information. Privacy rights are therefore directly linked to one’s ability to require data brokers to delete information. Without this right to delete, there is no true right to privacy.

The Delete Act 

On October 10, 2023, California Governor Newsom signed the Delete Act into law. The Delete Act promises consumers a new age of data control: starting in August 2026, California consumers will be able to exercise their deletion rights effectively. This might come as a surprise to some, as the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA) already granted Californians deletion rights in 2018 and 2020, respectively. Those deletion rights, however, came with exceptions that, until recently, the data broker industry was free to exploit.

The Delete Act, introduced by Senator Becker and sponsored by Privacy Rights Clearinghouse, amends and adds to Section 1798.99.80-87 of the California Civil Code. These amendments create important changes in the data broker provisions included in the CCPA. The changes embrace a more inclusive definition for data brokers, preventing a notoriously shifty industry from evading jurisdiction. This Act requires data brokers to disclose when they collect personal information about minors, consumers’ precise geolocations, and consumers’ reproductive health care data. Data brokers must also include informational links on their websites about collection techniques and deletion rights. Interestingly, brokers are forbidden from using dark patterns. While data brokers are already required to register in California, the penalty for failing to register has increased to $200 per day from $100. These daily penalties also apply for each deletion request that goes unheeded by the broker. These fines can add up, especially as many consumers in California are ready to make deletion requests.

The Delete Act addresses the Sisyphean task of data management. Consumers constantly produce data, so managing it is never-ending. The law includes a provision that makes the deletion right effective over time: data brokers must access the state’s deletion mechanism at least once every forty-five days. Each time a data broker accesses the mechanism, it must: (1) process all deletion requests; (2) direct all service providers or contractors to delete personal information related to each request; and (3) send an affirmative representation of deletion to the California Privacy Protection Agency indicating the number of records deleted and which service providers or contractors were contacted. After a consumer has submitted a deletion request, data brokers must continue to delete the consumer’s data every forty-five days unless otherwise requested. By requiring engagement with the deletion mechanism at least every forty-five days, the Act actively protects consumer data.
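Pictured as a workflow, the cycle the Act describes amounts to a recurring compliance job: pull pending requests from the state’s deletion mechanism, delete the matching records, direct service providers and contractors to do the same, and report back to the regulator. The sketch below is purely illustrative; every function name and data field is an assumption standing in for a broker’s own systems, not anything specified by the statute or the California Privacy Protection Agency.

```python
# Illustrative sketch of the forty-five-day cycle described above. Every helper
# (fetch_pending_requests, delete_records_for, notify_processors, report_to_cppa)
# is a hypothetical stand-in for a broker's own systems, not a real API.
from datetime import timedelta

CYCLE = timedelta(days=45)  # statutory maximum interval between checks of the mechanism

def run_deletion_cycle(broker) -> None:
    requests = broker.fetch_pending_requests()            # (1) pull requests from the deletion mechanism
    records_deleted = 0
    processors_contacted = set()
    for request in requests:
        records_deleted += broker.delete_records_for(request)       # delete the matching records
        processors_contacted |= broker.notify_processors(request)   # (2) direct service providers/contractors
    broker.report_to_cppa(records_deleted=records_deleted,          # (3) affirmative representation to the CPPA
                          processors=sorted(processors_contacted))

# A broker would schedule run_deletion_cycle at least once per CYCLE and keep
# honoring each consumer's request on every subsequent cycle unless told otherwise.
```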

Who cares? 

Why is this Act necessary? Why weren’t the original deletion rights enough? Through the CPRA’s amendments to the CCPA, California residents were granted preliminary rights to delete their data, but the right to delete was limited to data held by businesses providing services to Californians. The CCPA, moreover, only reaches businesses that handle the personal information of 50,000 or more California consumers, make $25 million or more in gross annual revenue, or derive 50 percent or more of their revenue from selling data. Even when a business qualifies, there are many exceptions it can claim to avoid enforcement. Section 1798.145 outlines the right-to-delete exceptions and allows businesses to “collect, use, retain, sell, share, or disclose consumers’ personal information that is deidentified or aggregate consumer information.” 1798.145(a)(6). Such exceptions allow consumers’ personal information to escape privacy protections, and aggregated information can still be used to re-identify consumers. Once the personal data is sold to a data broker, service provider, or contractor, the consumer’s right to delete is vastly reduced. Thus, the exceptions carved out from the right to delete effectively reduce consumer privacy protections.
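As a rough illustration of how narrow that front door is, the three applicability thresholds can be reduced to a single check; a business that clears none of them falls outside the CCPA before any exception is even needed. The helper below is a hypothetical sketch based on the figures described above, not an official or complete legal test.

```python
# Hypothetical illustration of the CCPA applicability thresholds described above
# (the original 2018 figures); meeting ANY one of the three brings a business within scope.
def ccpa_applies(gross_annual_revenue_usd: float,
                 ca_consumers_handled: int,
                 revenue_share_from_selling_data: float) -> bool:
    return (gross_annual_revenue_usd >= 25_000_000
            or ca_consumers_handled >= 50_000
            or revenue_share_from_selling_data >= 0.50)

# Example: a small firm with $4M in revenue that resells data on 80,000 Californians
# is covered by the consumer-count threshold alone.
print(ccpa_applies(4_000_000, 80_000, 0.30))  # True
```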

The Delete Act addresses the gaps in consumer privacy by empowering consumers to delete their personal information from data brokers. Because personal information is collected from consumers constantly, expecting consumers to delete their information from data brokers one request at a time is unreasonable; to use a right to delete efficiently, consumers must be able to delete information at scale. The Delete Act therefore gives consumers the right to delete “any personal information related” to them “held by the data broker or associated service provider or contractor” through a “single verifiable consumer request.” The bill addresses the persistence of data collection by eliminating the consumer’s need to file deletion requests over and over.

So where is Washington’s Delete Act? Emory Roane of Privacy Rights Clearinghouse hopes that the Delete Act can “serve as an impetus – if not a direct model – for other states to model… [as] there is a massive blind spot when it comes to businesses that don’t have a direct relationship with the consumer.” Roane notes that data brokers are a bipartisan issue, pointing to the data broker registries passed in both Texas and Oregon in 2023; Washington has yet to establish one. Getting to the heart of the issue, Roane states: “Republican or Democrat, old or young, across the country and across every demographic, everyone rightfully feels like they’ve lost control of their personal information and privacy and data brokers are a huge part of that problem.” Tackling the data broker industry is a tall task, and creating an effective right to delete is a necessary start. As California tries out its deletion portal, Washington should take heed.

Emojis Speak Louder Than Words: A Legal Perspective

By: Lauren Lee

Imagine being legally bound to a contract with nothing more than a ‘thumbs-up’ emoji. In our ever-evolving digital landscape, each new phone software update introduces an array of new emojis and emoticons to our keyboards. These small digital icons serve as time-saving tools, enabling more efficient expression of emotions and tone. However, emojis and emoticons bring forth the challenge of potential ambiguity, as many lack a ‘defined meaning.’ For example, the “praying hands” emoji is sometimes misconstrued as a “high five” emoji. In the legal realm, while interpreting emojis may be complex, their admissibility as evidence in trials holds undeniable importance.

A seemingly uncontroversial smiley-face emoji or emoticon can have significant implications for a case. In 2015, Katherine Forrest, a U.S. District Judge for the Southern District of New York, ruled that all symbols, including emojis and emoticons, must be read to jury members. Tyler Schnoebelen, a Stanford-trained linguist, has explained how emoticons provide insight into a writer’s intention: a smiley face may indicate politeness, a frowning face may signal disapproval, and a winking face may convey flirtatiousness. More recently, in July 2023, the U.S. District Court for the District of Columbia held that when activist investor Ryan Cohen tweeted a smiling-moon emoji about Bed Bath & Beyond, it symbolized “to the moon” or “take it to the moon,” reflecting optimism about the company’s stock. This interpretation influenced investors to purchase the stock, and the court found that the moon emoji was actionable.

While civil cases often focus on interpreting emoji meanings rather than their admissibility, attorneys should prepare for litigation by understanding the procedural requirements for submitting emojis as evidence. Texts or messages containing emojis or emoticons must be relevant before they can be presented to the jury, and testimony from the sender can offer context and highlight what the sender intended when sending the emoji. Once relevance is established, the messages must be authenticated, with the admitting party ensuring that both the sender and receiver saw the same image.

Already, dozens of cases each year in the U.S. address the meaning of emojis in a legal context, and some states have permitted the use of emojis as evidence. In a report sponsored by the State Bar of Texas, the authors suggest that emoticons and emojis function like out-of-court statements that can nonetheless be admissible. The Federal Rules of Evidence (FRE) define a “statement” to include oral, written, and nonverbal assertions, and Rule 801(d)(2) excludes an opposing party’s own statements from the hearsay bar. Because emojis likely qualify as written assertions, they could be admitted as evidence on this basis if authenticated.

Admitting emojis as evidence in a trial has its challenges. Undoubtedly, expanding the scope of what is permitted as evidence complicates litigation. The downsides of allowing emojis as evidence include potentially longer and costlier litigation, greater reliance on the jury’s or judge’s interpretation of emojis, and the potential for parties to evade liability through emoji use. Additionally, emojis may appear differently on different devices (e.g., Apple products vs. Android devices), and admitting them as evidence might also lead to unintended agreements or commitments.

Despite the increasing complexity of emoji interpretation, their admissibility in trials should be acknowledged. Emojis expand our means of expression and can play a crucial role in conveying nuanced emotional and contextual information, fostering more accurate communication within the legal system. Language should not be interpreted solely by its plain meaning but also in the context in which it is used. This concept is similar to the canons of statutory interpretation in administrative law, where various interpretive modes are employed to derive meaning. Emojis and emoticons, in this light, are symbols that convey ideas and the author’s tone, making them a significant component of contextual evidence in cases. To prepare for the ever-expanding use of emojis and emoticons, courts and attorneys should deploy appropriate tools to develop fluency in this new ‘emoji language.’

The Complexities of Racism in AI Art

By: Imaad Huda

AI generative art is a recent advance in consumer and social artificial intelligence. Anybody can type a few words into a program and, within seconds, the AI will generate an image that roughly depicts that prompt. AI generative art can incorporate any number of artistic styles to produce digital art without anyone lifting a pen. While many users are simply fascinated by art created by their computers, few are aware of how the AI generates its images or of the implications of what it produces. Now that AI art programs have made their way into consumer hands, users have noticed stereotypical and racialized depictions in their auto-generated images. Prompts that reference employment, education, or history often produce images that reflect racial bias. As AI becomes more mainstream, racist and sexist depictions by AI will only serve to entrench long-standing stereotypes, and the lack of a legal standard will only make the matter worse.

Quantifying the Racism 

Leonardo Nicoletti and Dina Bass of Bloomberg note that generative images take “human” biases to the extreme. In an analysis of more than 5,000 images generated with the Stable Diffusion model, depictions produced by prompts for higher-paying jobs were compared with depictions produced by prompts for lower-paying jobs. The result was an overrepresentation of people of color in lower-paying jobs: prompts including “fast-food worker” yielded an image of a darker-skinned person seventy percent of the time, even though, as Bloomberg noted, seventy percent of fast-food workers are white. Meanwhile, prompts for higher-paying jobs, such as “CEO” and “lawyer,” generated images of people with lighter skin more than eighty percent of the time, a rate potentially proportional to the eighty percent of people who hold those jobs. Stable Diffusion showed the most bias when depicting occupations held by women, “amplify[ing] both gender and racial stereotypes”; among all the generations for high-paying jobs, only one image, that of a judge, depicted a person of color. Relatedly, commercial facial-recognition software used to classify gender has shown “the lowest accuracy on darker skinned people,” a serious problem when such software is “implemented for healthcare and law enforcement.”
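For readers curious what an analysis like Bloomberg’s involves mechanically, the sketch below shows the bookkeeping: generate a batch of images per occupation prompt and tally how often each depicts a darker skin tone. It is only a toy outline; the generate_image and skin_tone_bucket helpers are hypothetical stand-ins (a real pipeline would pair an image generator such as Stable Diffusion with a face-detection and skin-tone classification step), and the dummy example at the bottom exists only to show the counting logic.

```python
# Toy outline of the bookkeeping behind a bias audit of a text-to-image model:
# generate a batch of images per occupation prompt and tally how often each
# depicts a darker skin tone. generate_image and skin_tone_bucket are hypothetical
# stand-ins for a real generator and classifier.
from collections import Counter
from typing import Callable

def audit_prompts(prompts: list[str],
                  n_per_prompt: int,
                  generate_image: Callable[[str], object],
                  skin_tone_bucket: Callable[[object], str]) -> dict[str, Counter]:
    """Return, for each prompt, a count of generated images per skin-tone bucket."""
    results: dict[str, Counter] = {}
    for prompt in prompts:
        tally: Counter = Counter()
        for _ in range(n_per_prompt):
            image = generate_image(prompt)
            tally[skin_tone_bucket(image)] += 1
        results[prompt] = tally
    return results

if __name__ == "__main__":
    # Dummy stand-ins, used only to demonstrate the counting logic.
    import random
    fake_generate = lambda prompt: prompt                        # pretend "image"
    fake_bucket = lambda image: random.choice(["lighter", "darker"])
    report = audit_prompts(["fast-food worker", "CEO", "lawyer"], 100,
                           fake_generate, fake_bucket)
    for prompt, tally in report.items():
        share = tally["darker"] / sum(tally.values())
        print(f"{prompt}: {share:.0%} darker-skinned depictions")
```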

Stable Diffusion was also biased in its depictions of criminality. For prompts of “inmate,” the AI generated a person of color eighty percent of the time, even though only about half of the inmates in the U.S. are people of color. Bloomberg notes that the rates for generating criminals could themselves be skewed by racial bias in U.S. “policing and sentencing.”

The Legality

Is racism in AI legal? The answer is complicated for a number of reasons, not least because the law surrounding AI generative imaging is new. In 2021, the Federal Trade Commission (FTC) warned that using discriminatory algorithms to make automated decisions about opportunities such as “jobs, housing, education, or banking” can be illegal. New York City has also enacted its own Local Law 144, which requires that AI tools undergo a “bias audit” before aiding in employment decisions. The National Law Review explains that a bias audit includes a calculation of the “rate at which individuals in a category are either selected to move on or assigned a classification” by the hiring tool. The law also requires that audits “include historical data in their analysis” and that the results of the audit “must be made publicly available.”
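To make the audit arithmetic concrete, the core calculation the National Law Review describes — a per-category selection rate, typically compared against the most-favored category’s rate — can be sketched in a few lines. This is a simplified illustration only; the data shape and category labels are assumptions, not anything prescribed by Local Law 144 or its enforcement rules.

```python
# Simplified sketch of the selection-rate calculation at the heart of a Local Law
# 144-style bias audit. The (category, selected) data shape is an assumption made
# for illustration, not a format prescribed by the law.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (category, selected) pairs from historical screening data."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for category, was_selected in records:
        totals[category] += 1
        selected[category] += int(was_selected)
    return {c: selected[c] / totals[c] for c in totals}

def impact_ratios(rates):
    """Each category's selection rate divided by the highest category's rate."""
    best = max(rates.values())
    return {c: rate / best for c, rate in rates.items()}

# Example with made-up historical screening data:
history = [("Category A", True), ("Category A", False), ("Category A", False),
           ("Category B", True), ("Category B", True), ("Category B", False)]
rates = selection_rates(history)     # Category A: 1/3, Category B: 2/3
print(rates)
print(impact_ratios(rates))          # Category A: 0.5, Category B: 1.0
```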

The advancement of anti-racism laws regulating AI tools represents progress. How these laws apply to AI art, however, remains to be seen. Laws concerning AI-generated art currently focus on theft, as AI art often copies the originality and stylistic choices of human artists. The racial depictions in AI art have not yet been addressed legally, even though they could perpetuate stereotypes when the images are used in contexts, such as education, that the FTC’s 2021 guidance covers. Judges and lawmakers may not yet see AI art’s contribution to systemic racism as a legal issue that could stand up in the courtroom.

What’s The Solution?

The bias in generated art comes from its algorithm, which, depending on the user’s prompt, pulls together images that match a description and style and blends them into a new image. Drawing on prompts from many different users and the data available on the internet, the algorithm continuously produces these images. Almost a decade ago, Google restricted a consumer image-search feature after photos of Black people surfaced in searches for “gorillas” and “monkeys.” The reason, according to former Google employees, was that Google had not trained its AI on enough images of Black people. The problem here, again, could be a lack of representation, from too few people of color building the algorithms to inadequate representation in the data sets used to generate images. However, a simple fix to increase representation is not so easy: AI systems are built on models that already exist, so a new model inherits from an older model, and the biases present in the older algorithm may persist. As the problems with these machines get more complicated, so do the solutions. Derogatory depictions should not be allowed to stand in the absence of a legal standard, and lawmakers should take the necessary measures to end AI discrimination before it becomes a true social problem.

Even Better Than The Real Thing: U2 and Brambilla Bring Elvis to Life

By: Bella Hood

Thanks to social media, U2’s visual performance at the Las Vegas Sphere is one few can claim they aren’t at least a little tickled by. The ominous round structure seats nearly 18,000 people and stands 366 feet tall and 516 feet wide. A creative project of James Dolan, the executive chairman of Madison Square Garden and owner of the New York Knicks and Rangers, the novel entertainment venue was completed in September 2023 on an astounding $2.3 billion budget.

U2 holds the honor of christening the venue with a multi-month residency that has been so well received that the band just announced 11 additional shows for January and February 2024, for a total of 36 performances. Perhaps surprisingly, while droves of middle-aged suburbanites filed in to scratch their ’80s-rock nostalgia itch, the music took a backseat to the immersive visual experience projected across a 16K wraparound LED screen.

Several songs into U2’s performance, a 4-minute whimsical display of hundreds of images of Elvis Presley engulfs the venue and transcends all existing mental perceptions of The King.

An artist known for his elaborate re-contextualizations of popular and found imagery, as well as his pioneering use of digital imaging technologies in video installation art, Marco Brambilla leveraged AI to portray Elvis in a fantastical, sci-fi-esque light. He fed clips from over 30 Elvis movies into Stable Diffusion, the 2022 text-to-image model that is the more realistic-looking sibling of DALL-E 2.

U2 and Elvis may sound like an odd coupling, but the band’s lead singer, Bono, has been a vocal admirer of the icon for decades. In fact, U2’s lyrics are sprinkled with allusions to The King of Rock and Roll, and at times even overt references, including the song titled “Elvis Presley and America.”

Regardless of how famous a musician or band may be, one cannot use just any person’s likeness on a whim. Failure to obtain permission from that person, or their estate, can expose the user to a right-of-publicity (misappropriation of likeness) claim. While many aspects of entertainment law involve overlapping state and federal oversight, this issue is largely state-specific. According to the American Bar Association, the modern test requires two elements:

  • the defendant, without permission, has used some aspect of the plaintiff’s identity or persona in such a way that the plaintiff is identifiable from the defendant’s use; and
  • the defendant’s use is likely to cause damage to the commercial value of that persona.

Without a doubt, Elvis’ estate is well versed in likeness law. In 1981, his ex-wife, Priscilla Presley, established Elvis Presley Enterprises, Inc. (EPE). Currently, Authentic Brands Group (ABG) owns roughly 85% of EPE, with the remainder belonging to The King’s only child, Lisa Marie Presley. The estate does not shy away from the legal system, vigorously protecting the cultural icon’s legacy.

Past targets of EPE include wedding chapels in Las Vegas, Nevada, gun manufacturer Beretta (headquartered near Milan, Italy), and a nightclub in Houston, Texas called the Velvet Elvis.

This raises the question: how was Brambilla able to create, and U2 able to display, an entire video montage of hundreds of versions of Elvis for the entire length of the song “Even Better Than The Real Thing”? Despite speaking to multiple outlets, Brambilla has yet to confirm or deny whether he obtained permission to use the more than 12,000 film samples of Elvis’s performances.

In weighing the propensity to sue, over likeness or otherwise, one should consider the parties involved. ABG was valued at $12.7 billion in 2021 after nearly going public, and it also owns the intellectual property rights of Marilyn Monroe, Muhammad Ali, and Shaquille O’Neal.

Between the behemoth’s deep legal resources, the Sphere’s already high-profile reputation, and U2’s success thus far with the residency, it seems unlikely at this point that Authentic Brands Group could be unaware of the Elvis tribute. Therefore, if ABG wanted to send a cease and desist, it would have done so by now. Even if a lawsuit were imminent, ABG would be hard-pressed to demonstrate that U2 and Brambilla’s portrayal of Elvis is even remotely damaging to his commercial persona.