Closing the Loop: Solving the Impossibility of Data Deletion

By: Josephine Laing

Personal information is the newest and shiniest coin of the realm. The more personal the data, the more valuable it may be. While most consumers are aware that their data is worth its weight in gold, it is not always clear who is mining this data and what can be done to protect it. Luckily, efforts have been made to create consumer protections that shine a light on the notorious data broker industry. 

Data brokers collect personal information about consumers, but they rarely gather it from consumers directly. Instead, they acquire it from commercial entities, government records, and other sources, typically without the consumer’s knowledge. This data is constantly being sold and resold, so a consumer trying to track down their personal information would have to follow an ever-winding trail of sales between brokers. As a result, the industry is commonly critiqued for its lack of transparency. While public awareness of this industry is crucial, the key issue is what deletion rights consumers can use to combat the collection. If consumers’ deletion rights do not reach data brokers, those rights become meaningless, and meaningless deletion rights leave consumers unable to exert control over their personal information. Consequently, privacy rights are directly linked to one’s ability to require data brokers to delete information. Without this right to delete, there is no true right to privacy. 

The Delete Act 

On October 10th, 2023, California’s Governor Newsom signed the Delete Act into law. The Delete Act promises consumers a new age of data control. Starting in August 2026, California consumers will have the ability to effectively exercise their deletion rights. This might come as a surprise to some, as the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA) already granted Californians deletion rights in 2018 and 2020 respectively. These deletion rights, however, were caveated by exceptions that were, until recently, abused by the data broker industry. 

The Delete Act, introduced by Senator Becker and sponsored by the Privacy Rights Clearinghouse, amends and adds to Sections 1798.99.80-87 of the California Civil Code. These amendments make important changes to the data broker provisions associated with the CCPA. The changes embrace a more inclusive definition of data broker, preventing a notoriously shifty industry from evading jurisdiction. The Act requires data brokers to disclose when they collect personal information about minors, consumers’ precise geolocation, or consumers’ reproductive health care data. Data brokers must also include informational links on their websites about collection practices and deletion rights. Notably, brokers are forbidden from using dark patterns. While data brokers are already required to register in California, the penalty for failing to register has doubled from $100 to $200 per day. The same daily penalty applies to each deletion request that goes unheeded by a broker. These fines can add up quickly, especially as many consumers in California are ready to make deletion requests.

The Delete Act addresses the Sisyphean task of data management. Consumers are constantly producing data, so managing it is never-ending. The law therefore includes a provision that makes the deletion right effective over time. Data brokers must access the deletion mechanism at least once every forty-five days. Each time they do, they must: (1) process all deletion requests; (2) direct all service providers or contractors to delete personal information related to the request; and (3) send an affirmative representation of deletion to the California Privacy Protection Agency indicating the number of records deleted and which service providers or contractors were contacted. After a consumer has submitted a deletion request, data brokers must continue to delete the consumer’s data every forty-five days unless otherwise requested. By requiring engagement with the deletion mechanism every forty-five days, the Act actively protects consumer data.

Who cares? 

Why is this Act necessary? Why weren’t the original deletion rights enough? Through the CPRA’s amendments to the CCPA, California citizens were granted preliminary rights to delete their data. But that right to delete was limited to data retained by businesses providing services to Californians, and the CCPA only reaches businesses that handle the data of 50,000 California consumers, make $25 million in gross revenue, or profit primarily (50% or more) by selling data. Even when a business qualifies, there are many exceptions it can claim to avoid enforcement. Section 1798.145 outlines the right-to-delete exceptions and allows businesses to “collect, use, retain, sell, share, or disclose consumers’ personal information that is deidentified or aggregate consumer information.” 1798.145(a)(6). Such exceptions allow consumers’ personal information to be excluded from privacy protections, and that information can still be used to identify consumers through aggregation. Once the personal data is sold to a data broker (or a service provider or contractor), the consumer’s right to delete is vastly reduced. Thus, the exceptions carved out from data deletion effectively reduce consumer privacy protections. 

The Delete Act addresses the gaps in consumer privacy by empowering consumers to delete their personal information from data brokers. Since personal information is constantly collected from consumers, expecting consumers to repeatedly delete their information from each data broker is unreasonable. Accordingly, for consumers to exercise a right to delete efficiently, they must be able to delete information at scale. The Delete Act grants consumers the right to delete “any personal information related” to them “held by the data broker or associated service provider or contractor” through a “single verifiable consumer request.” The bill addresses the persistence of data collection by eliminating the consumer’s need to continually and repetitively request deletion. 

So where is Washington’s Delete Act? Emory Roane of Privacy Rights Clearinghouse hopes that the Delete Act can “serve as an impetus – if not a direct model – for other states to model… [as] there is a massive blind spot when it comes to businesses that don’t have a direct relationship with the consumer.” Emory notes that data brokers are a bipartisan issue, pointing to the passing of data broker registries in both Texas and Oregon in 2023. Washington has yet to establish a data broker registry. Getting to the heart of the issue, Emory states that: “Republican or Democrat, old or young, across the country and across every demographic, everyone rightfully feels like they’ve lost control of their personal information and privacy and data brokers are a huge part of that problem.” Tackling the data broker industry is a tall task, and creating an effective right to delete is a necessary start. As California tries out its deletion portal, Washington should take heed.

Emojis Speak Louder Than Words: A Legal Perspective

By: Lauren Lee

Imagine being legally bound to a contract with nothing more than a ‘thumbs-up’ emoji. In our ever-evolving digital landscape, each new phone software update introduces an array of new emojis and emoticons to our keyboards. These small digital icons serve as time-saving tools, enabling more efficient expression of emotions and tone. However, emojis and emoticons bring forth the challenge of potential ambiguity, as many lack a ‘defined meaning.’ For example, the “praying hands” emoji is sometimes misconstrued as a “high five” emoji. In the legal realm, while interpreting emojis may be complex, their admissibility as evidence in trials holds undeniable importance.

A seemingly uncontroversial smiley face emoji or emoticon can have significant implications for a case. In 2015, Katherine Forrest, U.S. District Judge for the Southern District of New York, ruled that all symbols in the evidence, including emojis and emoticons, must be read to jury members. Tyler Schnoebelen, a linguist at Stanford, has explained how the use of emoticons provides insight into a writer’s intention: a smiley face may indicate politeness, a frowning face may signal disapproval, and a winking face may convey flirtatiousness. More recently, in July 2023, the U.S. District Court for the District of Columbia ruled that when a major Bed Bath & Beyond investor tweeted a smiling moon emoji, it symbolized “to the moon” or “take it to the moon,” reflecting optimism about the company’s stock. This interpretation influenced investors to purchase the stock, and the court found that the moon emoji was actionable.

While civil cases often focus on interpreting emoji meanings rather than on their admissibility, attorneys should prepare for litigation by understanding the procedural requirements for submitting emojis as evidence. Texts or messages containing emojis or emoticons must be relevant before they can be presented to the jury. Testimony from the sender can offer context and clarify the sender’s intent. Once relevance is established, the messages must be authenticated, with the admitting party ensuring that both the sender and receiver saw the same image.

Already, dozens of cases each year in the U.S. address the meaning of emojis in a legal context, and some states have permitted the use of emojis as evidence. In a report sponsored by the State Bar of Texas, the authors suggest that emoticons and emojis resemble statements of a party opponent, which are admissible. Under the Federal Rules of Evidence (FRE), a statement is an oral assertion, a written assertion, or nonverbal conduct intended as an assertion, and Rule 801(d)(2) excludes an opposing party’s out-of-court statements from the rule against hearsay. Because emojis likely fall under the written-assertion category, they could be admitted as evidence if authenticated.

Admitting emojis as evidence in a trial has its challenges. Undoubtedly, expanding the scope of what is permitted as evidence complicates litigation. The downsides of allowing emojis as evidence include increases in the duration and cost of litigation, greater reliance on the jury’s or judge’s interpretation of emojis, and the potential for parties to evade liability through emoji use. Additionally, emojis may appear differently on different devices (e.g., Apple products vs. Androids). Admitting emojis as evidence might also lead to unintended agreements or commitments.

Despite the increasing complexity of emoji interpretation, their admissibility in trials should be acknowledged. Emojis expand our means of expression and can play a crucial role in conveying nuanced emotional and contextual information, fostering more accurate communication within the legal system. It is vital to understand that language should be interpreted not solely by its plain meaning but also in the context in which it is used. This concept is similar to the canons of statutory interpretation in administrative law, where various interpretive modes are employed to derive meaning. Emojis and emoticons, in this context, can be likened to symbols that effectively convey ideas and the author’s tone, making them a significant component of contextual evidence. To prepare for the ever-expanding use of emojis and emoticons, courts and attorneys should deploy appropriate tools to develop fluency in this new ‘emoji language.’

The Complexities of Racism in AI Art

By: Imaad Huda

AI generative art is a recent advance in the field of consumer and social artificial intelligence. Anybody can type a few words into a program, and, within seconds, the AI will generate an image that roughly depicts that prompt. AI generative art can incorporate a number of artistic styles to produce digital art without anybody lifting a pen. While many users are simply fascinated by art created by their computers, few are aware of how the AI generates its images or of the implications of what it produces. Now that AI art programs have made their way into consumer hands, users have noticed stereotypical and racialized depictions in their auto-generated images. Prompts that touch on employment, education, and history often produce images that reflect racial bias. As AI becomes more mainstream, racist and sexist depictions by AI will only serve to entrench long-standing stereotypes, and the lack of a legal standard will only make the matter worse. 

Quantifying the Racism 

Leonardo Nicoletti and Dina Bass of Bloomberg note that generative images take human biases to the extreme. In an analysis of more than 5,000 images generated with the Stable Diffusion AI, depictions of people with higher-paying jobs were compared to depictions of people with lower-paying jobs. The result was an overrepresentation of people of color in lower-paying jobs. Prompts including “fast-food worker” yielded an image of a darker-skinned person seventy percent of the time, even though Bloomberg noted that seventy percent of fast-food workers are white. Meanwhile, prompts for higher-paying jobs, such as “CEO” and “lawyer,” generated images of people with lighter skin at a rate of over eighty percent, potentially proportional to the eighty percent of people who hold those jobs. Stable Diffusion showed the most bias when depicting occupations held by women, “amplify[ing] both gender and racial stereotypes.” Among all generations for high-paying jobs, only one image, that of a judge, depicted a person of color. Similar problems appear elsewhere: commercial facial-recognition software, a tool designed to identify people’s genders, had “the lowest accuracy on darker skinned people,” presenting a problem when such software is “implemented for healthcare and law enforcement.” 

Stable Diffusion was also biased in its depictions of criminality. For prompts of “inmate,” the AI generated a person of color eighty percent of the time, even though only about half of the inmates in the U.S. are people of color. Bloomberg notes that the rates for generating criminals could be skewed by racial bias in U.S. “policing and sentencing.” 

The Legality

Is racism in AI legal? The answer is complicated for a number of reasons, and the law surrounding AI generative imaging is new. In 2021, the Federal Trade Commission (FTC) declared the use of discriminatory algorithms to make automated decisions illegal, citing opportunities for “jobs, housing, education, or banking.” New York City has also enacted its own Local Law 144, which requires that AI tools undergo a “bias audit” before aiding in employment decisions. The National Law Review states that a bias audit includes a calculation of the “rate at which individuals in a category are either selected to move on or assigned a classification” by the hiring tool. The law also states that audits “include historical data in their analysis,” and the results of the audit “must be made publicly available.” 

The advancement of anti-racism laws regulating AI tools represents progress. However, how these laws apply to AI art remains to be seen. Laws concerning AI-generated art currently focus on theft, as AI art often copies the originality and stylistic choices of human artists. The racial depictions in AI art have not yet been addressed legally, but they could perpetuate stereotypes when used in an educational context, a use arguably covered by the FTC’s 2021 declaration. Judges and lawmakers may not yet see AI art’s contribution to systemic racism as a legal issue that could stand up in the courtroom. 

What’s The Solution?

The bias in generated art results from the algorithm, which, depending on the user’s prompt, pulls together images that match a description and style and develops them into a new image. From the prompts of many different users and the data available on the internet, the algorithm continuously produces these images. Almost a decade ago, Google had to restrict its consumer photo software after images of black people were being returned in searches for “gorillas” and “monkeys.” The reason for this, according to former Google employees, was that Google had not trained its AI with enough images of black people. The problem here, again, could be a lack of representation, from too few people of color building the algorithms to inadequate representation in the data sets used to generate images. However, a simple fix to increase representation is not so easy. AI computing is built on models that already exist; a new model will be based on an older model, and the biases present in the older algorithm may persist. As issues with machines get more complicated, so do the solutions. Derogatory depictions should not be allowed to stand in the absence of a legal standard, and lawmakers should take the necessary measures to end AI discrimination before it becomes a true social problem.

Even Better Than The Real Thing: U2 and Brambilla Bring Elvis to Life

By: Bella Hood

Thanks to social media, U2’s visual performance at the Las Vegas Sphere is one few can claim they aren’t at a minimum tickled by. The ominous round structure stands 366 feet tall and 516 feet wide and seats nearly 18,000 people. A creative project of James Dolan, the executive chair of Madison Square Garden and owner of the New York Knicks and Rangers, the novel entertainment venue was completed in September 2023 on an astounding $2.3B budget.

U2 holds the honor of christening the venue with a multi-month residency that has been so well-received that the band just announced 11 additional shows to occur in January and February of 2024, for a total of 36 performances. Perhaps surprisingly, while droves of middle-aged suburbanites filed in to scratch their 80s Rock nostalgic itch, the music took a backseat to the immersive visual experience encompassing a 16K wraparound LED screen.

Several songs into U2’s performance, a 4-minute whimsical display of hundreds of images of Elvis Presley engulfs the venue and transcends all existing mental perceptions of The King.

An artist known for his elaborate re-contextualizations of popular and found imagery, as well as his pioneering use of digital imaging technologies in video installation and art, Marco Brambilla leveraged AI to portray Elvis in a fantastical, sci-fi-esque light. He fed clips from over 30 Elvis movies into the 2022 text-to-image model Stable Diffusion, the more realistic-looking sibling of DALL-E 2.

U2 and Elvis may sound like an odd coupling, but the band’s lead singer, Bono, has been a vocal supporter of the icon for decades. In fact, U2’s lyrics are sprinkled with allusions to The King of Rock and Roll and even patent references at times, including the song titled “Elvis Presley and America.”

Regardless of how famous a musician or band may be, one cannot use just any person’s likeness on a whim. Failure to obtain permission from that person, or their estate, can give rise to a right-of-publicity claim. While many aspects of entertainment law involve overlapping state and federal oversight, this issue is largely state-specific. According to the American Bar Association, the modern test requires two elements:

  • the defendant, without permission, has used some aspect of the plaintiff’s identity or persona in such a way that the plaintiff is identifiable from the defendant’s use; and
  • the defendant’s use is likely to cause damage to the commercial value of that persona.

Without a doubt, Elvis’ estate is well-versed in likeness laws. In 1981, his ex-wife, Priscilla Presley, established Elvis Presley Enterprises, Inc. (EPE). Currently, Authentic Brands Group (ABG) owns roughly 85% of EPE, the remainder belonging to The King’s only child, Lisa Marie Presley. The estate does not shy away from the legal system, vigorously protecting the cultural icon’s legacy.

Past targets of EPE include wedding chapels in Las Vegas, Nevada, gun manufacturer Beretta (headquartered near Milan, Italy), and a nightclub in Houston, Texas called the Velvet Elvis.

This raises the question: how was Brambilla able to create, and U2 able to display, an entire video montage of hundreds of versions of Elvis for the entire length of the song “Even Better Than The Real Thing”? Despite speaking to multiple outlets, Brambilla has yet to confirm or deny whether he obtained permission to use over 12,000 film samples of Elvis’s performances.

In looking at the propensity to sue over likeness or otherwise, one should consider the parties involved. ABG was valued at $12.7 billion in 2021 after nearly going public and also owns the intellectual property rights of Marilyn Monroe, Muhammad Ali, and Shaquille O’Neal.

Between the behemoth’s unlimited legal resources, the Sphere’s high-profile debut, and U2’s success thus far with the residency, it seems unlikely at this point that Authentic Brands Group could be unaware of the Elvis tribute. Therefore, if ABG wanted to send a cease and desist, it likely would have done so by now. Even if a lawsuit were imminent, ABG would be hard-pressed to demonstrate that U2 and Brambilla’s portrayal of Elvis is even remotely damaging to his commercial persona.

Move Fast and Break Things: Ethical Concerns in AI

By: Taylor Dumaine

In Jurassic Park, Dr. Ian Malcolm famously admonished the park’s creator by saying, “your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” Technological advancement for the sake of advancement alone ignores the genuine harms that advancement can cause or contribute to. The negative externalities of technological advancement have often been overlooked or ignored, and there is a persistent reliance on federal and state governments to regulate industry rather than on self-regulation or ethics standards. That reliance has become especially pronounced in the AI and generative AI spaces, where government regulation is far outpaced by rapid development, hindering the government’s ability to address ethical issues adequately.

Relying on government regulation is a copout for large tech companies. Congress’s record on technology regulation is poor at best, with most bills failing to become law and those that do proving insufficient to regulate effectively. The United States still does not have a national privacy law, and there is little political will to pass one. An increasingly octogenarian Congress does not have the best track record of understanding even basic technological concepts, let alone the increasingly complicated technology, such as AI, it is tasked with regulating. During Senate testimony regarding the Cambridge Analytica scandal, Meta CEO Mark Zuckerberg had to explain some pretty rudimentary internet concepts.

Earlier this year, OpenAI CEO Sam Altman called for government regulation of AI in testimony before Congress. Altman also carries a backpack that would allow him to remotely shut down ChatGPT data centers should the generative AI go rogue. While by no means a perfect example of ethics in the AI space, Altman seems at least aware of the risks of his technology. Yet he relies on the federal government to regulate that technology rather than engaging in any meaningful self-regulation.

In contrast to Altman, David Holz, founder and CEO of Midjourney, an image-generation AI program, is wary of regulation, saying in an interview with Forbes, “You have to balance the freedom to do something with the freedom to be protected. The technology itself isn’t the problem. It’s like water. Water can be dangerous, you can drown in it. But it’s also essential. We don’t want to ban water just to avoid the dangerous parts.” Holz’s comments suggest that his goal is to promote imagination and that he is less concerned with how pursuing that goal may harm people so long as others benefit. This thinking is common in tech spaces.

Even the serious issues in generative AI, such as copyright infringement, seem almost mundane when compared with facial recognition tools such as Clearview AI. Dubbed “The Technology Facebook and Google Didn’t Dare Release,” these facial recognition tools have the disturbing ability to recognize faces across the internet. Clearview AI specifically has raised serious Fourth and Fifth Amendment concerns regarding police use of the software. Surprisingly, the large tech companies Apple, Google, and Facebook served as de facto gatekeepers of this technology for over a decade: through their acquisitions of facial recognition technology, they kept it unreleased, recognizing its dangers. Facebook paid $650 million to settle a lawsuit related to its use of facial recognition on the platform. Clearview AI’s CEO, Hoan Ton-That, has no ethical qualms about the technology he is creating and marketing specifically to law enforcement. Clearview AI is backed by Peter Thiel, who co-founded Palantir, a company with its own issues regarding police and government surveillance. The potential integration of the two companies could result in an Orwellian situation. Clearview AI thus represents a worst-case scenario for tech without ethical limits, the effects of which have already been disastrous.

Law students, medical students, and Ph.D. students are all required to take an ethics class at some point. Many self-taught programmers, by contrast, do not incorporate ethics into their study. There are very real and important ethical concerns in technology development. In an age, culture, and society that values advancement without taking the time to consider its negative ramifications, it is unlikely that society’s concern over ethics in technology will change much on its own. In a perfect scenario, government regulation would be swift, well-informed, and effective in protecting against the dangers of AI. Given the rate of technological innovation, it is hard to stay proactive in the ethics space, but that does not mean there should be no attempt. Arguing for a professional ethics standard in computer science and software engineering is not without its own serious problems and would be nearly impossible to implement. However, by creating a culture where ethical concerns are not just valued but actively considered in the development of new technology, we can hopefully avoid a Jurassic Park scenario.