Emojis Speak Louder Than Words: A Legal Perspective

By: Lauren Lee

Imagine being legally bound to a contract by nothing more than a ‘thumbs-up’ emoji. In our ever-evolving digital landscape, each new phone software update introduces an array of new emojis and emoticons to our keyboards. These small digital icons serve as time-saving tools, enabling more efficient expression of emotion and tone. However, emojis and emoticons also bring the challenge of ambiguity, as many lack a ‘defined meaning.’ The “praying hands” emoji, for example, is sometimes misconstrued as a “high five.” In the legal realm, while interpreting emojis may be complex, their admissibility as evidence in trials holds undeniable importance.

A seemingly uncontroversial smiley face emoji or emoticon can have significant implications for a case. In 2015, Katherine Forrest, U.S. District Judge for the Southern District of New York, ruled that jurors must read messages in evidence in full, including all symbols such as emojis and emoticons. Tyler Schnoebelen, a linguist at Stanford, has explained how the use of emoticons provides insight into a writer’s intention: a smiley face may indicate politeness, a frowning face may signal disapproval, and a winking face may convey flirtatiousness. More recently, in July 2023, the federal district court in D.C. ruled that when activist investor Ryan Cohen tweeted a smiling moon emoji about Bed Bath & Beyond, it symbolized “to the moon” or “take it to the moon,” reflecting optimism about the company’s stock. Because that interpretation could have influenced investors to purchase the stock, the court found that the moon emoji was actionable.

While civil cases often focus on interpreting emoji meanings rather than on their admissibility, attorneys should prepare for litigation by understanding the procedural requirements for submitting emojis as evidence. Texts or messages containing emojis or emoticons must be relevant to be presented to the jury, and testimony from the sender can offer context and establish what the sender intended in sending the emoji. Once relevance is established, the messages must be authenticated, with the admitting party ensuring that the sender and receiver saw the same image.

Already, dozens of cases each year in the U.S. address the meaning of emojis in a legal context, and some states have permitted the use of emojis as evidence. In a report sponsored by the State Bar of Texas, the authors suggest that emoticons and emojis resemble out-of-court statements subject to the hearsay rules. Rule 801 of the Federal Rules of Evidence (FRE) defines a “statement” as an oral assertion, a written assertion, or nonverbal conduct intended as an assertion, and hearsay as such a statement made outside of trial and offered to prove the truth of the matter asserted. Because emojis likely fall under the written-assertion category, they could be admitted into evidence, once authenticated, through a hearsay exclusion or exception, such as an opposing party’s statement under Rule 801(d)(2).

Admitting emojis as evidence in a trial has its challenges. Undoubtedly, expanding the scope of what is permitted as evidence complicates litigation. Allowing emojis as evidence may increase the duration and cost of litigation, deepen reliance on the jury’s or judge’s interpretation of the symbols, and create opportunities for parties to evade liability through ambiguous emoji use. Additionally, emojis may render differently on different devices (e.g., Apple products vs. Android devices). Admitting emojis as evidence might also give rise to unintended agreements or commitments.

Despite the increasing complexity of emoji interpretation, their admissibility in trials should be acknowledged. Emojis expand our means of expression and can play a crucial role in conveying nuanced emotional and contextual information, fostering more accurate communication within the legal system. Language should not be interpreted solely by its plain meaning but also in the context in which it is used. This concept is similar to the canons of statutory interpretation in administrative law, where various interpretive modes are employed to derive meaning. Emojis and emoticons, in this light, are symbols that effectively convey ideas and the author’s tone, making them a significant component of contextual evidence. To prepare for the ever-expanding use of emojis and emoticons, courts and attorneys should deploy appropriate tools to develop fluency in this new ‘emoji language.’

The Complexities of Racism in AI Art

By: Imaad Huda

AI generative art is a recent advance in consumer and social artificial intelligence. Anybody can type a few words into a program and, within seconds, the AI will generate an image that roughly depicts that prompt. AI generative art can incorporate any number of artistic styles to produce digital art without anybody lifting a pen. While many users are simply fascinated by art created by their computers, few are aware of how the AI generates its images or of the implications of what it produces. Now that AI art programs have made their way into consumer hands, users have noticed stereotypical and racialized depictions in their auto-generated images. Prompts that reference employment, education, or history often produce images reflecting racial bias. As AI becomes more mainstream, racist and sexist depictions will only entrench long-standing stereotypes, and the lack of a legal standard will make the matter worse.

Quantifying the Racism 

Leonardo Nicoletti and Dina Bass of Bloomberg note that generative images take “human” biases to the extreme. In an analysis of more than 5,000 images generated with the Stable Diffusion AI, depictions produced by prompts for higher-paying jobs were compared with those for lower-paying jobs. The result was an overrepresentation of people of color in lower-paying jobs. Prompts such as “fast-food worker” yielded an image of a darker-skinned person seventy percent of the time, even though, as Bloomberg noted, seventy percent of fast-food workers are white. Meanwhile, prompts for higher-paying jobs, such as “CEO” and “lawyer,” generated images of people with lighter skin at a rate of over eighty percent, potentially proportional to the roughly eighty percent of those jobs held by lighter-skinned people. When it came to occupations, Stable Diffusion showed the most bias when depicting jobs held by women, “amplify[ing] both gender and racial stereotypes.” Among all the generations for high-paying jobs, only one image, that of a judge, depicted a person of color. Commercial facial-recognition software designed to identify people’s gender had “the lowest accuracy on darker skinned people,” presenting a problem when such software is “implemented for healthcare and law enforcement.”
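
To make the arithmetic behind these comparisons concrete, here is a minimal Python sketch of the kind of tally such a representation audit performs; the labels, counts, and baseline share below are hypothetical illustrations, not Bloomberg’s actual data or methodology.

```python
# Minimal sketch of a representation audit over generated images.
# All labels and baseline figures are hypothetical illustrations.
from collections import Counter

def representation_gap(labels: list[str], group: str, baseline_share: float) -> float:
    """Share of `group` among generated images minus its real-world share."""
    generated_share = Counter(labels)[group] / len(labels)
    return generated_share - baseline_share

# Suppose 100 images were generated for "fast-food worker" and each was
# labeled by perceived skin tone (hypothetical classifier output):
labels = ["darker"] * 70 + ["lighter"] * 30
# If roughly 30% of actual fast-food workers are darker-skinned, the model
# over-represents that group by about 40 percentage points.
print(f"{representation_gap(labels, 'darker', 0.30):+.0%}")  # prints +40%
```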

Stable Diffusion was also biased on measures of criminality. For depictions of “inmate,” the AI generated a person of color eighty percent of the time, even though people of color make up only about half of the inmates in the U.S. Bloomberg notes that the rates for generating criminals could be skewed by racial bias in U.S. “policing and sentencing” mechanisms.

The Legality

Is racism in AI legal? The answer is complicated for a number of reasons, not least because the law surrounding AI generative imaging is new. In 2021, the Federal Trade Commission (FTC) declared the use of discriminatory algorithms to make automated decisions illegal, citing opportunities for “jobs, housing, education, or banking.” New York City has also enacted its own Local Law 144, which requires that AI tools undergo a “bias audit” before aiding in employment decisions. The National Law Review states that a bias audit includes a calculation of the “rate at which individuals in a category are either selected to move on or assigned a classification” by the hiring tool. The law also states that audits “include historical data in their analysis,” and the results of the audit “must be made publicly available.”
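
As a rough illustration of the selection-rate arithmetic at the heart of such an audit, the minimal Python sketch below computes per-category selection rates and compares each category to the most-selected one, a comparison commonly called an impact ratio; the categories and counts are hypothetical, and the sketch omits the rule’s fuller requirements, such as intersectional categories and historical data.

```python
# Minimal sketch of the selection-rate calculation a bias audit centers on.
# Categories and counts are hypothetical stand-ins.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a category whom the tool advances."""
    return selected / applicants

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each category's selection rate relative to the most-selected category."""
    best = max(rates.values())
    return {category: rate / best for category, rate in rates.items()}

rates = {
    "category_a": selection_rate(40, 100),  # 40% advanced by the tool
    "category_b": selection_rate(20, 100),  # 20% advanced by the tool
}
print(impact_ratios(rates))  # {'category_a': 1.0, 'category_b': 0.5}
```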

The advancement of anti-racism laws regulating AI tools represents progress. However, how these laws pertain to AI art has yet to be seen. Laws concerning AI-generated art currently focus on theft, as AI art often copies the originality and stylistic choices of human artists. The racial depictions in AI art have not yet been addressed legally, though they could perpetuate stereotypes when used in an educational context, one of the opportunities the FTC’s 2021 declaration covers. Judges and lawmakers may not yet see AI art’s contribution to systemic racism as a legal issue that could stand in the courtroom.

What’s The Solution?

The bias in generated art stems from the model’s algorithm and training data: depending on the user’s prompt, it draws on patterns learned from internet images matching a description and style to synthesize a new image. Fed by prompts from many different users and the data available on the internet, the algorithm continuously produces these images. Almost a decade ago, Google had to pull back its consumer image-recognition features after photos of Black people were being labeled “gorillas” and “monkeys.” The reason for this, according to former Google employees, was that Google had not trained its AI with enough images of Black people. The problem in this case, again, could be a lack of representation, from too few engineers of color building the algorithms to inadequate representation in the data sets used to generate images. However, a simple fix of increasing representation is not so easy. AI models are built on models that already exist; a new model is typically based on an older one, and the biases present in the older algorithm may persist. As issues with machines get more complicated, so do the solutions. Derogatory depictions should not be allowed to stand in the absence of a legal standard, and lawmakers should take the necessary measures to end AI discrimination before it becomes a true social problem.
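
The inheritance problem described above is visible in how new image models are typically bootstrapped. The sketch below, a minimal illustration using the open-source diffusers library, loads an existing Stable Diffusion checkpoint as the starting point that any further training would build on; the model ID is illustrative, and the fine-tuning step itself is elided.

```python
# Minimal sketch: a "new" model usually begins as a copy of an older model's
# weights, so associations the old model learned -- biased ones included --
# carry forward unless new training data corrects them.
# Requires the `diffusers` package; the model ID below is illustrative.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
# At this point the pipeline's weights are exactly the older model's weights.
# Fine-tuning (elided here) would update them, but depictions the new data
# never contradicts tend to persist.
```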

Even Better Than The Real Thing: U2 and Brambilla Bring Elvis to Life

By: Bella Hood

Thanks to social media, U2’s visual performance at the Las Vegas Sphere is one few can claim not to be at least tickled by. The imposing round structure stands 366 feet tall and 516 feet wide and seats nearly 18,000 people. A creative project of James Dolan, the executive chair of Madison Square Garden and owner of the New York Knicks and Rangers, the novel entertainment venue was completed in September 2023 on an astounding $2.3 billion budget.

U2 holds the honor of christening the venue with a multi-month residency that has been so well received that the band just announced 11 additional shows for January and February of 2024, bringing the total to 36 performances. Perhaps surprisingly, while droves of middle-aged suburbanites filed in to scratch their ’80s-rock nostalgia itch, the music took a backseat to the immersive visual experience delivered by a 16K wraparound LED screen.

Several songs into U2’s performance, a 4-minute whimsical display of hundreds of images of Elvis Presley engulfs the venue and transcends all existing mental perceptions of The King.

An artist known for his elaborate re-contextualizations of popular and found imagery, as well as his pioneering use of digital imaging technologies in video installation and art, Marco Brambilla leveraged AI to portray Elvis in a fantastical, sci-fi-esque light. He fed clips from over 30 Elvis movies into the 2022 text-to-image model Stable Diffusion, the more realistic-looking sibling of DALL-E 2.

U2 and Elvis may sound like an odd coupling, but the band’s lead singer, Bono, has been a vocal supporter of the icon for decades. In fact, U2’s lyrics are sprinkled with allusions to The King of Rock and Roll, and at times even overt references, including the song titled “Elvis Presley and America.”

Regardless of how famous a musician or band may be, one cannot use just any person’s likeness on a whim. Failure to obtain permission from that person, or their estate, can give rise to a right-of-publicity claim for misappropriation of likeness. While many aspects of entertainment law involve overlapping state and federal government oversight, this issue is largely state-specific. According to the American Bar Association, the modern test requires two elements:

  • the defendant, without permission, has used some aspect of the plaintiff’s identity or persona in such a way that the plaintiff is identifiable from the defendant’s use; and
  • the defendant’s use is likely to cause damage to the commercial value of that persona.

Without a doubt, Elvis’s estate is well versed in likeness law. In 1981, his ex-wife, Priscilla Presley, established Elvis Presley Enterprises, Inc. (EPE). Currently, Authentic Brands Group (ABG) owns roughly 85% of EPE, with the remainder belonging to The King’s only child, Lisa Marie Presley. The estate does not shy away from the legal system, vigorously protecting the cultural icon’s legacy.

Past targets of EPE include wedding chapels in Las Vegas, Nevada, gun manufacturer Beretta (headquartered near Milan, Italy), and a nightclub in Houston, Texas called the Velvet Elvis.

This raises the question: how was Brambilla able to create, and U2 able to display, an entire video montage of hundreds of versions of Elvis for the entire length of the song “Even Better Than The Real Thing”? Despite speaking to multiple outlets, Brambilla has yet to confirm or deny whether he had permission to use over 12,000 film samples of Elvis’s performances.

In gauging the propensity to sue over likeness or otherwise, one should consider the parties involved. ABG was valued at $12.7 billion in 2021 after nearly going public, and it also owns the intellectual property rights to Marilyn Monroe, Muhammad Ali, and Shaquille O’Neal.

Between the behemoth’s vast legal resources, the Sphere’s already-famous profile, and U2’s success thus far with the residency, it seems unlikely at this point that Authentic Brands Group is unaware of the Elvis tribute. If ABG wanted to send a cease and desist, it presumably would have done so by now. Even if a lawsuit were imminent, ABG would be hard-pressed to demonstrate that U2 and Brambilla’s portrayal of Elvis is even remotely damaging to his commercial persona.

Move Fast and Break Things: Ethical Concerns in AI

By: Taylor Dumaine

In Jurassic Park, Dr. Ian Malcolm famously admonished the park’s creator: “Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” Technological advancement for its own sake ignores the genuine harms that advancement can cause or contribute to, negative externalities that have often been overlooked or ignored. There is also a common reliance on federal and state governments to regulate industry rather than on self-regulation or ethics standards. That reliance has become especially pronounced in the AI and generative AI spaces, where government regulation is far outpaced by the technology’s rapid development, hindering the government’s ability to address ethical issues adequately.

Relying on government regulation is a copout for large tech companies. Congress’s record on technology regulation is poor at best, with most bills failing to become law and those that do pass proving insufficient to regulate effectively. The United States still does not have a national privacy law, and there is little political will to pass one. An increasingly octogenarian Congress does not have the best track record of understanding basic technological concepts, let alone the increasingly complicated technologies, such as AI, it is tasked with regulating. During Senate testimony regarding the Cambridge Analytica scandal, Meta CEO Mark Zuckerberg had to explain some fairly rudimentary internet concepts.

Earlier this year, OpenAI CEO Sam Altman called for government regulation of AI in testimony before Congress. Altman also reportedly carries a backpack that would allow him to remotely shut down ChatGPT’s data centers should the generative AI go rogue. While by no means a perfect example of ethics in the AI space, Altman seems at least aware of the risks of his technology. Still, he relies on the federal government to regulate that technology rather than engaging in any meaningful self-regulation.

In contrast to Altman, David Holz, founder and CEO of Midjourney, an image-generation AI program, is wary of regulation, saying in an interview with Forbes: “You have to balance the freedom to do something with the freedom to be protected. The technology itself isn’t the problem. It’s like water. Water can be dangerous, you can drown in it. But it’s also essential. We don’t want to ban water just to avoid the dangerous parts.” Holz’s stated goal is to promote imagination; he seems less concerned with how pursuing that goal may harm some people so long as others benefit. This thinking is common in tech spaces.

Even the serious issues in generative AI, such as copyright infringement, seem almost mundane next to facial-recognition tools such as Clearview AI. Dubbed “The Technology Facebook and Google Didn’t Dare Release,” these tools have the disturbing ability to recognize faces across the internet. Clearview AI specifically has raised serious Fourth and Fifth Amendment concerns regarding police use of its software. Surprisingly, the large tech companies Apple, Google, and Facebook served as de facto gatekeepers of this capability for over a decade: having acquired facial-recognition technology, they recognized its dangers and withheld it. Facebook was nonetheless subject to a $650 million lawsuit related to its own use of facial recognition on its platform. Clearview AI’s CEO, Hoan Ton-That, has expressed no ethical qualms about the technology he is creating and marketing specifically to law enforcement. Clearview AI is backed by Peter Thiel, who co-founded Palantir, a company with its own record of police and government surveillance concerns. The potential integration of the two could produce an Orwellian situation. Clearview AI thus represents a worst-case scenario for tech without ethical limits, the effects of which have already been disastrous.

Law students, medical students, and Ph.D. students are all required to take an ethics class at some point; many self-taught programmers never incorporate the study of ethics into their learning. There are very real and important ethical concerns in technology development. In an age, culture, and society that values advancement without pausing to consider the negative ramifications, society’s concern over ethics in technology is unlikely to change much on its own. In a perfect scenario, government regulation would be swift, well informed, and effective in protecting against the dangers of AI. Given the rate of technological innovation, it is hard to stay proactive on ethics, but that does not mean there should be no attempt. A professional ethics standard for computer science and software engineering would carry its own serious problems and would be almost impossible to implement. However, by creating a culture where ethical concerns are not just voiced but genuinely considered in the development of new technology, we can hopefully avoid a Jurassic Park scenario.

Rights of the Dead: Human Remains and Museum Collections

By: Sarah Fassio

There are 255 human brains in the Smithsonian’s Natural History Museum storage facility in Maryland. In Pennsylvania, the Penn Museum housed over 900 human skulls as recently as 2021. The American Museum of Natural History in New York holds the remains of some 12,000 individuals. These collections, amassed throughout the nineteenth and twentieth centuries, represent the non-consensually acquired remains of historically exploited groups—particularly Indigenous populations and people of color—and are a tangible legacy of white-supremacist pseudoscience in the United States.

It is an almost cliché tableau: to think of a museum of natural history, anthropology, or medical science is to envision carefully curated exhibits of skeletons, cuts of brains, and jars of preserved organs. Such displays often disquietingly blur the distinction between an impartial academic teaching tool and the actual body of a real person. They also raise questions about who is being displayed: how did these human remains come to be acquired and exhibited by museums and academic institutions?

There are, of course, those who donated their bodies to science—a practice still in place today. The University of Washington runs the Willed Body Program, a whole-body donation opportunity for individuals from Washington State, and dozens of universities across the country have similar programs. However, whole-body donation in the twenty-first century is a process laden with paperwork and legal boundaries. The Mayo Clinic, for example, requires that the prospective donor themselves sign an Anatomical Bequest Consent Form; signatures from an individual’s medical power of attorney or guardian are insufficient for this process, and if a next of kin opposes the donation, it will not occur.

Historically, no such formalized donation procedures underpinned some of the grander museum collections of human remains, nor were their origins so scientific. Viewed today as discredited pseudoscience, many collections of human remains were predicated on proving the principles of white supremacy and anatomical racism: the belief that white superiority was due to structural differences between races. White scientists robbed graves, exploited those too poor to afford proper burials, or outright stole bodies from Black and Indigenous communities. Many immediate family members were unaware their loved ones’ bodies were held by museums. Many are still unaware.

Faced with such a horrifying past, what can be done to move forward? Are there currently legal structures that encourage museums to properly confront their inventories of human remains?

The law is not entirely silent on the issue of misplaced human remains. One avenue for recourse for some Indigenous communities is the Native American Graves Protection and Repatriation Act of 1990 (NAGPRA). NAGPRA is concerned with items of Indigenous cultural significance, covering things like human remains and funerary or sacred items. It provides a process for federal agencies and museums to repatriate or transfer those pieces back to their rightful homes.

But the more unfortunate truth is that no comprehensive laws aim to ameliorate the ugly collection practices of yesterday’s anatomical racism. The Smithsonian, for instance, requires those with a personal interest in or legal right to the remains to submit a formal request. But doing so remains difficult, if not impossible, since many living relatives are unaware these collections even exist—much less that their lineage is an unwilling part of them. The American Museum of Natural History, for instance, holds records naming the individuals to whom the human remains once belonged, but declined as recently as this month to release a list.

Ultimately, change and repatriation seem largely left up to the museum institutions themselves, often motivated by public pressure and activism. The Penn Museum’s recent treatment of the Morton Cranial Collection—900 human skulls obtained by early-nineteenth-century scientist Dr. Samuel Morton for the purpose of articulating racial differences—is one encouraging example of visible change.

Beginning in 2020, the Penn Museum formed an evaluation committee, published a report on contributions to the Morton Collection by Black Philadelphians, and recommended burial and commemorative actions. Among the recommendations were an interfaith memorial service, the erection of a permanent remembrance marker on the University of Pennsylvania’s campus, and participation in a community-led transparency forum. 

In February 2023, the Philadelphia Orphans’ Court granted the museum’s request to respectfully bury the cranial remains of twenty individuals in a historic African American cemetery. For those, at last, a final rest.