Spicy Lawsuit May Lead to Sweet Payout: Fireball Class-Action

By: Nicholas Neathamer

Known for its slogan, “Tastes Like Heaven, Burns Like Hell,” Fireball Cinnamon Whisky is a popular liquor with a sweet taste and a spicy, cinnamon aftertaste. The mixture of Canadian whisky, cinnamon flavoring, and sweeteners is produced by the Sazerac Company, and shots of the alcohol are sold in distinctive 50 milliliter bottles around the world. But to the dismay of at least one customer, that distinctive packaging has recently been used with only minor changes to sell shots of Fireball Cinnamon, a beverage that contains no whisky at all but rather is a flavored malt beverage (FMB) with only half the alcohol by volume of its whisky counterpart. Anna Marquez, that spurned purchaser of the FMB variant, has filed a class-action lawsuit in the United States District Court for the Northern District of Illinois against Sazerac Company, Inc.

Marquez seeks to certify two classes of Fireball Cinnamon purchasers for the lawsuit. The first consists of purchasers in Illinois, her home state, while the second consists of those from eleven other states: North Dakota, Wyoming, Idaho, Alaska, Iowa, Mississippi, Arkansas, Kansas, Arizona, South Carolina, and Utah. Marquez’s complaint lists a variety of claims, including violations of the Illinois Consumer Fraud and Deceptive Business Practices Act, various state consumer fraud acts, breaches of express and implied warranties, negligent misrepresentation, fraud, and unjust enrichment. While these claims contain different elements, they collectively boil down to one main premise: by using near-identical packaging, Sazerac has deceived whisky-seeking consumers into purchasing a flavored malt beverage at whisky-level prices.

Aside from the one-word change in its name, Fireball Cinnamon (the FMB) has packaging nearly identical to that of Fireball Cinnamon Whisky. Both come in 50 mL, identically colored bottles with a red cap and yellow label, and both drinks have an amber color. Both feature the brand’s red devil on their labels, sandwiched by the words “RED” and “HOT.” Combined, the similarities allegedly caused Marquez to mistake Fireball Cinnamon for Fireball Cinnamon Whisky, as well as to pay more than she would have had she realized that it did not contain distilled spirits of any kind. To this point, Marquez claims that the similarities have allowed Sazerac to take advantage of “consumers’ cognitive shortcuts made at the point-of-sale.” The only other difference in Fireball Cinnamon’s packaging is a statement of the beverage’s composition in the smallest allowable font, stating that it is a “Malt Beverage with Natural Whisky & Other Flavors and Caramel Color.” Even this description is misleading, as it can easily be misinterpreted to mean that Fireball Cinnamon contains whisky, when in fact it contains only whisky flavoring.

Many may wonder why this matters. After all, haven’t consumers been happily enjoying Fireball Cinnamon’s familiar taste and getting buzzed regardless? Aside from the general principle that consumers should be able to have reasonable confidence in distinguishing between the products they purchase, one point that Marquez’s complaint fails to elaborate on thoroughly is its argument that the false and misleading representations have allowed Fireball Cinnamon to be “sold at a premium price” of $0.99 per 50 mL bottle. While that may not sound steep, it is far above the average price of malt-based beverages. In fact, as of December 2022, the average price per 50 mL of all malt-based beverages in the United States was only $0.1834. This means that consumers are paying more than five times as much for Fireball Cinnamon as for beverages of similar composition. And while one could argue that shoppers are willing to pay this higher price, that argument fails when considering that many purchasers may be mistaking the FMB for Fireball Cinnamon Whisky.

Another reason to care is that Sazerac’s tactics allow it to sell beverages where it previously could not. Many states heavily restrict where distilled spirits and liquors above a certain ABV can be sold, typically allowing products such as Fireball Cinnamon Whisky shot bottles to be purchased only in liquor stores. By introducing the lower-ABV, malt-based Fireball Cinnamon, Sazerac is able to sell its products in the variety of locations licensed to sell beer, wine, and malt-based beverages. This includes selling Fireball Cinnamon shot bottles in many grocery stores and gas stations, and Sazerac itself touts that it is now able to sell the FMB in approximately 170,000 additional stores in the United States. If whisky-seeking consumers continue to be deceived into purchasing Fireball Cinnamon at these locations where hard liquors are not allowed, it may give Sazerac an unfair advantage in both sales and brand recognition.

While we will have to wait to see whether Marquez is able to certify the two classes of purchasers and prevail on any of her claims in federal court, Sazerac’s two beverages put the issue of deceptive marketing and packaging on clear display. The clever similarities in the packaging of Fireball Cinnamon Whisky and Fireball Cinnamon, and the success of the FMB in grocery stores, demonstrate that however sophisticated we may like to consider ourselves as consumers, companies are still able to take advantage of our lack of time and our desire for convenience. No matter the outcome, this case may cause shoppers to take a second look at what they toss into their shopping carts.

The Space Regulation Race: Modernizing Space Law for Modern Industry

By: Cooper Cuene

“We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard . . .”

  • President John F. Kennedy, Address at Rice University on the Nation’s Space Effort (1962)

“Gotta go to space. Yeah. Gotta go to space.”

  • Space Core, Portal 2 (2011)

Fifty years after the success of Apollo 11, the United States is returning to the Moon. Following the success of the first mission of NASA’s Artemis program, the stage has been set for American astronauts to venture beyond low earth orbit for the first time in decades. The landscape of space flight, however, is very different, and the industry far more crowded, than in the days of the Apollo program. As NASA prepares its missions to the Moon and eventually beyond to Mars, an environment crowded with other stakeholders, both public and private, awaits it. Inevitably, zero-gravity disputes will arise between these stakeholders, but when they do, what forums and rules will be used to resolve them?

To put it gently, the law governing space is far from concrete. The majority of international space law is laid out in a small handful of treaties, the most important being the United Nations Outer Space Treaty. With 109 nations among its signatories, the treaty’s provisions represent an international consensus. That said, reading the Outer Space Treaty makes it clear that the agreement is a product of the Cold War and may not be suitable for the increasing number of public and private stakeholders participating in today’s space industry. Notably, the treaty’s terms read more like principles than well-defined regulations. Examples include “outer space shall be free for exploration and use for all states” and “astronauts shall be regarded as the envoys of mankind.” A nice sentiment, to be sure, but not the clear regulations needed to manage the growing space industry.

One area where this lack of regulation could prove especially damaging is the growing problem of space debris. While international regulations lag behind the lightning-quick pace of private spaceflight, the debris left behind by those flights has accumulated in low earth orbit, defined as the range between 160 and 1,000 kilometers above earth’s surface. SpaceX, for example, recently lost 40 Starlink satellites to a geomagnetic storm. While those satellites reentered Earth’s atmosphere and burned up, plenty of other junk has taken up a concerning amount of space in low earth orbit, where active satellites normally reside. Much of the debris is harmless, but the FAA has so far failed to implement a meaningful regulatory regime to govern the creation and disposal of this kind of space junk.

Space agencies and non-governmental organizations (NGOs) have begun to step up and propose regulations where Congress and other international bodies have failed to act. The Inter-Agency Space Debris Coordination Committee (IADC), for example, is a coalition of space administrations that publishes guidance on how to avoid the creation of space debris. While various government space agencies typically follow the rules suggested by the IADC, the same can’t be said of private spaceflight companies. Moreover, efforts to give IADC guidance the force of law are met (unsurprisingly) with pushback from both private spaceflight companies and nations opposed to new regulations on their own space agencies. A failure to curtail the creation of space junk could jeopardize future space missions as well as traditional flight and even people on the ground. While the IADC guidelines are a start, without true government action to create regulatory boundaries for private spaceflight companies, we risk an unregulated and dangerous environment in low earth orbit.

So, what happens when a space-related commercial dispute occurs? As of now, there’s no good answer to that question. Like the other areas of space law discussed above, the UN’s Convention on International Liability for Damage Caused by Space Objects, or simply the Liability Convention, governs dispute resolution at a high level. However, the Liability Convention has only been invoked once, after a Russian spacecraft scattered a load of radioactive material over Canada. Hopefully, this won’t be the type of dispute that becomes common going forward. 

Existing commercial litigation doesn’t give us many answers either. Cases so far have involved government contract and intellectual property disputes within the space industry rather than cases that are unique to space. Law firms and the government alike have generally embedded their space law attorneys within larger aerospace practice groups, again with the most common disputes centering on patents on satellites and rockets. In contrast to the sluggish pace of legal innovation in the US, the UAE has made efforts to establish a dedicated space court in Dubai for handling disputes ranging from collisions between spacecraft to litigation over satellite purchases. Only time will tell which approach wins out, but regardless of where space law begins to take shape, it will be an area of law ripe for innovation in the decades to come.

The Reality of Deepfakes: The Dark Side of Technology

By: Kayleigh McNiel

We’ve all seen the viral Tom Cruise Deepfake or played around with the face-swapping Snapchat filters. But the dark reality of deepfake technology is far more terrifying than an ever-youthful Top Gun star. 

Deepfakes are images and videos digitally altered using artificial intelligence (AI) and machine learning algorithms to superimpose one person’s face seamlessly onto another’s. They can be incredibly realistic and often impossible to detect with the naked eye. Many websites and apps allow anyone with access to a computer to produce images and videos of someone saying or doing something that never actually happened.

While lawmakers and the media have focused their concerns on the potential impact of political deepfakes, nearly all deepfakes online are actually non-consensual porn targeting women. Gaps in the law and easy access to deepfake technology have created a perfect storm, where anyone can make their most perverse fantasy come to life, at the expense of real people.

The Tech Behind The Fakes

Deepfakes are created using generative adversarial networks (GANs), in which two machine learning models (an image generator and an image discriminator) work in tandem to create and refine the fakes. The process begins by feeding each model the same source data, i.e., images, video, or even audio. Then the generator iteratively creates new samples of the target until the discriminator can no longer tell whether a generated image is a real image of the target or a fake.
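For technically curious readers, the adversarial loop described above can be sketched in a few lines of code. This is a deliberately tiny, purely illustrative toy, not a real deepfake system (which trains deep convolutional networks on images); every name and number here is invented for illustration. The “generator” learns a single shift so its output mimics simple one-dimensional data, while the “discriminator” is a logistic classifier trying to tell real samples from generated ones:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy setup: "real" data is drawn from N(4, 1); the generator applies a
# learned shift b to noise, G(z) = z + b. The discriminator is a logistic
# classifier D(x) = sigmoid(w*x + c). All parameters are illustrative.
b = 0.0          # generator parameter (shift applied to noise)
w, c = 0.0, 0.0  # discriminator parameters
lr = 0.05        # learning rate for both players
batch = 64

for step in range(3000):
    real = rng.normal(4.0, 1.0, batch)      # samples from the true distribution
    fake = rng.normal(0.0, 1.0, batch) + b  # generator output G(z) = z + b

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator update: push D(fake) toward 1 (non-saturating GAN loss),
    # i.e., nudge b so the fakes better fool the current discriminator
    d_fake = sigmoid(w * fake + c)
    grad_b = np.mean(-(1 - d_fake) * w)
    b -= lr * grad_b

# After training, b should have drifted toward 4, the mean of the real data,
# meaning the discriminator can no longer reliably separate real from fake.
print(round(b, 2))
```

The key point the sketch illustrates is the tandem training the article describes: each player’s improvement forces the other to improve, and the process stops being useful to the discriminator exactly when the fakes have become statistically indistinguishable from the real samples.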

Historically, creating a truly realistic and quality deepfake required dozens of images of a person with enough similarities to the original subject. That is, until July 2022, when Samsung developed MegaPortraits, a technique that creates high-resolution deepfakes from a single image. Now, highly realistic deepfakes can be made from a single innocuous selfie posted online.

With advancements in technology, detecting deepfakes has become increasingly difficult. In response, researchers have raced to develop more accurate detection tools. For example, in July 2022, computer scientists at the University of California, Riverside created a program that detects manipulated facial expressions in videos and images with up to 99% accuracy. While promising, there is still a long way to go before this or similar detection tools are widely available to law enforcement, consumer protection agencies, and the public.

The Dark Side of Deepfakes

Realistic deepfakes pose an enormous risk to politicians and fair elections. Many deepfakes have already surfaced of high-profile politicians engaging in acts designed to undermine their credibility. In March 2022, Russian hackers posted to Ukrainian news outlets and social media a deepfake video of Ukrainian President Volodymyr Zelenskyy telling his soldiers to surrender. While the video was quickly debunked, it demonstrates how this technology is likely to become a standard tactic used by adversaries to interfere in politics.

While political deepfakes do pose a very real danger to our democratic institutions, the technology is currently primarily used to victimize women. A 2019 report by Deeptrace confirmed that 96% of all deepfakes online are non-consensual porn targeting women, and the number of such deepfakes is rapidly growing. Cybersecurity firm Sensity reports that the volume of deepfakes online nearly doubles every six months, largely due to the increasing availability of cheap and easy deepfake technology. Free face-swapping software in apps like Deepnude, Deepswap, and FaceMagic is commonly used to create deepfake porn. Scammers have even begun using these tools in extortion schemes, threatening to release the fake videos to victims’ family, friends, and employers unless they pay up.

Having your likeness stolen and used to perform degrading sex acts without your consent is becoming a disturbing reality for celebrities and women in the public eye. A quick Google search reveals nearly a dozen websites with hundreds of deepfake porn videos using the faces of celebrities like Emma Watson, Gal Gadot, and Maisie Williams, among many others. Earlier this month, Twitch streamer Atrioc was forced to apologize after he accidentally revealed he had used a website dedicated to sharing deepfake porn of popular female streamers, many of whom he is friends with in real life.

While celebrities are most at risk, there are websites (which I will not name here) specifically designed for men to create non-consensual deepfake porn of the women in their lives. An anonymous user released an AI bot, no longer publicly active, on the messaging app Telegram that rapidly generated thousands of deepfakes of women and underage girls from photos uploaded by men seeking revenge. An investigation by Sensity found that these deepfakes were shared over 100,000 times before the bot was reported to the platform.

To add insult to injury, women who speak out against revenge porn are often the targets of relentless online harassment. Kate Isaacs, a 30-year-old woman from the UK, became a victim of deepfake porn after she successfully campaigned for Pornhub to remove nearly 10 million non-consensual and child porn videos. Afterwards she was subjected to humiliating and terrifying harassment from men who “felt they were entitled to non-consensual porn.” They posted her work and home addresses online and threatened to follow her, rape her, and then post the video of it on Pornhub. Shortly thereafter, deepfake porn videos of her began to circulate online.

Many victims of deepfake and revenge porn are forced to shut down their social media accounts and minimize their online presence to avoid further harassment and embarrassment. It is somewhat ironic that the men who seek to silence women by creating and sharing these videos often do so under the guise of the First Amendment. The dangers of deepfakes are undeniable, but women have largely been left to fend for themselves.

Our Legal System Is Not Ready for This

The combination of a lack of awareness and the difficulty of detecting deepfakes creates a significant challenge for victims when reporting. Most law enforcement agencies lack the training and software to confirm that a video is a deepfake. Even if law enforcement can prove it is a forgery, by the time they do so, significant damage is already done. People have already seen what they believe to be the victim engaging in degrading sex acts. Those images can never be unseen and will continue to damage victims’ reputations, relationships, and mental health.

The legal system has been slow to react to the threat women face from deepfake porn. While 48 states and Washington D.C. finally have laws against the creation and distribution of non-consensual “revenge” pornography, only three have specifically banned deepfake porn. In 2018, proposed federal deepfake legislation died in the Senate. The state laws prohibiting deepfakes will likely face huge hurdles from First Amendment and personal jurisdiction challenges:

  • In 2019, Texas was the first State to ban deepfakes, but only those intended to influence elections. 
  • Also in 2019, Virginia amended its “revenge porn” statute to include deepfakes. 
  • In 2020, California prohibited the creation of deepfakes within 60 days of an election and for unauthorized use in pornography.
  • Also in 2020, New York passed a law protecting a person’s likenesses from unauthorized commercial use as well as non-consensual deepfake pornography.

In states without laws against deepfakes, victims will be forced to find relief through a patchwork of consumer privacy protection, defamation, and revenge porn laws. Notably, many states’ revenge porn laws do not apply to deepfakes because the victim’s body is not actually being portrayed.

Biometric privacy laws could be used to combat deepfake porn in states like Illinois, Texas, Washington, New York, and Arkansas, where residents can file a civil claim against those who use their faceprints, facial mapping, or identifiable images without their consent. Similarly, defamation claims could potentially be brought against the creators of deepfake porn. 

Even if a clearly applicable law exists, bringing any civil claim requires the victim to be able to prove the identity of the video’s creator. This can be incredibly challenging when websites and apps allow users to upload videos with near total anonymity. The bottom line is that current laws do little to deter deepfake creators from continuing to victimize women for their own pleasure. 

What Are Tech Platforms Doing To Fix the Problem They Created?

The tech platforms on which deepfakes are widely shared are completely shielded from legal liability under Section 230 of the Communications Decency Act. Without any consequences, it has been difficult to get platforms to address the impact that content shared on their sites has on people’s lives. Still, some have taken action against deepfakes. In 2018, both Reddit and Pornhub banned deepfake porn, categorizing it as inherently non-consensual. The following year, Discord banned the sale of Deepnude, an app designed to remove clothing from women (yes—only women) in photos. Apple removed the Telegram deepfake bot from its iOS platform for violating its guidelines. Pornhub and YouPorn both redirect users searching for deepfakes to a warning that they’re searching for potentially illegal and abusive sexual material. Users are then provided with directions on how to request the removal of content and resources for victims. Telegram, on the other hand, has never publicly commented on the bot and has never identified its creator.

While these efforts are promising, more still needs to be done. Tech companies, lawmakers, and communities must work together to regulate the use of deepfake technology.

If you or someone you know has been the victim of online sex abuse, you are not alone. Support is available through the Cyber Civil Rights Initiative online or via their 24-hour hotline at 1-844-878-2274.

Post-Dobbs: A Whole New World of Privacy Law

By: Enny Olaleye

Last summer, the United States was rocked by the U.S. Supreme Court’s (SCOTUS) ruling in Dobbs v. Jackson Women’s Health Organization, a landmark decision striking down the right to abortion and overruling both Roe v. Wade and Planned Parenthood v. Casey. In its wake, the Dobbs decision left many questioning whether their most sensitive information—information relating to their reproductive health care—would remain private. Dobbs set in motion a web of state laws that make having, providing, or aiding and abetting the provision of abortion a criminal offense, and many now fear that enforcing those laws will require data tracking. Private groups and state agencies ranging from the health tech sector to hospitality industries may be asked to turn over data as a form of cooperation or as part of the prosecution of these new crimes.

Thus, the question arises: Exactly how much of my information is actually private?

When determining one’s respective right to privacy, it is important to consider what “privacy” actually is. Ultimately, the scope of privacy is wide-ranging. Some may consider the term by its literal definition, where privacy is the quality or state of being apart from company or observation. Alternatively, some may conceptualize privacy a bit further and view privacy as a dignitary right focused on knowledge someone may or may not possess about a person. Others may not view privacy by its definition at all, but rather cement their views in the belief that a person’s private information should be free from public scrutiny and that all people have a right to be left alone.

Regardless of one’s opinions on privacy, it is important to understand that, with respect to the U.S. Constitution, you have no explicitly recognized right to privacy.

How could that be possible? Some may point to the First Amendment, which preserves a person’s rights of speech and assembly, or perhaps to the Fourth Amendment, which restricts the government’s intrusion into people’s private property and belongings. However, these amendments focus on specific privacy protections tied to freedom and liberty, with the goal of limiting government interference. They do not constitute an explicit, overarching constitutional right to privacy. While the right to privacy is not specifically listed in the Constitution, the Supreme Court has recognized it as an outgrowth of protections for individual liberty.

In Griswold v. Connecticut, the Supreme Court concluded that people have privacy rights that prevent the government from forbidding married couples from using contraception. That ruling first identified people’s right to independently control the most personal aspects of their lives—thus creating an implicit right to privacy. Later, in Roe v. Wade, the Court extended this right of privacy to include a woman’s right to have an abortion, holding that “the right of decisional privacy is based in the Constitution’s assurance that people cannot be ‘deprived of life, liberty or property, without due process of law.’” The Roe decision largely rested on the notion that the 14th Amendment contains an implicit right to privacy, as well as protection against state interference in a person’s private decisions more generally. However, the Dobbs ruling has now dismissed this precedent, with the implicit right of privacy no longer extending to abortion. In a 6-3 majority, the Court reasoned that abortion lacked due process protection because it was not mentioned in the Constitution and was outlawed in many states at the time of the Roe decision.

Fast forward to today—some government entities have attempted to make progress in preserving an individual’s privacy, particularly in relation to their healthcare. The Biden administration released an executive order aimed at protecting access to abortion and treatment for pregnancy complications. Additionally, the Federal Trade Commission has started to implement federal privacy rules for consumer data, citing “a need to protect people’s right to seek healthcare information.” However, most of this progress centers on a misconception that “privacy” and “data protection” are the same thing. 

So, let’s set the record straight: privacy and data protection are not the same thing. 

While data protection does stem from the right to privacy, it mainly focuses on ensuring that data has been fairly processed. With the concept of privacy constantly intertwined with freedom and liberty over the past few decades, it can be difficult for people to fully grasp exactly which of their information is private. The Dobbs majority pointed out a distinction between privacy and liberty, noting that “as to precedent, citing a broad array of cases, the Court found support for a constitutional ‘right of personal privacy.’ But Roe conflated the right to shield information from disclosure and to make and implement important personal decisions without governmental interference.”

There is a valid concern that personal information, ranging from instant messages and location history to third-party app usage and digital records, can end up being subpoenaed or sold to law enforcement. In response to the Dobbs decision, the U.S. Department of Health and Human Services issued guidance that unless a state law “expressly requires” reporting on certain health conditions, the HIPAA exemption for disclosure to law enforcement would not apply. However, some people may not realize that application privacy agreements and HIPAA medical privacy rules are not automatically protected against subpoenas. Meanwhile, data brokers will not hesitate to sell to the highest bidder any and all personal information they have access to.

“So now what?” 


Ultimately, the Dobbs decision serves as a rather harsh reminder of just how valuable our privacy is, and what can happen if we lose it. As some of us have already realized, companies, governments, and even our peers are incredibly interested in our private lives. With respect to protecting reproductive freedom, it is imperative to establish federal privacy laws that protect information related to health care from being handed over to law enforcement unless doing so is absolutely necessary to avert substantial public harm. While it is unfortunate that individuals are placed in positions where they are solely responsible for protecting themselves against corporate or governmental surveillance, it is imperative for everyone to remain vigilant and aware of where their information is going.

Alice in Algorithm-land: Legal recourse for victims of content-recommendation rabbit holes

By: Cameron Eldridge

There was a time early in the social media landscape when all anyone could tell about you from the content of your feed was who you followed: friends, family, preferred news networks, favorite TV shows, or bands. However, content-recommendation algorithms, which were once used only for advertising, are now the backbone of social media platforms, determining what users see and when they see it.

The content-recommendation algorithms used by Facebook, Instagram, Twitter, and TikTok have one goal: maximizing user engagement, which means showing users whatever will keep them looking. This can benefit users when liking one video of an adorable baby animal means they get fed more of them. But it can also be dangerous, when a single interaction with content about mental illness or a terrorist organization can trigger the algorithm to send users spiraling down a rabbit hole, slowly distorting how they view themselves and how they interact with the world. Unfortunately, due to Section 230, when users find that they or their loved ones have been victims of these rabbit holes, they are often left with no one to legally blame.

Shattering the Section 230 shield

Section 230(c)(1) of the Communications Decency Act immunizes “interactive computer services” like social media platforms for publishing content created by another party. Historically, Section 230 has served as a shield protecting social media platforms from any and all liability for harmful videos, comments, and posts made on their platforms. So when a Louisiana teen’s family sues Meta because the teen killed herself after being fed content about suicide and self-harm, or when the family of a ten-year-old who choked to death while participating in a TikTok challenge sues TikTok, the companies can avoid any consequences. If victims of the algorithm want any chance at holding social media platforms accountable, they’ll need a more creative legal strategy than content-based attacks.

A flaw in the design

A recent products liability claim against Meta, brought by the Social Media Victims Law Center on behalf of plaintiff Alexis Spence, attempts to hold Instagram accountable by arguing that Instagram’s feed and explore features are defective by design. Spence, who was eleven years old when she first started using Instagram and who now, at twenty, suffers from severe mental illness, claims that these design features of the Instagram app are the but-for cause of her injuries. While it is too early to tell how Spence’s case will pan out, there is some supporting precedent in another recent case, Lemmon v. Snap, Inc., in which the court held that Section 230 did not shield Snapchat from another design-based claim, one seeking to hold it liable for foreseeable injuries resulting from its ‘speed filter.’

Another promising strategy currently being tested is an attack on the recommendation algorithm itself. Next month, the question of whether Section 230 should protect platforms when they make targeted recommendations of information, or only when they engage in traditional editorial functions like publishing or withdrawing content, will be raised in front of the Supreme Court by University of Washington Law Professor Eric Schnapper in Gonzalez v. Google.

Gonzalez is brought on behalf of Nohemi Gonzalez, a 23-year-old U.S. citizen who was studying in Paris in November 2015, when she was murdered in one of a series of violent ISIS attacks that resulted in the deaths of over a hundred people. The complaint alleges that YouTube not only unknowingly published hundreds of ISIS recruitment videos but also affirmatively recommended those videos to users, and that these recommendations go beyond the traditional editorial functions of a publisher that Section 230 textually protects.

Many in the tech world fear that alterations to Section 230 protections like those Gonzalez seeks would render the existence of social media platforms legally impossible. How would apps like TikTok, which is based almost entirely on its content-recommendation algorithm, continue to function if they could be held liable for every consequence of that algorithm? A ruling against Google would certainly change social media platforms as we know them, but it may also force them to take more responsibility for the kind of rabbit holes they’re sending users down. While this would pose a financial and logistical burden, it’s one that tech companies like Meta and Google probably can and should bear.