Post-Dobbs: A Whole New World of Privacy Law

By: Enny Olaleye

Last summer, the United States was rocked by the U.S. Supreme Court’s ruling in Dobbs v. Jackson Women’s Health Organization, a landmark decision striking down the right to abortion, thereby overruling both Roe v. Wade and Planned Parenthood v. Casey. In its wake, the Dobbs decision left many questioning whether their most sensitive information—information relating to their reproductive health care—would remain private. Dobbs set in motion a web of state laws that make having, providing, or aiding and abetting the provision of abortion a criminal offense, and many now fear that enforcing those laws will require data tracking. Private groups and state agencies, ranging from the health tech sector to the hospitality industry, may be asked to turn over data as a form of cooperation or as part of the prosecution of these new crimes. 

Thus, the question arises: Exactly how much of my information is actually private?

When determining one’s respective right to privacy, it is important to consider what “privacy” actually is. Ultimately, the scope of privacy is wide-ranging. Some may consider the term by its literal definition, where privacy is the quality or state of being apart from company or observation. Alternatively, some may conceptualize privacy a bit further and view it as a dignitary right focused on knowledge someone may or may not possess about a person. Others may not view privacy by its definition at all, but rather cement their views in the belief that a person’s private information should be free from public scrutiny and that all people have a right to be left alone. 

Regardless of one’s opinions on privacy, it is important to understand that, with respect to the U.S. Constitution, you have no explicitly recognized right to privacy. 

How could that be possible? Some may point to the First Amendment, which preserves a person’s rights of speech and assembly, or perhaps the Fourth Amendment, which restricts the government’s intrusion into people’s private property and belongings. However, these amendments protect specific aspects of privacy tied to freedom and liberty, with the goal of limiting government interference. They do not constitute an explicit, overarching constitutional right to privacy. While the right to privacy is not specifically listed in the Constitution, the Supreme Court has recognized it as an outgrowth of protections for individual liberty. 

In Griswold v. Connecticut, the Supreme Court concluded that people have privacy rights that prevent the government from forbidding married couples from using contraception. Such a ruling first identified people’s right to independently control the most personal aspects of their lives—thus creating an implicit right to privacy. Later, the Court extended this right of privacy to include a woman’s right to have an abortion in Roe v. Wade, holding that “the right of decisional privacy is based in the Constitution’s assurance that people cannot be ‘deprived of life, liberty or property, without due process of law.’” The Roe decision rested largely on the notion that the 14th Amendment contains an implicit right to privacy and protects against state interference in a person’s private decisions more generally. However, the Dobbs ruling has now dismissed this precedent, with the implicit right of privacy no longer extending to abortion. In a 6-3 decision, the Court reasoned that abortion lacked due process protection, as it was not mentioned in the Constitution and was outlawed in many states at the time of the Roe decision. 

Fast forward to today—some government entities have attempted to make progress in preserving an individual’s privacy, particularly in relation to their healthcare. The Biden administration issued an executive order aimed at protecting access to abortion and treatment for pregnancy complications. Additionally, the Federal Trade Commission has started to implement federal privacy rules for consumer data, citing “a need to protect people’s right to seek healthcare information.” However, most of this progress rests on a misconception that “privacy” and “data protection” are the same thing. 

So, let’s set the record straight: privacy and data protection are not the same thing. 

While data protection does stem from the right to privacy, it mainly focuses on ensuring that data has been fairly processed. With the concept of privacy constantly being intertwined with freedom and liberty over the past few decades, it can be difficult for people to fully grasp which of their information is actually private. The Dobbs majority pointed out a distinction between privacy and liberty, noting that “as to precedent, citing a broad array of cases, the Court found support for a constitutional ‘right of personal privacy.’ But Roe conflated the right to shield information from disclosure and to make and implement important personal decisions without governmental interference.” 

There is a valid concern that personal information, ranging from instant messages and location history to third-party app usage and digital records, can end up being subpoenaed or sold to law enforcement. In response to the Dobbs decision, the U.S. Department of Health and Human Services issued guidance stating that unless a state law “expressly requires” reporting on certain health conditions, the HIPAA exemption for disclosure to law enforcement would not apply. However, some people may not realize that application privacy agreements and HIPAA medical privacy rules do not automatically protect against subpoenas. Meanwhile, data brokers will not hesitate to sell to the highest bidder any and all personal information they have access to. 

“So now what?” 

Ultimately, the Dobbs decision serves as a rather harsh reminder of just how valuable our privacy is, and what can happen if we lose it. As some of us have already realized, companies, governments, and even our peers are incredibly interested in our private lives. With respect to protecting reproductive freedom, it is imperative to establish federal privacy laws that protect information related to health care from being handed over to law enforcement unless doing so is absolutely necessary to avert substantial public harm. While it is unfortunate that individuals are placed in positions where they are solely responsible for protecting themselves against corporate or governmental surveillance, it is imperative for everyone to remain vigilant and aware of where their information is going.

Alice in Algorithm-land: Legal recourse for victims of content-recommendation rabbit holes

By: Cameron Eldridge

There was a time early on in the social media landscape when all anyone would be able to tell about you based on the content of your feed was who you followed: friends, family, preferred news networks, favorite TV shows, or bands. However, content-recommendation algorithms, which were once only used for advertising, are now the backbone of social media platforms, determining what users see and when they see it. 

The content-recommendation algorithms used by Facebook, Instagram, Twitter, and TikTok have one goal: maximizing user engagement, which means showing users whatever will keep them looking. This can benefit users when liking one video of an adorable baby animal means they get fed more. But it can also be dangerous, when a single interaction with content about mental illness or a terrorist organization can trigger the algorithm to send users spiraling down a rabbit hole, slowly distorting how they view themselves and how they interact with the world. Unfortunately, due to Section 230, when users find that they or their loved ones have been victims of these rabbit holes, they’re often left with no one to legally blame.  

Shattering the Section 230 shield

Section 230(c)(1) of the Communications Decency Act immunizes “interactive computer services” like social media platforms from liability for publishing content created by another party. Historically, Section 230 has served as a shield protecting social media platforms from any and all liability for harmful videos, comments, and posts made on their platforms. So when a Louisiana teen’s family sues Meta because she killed herself after being fed content about suicide and self-harm, or when the family of a ten-year-old who choked to death while participating in a TikTok challenge sues TikTok, the companies can avoid any consequences. If victims of the algorithm want any chance at holding social media platforms accountable, they’ll need a more creative legal strategy than content-based attacks.

A flaw in the design

A recent products liability claim against Meta, brought by the Social Media Victims Law Center on behalf of plaintiff Alexis Spence, attempts to hold Instagram accountable by arguing that Instagram’s feed and explore features are defective by design. Spence, who began using Instagram at eleven years old and now, at twenty, suffers from severe mental illness, claims that these design features of the Instagram app are the but-for cause of her injuries. While it is too early to tell how Spence’s case will pan out, there is some supporting precedent in another recent case, Lemmon v. Snap, Inc. There, the court held that Section 230 did not shield Snapchat from liability for foreseeable injuries resulting from its ‘speed filter,’ another design-based claim. 

Another promising strategy that is currently being tested is an attack against the recommendation algorithm itself. Next month, the question of whether Section 230 should protect platforms when they make targeted recommendations of information, or only when they engage in traditional editorial functions like publishing or withdrawing content, will be argued before the Supreme Court by University of Washington Law Professor Eric Schnapper in Gonzalez v. Google.

Gonzalez is brought on behalf of Nohemi Gonzalez, a 23-year-old U.S. citizen who was studying in Paris in November 2015, when she was murdered in one of a series of violent ISIS attacks that resulted in the deaths of over a hundred people. The complaint alleges that YouTube not only unknowingly published hundreds of ISIS recruitment videos but also affirmatively recommended those videos to users, and that these recommendations go beyond the traditional editorial functions of a publisher that Section 230 textually protects. 

Many in the tech world fear that alterations to Section 230 protections like those Gonzalez seeks would render the existence of social media platforms legally impossible. How would an app like TikTok, which is based almost entirely on its content-recommendation algorithm, continue to function if it could be held liable for the algorithm’s every consequence? A ruling against Google would certainly change social media platforms as we know them, but it may also force them to take more responsibility for the kind of rabbit holes they’re sending users down. While this would pose a financial and logistical burden, it’s one that tech companies like Meta and Google probably can and should bear. 

First AI Art Generator Lawsuit Hits the Courts

By: HR Fitzmorris

Your social media accounts may have recently been inundated with spookily elegant renderings of your once-familiar friends’ faces. Or, if you’re on a particular side of the internet, you may have seen any number of infographics scolding users for contributing to the devaluation of flesh-and-blood artists’ livelihoods. What you may not have seen is news of the recent class-action lawsuit filed on behalf of artists who are unhappy with technological advances that, in their view, were ‘advanced’ through art theft.

The Complaint

In the first-of-its-kind proposed class action, named plaintiffs allege copyright infringement, asking for damages to the tune of one billion dollars. Specifically, artists allege that the named AI companies downloaded and fed billions of copyrighted images into their AI software to ‘train’ the artificial intelligence software to create its own digital ‘art.’ In addition to damages, the plaintiffs have asked the court to issue an injunction preventing the AI companies from using artists’ work without permission and requiring the companies to seek appropriate licensing in the future.

The Plaintiffs

The named plaintiffs, who will represent the pool of affected artists if the class is certified by the court, are Sarah Andersen, a popular webcomic artist; Kelly McKernan, who specializes in colorful watercolor and acryla gouache paintings; and Karla Ortiz, a professional concept artist with clients such as Wizards of the Coast and Ubisoft.

In a New York Times opinion piece about the appropriation of her art by both the Alt-Right and artificial intelligence art generators, Ms. Andersen stated, “[t]he notion that someone could type my name into a generator and produce an image in my style immediately disturbed me.” She also explains that the appropriation made her “feel violated” by the way the AI stripped her artwork of its personal meaning and of her human mark that she honed and defined through the “complex culmination of [her] education, the comics [she] devoured as a child and the many small choices that make up the sum of [her] life.” Clearly, for these artists, there is more at stake than the threat to their livelihoods.

The Defendants

The plaintiffs named four entities as defendants in the suit: Stability AI Ltd., Stability AI, Inc., Midjourney, Inc., and DeviantArt, Inc. Each of these companies has a hand in creating, hosting, or perpetuating the use of engines that use AI to create art.

The Legal Issues

The Stable Diffusion engine, for example, is described as a “deep learning, text-to-image model” that anyone can use “to generate detailed images conditioned on text descriptions.” In layperson’s terms, users input text (such as an artist’s name or a specific medium) to generate images with those attributes. This is the heart of the issue. In order to do this, the tool (and others like it) must be “trained,” which involves, in the words of Plaintiff Sarah Andersen:

[B]uil[ding] on collections of images known as “data sets,” from which a detailed map of the data set’s contents, the “model,” is formed by finding the connections among images and between images and words. Images and text are linked in the data set, so the model learns how to associate words with images. It can then make a new image based on the words you type in.

Stable Diffusion was built using a dataset that contained somewhere in the neighborhood of six billion images culled from the internet without regard to intellectual property and copyright laws or creator consent. Additionally, these companies are not building these engines out of the goodness of their hearts; they are generating immense revenue. Stability AI, for example, is currently valued at approximately $1 billion.

The suit, which was filed in the Northern District of California, alleges violations of federal as well as state copyright laws, including “direct copyright infringement, vicarious copyright infringement related to forgeries, violations of the Digital Millennium Copyright Act (DMCA), violation of class members’ rights of publicity, breach of contract related to the DeviantArt Terms of Service, and various violations of California’s unfair competition laws.” The crucial argument for the plaintiffs is that “[e]very output image from the system is derived exclusively from the latent images, which are copies of copyrighted images. For these reasons, every hybrid image is necessarily a derivative work.” (emphasis added).

The defendant companies, though, will likely argue that some version of the “fair use doctrine” protects their activity. To prevail, the defendants must prove that their use of the images was sufficiently “transformative”—unlikely to be confused for, or usurp the market for, the original artwork. 

Whatever the court decides, this type of intersection between art and technology will likely remain a hotbed of intellectual and legal debate as artificial intelligence continues to grow in prevalence and accessibility.

Apple AirTags – Stalking made easy in the age of convenience

By: Kayleigh McNiel

Marketed as a means of locating lost or stolen items, Apple AirTags are a convenient and affordable tool for tracking down your lost keys, misplaced luggage, and even your ex-partner. Weighing less than half an ounce, these small tracking devices fit in the palm of your hand and can be easily hidden inside backpacks, purses, and vehicles without arousing the owner’s suspicion. 

Reports of AirTag stalking began emerging almost immediately upon their release in April of 2021. Apple’s assurances that AirTags’ built-in abuse-prevention features would protect against “unwanted tracking” have fallen woefully short of the reality that these $29 devices are increasingly being used to monitor, surveil, and stalk women across the country.

The Wrong Tool in the Wrong Hands – Women Are Being Targeted with AirTags

Through an expansive review of 150 police reports involving Apple AirTags from eight law enforcement agencies across the nation, an investigative report by Motherboard confirmed the disturbing truth. One third of the reports were filed by women who received notifications that they were being tracked by someone else’s AirTag. The majority of these cases involved women being stalked by a current or former partner. Of the 150 reports reviewed by Motherboard, less than half involved people using their own AirTags to find their lost or stolen property.   

AirTags pose a significant danger to victims of domestic violence and have been used in at least two grisly murders. In January 2022, Heidi Moon, a 43-year-old mother from Akron, Ohio, was shot and killed by her abusive ex-boyfriend who tracked her movements using an AirTag hidden in the back seat of her car. In June 2022, Andre Smith, a 26-year-old Indianapolis man, died after he was repeatedly run over by his girlfriend after she found him at a bar with another woman by tracking him with an AirTag.

It’s not just domestic violence victims who are in danger. Stories are emerging on social media of women discovering AirTags under their license plate covers or receiving notifications that they are being tracked after traveling in public places. One woman’s viral TikTok describes how she received repeated notifications that an unknown device was tracking her after visiting a Walmart in Texas. Unable to locate the device, she tried unsuccessfully to disable it, and continued receiving notifications even after she turned off the location services and Bluetooth on all of her Apple devices.   

In January 2022, Sports Illustrated Swimsuit model Brooks Nader discovered that a stranger had slipped an Apple AirTag into her coat pocket while she was sitting in a restaurant. The device tracked her location for hours before the built-in safety mechanism triggered a notification to her phone. 

One Georgia woman, Anna Mahaney, began receiving the alerts after going to a shopping mall but was unable to locate the tracker. When she tried to disable the device, she received an error message that it was unable to connect to the server. She immediately went to an Apple Store for help and was told that no beep had sounded because the owner of the AirTag had apparently tracked her until she got home and then disabled it.

Apple’s haphazard release of these button-sized trackers, with near complete disregard for the danger they pose to the public, has resulted in a recent federal class action lawsuit filed by two California women who were stalked by men using AirTags. One plaintiff, identified only as Jane Doe, was tracked by her ex-husband who hid an AirTag in their child’s backpack. The other plaintiff, Lauren Hughes, fled her home and moved into a hotel after being stalked and threatened by a man she dated for only three months. After she began receiving notifications that an AirTag was tracking her, Hughes found one in the wheel well of her back tire. 

The plaintiffs in Hughes et al v. Apple, Inc., 3:22-cv-07668, say Apple ignored the warnings from advocates and put the safety of consumers and the general public at risk by “revolutionizing the scope, breadth, and ease of location-based stalking.” 

The Tech Behind the Tags – Insufficient Safety Warnings and a Lack of Prevention

AirTags work by establishing a Bluetooth connection with nearby Apple devices. Once connected, an AirTag uses that device’s GPS and internet connection to transmit its location to iCloud, where users can track it via the Find My app. With a vast network of more than 1.8 billion Apple devices worldwide, AirTags can essentially track anyone, anywhere.  

While the accuracy of Bluetooth tracking can vary, newer iPhones (models 11 and up) come equipped with ultra-wideband technology that allows AirTag owners to use Precision Finding to get within feet of a tag’s location.

In its initial release in April 2021, Apple included minimal safety measures, including alerts that inform iPhone users if someone else’s AirTag has been traveling with them. Additionally, AirTags chime if separated from their owners for three days. 

When someone discovers an AirTag and taps it with their iPhone, it tells them only the information the owner allows. If an AirTag has been separated from its owner for somewhere between eight and twenty-four hours, it begins chirping regularly. By then, the AirTag’s owner may have already been able to track their target for hours, learning where they live, work, or go to school. The chirp is only about 60 decibels, which is the average sound level of a restaurant or office. This sound is easy to muffle, especially if the AirTag is hidden under a car license plate or in a wheel well. This quiet alarm is the only automatic protection against stalking Apple provides to those who do not have an iPhone. 

Apple did eventually release an app that Android users can download to scan for rogue AirTags, but it requires Android users to know about AirTag tracking and then manually scan for the devices. The app holds a rating of only 2.4 stars, with many users complaining that it is ineffective and does not provide enough information.  

In response to the wave of criticism and reports of stalking and harassment, Apple has begun to increase these safety measures in piecemeal updates, which so far have failed to resolve the problem. Just three months after the AirTag’s release, Apple shortened the amount of time it takes for a separated AirTag to chime, from three days to somewhere between eight and twenty-four hours. But it’s easy to register an AirTag and then disable it before the target begins receiving notifications.

Our Legal Systems Are Not Prepared to Protect Victims From AirTag Stalking

Our criminal and civil legal systems are painfully slow to respond to how technology has changed the way we engage with our families and communities and how we experience harm in those relationships. One of the biggest challenges victims face in reporting AirTag stalking is that many police departments and courts do not even know what AirTags are or how they can be used to harass and stalk women.

In some states, it is not even a crime to monitor someone’s movements with a tracking device like an AirTag without their knowledge or consent. At least 26 states and the District of Columbia have some kind of law prohibiting the tracking of others without their knowledge. While 11 of these states, including Washington, incorporate this into their stalking statutes, nine others (Delaware, Illinois, Michigan, Oregon, Rhode Island, Tennessee, Texas, Utah, and Wisconsin) only prohibit the use of location-tracking devices on motor vehicles without the owner’s consent. These state laws do nothing to protect against AirTags being placed in your bag or purse. They also don’t protect those who share a vehicle with their abuser, since the other party is also technically an owner of the vehicle. 

Many states are rapidly seeing the need to beef up their laws in response to AirTags. The Attorneys General of both New York and Pennsylvania have issued consumer protection alerts warning people about the dangers of AirTags. But much more needs to be done.

The fact that Apple released this product without considering the disproportionate impact it would have on the safety of women across the globe shows a clear lack of diversity in Apple’s design and manufacturing process. 

Is Amazon’s APEX the Top Option for Patent Rights?

By: Nicholas Lipperd

Are more avenues to resolve patent disputes a good thing? Patent litigation is a process that can easily cost millions of dollars and last years; it is not exactly an option available to every patent holder. Even with the availability of arbitration, options to protect patents remain limited. Amazon has determined that a private patent evaluation program is a good thing, at least for its Amazon Marketplace. After beta-testing for three years under the name “Utility Patent Neutral Evaluation” (“UPNE”), Amazon formally implemented its Amazon Patent Evaluation Express (“APEX”) system in 2022, which allows sellers to flag possibly infringing products for Amazon to analyze outside the judicial patent system. The system advertises cheap, fast, and fair outcomes to sellers on Amazon Marketplace asserting their utility patent rights, yet it has drawn criticism for disproportionately one-sided outcomes that have led to its use as a retaliatory tool. Does the fact that this cheap, quick process reduces barriers to litigation offset these shortcomings? Should Amazon make changes to its process to achieve more balanced results?

A patent infringement case brought in federal court takes two to four years to adjudicate, not including an additional year if an appeal is sought. Intrinsically tied to this lengthy timeline is the hefty price tag. Though the median cost of patent infringement cases with $1 million to $10 million at risk fell considerably from 2015 to 2019, a full patent trial still averages $1.5 million. How does a patent holder without such resources assert their rights? Arbitration and mediation are cheaper options, at $50,000 on average, but often require the other side to agree to participate. When the patent owner wants to assert patent rights within Amazon Marketplace, though, the owner generally has a cheaper and faster option.

Amazon’s APEX program allows patent holders to have their patents examined by a neutral third-party evaluator rather than by the United States Patent and Trademark Office (“USPTO”). APEX begins with the patent holder submitting a complaint through Amazon’s Brand Registry, providing the Amazon Standard Identification Numbers (ASINs) of the allegedly infringing products and identifying the claim of the patent that the holder believes the ASINs infringe. For each alleged infringer, Amazon sends a notice and allows up to three weeks for a response. Should Amazon receive no response, the products are automatically delisted, similar to a default judgment. Upon receipt of a response, an evaluator independent of Amazon and each party is assigned to the matter, and each side is required to pay a $4,000 fee, refundable to the winner. The patent holder gets three weeks to submit arguments. The sellers then have two weeks to respond, with the patent holder given one week to submit an optional reply. The evaluator then decides within two weeks, determining only whether the sellers’ products likely infringe the patent holder’s claim. It is noteworthy that the APEX evaluator makes no determination on the validity of the claims in the patent at issue. If the evaluator decides in favor of the seller, the products stay on the platform; if not, they are removed. There is no appeal from the evaluator’s decision. The entire process takes fewer than three months and, at a price tag of $4,000 per party, creates a fiscal barrier that is a fraction of the cost of formal patent litigation.

This process is not, though, without its drawbacks. Patent holders win a disproportionate share of APEX proceedings, creating incentives to initiate the process without valid claims. Because the evaluator does not examine the validity of the asserted patent, accused sellers can do nothing but play defense. In legal terms, they are without the affirmative defense of invalidity. They can’t win; they can only hope to survive. Further, the evaluation is not subject to formal rules like the Federal Rules of Civil Procedure or the Federal Rules of Evidence. The evaluators are hired for their expertise in the patent field, not for their skill in investigating the information provided. With no verification process from Amazon, some patent holders have submitted fraudulent information to obtain favorable judgments. With loose evidentiary rules, a low fiscal barrier, and no chance for the patent to be ruled invalid, the incentives all line up for patent holders to abuse this process, especially considering there is no chance for appeal. Should a competitor be cutting significantly into profits, $4,000 is a very low risk for the possibly high reward of ejecting that competitor from the market. Tortious interference claims stemming from the APEX process are already coming to light. 

Perhaps the most well-known legal spat involving Amazon’s patent evaluation process is Tineco Intelligence Tech. Co. v. Bissell Inc. (W.D. Wash. 2022). Bissell is a U.S. company that sells vacuums, and Tineco is a Chinese company that does the same. When Bissell initiated a UPNE proceeding, Tineco ignored it, leading to the automatic removal of its products. Tineco then sought a district court ruling that Bissell’s patent claims were invalid and that its products did not infringe. Luckily, perhaps in part because of the sheer volume of business both entities do, Amazon deviated from its set UPNE/APEX process and reinstated Tineco’s listings before the district court case finished, though U.S. International Trade Commission (“ITC”) proceedings continued. This case and Amazon’s deviation are seen by some as the exception to the rule. Many entities are still using APEX as a hammer to bludgeon competitors into settlements and licensing agreements, despite the tortious interference claims that sometimes follow. 

Amazon’s APEX has the potential to be the first of many commercial patent dispute programs thanks to its budget-friendly, expedited decisions. Yet before it can be considered a model for other businesses, it must rebalance and overcome the issues outlined above. Although a large burden is placed on the “neutral evaluators” hired by Amazon, these evaluators currently do not review the patent at issue for invalidity. To establish a more balanced approach and to disincentivize misuse of APEX by predatory sellers, invalidity must be considered. Even if such consideration drives up the required fee slightly, the trade-off would be worthwhile to promote fairness in the process. Amazon has three years of beta-testing under its belt with this system and thus has the data available to see where fraud and misuse are most prevalent. A thorough review of this data should lead to a tightening of its evidentiary standards throughout the process. Despite the name inviting such a pun, APEX must not be allowed to thrive as a predatory tool.

While barriers to justice should not be so high that patent holders may not assert their rights, the process should not be so favorable and easy that it inadvertently incentivizes abuse of the process. Through small tweaks, APEX can continue to serve patent holders’ rights without demanding the time and money that large-scale patent litigation requires.