Apple AirTags – Stalking made easy in the age of convenience

By: Kayleigh McNiel

Marketed as a means of locating lost or stolen items, Apple AirTags are a convenient and affordable tool for tracking down your lost keys, misplaced luggage, and even your ex-partner. Weighing less than half an ounce, these small tracking devices fit in the palm of your hand and can be easily hidden inside backpacks, purses, and vehicles without arousing the owner’s suspicion. 

Reports of AirTag stalking began emerging almost immediately upon their release in April of 2021. Apple’s assurances that AirTag’s built-in abuse prevention features would protect against “unwanted tracking” have fallen woefully short of the reality that these $29 devices are increasingly being used to monitor, surveil and stalk women across the country.

The Wrong Tool in the Wrong Hands – Women Are Being Targeted with AirTags

Through an expansive review of 150 police reports involving Apple AirTags from eight law enforcement agencies across the nation, an investigative report by Motherboard confirmed the disturbing truth. One-third of the reports were filed by women who received notifications that they were being tracked by someone else’s AirTag. The majority of these cases involved women being stalked by a current or former partner. Of the 150 reports reviewed by Motherboard, fewer than half involved people using their own AirTags to find their lost or stolen property.

AirTags pose a significant danger to victims of domestic violence and have been used in at least two grisly murders. In January 2022, Heidi Moon, a 43-year-old mother from Akron, Ohio, was shot and killed by her abusive ex-boyfriend who tracked her movements using an AirTag hidden in the back seat of her car. In June 2022, Andre Smith, a 26-year-old Indianapolis man, died after he was repeatedly run over by his girlfriend after she found him at a bar with another woman by tracking him with an AirTag.

It’s not just domestic violence victims who are in danger. Stories are emerging on social media of women discovering AirTags under their license plate covers or receiving notifications that they are being tracked after traveling in public places. One woman’s viral TikTok describes how she received repeated notifications that an unknown device was tracking her after visiting a Walmart in Texas. Unable to locate the device, she tried unsuccessfully to disable it, and continued receiving notifications even after she turned off the location services and Bluetooth on all of her Apple devices.   

In January 2022, Sports Illustrated Swimsuit model Brooks Nader discovered that a stranger had slipped an Apple AirTag into her coat pocket while she was sitting in a restaurant. The device tracked her location for hours before the built-in safety mechanism triggered a notification sent to her phone.

One Georgia woman, Anna Mahaney, began receiving the alerts after going to a shopping mall but was unable to locate the tracker. When she tried to disable the device, she received an error message that it was unable to connect to the server. She immediately went to an Apple Store for help and was told that no beep had sounded because the owner of the AirTag had apparently tracked her until she got home and then disabled it.

Apple’s haphazard release of these button-sized trackers, with near-complete disregard for the danger they pose to the public, has resulted in a recent federal class action lawsuit filed by two California women who were stalked by men using AirTags. One plaintiff, identified only as Jane Doe, was tracked by her ex-husband, who hid an AirTag in their child’s backpack. The other plaintiff, Lauren Hughes, fled her home and moved into a hotel after being stalked and threatened by a man she had dated for only three months. After she began receiving notifications that an AirTag was tracking her, Hughes found one in the wheel well of her back tire.

The plaintiffs in Hughes et al v. Apple, Inc., 3:22-cv-07668, say Apple ignored the warnings from advocates and put the safety of consumers and the general public at risk by “revolutionizing the scope, breadth, and ease of location-based stalking.” 

The Tech Behind the Tags – Insufficient Safety Warnings and a Lack of Prevention

AirTags work by establishing a Bluetooth connection with nearby Apple devices. Once connected, the AirTag uses that device’s GPS and internet connection to transmit its location to iCloud, where the owner can track it via the Find My app. With a vast network of more than 1.8 billion Apple devices worldwide, AirTags can essentially track anyone, anywhere.
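For readers curious about the mechanics, here is a highly simplified, self-contained Python sketch of that crowd-sourced relay idea. None of the class or function names come from Apple’s implementation, and real Find My reports are end-to-end encrypted with rotating keys that this toy model only gestures at.

```python
# A toy model of a crowd-sourced "finder network" (NOT Apple's real protocol or APIs;
# all names here are invented for illustration). A tag broadcasts a rotating identifier
# over Bluetooth; any passing phone encrypts its own GPS fix to that identifier and
# uploads it; only the tag's owner can later fetch and decrypt the reports.
from dataclasses import dataclass


@dataclass
class LocationReport:
    rotating_public_key: str    # the tag never broadcasts a fixed, traceable ID
    encrypted_location: str     # the finder's GPS fix, encrypted to the tag's key


class AirTagLike:
    def advertise(self) -> str:
        # In reality this key rotates periodically; a constant stands in for it here.
        return "rotating-public-key-abc123"


class NearbyIphoneLike:
    """Any passing device in the network acts as an anonymous finder and relay."""

    def __init__(self, gps_fix: tuple):
        self.gps_fix = gps_fix

    def relay(self, advertised_key: str, cloud: "CloudLike") -> None:
        report = LocationReport(
            rotating_public_key=advertised_key,
            encrypted_location=f"enc({self.gps_fix}, to={advertised_key})",
        )
        cloud.store(report)


class CloudLike:
    def __init__(self) -> None:
        self.reports: list = []

    def store(self, report: LocationReport) -> None:
        self.reports.append(report)

    def fetch(self, key: str) -> list:
        # Only the tag's owner holds the private key needed to decrypt these reports.
        return [r for r in self.reports if r.rotating_public_key == key]


tag, finder, cloud = AirTagLike(), NearbyIphoneLike((47.6062, -122.3321)), CloudLike()
finder.relay(tag.advertise(), cloud)               # the stranger's phone does the work
print(cloud.fetch("rotating-public-key-abc123"))   # the owner sees where the tag has been
```

The design choice worth noticing is that the relaying phone never learns whose tag it saw; the privacy problem arises on the other end, where the tag’s owner quietly accumulates a record of someone else’s movements.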

While the accuracy of Bluetooth tracking can vary, newer iPhones (models 11 and up) come equipped with ultra-wideband technology that allows AirTag owners to use Precision Finding to pinpoint a tag’s location to within a few feet.

In its initial release in April 2021, Apple included minimal safety measures, including alerts that inform iPhone users if someone else’s AirTag has been traveling with them. Additionally, an AirTag would chime if separated from its owner for three days.

When someone discovers an AirTag and taps it with their iPhone, it tells them only the information the owner allows. If an AirTag has been separated from its owner for somewhere between eight and twenty-four hours, it begins chirping regularly. By then, the AirTag owner may have already been able to track their target for hours, learning where they live, work, or go to school. The chirp is only about 60 decibels, roughly the average sound level of a restaurant or office. This sound is easy to muffle, especially if the AirTag is hidden under a car license plate or in a wheel well. This quiet alarm is the only automatic protection against stalking that Apple provides to those who do not have an iPhone.

Apple did eventually release an Android app, Tracker Detect, that users can download to scan for rogue AirTags, but it requires them to already know about AirTag tracking and then manually scan for the devices. Rated only 2.4 stars, the app draws complaints that it is ineffective and does not provide enough information.
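To make the manual-scanning burden concrete, the sketch below shows roughly what such a scan involves: listening for Bluetooth Low Energy advertisements and flagging ones that look like Find My beacons. It uses the open-source bleak library for Python; the payload byte used to recognize a Find My advertisement is an assumption drawn from public reverse-engineering work rather than Apple documentation, and may not match every firmware revision.

```python
# A minimal sketch of a manual BLE scan for possible Find My accessories (assumption-laden;
# the 0x12 type byte comes from public reverse-engineering, not from Apple).
import asyncio

from bleak import BleakScanner  # cross-platform BLE scanning library

APPLE_COMPANY_ID = 0x004C       # Bluetooth SIG company identifier registered to Apple
FIND_MY_TYPE_BYTE = 0x12        # assumed "offline finding" advertisement type


def on_advertisement(device, adv_data):
    payload = adv_data.manufacturer_data.get(APPLE_COMPANY_ID)
    if payload and payload[0] == FIND_MY_TYPE_BYTE:
        # The identifier rotates, so all we learn is that *some* Find My accessory
        # is nearby and roughly how strong its signal is.
        print(f"Possible Find My accessory at {device.address} (RSSI {adv_data.rssi})")


async def main():
    scanner = BleakScanner(on_advertisement)
    await scanner.start()
    await asyncio.sleep(30)     # listen for 30 seconds
    await scanner.stop()


if __name__ == "__main__":
    asyncio.run(main())
```

Even this best case illustrates the article’s point: the user must know to run a scan in the first place, and a hit tells them only that an unknown tracker may be nearby, not where it is hidden or who planted it.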

In response to the wave of criticism and reports of stalking and harassment, Apple has begun to increase these safety measures in piecemeal updates, which so far have failed to resolve the problem. Just three months after its release, Apple shortened the amount of time it takes for an AirTag to chime when separated from its owner, from three days to somewhere between eight and twenty-four hours. But it’s easy to register an AirTag and then disable it before the target begins receiving notifications.

Our Legal Systems Are Not Prepared to Protect Victims from AirTag Stalking

Our criminal and civil legal systems have been painfully slow to respond to how technology has changed the way we engage with our families and communities and how we experience harm in those relationships. One of the biggest challenges victims face in reporting AirTag stalking is that many police departments and courts do not even know what AirTags are or how they can be used to harass and stalk women.

In some states, it is not even a crime to monitor someone’s movements with a tracking device like an AirTag without their knowledge or consent. At least 26 states and the District of Columbia have some kind of law prohibiting the tracking of others without their knowledge. While 11 of these states, including Washington, incorporate this into their stalking statutes, nine others (Delaware, Illinois, Michigan, Oregon, Rhode Island, Tennessee, Texas, Utah, and Wisconsin) only prohibit the use of location-tracking devices on motor vehicles without the owner’s consent. These state laws do nothing to protect against AirTags being placed in a bag or purse. They also don’t protect those who share a vehicle with their abuser, since the other party is also technically an owner of the vehicle.

Many states are quickly recognizing the need to strengthen their laws in response to AirTags. The Attorneys General of both New York and Pennsylvania have issued consumer protection alerts warning people about the dangers of AirTags. But much more needs to be done.

The fact that Apple released this product without considering the disproportionate impact it would have on the safety of women across the globe shows a clear lack of diversity in Apple’s design and manufacturing process. 

Administrative Agencies & Their Role in Technological Regulation

By: Chi Kim

On January 7, 2023, Kevin McCarthy became Speaker of the House after his colleagues in the House of Representatives held fifteen separate voting sessions. The spectacle was equal parts impressive and depressing given the inability of our current elected officials to achieve results on even seemingly mundane decisions. While many liberal observers may have rejoiced at the chaos, the fifteen votes are emblematic of an overall trend of inefficiency within the legislative branch and political processes, especially when tackling more fluid concepts and problems within the technology sector. Creating regulations requires large amounts of information, lobbying, and time to convince policymakers with inflexible positions and procedures of the merits of regulating fluid and emerging technologies. In addition to the typical policy lag, the timeline for proposed technology regulations is further stretched by the following intrinsic and extrinsic factors.

Intrinsically, Congress is not equipped to handle technological regulation by design. Although our most recent Congress is younger than its predecessor by one year, this small change alone is a historical anomaly. The 118th Congress is the third oldest since 1789, and the average age has generally been climbing since the early 1980s. The average ages in the Senate and House are 63.9 and 57.5, respectively. While this could be the result of modern medical advancements, the increasing age of our elected officials bodes poorly for the hope that our policymakers will understand the technology they are regulating. Remember, for instance, the famous Facebook hearings? Even the generally unpopular Mark Zuckerberg looked relatable when forced into the position of explaining a new technology to an older person. Beyond the general lack of subject matter expertise, congressional officials cannot invest the requisite time to learn about these issues while also tackling persistent issues within voting rights legislation, labor and supply chain constraints from international pressures, and a looming recession creeping closer layoff by layoff.

Extrinsically, big tech still has a massive voice within our congressional chambers. During the 2020 election cycle, fifteen major tech companies, including Amazon, Facebook, Google, Microsoft, Oracle, and others, spent $96.3 million to influence forthcoming bills like the National Defense Authorization Act, the Fairness for High-Skilled Immigrants Act, and the CHIPS for America Act. While Congress receives input from stakeholders, that input often comes at a cost, shaping how political positions are framed.

Despite our political gridlock, the American government is not completely unarmed against big tech. In political law, hydraulics is the concept that political energy is never destroyed but rather takes on new forms, finding new gaps and openings within the regulatory or political landscape, much like water does on earth. In the technological landscape, the responsibility for regulation has flowed to administrative bodies. The Federal Trade Commission (FTC), for example, influences technology policy in a number of different ways. The FTC recently filed a lawsuit against data broker Kochava Inc. for selling geolocation data from millions of mobile devices. If the FTC is successful, such a ruling would likely affect the data broker industry as a whole. Notably, the FTC’s leadership shapes the policy direction advanced by the agency. As FTC Commissioner, President Biden appointed Alvaro Bedoya, who previously served as the founding director of the Center on Privacy and Technology at Georgetown Law, where he worked at the intersection of privacy and civil rights. Additionally, as of this writing, the FTC is accepting public comments on a proposed rule to ban non-compete clauses, a rule intended to increase worker earnings and create more competition among big tech. While administrative agencies have their own procedural “policy lags,” the FTC can still actively tackle issues while receiving input from internal and external industry experts without being directly tainted by lobbying efforts.

Law and technology are often portrayed as incompatible ideas: rising technology meeting archaic regulations. Yet policymakers need to realize that law and technology are not so different; both policymaking and technology development require troubleshooting and iteration over time. Unlike the software engineers in the companies that they regulate, however, policymakers do not have endless opportunities to sandbox their regulations before fully staking their political careers and capital. The responsibility for making such regulations has often flowed to administrative agencies that can take measured steps on the daunting task of regulating big tech companies. Congress should build on these administrative agency efforts by passing bills based on the failures or successes of agency actions. Doing so could result in more relevant and longer-lasting technology regulations.

Transforming the Litigation Experience with Virtual Reality

By: Abigael Diaz

As technology advances and evolves, so should our legal system. The legal system can use technology to increase efficiency and accessibility to justice while also decreasing costs for courts. Allowing modern technology such as virtual reality into the courtroom will likely broaden participation by increasing the number of people who can take part in the judicial system.

Virtual reality is a computer-generated simulated experience that takes over users’ visual and auditory perceptions and allows them to enter a completely new virtual environment. There are various types of virtual reality, including but not limited to first-person immersion, augmented reality, and desktop view. In a first-person immersion setting, the user is completely engaged in the virtual environment, with as many senses as possible co-opted to maximize the experience. In augmented reality, which is frequently used in medical practice and aviation training, there is a virtual overlay of the environment the user is in; computer-generated visuals superimposed on a human body or plane controls can allow for better reinforcement of skills and more consistent results. Desktop view is a first-person virtual experience accessed through a desktop computer using a standard keyboard and mouse to navigate the experience. 

First-person immersion virtual reality can be used with a headset or head-mounted display that covers a user’s eyes, but some headsets also cover the ears or pair with gloves to deepen the sensory experience. Virtual reality often engages sight, hearing, and touch, and some systems have even experimented with virtual smell applications. The headset is frequently tethered to a computer using USB or HDMI cables to increase the system’s capabilities. Using tethered headsets will be imperative in the legal field for the next couple of years because, currently, cordless headsets sacrifice some of the quality and accuracy of the information delivered through the system. Nevertheless, even cordless headsets will eventually reach satisfactory quality as the technology advances.

Covid-19 has required courts to adapt to the pandemic and use new technologies to facilitate litigation, with many courts opting for two-way live video options like Zoom. Simple video chat programs are a satisfactory solution for the pandemic, but using virtual reality would enhance the experience. Virtual reality differs from two-way live video because it attempts to mimic life-like experiences within a 3D virtual space, while applications like Zoom create 2D face grids for users to view. The judicial system can use virtual reality to simulate a 3D social interaction that gives users a virtual experience similar to what the experience would be like in person.

Virtual reality offers an alternative way to interact with the justice system as a party, judge, juror, or bystander. Virtual reality can increase accessibility to the justice system for all, which will result in a more diverse legal system. The employment of virtual reality in the legal system will likely bring many benefits, and most importantly, it will radically change how individuals participate in litigation.

Benefits from Virtual Reality in the Legal System

Increasing accessibility to the various stages of litigation will result in a more diverse courtroom experience. The American Bar Association believes a diverse legal profession will result in a more productive, just, and intelligent system on both a cognitive and cultural level. Diversity in the law is a good thing, as supported by legal psychologist Samuel Sommers’ 2006 experiment, which used over 200 mock jury participants and demonstrated that racially diverse juries deliberate longer, discuss more facts from the case, make fewer factual errors, and are more open to talking about race’s role in the case. Having a diverse judicial system is necessary to ensure that the system is truly working for all people.

Virtual reality can increase efficiency and, in turn, lower costs throughout the legal system. With virtual reality, travel costs can be saved by allowing people to meet remotely rather than in person. For example, legal teams could save the cost of bringing witnesses to court by mailing them virtual reality headsets, and courts could save travel expenses by providing headsets to jurors who would otherwise be required to travel to crime scenes. Saving the courts time and money will allow them to accept more cases at each level of the court system and increase the number of people able to obtain justice for themselves. Lower legal fees can increase the number of people who can afford lawyers and increase access to the judicial system as well. Currently, not everyone in the United States has access to an attorney or legal services, so lowering legal fees would diversify the client pool to include those experiencing poverty, who frequently come from Black, Indigenous, or other communities of color.

Increasing efficiency also gives the court system a chance to catch up on cases. There is currently a large case backlog, primarily in sexual assault and immigration cases, and the effects of Covid-19 have further exacerbated this problem. Some are describing the pandemic as a “double disaster.” In addition to the disease, there has been an increase in gender-based violence, impaired reproductive and sexual health, a loss of jobs and livelihoods, and increases in forced marriages, migration, and human trafficking. Covid-19 has slowed the courts while simultaneously generating new cases and more litigation. Those harmed most by the case backlog are disproportionately from marginalized communities, including immigrants, people of color, women, and nonbinary and trans individuals.

Virtual reality can do more than increase racial and class diversity in the courts; it can also expand who is able to participate, as more disabled people could take part in a virtual reality system that accommodates their needs. The Centers for Disease Control and Prevention reports that one in four adults in the United States is living with a disability. Virtual reality can accommodate people’s individual needs to increase participation.

Virtual reality has even been known to help improve senses for some, and many people with low vision report that virtual reality technology actually improves their ability to see. Alex Lee lost his sight to a rare disease, but after five years he was able to see again using virtual reality technology. Lee went from seeing everything as an indecipherable blur to being able to see and play in a 1940s virtual world. Lee’s sight improvement is made possible by the high color contrast that virtual reality implements, as well as the intense magnification of objects that comes from placing virtual reality screens so close to the eyes.

Automatic and instant language translation for all court participants is possible with virtual reality, in addition to the benefits already discussed. Automatic interpretation for the variety of languages spoken in American courtrooms can remove language barriers that previously prevented individuals from accessing the legal system and might save courts significant money by reducing the need to pay for in-person interpretation experts, further broadening access to the courts. Another benefit of adopting such technology is automatic closed captioning, which could allow people who are deaf or hard of hearing to participate to a greater degree. It might also be helpful, or even imperative, for the neurodivergent community, as it could help people remain focused and process their surroundings more effectively.

Virtual reality generally increases the understanding of those using it. Users can have words defined for them instantly, leading to a better-informed population that can participate in a legal system that better caters to the individual’s understanding and information needs. Furthermore, automatic definitions could be useful for defining unfamiliar legal terms for non-lawyers who find themselves in an environment replete with “legalese.” Moreover, virtual reality can, in certain contexts, allow individuals to more accurately recall important memories, which promotes accuracy in factfinding and reduces court reliance on fallible memories. For example, it would likely be possible for a juror to rewatch a portion of a witness’s testimony before deliberating. As such, virtual reality can promote justice by increasing the quantity and quality of informed participants in the judicial system.

Virtual reality has the ability to increase access to justice, lower costs, and improve participants’ understanding of all aspects of the judicial system. The likely result is an overall increase in accessibility for the many demographics that usually cannot access legal services or lawyers; as such, virtual reality may contribute to diversifying the pool of people who can use the law, participate in it, and benefit from it.

In the coming years, virtual reality will very likely change the legal system for the better. The law should use virtual reality technology to increase efficiency and accessibility while decreasing costs. Allowing modern technology such as virtual reality into the courtroom will diversify the law and litigation process, making revolutionary changes in the litigation experience at all stages.

Navigating the Dark Forest: Data Breach in the Post-Information Age

By: Charles Simon

In 1984, the credit histories of ninety million people were exposed by theft of a numerical passcode. The code was meant to be dialed through a “teletype credit terminal” located in a Sears department store. The stolen password was posted online to a bulletin board where it existed for “at least a month” before the security breach was even noticed. The New York Times helpfully informed readers that such bulletin boards were “computer file[s] accessible to subscribers by phone.” How did the anonymous hacker crack this code? Well, the password had been handwritten onto a notepad and left in a public space by a Sears employee who found the digits too troublesome to memorize.

Interestingly, while a legal commentator from the ABA had theories about the likely legal harms to consumers and the possible liability the credit reporting agency faced from the hack, simply obtaining unauthorized access to a confidential information system wasn’t yet a crime on its own terms. Legal recourse against the hacker, had they ever been caught, would have been uncertain given that no mail-order purchases were shown to use consumer data from the Sears/TRW system breach. Two years later, Congress would amend existing law to create the Computer Fraud and Abuse Act of 1986, formalizing the legal harm of cybersecurity breaches, but during this period hacking was generally still considered a hobbyist’s prank.

We’ve come a long way since that time. In 2020, a study funded by IBM Security estimated that the “average cost” of a data breach was $3.86 million. That number is inflated by the largest breaches, but limiting our inquiry to ‘just’ the $178,000 average figure for breaches at small- and medium-sized companies shows that even smaller hacks can be crippling to a business. Breaches of information today can result in serious physical consequences, like the loss of industrial controls that govern power grids and automated factories. The healthcare system’s volumes of sensitive patient information make hospitals, insurance providers, and non-profits in the industry extremely attractive targets. Law firms are prime targets for data breaches as well, with sensitive client personal information and litigation documents making for a lucrative prize.

Since 2015, Washington state’s data breach notification laws have required businesses, individuals, and public agencies to notify any resident who is “at risk of harm” because of a breach of personal information. This requirement of notice to customers or citizens affected by an organization’s data breach is mostly accepted among states, but as with other privacy-related rights in the US legal system, there is a patchy history of vindicating plaintiff rights under such laws. 

The ruling on a motion to dismiss in litigation over the breach of Target’s corporate customer database shows a shift in attitudes towards recognizing concrete harms. A broad class of plaintiffs from across the US drew from a patchwork of state notice laws—some of them lacking direct consumer protection provisions or private rights of action—to argue that Target’s failure to provide prompt notice of the theft of financial data caused harm. What might once have been considered shaky legal ground for a consumer class action claim proved stable enough for a Minnesota federal court to reject the motion to dismiss. The resulting settlement with 47 state attorneys general was a record-setting milestone in cybersecurity business liability.

Prompt notice to those affected by a data breach alone is not enough. Many modern statutes now implement standards of care for data security, and may soon begin standardizing other features such as retention and collection limitations (perhaps taking cues from the EU’s General Data Protection Regulation). Legal scrutiny is certain to intensify as the financial harms to citizens—and the less tangible harms to their increasingly online lives—mount. The proliferation of cyber liability insurance indicates that many businesses see an inevitability to this field of litigation, which is sure to drive development of the law. In this environment, public and private sector lawyers in a broad array of fields must be cognizant of the legal harms that can arise, their organization’s recourses, and the state and federal law they operate under.

Two New Antitrust Bills Could Increase App Store Competition and Spark Discussion of Privacy and Security as Consumer Welfare Metrics

By: Zoe Wood

In the first quarter of 2022, Apple beat its own record for quarterly spending on lobbying ($2.5 million). What’s the occasion? Two new antitrust bills which threaten Apple’s dominance over its App Store are gaining ground in Congress.

What Bills? 

In late January, the Senate Judiciary Committee voted to advance the American Innovation and Choice Online Act by a vote of 16 to 6. Just a few weeks later, the Committee advanced the Open App Markets Act by a vote of 20 to 2. 

The bills are similar; however, the former has more sweeping coverage. It applies to all “online platforms” with 50,000,000 or more monthly active US-based individual users or 100,000 or more monthly active US-based business users which (1) enable content generation and content viewing and interaction (e.g., Instagram, Twitter, Spotify), (2) facilitate online advertising or sales of products or services of any sort (e.g., Amazon), or (3) enable searches that “access or display a large volume of information” (e.g., Google). The bill describes ten categories of prohibited conduct, all aimed at curbing covered platforms’ preferential treatment of their own products or services over other products on the platform.

For example, the Act would prohibit “covered platforms” from “limit[ing] the ability of the products, services, or lines of business of another business user to compete on the covered platform relative to the products, services, or lines of business of the covered platform operator in a manner that would materially harm competition.” 

The latter act, the Open App Markets Act, would in contrast apply to “any person that owns or controls an app store” with over 50,000,000 US-based users. It proceeds by identifying and defining app store behaviors that are purportedly anticompetitive. For example, the Act would prohibit an app store from conditioning distribution of an app on its use of store-controlled payment systems as the in-app payment system. The Act would also prohibit app stores from requiring developers to offer apps on pricing terms equal to or more favorable than those on other app stores and from punishing developers who do so. Similar to the American Innovation and Choice Online Act, the Open App Markets Act prohibits covered app stores from giving preferential treatment to their own products in the app store search function.

Why Does Apple Oppose These Bills (Aside from the Obvious)? 

While the obvious answer (the bills would diminish Apple’s dominance and therefore diminish its profit) is probably also correct, Apple has put forward a different reason for its opposition to the acts. In a January 18th letter addressed to Senators Durbin, Grassley, Klobuchar, and Lee, and signed by Apple’s Senior Director of Government Affairs Timothy Powderly, Apple expressed concern that “[t]hese bills will reward those who have been irresponsible with users’ data and empower bad actors who would target consumers with malware, ransomware, and scams.”

The bills create an exception for otherwise prohibited actions which are “reasonably necessary” to protect safety, user privacy, security of nonpublic data, or the security of the covered platform. Apple’s letter principally takes issue with this exception, finding that it does not provide the company with enough leeway to innovate around privacy and security. The letter complains that “to introduce new and enhanced privacy or security protections under the bills, Apple would have to prove the protections were ‘necessary,’ ‘narrowly tailored,’ and that no less restrictive protections were available.” According to the letter, “[t]his is a nearly insurmountable test, especially when applied after-the-fact as an affirmative defense.” Of course, this is an overly broad statement. The bills don’t subject all new privacy and security measures to this standard. Only the measures that are anticompetitive in the ways specifically spelled out by the bills are implicated.

So what privacy and security measures would the bills prohibit? The letter is most concerned with the fact that the bills would restrain Apple from prohibiting “sideloading.” Sideloading refers to downloading an application onto, in this case, an Apple device, from somewhere other than the App Store. Lifting Apple’s restriction on the practice would allow developers to implement their own in-app payment systems and avoid the commission Apple takes (up to 30%) from app sales and in-app subscriptions and purchases. The theory is that prohibiting sideloading is anticompetitive in part because it results in higher prices for consumers. 

But Apple says that allowing sideloading would “put consumers in harm’s way because of the real risk of privacy and security breaches” sideloading causes. The letter further explains that sideloading allows developers to “circumvent[…] the privacy and security protections Apple has designed, including human review of every app and every app update.”

Are Apple’s Security Concerns Shared by All?

No. Privacy and security expert Bruce Schneier, who sits on the board of the Electronic Frontier Foundation and runs the security architecture at a data management company, wrote a rebuttal to Apple’s letter. According to Schneier, “[i]t’s simply not true that this legislation puts user privacy and security at risk” because “App stores monopolies cannot protect users from every risk, and they frequently prevent the distribution of important tools that actually enhance security.” Schneier thinks that “the alleged risks of third-party app stores and ‘sideloading’ apps pale in comparison to their benefits,” among them “encourag[ing] competition, prevent[ing] monopolist extortion, and guarantee[ing] users a new right to digital self-determination.”

Matt Stoller, who is the Director of Research at the American Economic Liberties Project, also wrote a strongly worded rebuttal. Like Schneier, Stoller seems to believe that Apple’s security-centric opposition to the bills is disingenuous.

A New Angle on Consumer Welfare

Regardless of whether Apple’s concerns about privacy and security are overblown, the exchange between Apple, the drafters of the new antitrust bills, and members of the public is interesting because it engages with “consumer welfare,” the entrenched legal standard which drives antitrust law, in an atypical way.

Antitrust law exists primarily in common law, and the common law is the origin of the all-important consumer welfare standard. The standard is simple and has remained consistent since a seminal case from 1977. It is concerned primarily with whether a particular practice tends to decrease output and/or cause prices to increase for consumers. If it does, the practice is anticompetitive and subject to injunction. While antitrust parties occasionally introduce other aspects of consumer welfare, such as a challenged practice’s effects on innovation, such effects are extremely difficult to prove in court. Therefore, most antitrust cases turn on price and output.

The bills in question implicitly take issue with the consumer welfare standard because they, in the language of the American Innovation and Choice Online Act, “provide that certain discriminatory conduct by covered platforms shall be unlawful.” Similarly, the Open App Markets Act seeks to “promote competition and reduce gatekeeper power in the app economy, increase choice, improve quality, and reduce costs for consumers.” By defining and prohibiting specific conduct outright, the bills circumvent the consumer welfare standard’s narrow focus on price and output and save potential antitrust plaintiffs from having to prove in court that Apple’s practices decrease output or increase price. 

Apple’s letter speaks the language of consumer welfare. It insists that “Apple offers consumers the choice of a platform protected from malicious and dangerous code. The bills eliminate that choice.” This point goes to the more traditional conception of consumer welfare in the antitrust context, i.e., the proliferation of choice available to consumers. But primarily, the argument that Apple is making (however disingenuously) is that the bills “should be modified to strengthen–not weaken–consumer welfare, especially with regard to consumer protection in the areas of privacy and security.”

By focusing on “privacy and security” as a metric of consumer welfare in the antitrust context, Apple, legislators, and the general public are engaging in a conversation that ultimately expands the notion of consumer welfare beyond what would be borne out in a courtroom, constrained by entrenched antitrust precedent. In this way, the bills have already been productive.