Navigating the Dark Forest: Data Breach in the Post-Information Age

By: Charles Simon

In 1984, the credit histories of ninety million people were exposed by the theft of a numerical passcode. The code was meant to be dialed through a “teletype credit terminal” located in a Sears department store. The stolen password was posted to an online bulletin board, where it sat for “at least a month” before the security breach was even noticed. The New York Times helpfully informed readers that such bulletin boards were “computer file[s] accessible to subscribers by phone.” How did the anonymous hacker crack this code? Well, the password had been handwritten on a notepad and left in a public space by a Sears employee who found the digits too troublesome to memorize.

Interestingly, while a legal commentator from the ABA had theories about the likely legal harms to consumers and the possible liability the credit reporting agency faced from the hack, simply obtaining unauthorized access to a confidential information system wasn’t yet a crime in its own right. Legal recourse against the hacker, had they ever been caught, would have been uncertain, given that no mail-order purchases were shown to use consumer data from the Sears/TRW system breach. Two years later, Congress would amend existing law to create the Computer Fraud and Abuse Act of 1986, formalizing the legal harm of cybersecurity breaches, but during this period hacking was still generally considered a hobbyist’s prank.

We’ve come a long way since then. In 2020, a study funded by IBM Security estimated that the “average cost” of a data breach was $3.86 million. That number is inflated by the largest breaches, but even limiting our inquiry to the $178,000 average suffered in breaches of small- and medium-sized companies shows that smaller hacks can still cripple a business. Breaches today can also have serious physical consequences, such as the loss of the industrial controls that govern power grids and automated factories. The healthcare system’s volumes of sensitive patient information make hospitals, insurance providers, and non-profits in the industry extremely attractive targets. Law firms are prime targets as well, with sensitive client personal information and litigation documents making for a lucrative prize.

Since 2015, Washington state’s data breach notification laws have required businesses, individuals, and public agencies to notify any resident who is “at risk of harm” because of a breach of personal information. This requirement of notice to customers or citizens affected by an organization’s data breach is broadly accepted among the states, but as with other privacy-related rights in the US legal system, there is a patchy history of vindicating plaintiffs’ rights under such laws.

The ruling on a motion to dismiss in a breach of the Target corporate customer database shows a shift in attitudes towards recognizing concrete harms. A broad class of plaintiffs from across the US drew on a patchwork of state notice laws, some lacking direct consumer protection provisions or private rights of action, to argue that Target’s failure to provide prompt notice of the theft of financial data caused harm. What might once have been shaky legal ground for a consumer class action proved stable enough for a Minnesota federal court to reject the motion to dismiss. The resulting settlement with 47 state attorneys general was a record-setting milestone in cybersecurity business liability.

Prompt notice to those affected by a data breach is not, on its own, enough. Many modern statutes now impose standards of care for data security, and may soon begin standardizing other features such as retention and collection limitations (perhaps taking cues from the EU’s General Data Protection Regulation). Legal scrutiny is certain to intensify as the financial harms to citizens, and the less tangible harms to their increasingly online lives, mount. The proliferation of cyber liability insurance suggests that many businesses see this field of litigation as inevitable, which is sure to drive development of the law. In this environment, public- and private-sector lawyers in a broad array of fields must be cognizant of the legal harms that can arise, their organization’s recourses, and the state and federal law they operate under.

Two New Antitrust Bills Could Increase App Store Competition and Spark Discussion of Privacy and Security as Consumer Welfare Metrics

By: Zoe Wood

In the first quarter of 2022, Apple beat its own record for quarterly lobbying spending ($2.5 million). What’s the occasion? Two new antitrust bills that threaten Apple’s dominance over its App Store are gaining ground in Congress.

What Bills? 

In late January, the Senate Judiciary Committee voted to advance the American Innovation and Choice Online Act by a vote of 16 to 6. Just a few weeks later, the Committee advanced the Open App Markets Act by a vote of 20 to 2. 

The bills are similar, but the former has more sweeping coverage. It applies to all “online platforms” with 50,000,000 or more monthly active US-based individual users or 100,000 or more monthly active US-based business users that (1) enable content generation and content viewing and interaction (e.g., Instagram, Twitter, Spotify), (2) facilitate online advertising or sales of products or services of any sort (e.g., Amazon), or (3) enable searches that “access or display a large volume of information” (e.g., Google). The bill describes ten categories of prohibited conduct, all aimed at curbing covered platforms’ preferential treatment of their own products or services over other products on the platform.

For example, the Act would prohibit “covered platforms” from “limit[ing] the ability of the products, services, or lines of business of another business user to compete on the covered platform relative to the products, services, or lines of business of the covered platform operator in a manner that would materially harm competition.” 

The latter bill, the Open App Markets Act, would in contrast apply to “any person that owns or controls an app store” with over 50,000,000 US-based users. It proceeds by identifying and defining app store behaviors that are purportedly anticompetitive. For example, the Act would prohibit an app store from conditioning distribution of an app on the app’s use of the store-controlled payment system for in-app payments. The Act would also prohibit app stores from requiring developers to offer apps on pricing terms equal to or more favorable than those on other app stores, and from punishing a developer for offering better terms elsewhere. Like the Innovation and Choice Online Act, the Open App Markets Act prohibits covered app stores from preferring their own products in the app store’s search function.
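To make the two coverage tests concrete, here is a minimal sketch in Python. The Platform type, field names, and simplified logic are my own illustration of the thresholds described above, not language from the bills, which also condition coverage on further definitional criteria.

```python
# Illustrative sketch only: the dataclass and function names are
# hypothetical, and the real bills layer on functional criteria
# (content platforms, marketplaces, search) and other definitional
# details not modeled here.
from dataclasses import dataclass

@dataclass
class Platform:
    monthly_us_users: int           # monthly active US-based individuals
    monthly_us_business_users: int  # monthly active US-based businesses
    owns_app_store: bool

def covered_by_aicoa(p: Platform) -> bool:
    """American Innovation and Choice Online Act thresholds (simplified)."""
    return (p.monthly_us_users >= 50_000_000
            or p.monthly_us_business_users >= 100_000)

def covered_by_oama(p: Platform) -> bool:
    """Open App Markets Act threshold (simplified)."""
    return p.owns_app_store and p.monthly_us_users > 50_000_000

# Made-up numbers for a large app-store owner: covered by both tests.
example = Platform(monthly_us_users=118_000_000,
                   monthly_us_business_users=30_000_000,
                   owns_app_store=True)
print(covered_by_aicoa(example), covered_by_oama(example))  # True True
```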

Why Does Apple Oppose These Bills (Aside from the Obvious)? 

While the obvious answer (the bills would diminish Apple’s dominance and therefore diminish its profit) is probably also correct, Apple has put forward a different reason for its opposition to the acts. In a January 18th letter addressed to Senators Durbin, Grassley, Klobuchar, and Lee, and signed by Apple’s Senior Director of Government Affairs Timothy Powderly, Apple expressed concern that “[t]hese bills will reward those who have been irresponsible with users’ data and empower bad actors who would target consumers with malware, ransomware, and scams.”

The bills create an exception for otherwise prohibited actions that are “reasonably necessary” to protect safety, user privacy, the security of nonpublic data, or the security of the covered platform. Apple’s letter principally takes issue with this exception, arguing that it does not give the company enough leeway to innovate around privacy and security. The letter complains that “to introduce new and enhanced privacy or security protections under the bills, Apple would have to prove the protections were ‘necessary,’ ‘narrowly tailored,’ and that no less restrictive protections were available.” According to the letter, “[t]his is a nearly insurmountable test, especially when applied after-the-fact as an affirmative defense.” Of course, this is an overly broad statement. The bills don’t subject all new privacy and security measures to this standard; only measures that are anticompetitive in the ways specifically spelled out by the bills are implicated.

So what privacy and security measures would the bills prohibit? The letter is most concerned that the bills would restrain Apple from prohibiting “sideloading.” Sideloading refers to downloading an application onto, in this case, an Apple device from somewhere other than the App Store. Lifting Apple’s restriction on the practice would allow developers to implement their own in-app payment systems and avoid the commission Apple takes (up to 30%) on app sales and in-app subscriptions and purchases. The theory is that prohibiting sideloading is anticompetitive in part because it results in higher prices for consumers.
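To put rough numbers on that theory, consider the toy arithmetic below. The price and flat 30% rate are assumptions for illustration (Apple’s commission varies, dropping to 15% for smaller developers, and sideloaded sales would carry their own payment-processing costs).

```python
# Purely illustrative arithmetic for the pricing theory above; the
# price and flat 30% rate are assumptions, not figures from the bills.
price = 9.99
commission = 0.30  # Apple's top rate on app sales and in-app purchases

developer_take_app_store = price * (1 - commission)
developer_take_sideloaded = price  # before any third-party processing fees

print(f"App Store sale:  developer keeps ${developer_take_app_store:.2f}")
print(f"Sideloaded sale: developer keeps ${developer_take_sideloaded:.2f}")
# App Store sale:  developer keeps $6.99
# Sideloaded sale: developer keeps $9.99
```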

But Apple says that allowing sideloading would “put consumers in harm’s way because of the real risk of privacy and security breaches” sideloading causes. The letter further explains that sideloading allows developers to “circumvent[…] the privacy and security protections Apple has designed, including human review of every app and every app update.”

Are Apple’s Security Concerns Shared by All?

No. Privacy and security expert Bruce Schneier, who sits on the board of the Electronic Frontier Foundation and runs security architecture at a data management company, wrote a rebuttal to Apple’s letter. According to Schneier, “[i]t’s simply not true that this legislation puts user privacy and security at risk” because “[a]pp store monopolies cannot protect users from every risk, and they frequently prevent the distribution of important tools that actually enhance security.” Schneier thinks that “the alleged risks of third-party app stores and ‘sideloading’ apps pale in comparison to their benefits,” among them “encourag[ing] competition, prevent[ing] monopolist extortion, and guarantee[ing] users a new right to digital self-determination.”

Matt Stoller, Director of Research at the American Economic Liberties Project, also wrote a strongly worded rebuttal. Like Schneier, Stoller seems to believe that Apple’s security-centric opposition to the bills is disingenuous.

A New Angle on Consumer Welfare

Regardless of whether Apple’s concerns about privacy and security are overblown, the exchange between Apple, the drafters of the new antitrust bills, and members of the public is interesting because it engages with “consumer welfare,” the entrenched legal standard that drives antitrust law, in an atypical way.

Antitrust law exists primarily in common law, and the common law is the origin of the all-important consumer welfare standard. The standard is simple and has remained consistent since a seminal case from 1977: it asks primarily whether a particular practice tends to decrease output or increase prices for consumers. If it does, the practice is anticompetitive and subject to injunction. While antitrust parties occasionally introduce other aspects of consumer welfare, such as a challenged practice’s effects on innovation, those effects are extremely difficult to prove in court. Most antitrust cases therefore turn on price and output.

The bills in question implicitly take issue with the consumer welfare standard because they, in the language of the American Innovation and Choice Online Act, “provide that certain discriminatory conduct by covered platforms shall be unlawful.” Similarly, the Open App Markets Act seeks to “promote competition and reduce gatekeeper power in the app economy, increase choice, improve quality, and reduce costs for consumers.” By defining and prohibiting specific conduct outright, the bills circumvent the consumer welfare standard’s narrow focus on price and output and save potential antitrust plaintiffs from having to prove in court that Apple’s practices decrease output or increase price. 

Apple’s letter speaks the language of consumer welfare. It insists that “Apple offers consumers the choice of a platform protected from malicious and dangerous code. The bills eliminate that choice.” This point goes to the more traditional conception of consumer welfare in the antitrust context, i.e., the proliferation of choice available to consumers. But primarily, the argument Apple is making (however disingenuously) is that the bills “should be modified to strengthen—not weaken—consumer welfare, especially with regard to consumer protection in the areas of privacy and security.”

By focusing on “privacy and security” as a metric of consumer welfare in the antitrust context, Apple, legislators, and the general public are engaging in a conversation that ultimately expands the notion of consumer welfare beyond what would be borne out in a courtroom, constrained by entrenched antitrust precedent. In this way, the bills have already been productive. 

“Grounded”: Amazon’s Delayed Promise of Aerial Package Delivery

By: Justin Cooper

In late 2013, Amazon CEO Jeff Bezos made a surprise announcement on a segment of 60 Minutes: Amazon was developing small aerial drones capable of delivering packages directly to customers’ doorsteps. He stated that the drones would make speedy thirty-minute deliveries from Amazon fulfillment centers, would have a range of over ten miles, and could carry packages weighing up to five pounds. At the time, he claimed that widespread use of the drones was at least four to five years away. Nine years later, however, “Amazon Prime Air” is still grounded, largely because the rollout has faced multiple technical challenges that continue to push back the program’s launch. Although clearing FAA regulatory hurdles in 2020 briefly kindled hope that the program was back on track, concerns about the privacy and safety of Amazon Prime Air, coupled with the possibility of state and municipal challenges to the rollout, could keep Amazon’s delivery drones grounded well into the future.

During the first few years after Bezos’ announcement, research and development of Amazon Prime Air seemed to be moving at a steady pace. In 2015, however, the program hit its first snag when the Federal Aviation Administration (FAA), which establishes airworthiness criteria to ensure the safe operation of aircraft under 49 U.S.C. §§ 44701(a) and 44704, published its widely anticipated rules governing unmanned aircraft systems. Notably, the FAA refused to green-light the use of drones for commercial delivery. Amazon responded with a letter to the FAA “threatening to test the drones abroad if the FAA continued to refuse to let it test the machines outdoors in the United States.” The FAA consequently granted Amazon the ability to conduct limited domestic testing, requiring that drone test flights take place under 400 feet and remain in sight of the pilot and an observer. Meanwhile, Amazon continued developing its drones in the United Kingdom, celebrating its first successful commercial delivery there in 2016. Amazon Prime Air’s United Kingdom operation seemed to be advancing even more quickly when “UK regulators…fast-tracked approvals for drone testing.” This fast-tracking “made the country an ideal testbed for drone flights and paved the way for Amazon to gain regulatory approval elsewhere.” Behind the scenes, however, the program was dealing with major problems, including staff layoffs and redundancies, as well as reports of mismanagement and even of employee drunkenness on the job.

Back in the United States, Amazon Prime Air was making slow progress during all of this. In 2019, Amazon petitioned the FAA to allow wide-scale testing of its drones, and a year later the company announced it had received FAA approval to begin testing commercial deliveries. Despite this victory, Amazon Prime Air has continued to face significant issues that cast doubt on the program’s safety: a recent investigative report by Bloomberg News revealed multiple Amazon drone crashes, as well as accounts of a management culture more focused on speed than safety.

This focus on speed likely stems from the fact that Amazon has fallen behind its rivals in the drone delivery space. In August 2021, Alphabet Inc.’s program, Wing, announced that it had successfully made its hundred-thousandth delivery in Australia. Wing’s drone deliveries are also automated, “but monitored by pilots who function more as air traffic controllers.” A notable difference from Amazon’s drones is that Wing packages “are dropped in front of homes using a winch,” while Amazon’s drones land to deliver their packages. In addition to Wing, UPS has also successfully tested delivery drones in innovative ways. For example, UPS has tested launching drones from its delivery trucks, which allows a delivery driver to cover large rural areas much more efficiently.

Aside from the technical and production challenges that have slowed its rollout, Amazon Prime Air will likely face continued challenges stemming from significant privacy concerns. According to CNBC, “detecting telephone wires, people, property and even small animals on the ground all require careful sensing and collision avoidance systems.” In addition to the multiple cameras needed to navigate these obstacles, Amazon “is investing heavily in artificial intelligence to help drones navigate safely to their destinations, and drop off packages safely.” The possibility of fleets of AI-automated drones equipped with precision cameras surveilling American cities, a scene seemingly pulled from a dystopian science fiction novel, could quickly become a concerning reality.

Beyond privacy concerns, Amazon Prime Air will likely have to contend with major safety concerns. Accidents caused by human-piloted drones have already led to multiple legal disputes. For example, in 2017, “[t]he owner of an aerial photography business was sentenced to 30 days in jail and a $500 fine after a drone he was operating crashed into people during a 2015 parade and knocked one woman unconscious. Paul Skinner, 38, was found guilty of reckless endangerment by Judge Willie Gregory of the Seattle Municipal Court.” In the case of piloted drones, victims can bring suit against the human operator; the widespread use of automated drones, in contrast, raises difficult questions about the increased risk of personal injury and how to apportion blame. Last month, questions about the safety of Amazon’s ground-based “autonomous personal delivery devices,” known as Amazon Scout, led the city of Kirkland, Washington to place a temporary moratorium on their continued use. As Amazon Prime Air moves towards wide-scale implementation, it will likely face similar slowdowns and pushback from state and local governments.

Despite these setbacks, Amazon has not faltered in its commitment to Amazon Prime Air. The promise of faster, more efficient shipping will very likely continue to outweigh the challenges facing aerial delivery drones, as evidenced by Amazon’s continued investment in the program and by Alphabet Inc.’s and UPS’s already operational delivery drone programs. However, the technical challenges and social concerns surrounding these programs will likely continue to delay their full-scale rollout, “grounding” Amazon Prime Air for at least a little bit longer.

Lawmakers Set Their Sights on Restricting Targeted Advertising

By: Laura Ames

Anyone who spends time online has encountered “surveillance advertising.” You enter something into your search engine and immediately encounter ads for related products on other sites. Targeted advertising shows individual consumers particular ads based on inferences drawn from their interests, demographics, or other characteristics. The notion itself might not seem particularly harmful, but these data are accrued by tracking users’ activities online. Ad tech companies identify the internet-connected devices that consumers use to search, make purchases, use social media, watch videos, and otherwise interact with the digital world. Such companies then compile these data into user profiles, match the profiles with ads, and place the ads where consumers will view them. In addition to basic privacy concerns, the Consumer Federation of America (CFA) points to the potential for companies to hide personalized pricing from consumers, promote unhealthy products, and perpetuate fraud. Perhaps the largest concern is that the large stores of personal data these companies maintain put consumers at risk of privacy invasion, identity theft, and malicious tracking.
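As a rough sketch of the pipeline just described (tracked events compiled into a profile, the profile matched to ads, the ads placed), consider the following toy example. Every category, event, and ad in it is hypothetical, and real ad tech systems are vastly more elaborate.

```python
# Hypothetical, greatly simplified sketch of the ad-tech pipeline:
# browsing events are compiled into an interest profile, and ads are
# then matched against that profile.
from collections import Counter

def build_profile(events):
    """Compile tracked events into a per-user interest profile."""
    profile = Counter()
    for event in events:
        profile[event["category"]] += 1
    return profile

def match_ads(profile, ads):
    """Rank interest categories by frequency and pick matching ads."""
    ranked = [category for category, _ in profile.most_common()]
    return [ads[category] for category in ranked if category in ads]

events = [  # events harvested from searches, purchases, video views...
    {"category": "running_shoes"},
    {"category": "running_shoes"},
    {"category": "headphones"},
]
ads = {"running_shoes": "Ad: 20% off trail runners",
       "headphones": "Ad: noise-cancelling earbuds"}

print(match_ads(build_profile(events), ads))
# -> ['Ad: 20% off trail runners', 'Ad: noise-cancelling earbuds']
```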

In response to these concerns, Democratic lawmakers unveiled the Banning Surveillance Advertising Act (BSSA), an attempt to restrict the practice grounded in a general consensus among its sponsors that surveillance advertising threatens individual users as well as society at large. The move prompted opponents to argue that the BSSA is overly broad and will harm users, small businesses, and large tech companies alike.

What Does the BSSA Do? 

The BSSA is sponsored by Senator Cory Booker and Representatives Jan Schakowsky and Anna Eshoo. The bill bars digital advertisers from targeting ads to users, and specifically prohibits targeting based on protected characteristics like race, gender, and religion, or on personal data purchased from data brokers. According to Senator Booker, surveillance advertising is “a predatory and invasive practice,” and the resulting hoarding of data not only “abuses privacy, but also drives the spread of misinformation, domestic extremism, racial division, and violence.”

The BSSA is broad, but it provides several exceptions. Notably, it allows location-based targeting and contextual advertising, which occurs when companies match ads to the content of a particular site. The bill delegates enforcement power to the FTC and state attorneys general. It also allows private citizens to bring civil actions against companies that violate the ban, with monetary penalties of up to $1,000 for negligent violations and up to $5,000 for “reckless, knowing, willful, or intentional” violations. The BSSA has support from many public organizations and a number of professors and academics. Among the tech companies supporting the BSSA is the privacy-focused search engine DuckDuckGo. Its CEO, Gabriel Weinberg, opined that targeted ads are “dangerous to society” and pointed to DuckDuckGo as evidence that “you can run a successful and profitable ad-based business without building profiles on people.”

The BSSA as Part of a Larger Legislative Agenda 

The BSSA is just one of a number of pieces of legislation aiming to restrict the power of large tech companies. Lawmakers have grown increasingly focused on bills regulating social media companies since Facebook whistleblower Frances Haugen testified before Congress in 2021. These bills target a wide variety of topics, including antitrust, privacy, child protection, misinformation, and cryptocurrency regulation. Most of these bills appear to be long shots, however, because although the Biden administration supports tech industry reform, many other issues rank higher on its agenda. Despite this hurdle, lawmakers are making a concerted push now, before the legislature’s attention turns to the 2022 midterms. Additionally, Democrats, who have broader support for tech regulations, worry they could lose control of Congress. Senator Amy Klobuchar argued that once fall comes, “it will be very difficult to get things done because everything is about the election.”

Tech and Marketing Companies Push Back

In general, tech companies argue that targeted advertising benefits consumers and businesses alike. First, companies argue that the method lets users see ads that are directly relevant to their needs or interests. Experts rebut this theory by pointing out that, in order to provide these relevant ads, tech companies must collect and store a great deal of user data, which puts that data at risk of exposure to third parties. Companies also argue that the legislation would drastically change their business models. Marketing and global media platform The Drum predicted that the BSSA “could have a massive impact on the ad industry as well as harm small businesses.” The Interactive Advertising Bureau (IAB), which includes over 700 brands, agencies, media firms, and tech companies, issued a statement strongly condemning the BSSA. IAB CEO David Cohen argued that the BSSA would “effectively eliminate internet advertising… jeopardizing an estimated 17 million jobs primarily at small- and medium-sized businesses.” The IAB and others argue that targeted advertising is a cost-effective way to reach particular users precisely. However, the CFA points to evidence that contextual advertising, which is allowed under the BSSA, is more cost-effective for advertisers and provides greater revenue for publishers.

Likelihood of the BSSA’s Success

In the past several years, there has been growing bipartisan support for bills addressing the increasing power of tech companies. That support would seem to suggest these pieces of tech legislation have a better chance of advancing than other, more controversial proposals. Yet even with this broader support, dozens of bills addressing tech industry power have failed recently, leaving America behind a number of other countries in this area. One major problem impeding bipartisan progress is that while both parties tend to agree that Congress needs to address the tremendous power tech companies hold, they do not align on the methods the government should use. For example, Democrats have called for measures that would compel companies to remove misinformation and other harmful content, while Republicans are largely concerned with laws barring companies from censoring or removing content. According to Rebecca Allensworth, a professor at Vanderbilt Law School, the larger issue is that ultimately, “regulation is regulation, so you will have a hard time bringing a lot of Republicans on board for a bill viewed as a heavy-handed aggressive takedown of Big Tech.” Given Congress’ recent track record in moving major pieces of legislation, and powerful opposition from the ad tech industry, the BSSA might well be abandoned along with other recent technology legislation.

Your Employer Can Monitor You While You Work From Home—Should They?

By: Joshua Waugh

Since “pandemic life” began, as many as 40% of American workers have worked from home. If you’ve been lucky enough to trade the crowded bus or the gridlocked highway for the shorter bedroom-to-laptop commute, chances are you’ve wondered just how closely your employer is watching you. The truth is that telework, for all its benefits, also has a major downside: near-limitless opportunity for high-tech surveillance. And while it is clear that employers have both the legal authority and the technology to monitor their employees, it’s far less clear that employee surveillance is a good idea at all.

Can my employer really monitor me?

It is no secret that American privacy and technology laws are often lacking. At the federal level, the primary law dealing with electronic privacy is the Electronic Communications Privacy Act (ECPA), passed in 1986. The law is so old that Title I of the Act contemplates only a third party’s “interception” of a “wire, oral, or electronic communication”; it doesn’t address the possibility of accessing stored communications, such as email, after transmission.

Furthermore, Title I of the ECPA has been interpreted to include a carveout specifically allowing employers to monitor employees as long as the employer can show a legitimate business purpose. The ECPA also permits employers to electronically surveil employees with their consent, which, given the often imbalanced employee-employer power dynamic, is not great for the ordinary employee.

Title II of the ECPA, or the Stored Communications Act (SCA), provides more protection to employees, though the law is still just as dated as Title I. Under the SCA it is fairly well established that your employer can’t log in to your personal email without your permission. So rest assured, your employer cannot see the thousands of unread advertising emails in your inbox unless you give them access.

All of that said, there is not much legislation on electronic privacy at the federal level. That may seem surprising considering we’ve seen privacy controversy after privacy controversy from practically every big tech company in recent years, but electronic privacy regulation seems to be generally left to the states. The end result is that only Californians (and to a lesser extent Coloradans and Virginians) enjoy broad statutory protections against electronic employer surveillance. In most of the other states, as long as you are using an employer’s device or network, your employer may surveil you as much as they’d like. And surveillance software is readily available, including keyloggers that record every keystroke you make, activity monitors, and even software that records every website or app you access on the device. In fact, if your workplace is using the Microsoft Office 365 Suite, your employer is already able to monitor and analyze your work activity.
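To give a sense of how little sophistication basic monitoring requires, here is a toy sketch of what an activity monitor might do with the events it records. The log format and field meanings are invented for illustration; commercial tools capture far more.

```python
# Hypothetical sketch of how an activity monitor might summarize the
# raw events it records (here, app-focus changes) into a per-app
# "active time" report. The log format is invented for illustration.
from collections import defaultdict
from datetime import datetime

log = [  # (timestamp, application in focus)
    ("2022-03-01T09:00:00", "Outlook"),
    ("2022-03-01T09:25:00", "Excel"),
    ("2022-03-01T10:40:00", "Browser"),
    ("2022-03-01T11:00:00", "Outlook"),
]

def active_minutes(entries):
    """Attribute the gap between consecutive events to the focused app."""
    totals = defaultdict(float)
    for (t1, app), (t2, _) in zip(entries, entries[1:]):
        delta = datetime.fromisoformat(t2) - datetime.fromisoformat(t1)
        totals[app] += delta.total_seconds() / 60
    return dict(totals)

print(active_minutes(log))
# -> {'Outlook': 25.0, 'Excel': 75.0, 'Browser': 20.0}
```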

Where do we go from here?

If you’re concerned about your general lack of privacy rights in America, you are not alone. Researchers have published studies showing that extensive employer surveillance can breed distrust among employees and can significantly hinder worker productivity and other positive performance outcomes. The feelings of distrust are even stronger when employees discover they were being surveilled without their knowledge.

Despite evidence suggesting employee surveillance may have negative effects, surveys show that 62% of executives planned to use monitoring software in 2019, and that number is certain to have grown during the pandemic work-from-home era. Meanwhile, we’re also in the midst of a radical transformation of the labor force: the U.S. Bureau of Labor Statistics reported that 2.9% of the entire U.S. workforce, 4.3 million people, quit their jobs in August 2021. By all appearances, the Great Resignation is accelerating, as 4.4 million workers went on to quit during September 2021, topping August’s record numbers.

At a time when people are rethinking their relationship with work, struggling with burnout, and dealing with burdensome household issues such as child- and elder-care, employers should spend less time secretly surveilling their employees and instead put effort into employee engagement. Engagement is essentially the opposite of paranoid surveillance: companies engage their workers by providing flexibility and building trust. Employee engagement is more likely than surveillance to boost productivity and, more importantly in today’s climate, has been shown to increase employee retention. Ultimately, under current U.S. law, your employer can surveil you to its heart’s content in most states. But you can also resign if you feel your privacy rights have not been respected. As more of the labor force decides to do so, we’ll just have to wait and see how legislators respond.