Navigating the Dark Forest: Data Breach in the Post-Information Age

By: Charles Simon

In 1984, the credit histories of ninety million people were exposed by the theft of a numerical passcode. The code was meant to be dialed through a “teletype credit terminal” located in a Sears department store. The stolen password was posted to an online bulletin board, where it sat for “at least a month” before the security breach was even noticed. The New York Times helpfully informed readers that such bulletin boards were “computer file[s] accessible to subscribers by phone.” How did the anonymous hacker crack the code? Well, the password had been handwritten on a notepad and left in a public space by a Sears employee who found the digits too troublesome to memorize.

Interestingly, while a legal commentator from the ABA had theories about the likely legal harms to consumers and the possible liability faced by the credit reporting agency, simply obtaining unauthorized access to a confidential information system was not yet a crime in its own right. Legal recourse against the hacker, had they ever been caught, would have been uncertain given that no mail-order purchases were shown to have used consumer data from the Sears/TRW breach. Two years later, Congress would amend existing law to create the Computer Fraud and Abuse Act of 1986, formalizing the legal harm of cybersecurity breaches, but during this period hacking was generally still considered a hobbyist’s prank.

We’ve come a long way since that time. In 2020, a study funded by IBM Security estimated that the “average cost” of a data breach was $3.86 million. That number is inflated by the largest breaches, but limiting our inquiry to ‘just’ the $178,000 average for breaches at small- and medium-sized companies shows that even smaller hacks can cripple a business. Breaches today can also have serious physical consequences, such as the loss of the industrial controls that govern power grids and automated factories. The healthcare system’s volumes of sensitive patient information make hospitals, insurance providers, and industry non-profits extremely attractive targets. Law firms are prime targets as well, with sensitive client personal information and litigation documents making for a lucrative prize.

Since 2015, Washington state’s data breach notification laws have required businesses, individuals, and public agencies to notify any resident who is “at risk of harm” because of a breach of personal information. Requiring notice to customers or citizens affected by an organization’s data breach is broadly accepted among the states, but as with other privacy-related rights in the US legal system, the history of vindicating plaintiffs’ rights under such laws is patchy.

The ruling on a motion to dismiss in a breach of the Target corporate customer database shows a shift in attitudes towards recognizing concrete harms. A broad class of plaintiffs from across the US drew on a patchwork of state notice laws—some of them lacking direct consumer protection provisions or private rights of action—to argue that Target’s failure to provide prompt notice of the theft of financial data caused harm. What might once have been considered shaky legal ground for a consumer class action claim proved stable enough for a Minnesota federal court to reject the motion to dismiss. The resulting settlement with 47 state attorneys general was a record-setting milestone in cybersecurity business liability.

Prompt notice to those affected by a data breach, however, is not enough on its own. Many modern statutes now impose standards of care for data security, and may soon begin standardizing other features such as retention and collection limitations (perhaps taking cues from the EU’s General Data Protection Regulation). Legal scrutiny is certain to intensify as the financial harms to citizens—and the less tangible harms to their increasingly online lives—mount. The proliferation of cyber liability insurance indicates that many businesses see litigation in this field as inevitable, which is sure to drive development of the law. In this environment, public and private sector lawyers across a broad array of fields must be cognizant of the legal harms that can arise, their organization’s recourses, and the state and federal law under which they operate.

Two New Antitrust Bills Could Increase App Store Competition and Spark Discussion of Privacy and Security as Consumer Welfare Metrics

By: Zoe Wood

In the first quarter of 2022, Apple beat its own record for quarterly spending on lobbying ($2.5 million). What’s the occasion? Two new antitrust bills which threaten Apple’s dominance over its App Store are gaining ground in Congress.

What Bills? 

In late January, the Senate Judiciary Committee voted to advance the American Innovation and Choice Online Act by a vote of 16 to 6. Just a few weeks later, the Committee advanced the Open App Markets Act by a vote of 20 to 2. 

The bills are similar; however, the former has more sweeping coverage. It applies to all “online platforms” with 50,000,000 or more monthly active US-based individual users or 100,000 or more monthly active US-based business users that (1) enable content generation and content viewing and interaction (e.g., Instagram, Twitter, Spotify), (2) facilitate online advertising or sales of products or services of any sort (e.g., Amazon), or (3) enable searches that “access or display a large volume of information” (e.g., Google). The bill describes ten categories of prohibited conduct, all aimed at curbing covered platforms’ preferential treatment of their own products or services over other products on the platform.

For example, the Act would prohibit “covered platforms” from “limit[ing] the ability of the products, services, or lines of business of another business user to compete on the covered platform relative to the products, services, or lines of business of the covered platform operator in a manner that would materially harm competition.” 

The latter act, the Open App Markets Act, would in contrast apply to “any person that owns or controls an app store” with over 50,000,000 US-based users. It proceeds by identifying and defining app store behaviors that are purportedly anticompetitive. For example, the Act would prohibit an app store from conditioning distribution of an app on the developer’s use of the store’s own payment system for in-app payments. The Act would also prohibit app stores from requiring developers to offer pricing terms equal to or more favorable than those on other app stores, and from punishing developers that offer better terms elsewhere. Similar to the Innovation and Choice Online Act, the Open App Markets Act prohibits covered app stores from giving preferential treatment to their own products in the app store search function.

Why Does Apple Oppose These Bills (Aside from the Obvious)? 

While the obvious answer (the bills would diminish Apple’s dominance and therefore diminish its profit) is probably also correct, Apple has put forward a different reason for its opposition to the acts. In a January 18th letter addressed to Senators Durbin, Grassley, Klobuchar, and Lee, and signed by Apple’s Senior Director of Government Affairs Timothy Powderly, Apple expressed concern that “[t]hese bills will reward those who have been irresponsible with users’ data and empower bad actors who would target consumers with malware, ransomware, and scams.”

The bills create an exception for otherwise prohibited actions which are “reasonably necessary” to protect safety, user privacy, security of nonpublic data, or the security of the covered platform. Apple’s letter principally takes issue with this exception, finding that it does not provide the company with enough leeway to innovate around privacy and security. The letter complains that “to introduce new and enhanced privacy or security protections under the bills, Apple would have to prove the protections were ‘necessary,’ ‘narrowly tailored,’ and that no less restrictive protections were available.” According to the letter, “[t]his is a nearly insurmountable test, especially when applied after-the-fact as an affirmative defense.” Of course, this is an overly broad statement. The bills don’t subject all new privacy and security measures to this standard. Only the measures that are anticompetitive in the ways specifically spelled out by the bills are implicated.

So what privacy and security measures would the bills prohibit? The letter is most concerned with the fact that the bills would restrain Apple from prohibiting “sideloading.” Sideloading refers to downloading an application onto, in this case, an Apple device, from somewhere other than the App Store. Lifting Apple’s restriction on the practice would allow developers to implement their own in-app payment systems and avoid the commission Apple takes (up to 30%) from app sales and in-app subscriptions and purchases. The theory is that prohibiting sideloading is anticompetitive in part because it results in higher prices for consumers. 

But Apple says that allowing sideloading would “put consumers in harm’s way because of the real risk of privacy and security breaches” sideloading causes. The letter further explains that sideloading allows developers to “circumvent […] the privacy and security protections Apple has designed, including human review of every app and every app update.”

Are Apple’s Security Concerns Shared by All?

No. Privacy and security expert Bruce Schneier, who sits on the board of the Electronic Frontier Foundation and runs the security architecture at a data management company, wrote a rebuttal to Apple’s letter. According to Schneier, “[i]t’s simply not true that this legislation puts user privacy and security at risk” because “App store monopolies cannot protect users from every risk, and they frequently prevent the distribution of important tools that actually enhance security.” Schneier thinks that “the alleged risks of third-party app stores and ‘sideloading’ apps pale in comparison to their benefits,” among them “encourag[ing] competition, prevent[ing] monopolist extortion, and guarantee[ing] users a new right to digital self-determination.”

Matt Stoller, who is the Director of Research at the American Economic Liberties Project, also wrote a strongly worded rebuttal. Like Schneier, Stoller seems to believe that Apple’s security-centric opposition to the bills is disingenuous.

A New Angle on Consumer Welfare

Regardless of whether Apple’s concerns about privacy and security are overblown, the exchange between Apple, the drafters of the new antitrust bills, and members of the public is interesting because it engages with “consumer welfare,” the entrenched legal standard that drives antitrust law, in an atypical way.

Antitrust law exists primarily in common law, and the common law is the origin of the all-important consumer welfare standard. The standard is simple and has remained consistent since a seminal case from 1977. It is concerned primarily with whether a particular practice tends to decrease output and/or causes prices to increase for consumers. If it does, the practice is anticompetitive and subject to injunction. While antitrust parties occasionally introduce other aspects of consumer welfare, such as a challenged practice’s effects on innovation, such effects are extremely difficult to prove in court. Therefore, most antitrust cases turn on price and output.

The bills in question implicitly take issue with the consumer welfare standard because they, in the language of the American Innovation and Choice Online Act, “provide that certain discriminatory conduct by covered platforms shall be unlawful.” Similarly, the Open App Markets Act seeks to “promote competition and reduce gatekeeper power in the app economy, increase choice, improve quality, and reduce costs for consumers.” By defining and prohibiting specific conduct outright, the bills circumvent the consumer welfare standard’s narrow focus on price and output and save potential antitrust plaintiffs from having to prove in court that Apple’s practices decrease output or increase price. 

Apple’s letter speaks the language of consumer welfare. It insists that “Apple offers consumers the choice of a platform protected from malicious and dangerous code. The bills eliminate that choice.” This point goes to the more traditional conception of consumer welfare in the antitrust context, i.e., the proliferation of choice available to consumers. But primarily, the argument that Apple is making (however disingenuously) is that the bills “should be modified to strengthen–not weaken–consumer welfare, especially with regard to consumer protection in the areas of privacy and security.”

By focusing on “privacy and security” as a metric of consumer welfare in the antitrust context, Apple, legislators, and the general public are engaging in a conversation that ultimately expands the notion of consumer welfare beyond what would be borne out in a courtroom, constrained by entrenched antitrust precedent. In this way, the bills have already been productive. 

Lawmakers Set Their Sights on Restricting Targeted Advertising

By: Laura Ames

Anyone who spends time online has encountered “surveillance advertising.” You enter something into your search engine, and immediately encounter ads for related products on other sites. Targeted advertising shows individual consumers certain ads based on inferences drawn from their interests, demographics, or other characteristics. This notion itself might not seem particularly harmful, but these data are accrued by tracking users’ activities online. Ad tech companies identify the internet-connected devices that consumers use to search, make purchases, use social media, watch videos, and otherwise interact with the digital world. Such companies then compile these data into user profiles, match the profiles with ads, and place the ads where consumers will view them. In addition to basic privacy concerns, the Consumer Federation of America (CFA) points to the potential for companies to hide personalized pricing from consumers, promote unhealthy products, and perpetuate fraud. Perhaps the largest concern is that the large stores of personal data these companies maintain put consumers at risk of privacy invasion, identity theft, and malicious tracking.

In response to these concerns, Democratic lawmakers unveiled the Banning Surveillance Advertising Act (BSSA), which would restrict the practice, reflecting a general consensus among its sponsors that surveillance advertising is a threat to individual users as well as society at large. The move prompted opponents to argue that the BSSA is overly broad and will harm users, small businesses, and large tech companies alike.

What Does the BSSA Do? 

The BSSA is sponsored by Senator Cory Booker and Representatives Jan Schakowsky and Anna Eshoo. The bill bars digital advertisers from targeting their ads to individual users, and specifically prohibits targeting based on protected class information like race, gender, or religion, or on personal data purchased from data brokers. According to Senator Booker, surveillance advertising is “a predatory and invasive practice,” and the resulting hoarding of data not only “abuses privacy, but also drives the spread of misinformation, domestic extremism, racial division, and violence.”

The BSSA is broad, but it does provide several exceptions. Notably, it allows location-based targeting and contextual advertising, which occurs when companies match ads to the content of a particular site. The bill delegates enforcement power to the FTC and state attorneys general. It also allows private citizens to bring civil actions against companies that violate the ban, with monetary penalties of up to $1,000 for negligent violations and up to $5,000 for “reckless, knowing, willful, or intentional” violations. The BSSA has support from many public interest organizations and a number of professors and academics. Among the tech companies supporting the BSSA is the privacy-focused search engine DuckDuckGo. Its CEO, Gabriel Weinberg, opined that targeted ads are “dangerous to society” and pointed to DuckDuckGo as evidence that “you can run a successful and profitable ad-based business without building profiles on people.”

The BSSA as Part of a Larger Legislative Agenda 

The BSSA is just one bill among a number of pieces of legislation aiming to restrict the power of large tech companies. Lawmakers have grown increasingly focused on bills regulating social media companies since Facebook whistleblower Frances Haugen testified before Congress in 2021. These bills target a wide variety of topics including antitrust, privacy, child protection, misinformation, and cryptocurrency regulation. Most of these bills appear to be long shots, however: although the Biden administration supports tech industry reform, many other issues rank higher on its agenda. Despite this hurdle, lawmakers are making a concerted push on these tech bills now because the legislature’s attention will soon turn to the 2022 midterms. Additionally, Democrats, whose members more broadly support tech regulation, worry they could lose control of Congress. Senator Amy Klobuchar argued that once fall comes, “it will be very difficult to get things done because everything is about the election.”

Tech and Marketing Companies Push Back

In general, tech companies tend to argue that targeted advertising benefits consumers and businesses alike. First, companies argue that this method allows users to see ads that are directly relevant to their needs or interests. Experts counter that in order to provide these relevant ads, tech companies must collect and store a great deal of data on users, which puts that data at risk of access by third parties. Companies also argue that this legislation would drastically change their business models. Marketing and global media platform The Drum predicted that the BSSA “could have a massive impact on the ad industry as well as harm small businesses.” The Interactive Advertising Bureau (IAB), which includes over 700 brands, agencies, media firms, and tech companies, issued a statement strongly condemning the BSSA. IAB CEO David Cohen argued that the BSSA would “effectively eliminate internet advertising… jeopardizing an estimated 17 million jobs primarily at small- and medium-sized businesses.” The IAB and others argue that targeted advertising is a cost-effective way to advertise precisely to particular users. However, the CFA points to evidence that contextual advertising, which is allowed under the BSSA, is more cost-effective for advertisers and provides greater revenue for publishers.

Likelihood of the BSSA’s Success

In the past several years, there has been growing bipartisan support for bills addressing the increasing power of tech companies. This support would seem to suggest that these pieces of tech legislation have a better chance of advancing than other more controversial legislation. However, even with this broader support, dozens of bills addressing tech industry power have failed recently, leaving America behind a number of other countries in this area. One of the major problems impeding bipartisan progress is that while both parties tend to agree that Congress needs to address the tremendous power that tech companies have, they do not align on the methods the government should use to address the problem. For example, Democrats have called for measures that would compel companies to remove misinformation and other harmful content while Republicans are largely concerned with laws barring companies from censoring or removing content. According to Rebecca Allensworth, a professor at Vanderbilt Law School, the larger issue is that ultimately, “regulation is regulation, so you will have a hard time bringing a lot of Republicans on board for a bill viewed as a heavy-handed aggressive takedown of Big Tech.” Given Congress’ recent track record in moving major pieces of legislation, and powerful opposition from the ad tech industry, the BSSA might be abandoned along with other recent technology legislation.  

They Are Listening and It CAN Come Back to Haunt You

Amazon Echo

By: Tyler Quillin

 

How many smart devices with voice-activation capabilities surround you at any given moment? How many times have you wondered whether they are listening to everything you say, just waiting for the word “Alexa” to wake them up from their idle eavesdropping? Well, some of those questions may soon be answered by a court in Arkansas.

In late 2016, the Bentonville Police Department in Arkansas obtained a search warrant for recordings produced by Amazon’s “Echo” device in connection with a bathtub murder. The Echo is aptly described as an “always on” device. It continuously listens, waiting to hear the term “Alexa,” which “wakes” it up. Once awoken, Alexa will perform various tasks upon verbal request. She does everything from checking the weather or traffic, to answering trivia, to playing music through a Bluetooth connection.


Slippery Slope for Online Service Providers with New California Appellate Court Ruling

By: Tyler Quillin

The most important law governing the internet, the Communications Decency Act (CDA), had its 20th birthday earlier this year. Signed by President Bill Clinton in 1996, the CDA grants online service providers immunity from liability for most illegal activities of their users. What’s more, the CDA not only allows large internet-based companies like Facebook, Amazon, and Yelp! to survive because they don’t have to individually police each user’s activity; it also enables a large portion of the freedom of speech the general public enjoys online daily.

Yet, despite 20 years of precedent, the CDA has come under scrutiny. Most notably, a California appellate court issued a ruling ordering Yelp!, a nonparty to the case, to take down a defamatory post in a suit in which an attorney sued a former client for posting defamatory comments and reviews on Yelp!. Along with the court order to take down the reviews, the attorney won a default judgment to the tune of over $500,000.
