Two New Antitrust Bills Could Increase App Store Competition and Spark Discussion of Privacy and Security as Consumer Welfare Metrics

By: Zoe Wood

In the first quarter of 2022, Apple beat its own record for quarterly spending on lobbying ($2.5 million). What’s the occasion? Two new antitrust bills which threaten Apple’s dominance over its App Store are gaining ground in Congress.

What Bills? 

In late January, the Senate Judiciary Committee voted to advance the American Innovation and Choice Online Act by a vote of 16 to 6. Just a few weeks later, the Committee advanced the Open App Markets Act by a vote of 20 to 2. 

The bills are similar; however, the former has broader coverage. It applies to all “online platforms” with 50,000,000 or more monthly active US-based individual users or 100,000 monthly active US-based business users that (1) enable content generation and content viewing and interaction (e.g., Instagram, Twitter, Spotify), (2) facilitate online advertising or sales of products or services of any sort (e.g., Amazon), or (3) enable searches that “access or display a large volume of information” (e.g., Google). The bill describes ten categories of prohibited conduct, all aimed at curbing covered platforms’ preferential treatment of their own products or services over other products on the platform.

For example, the Act would prohibit “covered platforms” from “limit[ing] the ability of the products, services, or lines of business of another business user to compete on the covered platform relative to the products, services, or lines of business of the covered platform operator in a manner that would materially harm competition.” 

The Open App Markets Act, by contrast, would apply to “any person that owns or controls an app store” with over 50,000,000 US-based users. It proceeds by identifying and defining app store behaviors that are purportedly anticompetitive. For example, the Act would prohibit an app store from conditioning distribution of an app on its use of store-controlled payment systems as the in-app payment system. The Act would also prohibit app stores from requiring developers to offer apps on pricing terms equal to or more favorable than those on other app stores, and from punishing a developer for doing so. Similar to the Innovation and Choice Online Act, the Open App Markets Act prohibits covered app stores from giving preferential treatment to their own products in the app store search function.

Why Does Apple Oppose These Bills (Aside from the Obvious)? 

While the obvious answer (the bills would diminish Apple’s dominance and therefore diminish its profit) is probably also correct, Apple has put forward a different reason for its opposition to the acts. In a January 18th letter addressed to Senators Durbin, Grassley, Klobuchar, and Lee, and signed by Apple’s Senior Director of Government Affairs Timothy Powderly, Apple expressed concern that “[t]hese bills will reward those who have been irresponsible with users’ data and empower bad actors who would target consumers with malware, ransomware, and scams.”

The bills create an exception for otherwise prohibited actions which are “reasonably necessary” to protect safety, user privacy, security of nonpublic data, or the security of the covered platform. Apple’s letter principally takes issue with this exception, arguing that it does not give the company enough leeway to innovate around privacy and security. The letter complains that “to introduce new and enhanced privacy or security protections under the bills, Apple would have to prove the protections were ‘necessary,’ ‘narrowly tailored,’ and that no less restrictive protections were available.” According to the letter, “[t]his is a nearly insurmountable test, especially when applied after-the-fact as an affirmative defense.” This is, of course, an overly broad claim. The bills don’t subject all new privacy and security measures to this standard; only measures that are anticompetitive in the ways specifically spelled out by the bills are implicated.

So what privacy and security measures would the bills prohibit? The letter is most concerned with the fact that the bills would restrain Apple from prohibiting “sideloading.” Sideloading refers to downloading an application onto, in this case, an Apple device, from somewhere other than the App Store. Lifting Apple’s restriction on the practice would allow developers to implement their own in-app payment systems and avoid the commission Apple takes (up to 30%) from app sales and in-app subscriptions and purchases. The theory is that prohibiting sideloading is anticompetitive in part because it results in higher prices for consumers. 

But Apple says that allowing sideloading would “put consumers in harm’s way because of the real risk of privacy and security breaches” sideloading causes. The letter further explains that sideloading allows developers to “circumvent[…] the privacy and security protections Apple has designed, including human review of every app and every app update.”

Are Apple’s Security Concerns Shared by All?

No. Privacy and security expert Bruce Schneier, who sits on the board of the Electronic Frontier Foundation and runs the security architecture at a data management company, wrote a rebuttal to Apple’s letter. According to Schneier, “[i]t’s simply not true that this legislation puts user privacy and security at risk” because “App store monopolies cannot protect users from every risk, and they frequently prevent the distribution of important tools that actually enhance security.” Schneier thinks that “the alleged risks of third-party app stores and ‘sideloading’ apps pale in comparison to their benefits,” among them “encourag[ing] competition, prevent[ing] monopolist extortion, and guarantee[ing] users a new right to digital self-determination.”

Matt Stoller, who is the Director of Research at the American Economic Liberties Project, also wrote a strongly worded rebuttal. Like Schneier, Stoller seems to believe that Apple’s security-centric opposition to the bills is disingenuous.

A New Angle on Consumer Welfare

Regardless of whether Apple’s concerns about privacy and security are overblown, the exchange among Apple, the drafters of the new antitrust bills, and members of the public is interesting because it engages with “consumer welfare,” the entrenched legal standard that drives antitrust law, in an atypical way.

Antitrust law exists primarily in common law, and the common law is the origin of the all-important consumer welfare standard. The standard is simple and has remained consistent since a seminal case from 1977. It asks primarily whether a particular practice tends to decrease output or increase prices for consumers. If it does, the practice is anticompetitive and subject to injunction. While antitrust parties occasionally introduce other aspects of consumer welfare, such as a challenged practice’s effects on innovation, those effects are extremely difficult to prove in court. Most antitrust cases therefore turn on price and output.

The bills in question implicitly take issue with the consumer welfare standard because they, in the language of the American Innovation and Choice Online Act, “provide that certain discriminatory conduct by covered platforms shall be unlawful.” Similarly, the Open App Markets Act seeks to “promote competition and reduce gatekeeper power in the app economy, increase choice, improve quality, and reduce costs for consumers.” By defining and prohibiting specific conduct outright, the bills circumvent the consumer welfare standard’s narrow focus on price and output and save potential antitrust plaintiffs from having to prove in court that Apple’s practices decrease output or increase price. 

Apple’s letter speaks the language of consumer welfare. It insists that “Apple offers consumers the choice of a platform protected from malicious and dangerous code. The bills eliminate that choice.” This point goes to the more traditional conception of consumer welfare in the antitrust context, i.e., the proliferation of choice available to consumers. But primarily, the argument Apple is making (however disingenuously) is that the bills “should be modified to strengthen–not weaken–consumer welfare, especially with regard to consumer protection in the areas of privacy and security.”

By focusing on “privacy and security” as a metric of consumer welfare in the antitrust context, Apple, legislators, and the general public are engaging in a conversation that ultimately expands the notion of consumer welfare beyond what would be borne out in a courtroom, constrained by entrenched antitrust precedent. In this way, the bills have already been productive. 

“Grounded”: Amazon’s Delayed Promise of Aerial Package Delivery

By: Justin Cooper

In late 2013, Amazon CEO Jeff Bezos made a surprise announcement on a segment of 60 Minutes: Amazon was developing small aerial drones capable of delivering packages directly to customers’ doorsteps. He stated that the drones would be used to make speedy thirty-minute deliveries from Amazon fulfillment centers, would have a range of over ten miles, and could carry packages weighing up to five pounds. At that time, he also claimed that the widespread use of drones was at least four to five years away. Nine years later, however, “Amazon Prime Air” is still grounded, largely because Amazon’s rollout of delivery drones has faced multiple technical challenges that continue to push back the program’s launch. Although the clearance of FAA regulatory hurdles briefly kindled hope that the program was back on track in 2020, concerns about the privacy and safety of Amazon Prime Air, coupled with the possibility of state and municipal challenges to the program’s rollout, could keep Amazon’s delivery drones grounded well into the future.

During the first few years after Bezos’ announcement, research and development of Amazon Prime Air services seemed to be moving at a steady pace. However, in 2015 the program hit its first snag when the Federal Aviation Administration (FAA), which establishes airworthiness criteria to ensure the safe operation of aircraft in accordance with 49 U.S.C. 44701(a) and 44704, published its widely anticipated rules governing “Unmanned Aerial Systems.” Notably, the FAA refused to green-light the use of drones for commercial delivery. Amazon responded with a letter to the FAA “threatening to test the drones abroad if the FAA continued to refuse to let it test the machines outdoors in the United States.” The FAA consequently granted Amazon the ability to conduct limited domestic testing, requiring that drone test flights take place under 400 feet and remain in sight of the pilot and observer. Meanwhile, Amazon continued the development of its drones in the United Kingdom, celebrating its first successful commercial delivery in 2016. Amazon Prime Air’s United Kingdom operation seemed to be advancing even more quickly when “UK regulators…fast-tracked approvals for drone testing.” This fast-tracking “made the country an ideal testbed for drone flights and paved the way for Amazon to gain regulatory approval elsewhere.” Behind the scenes, however, the program was dealing with major problems, including staff layoffs and redundancies as well as reports of mismanagement, including reports of employee drunkenness while on the job.

During all of this, and back in the United States, Amazon Prime Air was making slow progress. In 2019, Amazon petitioned the FAA to allow it to begin wide-scale testing of its drones, and a year later the company announced it had received approval from the FAA to begin testing commercial deliveries. Despite this victory, however, Amazon Prime Air has continued to face significant issues that cast doubt on the program’s safety, and an investigative report conducted by Bloomberg News has recently revealed multiple Amazon drone crashes, as well as accounts of a management culture more focused on speed than safety.

This focus on speed likely stems from the fact that Amazon has fallen behind its rivals in the drone delivery space. In August 2021, Alphabet Inc.’s program, Wing, announced that it had successfully made its hundred thousandth delivery in Australia. Wing’s drone deliveries are also automated, “but monitored by pilots who function more as air traffic controllers.” A notable difference from Amazon’s drones is that Wing packages “are dropped in front of homes using a winch,” while Amazon’s drones land to deliver their packages. In addition to Wing, UPS has successfully tested the use of delivery drones in innovative ways. For example, UPS has tested launching drones from its delivery trucks, which allows a delivery driver to cover large rural areas in a much more efficient manner.

Aside from the technical and production challenges that have slowed the rollout of Amazon Prime Air, Amazon will likely face continued challenges due to significant privacy concerns. According to CNBC, “detecting telephone wires, people, property and even small animals on the ground all require careful sensing and collision avoidance systems.” In addition to the multiple cameras needed to navigate these obstacles, Amazon “is investing heavily in artificial intelligence to help drones navigate safely to their destinations, and drop off packages safely.” The possibility of fleets of AI-automated drones equipped with precision cameras surveilling American cities, a scene seemingly pulled from a dystopian science fiction novel, could quickly become a concerning reality.

Beyond privacy concerns, Amazon Prime Air will likely have to contend with major safety concerns. Accidents caused by manned drones have already led to multiple legal disputes. For example, in 2017, “[t]he owner of an aerial photography business was sentenced to 30 days in jail and a $500 fine after a drone he was operating crashed into people during a 2015 parade and knocked one woman unconscious. Paul Skinner, 38, was found guilty of reckless endangerment by Judge Willie Gregory of the Seattle Municipal Court.” In the case of piloted drones, victims can bring a suit against the human operator; the widespread use of automated drones, in contrast, raises difficult questions about the increased risk of personal injury and how to apportion blame. Last month, questions about the safety of Amazon’s ground-based “autonomous personal delivery devices,” known as Amazon Scout, led the city of Kirkland, Washington to place a temporary moratorium on their continued use. As Amazon Prime Air moves toward wide-scale implementation, it could face similar slowdowns and pushback from various state and local governments.

Despite these setbacks, Amazon has not faltered in its commitment to implement Amazon Prime Air. The promise of faster, more efficient shipping will very likely continue to outweigh the challenges facing aerial delivery drones, as evidenced by Amazon’s continued commitment to launching its program and by Alphabet Inc.’s and UPS’s already operational delivery drone programs. However, the technical challenges and social concerns surrounding these programs will likely continue to delay their full-scale rollout in the near future, “grounding” Amazon Prime Air for at least a little bit longer.

Lawmakers Set Their Sights on Restricting Targeted Advertising

By: Laura Ames

Anyone who spends time online has encountered “surveillance advertising.” You enter something into your search engine, and immediately encounter ads for related products on other sites. Targeted advertising shows individual consumers certain ads based on inferences drawn from their interests, demographics, or other characteristics. This notion itself might not seem particularly harmful, but these data are accrued by tracking users’ activities online. Ad tech companies identify the internet-connected devices that consumers use to search, make purchases, use social media, watch videos, and otherwise interact with the digital world. Such companies then compile these data into user profiles, match the profiles with ads, and then place the ads where consumers will view them. In addition to basic privacy concerns, the Consumer Federation of America (CFA) points to the potential for companies to hide personalized pricing from consumers or to promote unhealthy products and perpetuate fraud. Perhaps the largest concern is that the large stores of personal data these companies maintain put consumers at risk of privacy invasion, identity theft, and malicious tracking.

In response to these concerns, and amid a growing consensus that surveillance advertising threatens individual users as well as society at large, Democratic lawmakers unveiled the Banning Surveillance Advertising Act (BSSA) to restrict the practice. The move prompted opponents to argue that the BSSA is overly broad and will harm users, small businesses, and large tech companies alike.

What Does the BSSA Do? 

The BSSA is sponsored by Senator Cory Booker and Representatives Jan Schakowsky and Anna Eshoo. The bill generally bars digital advertisers from targeting their ads to users, and it specifically prohibits targeting based on protected information like race, gender, or religion, or on personal data purchased from data brokers. According to Senator Booker, surveillance advertising is “a predatory and invasive practice,” and the resulting hoarding of data not only “abuses privacy, but also drives the spread of misinformation, domestic extremism, racial division, and violence.”

The BSSA is broad, but it does provide several exceptions. Notably, it allows location-based targeting and contextual advertising, which occurs when companies match ads to the content of a particular site. The bill delegates enforcement power to the FTC and state attorneys general. It also allows private citizens to bring civil actions against companies that violate the ban, with monetary penalties of up to $1,000 for negligent violations and up to $5,000 for “reckless, knowing, willful, or intentional” violations. The BSSA has support from many public organizations and a number of professors and academics. Among the tech companies supporting the BSSA is the privacy-focused search engine DuckDuckGo. Its CEO, Gabriel Weinberg, opined that targeted ads are “dangerous to society” and pointed to DuckDuckGo as evidence that “you can run a successful and profitable ad-based business without building profiles on people.”

The BSSA as Part of a Larger Legislative Agenda 

The BSSA is just one bill among a number of pieces of legislation aiming to restrict the power of large tech companies. Lawmakers have grown increasingly focused on bills regulating social media companies since Facebook whistleblower Frances Haugen testified before Congress in 2021. These bills target a wide variety of topics including antitrust, privacy, child protection, misinformation, and cryptocurrency regulation. Most of these bills appear to be long shots, however: although the Biden administration supports tech industry reform, many other issues rank higher on its agenda. Despite this hurdle, lawmakers are currently making a concerted push with these tech bills because the legislature’s attention will soon turn to the 2022 midterms. Additionally, Democrats, who more broadly support tech regulation, worry they could lose control of Congress. Senator Amy Klobuchar argued that once fall comes, “it will be very difficult to get things done because everything is about the election.”

Tech and Marketing Companies Push Back

In general, tech companies tend to argue that targeted advertising benefits consumers and businesses alike. First, companies argue that this method allows users to see ads that are directly relevant to their needs or interests. Experts rebut this argument by pointing out that, in order to provide these relevant ads, tech companies must collect and store a great deal of user data, which puts that data at risk of exposure to third parties. Companies also argue that this legislation would drastically change their business models. Marketing and global media platform The Drum predicted that the BSSA “could have a massive impact on the ad industry as well as harm small businesses.” The Interactive Advertising Bureau (IAB), which includes over 700 brands, agencies, media firms, and tech companies, issued a statement strongly condemning the BSSA. IAB CEO David Cohen argued that the BSSA would “effectively eliminate internet advertising… jeopardizing an estimated 17 million jobs primarily at small- and medium-sized businesses.” The IAB and others argue that targeted advertising is a cost-effective way to precisely advertise to particular users. However, the CFA points to evidence that contextual advertising, which is allowed under the BSSA, is more cost-effective for advertisers and provides greater revenue for publishers.

Likelihood of the BSSA’s Success

In the past several years, there has been growing bipartisan support for bills addressing the increasing power of tech companies. This support would seem to suggest that these pieces of tech legislation have a better chance of advancing than other, more controversial legislation. However, even with this broader support, dozens of bills addressing tech industry power have failed recently, leaving America behind a number of other countries in this area. One of the major problems impeding bipartisan progress is that while both parties tend to agree that Congress needs to address the tremendous power that tech companies have, they do not align on the methods the government should use to address the problem. For example, Democrats have called for measures that would compel companies to remove misinformation and other harmful content, while Republicans are largely concerned with laws barring companies from censoring or removing content. According to Rebecca Allensworth, a professor at Vanderbilt Law School, the larger issue is that ultimately, “regulation is regulation, so you will have a hard time bringing a lot of Republicans on board for a bill viewed as a heavy-handed aggressive takedown of Big Tech.” Given Congress’s recent track record on moving major pieces of legislation, and powerful opposition from the ad tech industry, the BSSA might be abandoned along with other recent technology legislation.

Your Employer Can Monitor You While You Work From Home—Should They?

By: Joshua Waugh

Since “pandemic life” began, as many as 40% of American workers have worked from home. If you’ve been lucky enough to trade the crowded bus or the gridlocked highway for the shorter bedroom-to-laptop commute, chances are you’ve wondered just how closely your employer is watching you. The truth is that telework, for all its benefits, also has a major downside: near limitless opportunity for high-tech surveillance. And while it is clear that employers have the legal capability and the technology to monitor their employees, it’s less clear that employee surveillance is actually a good idea at all.

Can my employer really monitor me?

It is no secret that American privacy and technology laws are often lacking. At the federal level, the primary law dealing with electronic privacy is the Electronic Communications Privacy Act (ECPA), which was passed in 1986. The law is so old that Title I of the Act only contemplates a third party’s “interception” of a message sent by “wire, oral, or electronic communication”; Title I doesn’t address the possibility of accessing stored communications, such as email, after transmission.

Furthermore, Title I of the ECPA has been interpreted to include a carveout specifically allowing employers to monitor employees as long as the employer can show a legitimate business purpose. The ECPA also permits employers to electronically surveil employees with their consent, which, given often imbalanced employee-employer power dynamics, is not great for the ordinary employee.

Title II of the ECPA, or the Stored Communications Act (SCA), provides more protection to employees, though the law is still just as dated as Title I. Under the SCA it is fairly well established that your employer can’t log in to your personal email without your permission. So rest assured, your employer cannot see the thousands of unread advertising emails in your inbox unless you give them access.

All of that said, there is not much legislation on electronic privacy at the federal level. That may seem surprising considering we’ve seen privacy controversy after privacy controversy from practically every big tech company in recent years, but electronic privacy regulation seems to be generally left to the states. The end result is that only Californians (and to a lesser extent Coloradans and Virginians) enjoy broad statutory protections against electronic employer surveillance. In most other states, as long as you are using an employer’s device or network, your employer may surveil you as much as it likes. And surveillance software is readily available, including keyloggers that record every keystroke you make, activity monitors, and even software that records every website or app you access on the device. In fact, if your workplace is using the Microsoft Office 365 Suite, your employer is already able to monitor and analyze your work activity.

Where do we go from here?

If you’re concerned about your general lack of privacy rights living in America, you are not alone. Researchers have published studies showing that extensive employer surveillance can breed distrust among employees and can significantly hinder worker productivity and other positive performance outcomes. The feelings of distrust are even stronger when employees discover that they were being surveilled without their knowledge.

Despite evidence suggesting employee surveillance may have negative effects, surveys show that 62% of executives planned to use monitoring software in 2019, and that number has almost certainly grown during the pandemic work-from-home era. Meanwhile, we’re also in the midst of a radical transformation in the labor force—the U.S. Bureau of Labor Statistics reported that 2.9% of the entire U.S. workforce, 4.3 million people, quit their jobs in August 2021. By all appearances, the Great Resignation is accelerating: 4.4 million workers went on to quit during September 2021, topping August’s record numbers. At a time when people are rethinking their relationship with work, struggling with burnout, and dealing with burdensome household issues such as child- and elder-care, employers should spend less time secretly surveilling their employees and instead put effort into employee engagement. Rather than engaging in paranoid surveillance, companies should engage with their workers by providing flexibility and building trust. Employee engagement is more likely to boost productivity than surveillance and, more importantly in today’s climate, has been shown to increase employee retention. Ultimately, under current U.S. law, your employer can surveil you to its heart’s content in most states—but you can also resign if you feel your privacy rights have not been respected. As more and more in the labor force decide to do so, we’ll just have to wait and see how legislators respond.

The FTC Takes on Health and Fitness Apps’ Rampant Privacy Problems

By: Laura Ames

More and more Americans are turning to mobile health and fitness applications, but many worry about the lack of regulations that would ensure that developers of these products keep user information secure and private. The Federal Trade Commission (“FTC”) recently addressed this concern with a policy statement (“Statement”) that includes app developers among the entities that must follow certain notification procedures after security breaches. However, many question the Statement’s practical effects and whether the FTC had the authority to issue it.

Health App Trends

Mobile health and fitness apps have gained popularity in recent years, and the COVID-19 pandemic only accelerated this growth. In fact, the United States led the world in health and fitness app downloads as of October 2020 with 238,330,727 downloads that year alone. Even with this increased usage, a recent poll showed that over 60% of U.S. adults felt at least somewhat concerned regarding the privacy of their health information on mobile apps. These worries appear to be well-founded. Flo Health Inc., the developer of a menstrual cycle and fertility-tracking app, currently faces a consolidated class action alleging the company disclosed users’ health information to third parties without users’ knowledge. This is not an isolated concern. A recent study of over 20,000 health and fitness apps found that a third of these apps could collect user email addresses and more than a third transmitted user data to third parties such as advertisers.

The Original Health Breach Notification Rule

Congress enacted the Health Information Technology for Economic and Clinical Health (“HITECH”) Act as an investment in American health care technology. Subtitle D of this Act delegated authority to the FTC to promulgate breach notification requirements for breaches of unsecured protected health information. In 2009, the FTC issued its Health Breach Notification Rule (“HBNR”) covering vendors of personal health records (“PHR”) and PHR-related entities who experienced a security breach. The HBNR requires these entities to notify affected individuals and the FTC. Crucially, the HITECH Act defines a PHR as an electronic record that can be drawn from multiple sources.

The FTC has never enforced the HBNR, but changes to the rule have been on the horizon for some time. In 2020, the FTC requested public comments on the HBNR as part of its rulemaking process, saying that the request was merely a periodic review of the rule. However, before that comment period ended, the Commission issued a policy statement that turned heads.

The FTC Makes a Bold Move

On September 15, the FTC issued the Statement, with two of the five Commissioners dissenting. The FTC’s stated goal was to clarify the HBNR and put entities on notice of their security breach obligations. The FTC explained that the HBNR is triggered when “vendors of personal health records that contain individually identifiable health information created or received by health care providers” experience a security breach. The first major revelation was that the FTC considers developers of health apps or connected devices to be health care providers because they provide health care services or supplies.

Additionally, the FTC stated that it interprets the rule as covering apps that are capable of drawing information from multiple sources, such as through a combination of consumer inputs and application programming interfaces (“APIs”). The Statement gave two examples of apps that are covered under this understanding. First, an app is covered if it collects information directly from users and has the capacity to draw information through an API that enables syncing with a user’s fitness tracker. Second, an app is covered if it draws information from multiple sources, even if the health information comes from only one source. For example, if a consumer uses a blood sugar monitoring app that draws health data only from that consumer’s inputs but also draws non-health data from the phone’s calendar, that app is covered by the HBNR.
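The FTC’s multi-source test lends itself to a simple decision-rule sketch. The snippet below is purely illustrative and is not drawn from the rule’s text; every name in it (the `App` class, `covered_by_hbnr`, and the source labels) is invented for the example, and it deliberately models only the multi-source element discussed above, not the rule’s other definitions.

```python
from dataclasses import dataclass, field

@dataclass
class App:
    """Toy model of a health app for illustrating the FTC's reading of the HBNR."""
    holds_identifiable_health_info: bool
    # e.g. {"user_input", "fitness_tracker_api", "phone_calendar"}
    data_sources: set = field(default_factory=set)

def covered_by_hbnr(app: App) -> bool:
    """Under the Statement's interpretation, an app that holds identifiable
    health information is covered if it can draw data from more than one
    source -- even if only one of those sources supplies the health data."""
    return app.holds_identifiable_health_info and len(app.data_sources) > 1

# The Statement's second example: a blood sugar app whose health data comes
# only from user input but which also reads non-health data from the calendar.
blood_sugar_app = App(True, {"user_input", "phone_calendar"})
print(covered_by_hbnr(blood_sugar_app))  # True under this interpretation

# A strictly single-source app (user input only) would fall outside this reading.
diary_app = App(True, {"user_input"})
print(covered_by_hbnr(diary_app))  # False
```

The second case is exactly where the dissents focus: under the FTC’s earlier business guidance, an app into which consumers “simply input their own” data was treated as outside the rule, while the Statement’s reading pulls it in as soon as any second data source exists.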

The FTC also reminded entities that a breach is not limited to cybersecurity intrusions but includes any unauthorized access to information. Under this interpretation, companies that share information without a user’s authorization would also be subject to the Rule. Although the FTC had not previously enforced the Rule, the Statement signaled the FTC’s willingness to do so, noting that businesses could face civil penalties of $43,792 per violation per day.

Obviously, these clarifications could subject many app developers and other companies to the FTC’s rule. However, in the eyes of some, including the two dissenting Commissioners, the Statement is not a mere clarification but a fundamental policy change, one that could not only sow confusion but also exceed the FTC’s statutory authority and circumvent its rulemaking process.

Critiques and Larger Questions

Some legal experts argue that the Statement represents an expansion of the HBNR that could lead to further confusion for app companies and others. The two dissenting FTC Commissioners go further in their dissenting statements.

Commissioner Christine S. Wilson argued that the Statement both short-circuits the FTC’s rulemaking process and improperly expands the agency’s statutory authority by redefining statutory terms without legislative approval. Commissioner Noah Joshua Phillips agreed that the Statement’s first problem is its issuance in the middle of a request for public comment. Wilson pointed out that the FTC’s own business guidance on the HBNR directly contradicts the Statement by saying that “if consumers can simply input their own” health data on a business’ site, for example, a weekly weight input, then the business is not covered by the rule. Wilson also expressed concern that this interpretation of “health care provider” is a potentially slippery slope. For instance, does Amazon qualify as a health care provider given that users can purchase Band-Aids and other medical supplies through its phone app?

In the coming months, we might see the FTC forcing app developers to notify customers of data disclosures, but the debate around this statement also reveals larger questions concerning health care at the moment. Fundamental questions that once might have seemed easy to answer, such as who qualifies as a health care provider, are growing murkier. In the wake of COVID-19’s effects on telehealth and health technology in general, it seems unlikely that health care will phase out of this continued intermingling with technology. If that is the case, then legislation and regulations surrounding health care will continue to have to scramble to catch up with this rapid technological evolution.