All Bark, No Bite: Washington’s 2021 Facial Recognition Regulation Lacks Enforcement Mechanism

By: Alex Coplan

Taking effect on July 1, 2021, Washington’s new facial recognition (FR) law will regulate state and local government use of FR technologies. But will the new law be effective enough to protect your identity? The law serves as a middle ground between privacy advocates and government officials who favor using FR programs. However, due to vague wording and a lack of oversight, the bill may not produce its intended results.

On its face, SB 6280 provides proper safeguards to prevent government use and abuse of FR technology. The law includes significant provisions requiring accountability and limitations on use, and signals the Washington legislature’s belief that FR use creates serious policy issues.

First, SB 6280 will apply to all state and local government agencies. This means that agencies operating in Washington must comply with the new law and be subject to its oversight. Some exceptions, however, include the Department of Licensing (DOL) and the Transportation Security Administration (TSA). While these agencies are not subject to SB 6280, they are required to disclose the use of FR technology if located in Washington.

Second, the law requires agencies to provide a notice of intent and produce an accountability report. In other words, government agencies must file notice to obtain and implement FR services. If approved, those agencies must provide accountability reports every two years. These reports disclose the capabilities of the FR system, the data types the system collects, the training procedures and security for protecting data, and how the system can benefit the community.

Third, and perhaps most importantly, SB 6280 requires “meaningful human review” when FR programs create “legal effects concerning individuals or similarly significant effects concerning individuals.” Meaningful human review requires review or oversight by one or more individuals who are trained in accordance with the act. Training includes coverage of the capabilities and limitations of the FR program, and how to interpret the program’s output. The bill considers “legal” or “similarly significant effects” to be decisions that result in the provision or denial of criminal justice, financial services, housing, education, employment opportunities, and other basic civil rights. Accordingly, if an individual faces a significant outcome following government use of FR, they have the right to meaningful human review.

Fourth, agencies must obtain a warrant in order to use FR programs during real-time surveillance. This means government actors may not run FR programs on a live video feed without court approval. Specifically, SB 6280 prohibits the use of FR programs with law enforcement body-worn cameras, meaning police may not use FR programs in the field without judicial authorization.

Companies like Microsoft—which create FR technology—favored and lobbied for the passage of SB 6280. Why, in a bill intended to limit FR use, would Microsoft approve of this legislation? Arguably, SB 6280 may be all bark and no bite. While this law sounds effective, it lacks enforcement procedures.

For example, the accountability reports may not provide much accountability. At this point, the reports are not required to be approved by any regulatory or legislative body. As a result, there is no enforcement mechanism behind this provision of the bill. If an agency chooses not to follow the procedure, who will stop them?

Further, the “meaningful human review” provision lacks a substantial definition. The subsection defining this phrase is a single sentence, which fails to provide any direction for decision making. Moreover, the provision does not require review by any third party, allowing agencies to review any potential misconduct themselves, with their own employees.

Additionally, SB 6280’s warrant requirement only covers real-time identification, so agencies may freely use FR technology on previously recorded footage without first obtaining a warrant. By allowing agencies to engage in this practice without a warrant, SB 6280 subjects the public to unchecked surveillance by law enforcement.

Washington enacted SB 6280 to provide compromise and regulation to an emerging field. However, the law lacks sufficient procedures for enforcing violations. Without those procedures, or a moratorium on FR use, government agencies can abuse these technologies to the detriment of every Washingtonian.

Is Discrimination Fair? The FTC’s Failure to Regulate Big Tech

By: Gabrielle Ayala-Montgomery

In the age of technological innovation, minorities fight to end discrimination both offline and online.  Biased outcomes produced by technology have material consequences for minorities as advancements in technologies replicate social inequalities and discrimination. Vulnerable populations are denied fair housing, education, loans, and employment because of biased data; however, the commercial use of biased data raises concerns for all consumers.

The FTC should consider using its authority to deter the economic harms that arise when companies unfairly and disproportionately impact minorities. First, the FTC must bring enforcement actions against companies whose “unfair practices” disproportionately affect minority consumers. Second, the FTC should conduct rulemaking to expand its definition of “unfair practices” to encompass business practices that unfairly affect minorities.

The FTC Breakdown

The FTC protects consumers from deceptive or unfair business practices and investigates alleged violations of federal laws or FTC regulations. Section 5 of the FTC Act prohibits “unfair or deceptive business practices in or affecting commerce.” The Act broadly vests the FTC with authority to bring enforcement actions against businesses to protect consumers from unfair or deceptive practices. Section 18 enables the FTC to promulgate regulations to prevent unfair or deceptive practices. After the FTC issues a rule, it may seek penalties for unfair practices that violate it.

The FTC has explicitly defined its standard for determining whether a practice is deceptive but left the question of whether a practice is unfair to be interpreted by courts. In response, courts have developed a three-factor test to identify an unfair practice. The test asks: (1) whether the practice injures consumers; (2) whether it violates established public policy; and (3) whether it is unethical or unscrupulous.

Biased Practices in High Tech

An algorithm is a process or set of rules involving mathematical calculations performed on data to help humans make decisions. Algorithmic bias, machine learning bias, or AI bias is a systematic error in the coding, collection, or selection of data that produces unintended or unanticipated discriminatory results. Algorithmic bias is perpetuated by programmers who train algorithms on patterns found in historical data. Humans then use these biased results to make decisions whose implications are systematically prejudiced against minorities.

Data and surveillance are now big businesses. However, some companies have failed to ensure AI products are fair to consumers and free from impermissible bias. For example, Amazon had to scrap a recruiting tool because the system rated job candidates in a gender-biased manner. The AI models trained themselves on résumé data compiled over the previous ten years, submitted primarily by white men. Thus, Amazon’s recruiting tool taught itself that male candidates were preferable.
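To make the mechanism concrete, the following is a minimal, hypothetical sketch of how a model trained on biased historical hiring records can reproduce that bias. The data, feature names, and numbers here are invented for illustration only; they are not drawn from Amazon’s system or any real dataset.

```python
# Hypothetical illustration: a naive "model" fit to biased historical hiring
# data learns to score identical candidates differently by gender.
import random

random.seed(0)

# Synthetic "historical" records: (years_experience, is_male, was_hired).
# Past decisions favored men even when qualifications were equal.
history = []
for _ in range(5000):
    years = random.randint(0, 10)
    is_male = random.random() < 0.7          # invented: applicant pool skews male
    qualified = years >= 5
    # Invented bias: qualified men hired 90% of the time, qualified women 50%.
    hired = qualified and (random.random() < (0.9 if is_male else 0.5))
    history.append((years, is_male, hired))

# "Train" by estimating hire rates for each (qualified, gender) group.
def hire_rate(records, qualified, is_male):
    group = [h for y, m, h in records if (y >= 5) == qualified and m == is_male]
    return sum(group) / len(group) if group else 0.0

model = {(q, m): hire_rate(history, q, m)
         for q in (True, False) for m in (True, False)}

# Score two equally qualified candidates who differ only in gender.
print("score for a qualified man:  ", round(model[(True, True)], 2))
print("score for a qualified woman:", round(model[(True, False)], 2))
# The scores diverge even though the qualifications are identical: the model
# has absorbed the historical bias, not any real difference in merit.
```

The point of the sketch is that no one has to intend discrimination: the disparity is carried entirely by the historical outcomes the model learns from.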

The National Institute of Standards and Technology (NIST) examined the tech industry’s leading facial recognition technology (FRT) algorithms, including 189 algorithms voluntarily submitted by 99 companies, academic institutions, and other developers. The algorithms came from tech companies and surveillance contractors, including Idemia, Intel, Microsoft, Panasonic, SenseTime, and Vigilant Solutions. NIST found “empirical evidence” that many FRT algorithms exhibited “demographic differentials” that can worsen their accuracy based on a person’s age, gender, or race. Some algorithms produced no errors, while other software was up to 100 times more likely to return an error for a person of color than for a white individual. Overall, middle-aged white men generally benefited from the highest FRT accuracy rates, or the fewest errors. Such bias in algorithms can emanate from unrepresentative or incomplete training data or from reliance on flawed information that reflects historical inequalities. If left unchecked, biased algorithms can lead to decisions that have a collective, disparate impact on specific groups of people even without the programmer’s intention to discriminate.

Companies’ targeted advertisement systems have been used to exclude people of color from seeing ads for homes based on their “ethnic affinity.” For example, Facebook and other tech companies settled with civil rights groups after participating in illegal discrimination in their advertising of housing, employment, and loans. Targeted marketing policies and practices by tech companies permitted users to exclude marginalized groups from seeing specific ads. As a condition of the settlement, Facebook agreed to establish a separate advertising portal for creating housing, employment, and credit (“HEC”) ads on Facebook, Instagram, and Messenger that will not allow advertisers to block consumers based on gender, age, or “multicultural affinity.” However, research demonstrates Facebook’s ad system can still unintentionally skew ad delivery based on demographics. These advertising practices contribute to the systemic inequality minorities face in income, housing, and wealth.

Expanding “Unfair Practices”

What if Facebook had not reexamined its practices, or Amazon had kept its biased recruiting tech? No single piece of data protection legislation exists in the United States to prevent biased data and surveillance technology. Instead, the country has a patchwork of laws at the federal, state, and municipal levels. Sen. Ron Wyden, D-Ore., plans to update and reintroduce his Algorithmic Accountability Act of 2019, a bill designed to fight AI bias and require tech companies to audit their AI systems for discrimination. The 2019 bill would have directed the FTC to require companies to study and fix flawed algorithms that result in inaccurate, unfair, biased, or discriminatory decisions. Passage of an Algorithmic Accountability Act would reduce decisions based on biased algorithms. However, instead of waiting on Congress, the FTC may use existing laws and policy options addressing such violations and apply them to unfair and discriminatory practices in the digital world.

In the absence of federal legislation, the FTC Act could be used to protect consumers against unfair practices that are biased against minorities. For the purposes of Section 5 of the FTC Act, purposeful or negligent practices that disproportionately impact minorities should be treated as (1) unethical or unscrupulous, (2) contrary to established public policy, and (3) causing actual harm to consumers. The FTC should prioritize enforcement of valid claims of unfair commercial practices that disproportionately impact minorities.

New Rulemaking Group May Provide Hope for Prevention

Following criticism that the FTC had failed to use its authorities to adequately address consumer protection harms, the Commission announced a new rulemaking group. This rulemaking group will allow the FTC to take a strategic and harmonized approach to rulemaking across its different authorities and mission areas. With this new group in place, the FTC is poised to strengthen existing rules by undertaking new rulemakings that further interpret unfair practices. Chairwoman Rebecca Kelly Slaughter stated, “I believe that we can and must use our rulemaking authority to deliver effective deterrence for the novel harms of the digital economy.” Perhaps under this new rulemaking group, the FTC can meaningfully protect and educate consumers about the harms of commercial practices that unfairly impact minorities by expanding its interpretation of “unfair” to include commercial practices biased against minorities.

The FTC needs to protect consumers by protecting minorities. Current laws do not adequately address biased data or practices that produce discriminatory outcomes for minorities. Without legislation or federal administrative action, high-tech companies will continue developing systems, intentionally or unintentionally, biased against minorities such as people of color, women, immigrants, the incarcerated and formerly incarcerated, activists, and others. The FTC has the authority to effectively deter business practices that unfairly and disproportionately impact minorities by bringing enforcement actions against such practices. Further, the FTC should explicitly expand its interpretation of “unfair practices” to encompass practices disproportionately affecting minorities.

Right to Buy, But Not Repair?

By: Joanna Mrsich

In an increasingly advanced and technological world, gadgets continue to improve people’s lives and make what was once thought impossible, possible. However, as the technology behind these products becomes more advanced, so does the difficulty in fixing them when they malfunction. While companies such as Apple, Tesla, and others all provide arguably accessible product service and repairs, do consumers have the actual ability to fix or repair these products themselves? More importantly, should consumers have the ability, or right, to repair products they own?

What is the Right to Repair?

The “right to repair” movement is based on the belief that the technology in modern equipment has enabled manufacturers to reduce access to repair by claiming that repairs could violate their proprietary rights. Moreover, the movement argues that although people generally have the “right” to repair their own goods, the actual ability to do so can be influenced—or completely controlled—by a product’s original equipment manufacturer (“OEM”). OEMs are often the exclusive source of replacement parts and other physical and electronic tools that make repairs feasible. OEMs often bar consumers from using independent repair or service firms by claiming such acts risk voiding their products’ warranties. Manufacturers have often said that “strict repair guidelines protect trade secrets and ensure safety, security and reliability.”

Some people within the right to repair movement argue that, in addition to providing consumers with this necessary right, legislation supporting a right to repair would have a positive environmental impact because more consumers would likely fix, rather than dispose of, their own devices. A 2019 report by the Colorado Public Interest Research Group noted that by making repairs difficult, the consumer electronics industry is creating a culture of disposable electronics. According to a 2014 Consumer Reports article, consumers were advised not to spend more than fifty percent of the cost of a new product on repairing an existing one. The executive director of The Repair Association—which advocates for right-to-repair legislation—noted that repairs are “roughly 50% or more of the cost to replace the device,” and that the “holy grail of all of it is to send you to the showroom to buy another product.” Therefore, when OEMs act as gatekeepers for repairs, consumers are more likely to replace damaged products than repair them.

Opposition to the Right to Repair:

It is not uncommon for companies to oppose consumers’ right to repair their own products. An example can be found in Apple’s Repair Terms and Conditions. In §1.11.6—Disclosure of Unauthorized Modifications—the contract states: “During the service ordering process, you must notify Apple of any unauthorized modifications, or any repairs or replacements not performed by Apple or an Apple Authorized Service Provider (“AASP”).” The section goes on to imply that repairs done by non-AASP services will likely void the product’s warranty or result in additional costs for completing the service “even if the product is covered by warranty or an AppleCare service plan.”

Another pertinent example can be found in the video game industry. According to the Entertainment Software Association (“ESA”), video game consoles are “unique from other devices, appliances and consumer products” because they “rely upon a secure platform to protect users, the integrity of the gaming experience, and the intellectual property of game developers.” In fact, the ESA’s Right to Repair Policy explicitly bars unauthorized parties from bypassing a console’s specialized software that protects against security and piracy risks. Moreover, this policy explains that while repair shops likely do not use repairs for illegal purposes, the “publication of a console’s security roadmap would allow bad actors to use this knowledge to undermine the entire console ecosystem.” The policy acknowledges the harm a right to repair could have on the industry and notes that major video game console makers—such as Microsoft, Nintendo, and Sony—all provide “easy, reliable, and affordable repair services…to ensure that their consoles remain in good working order.” Lastly, the ESA ends its repair policy by arguing that the right to repair movement would not add environmental benefits because video game companies and retailers—like Microsoft, Nintendo, Sony, and GameStop—already have robust recycling programs for consumers looking to dispose of consoles.

Despite the existence of many more technology-based examples of industry opposition to consumers’ right to repair, the last example in this post involves the agricultural industry. The American Farm Bureau Federation, National Farmers Union, and the National Corn Growers all support the right to repair. Farmers believe it is only fair to require manufacturers to make equipment field serviceable. Equipment manufacturers, on the other hand, oppose the right to repair. For example, John Deere claims that it is dangerous to allow farmers to repair their own equipment.

Current Discourse:

Over the years, important discourse on the issue has taken place, but not enough has been done. In 2019, the Federal Trade Commission (“FTC”) hosted a discussion called Nixing the Fix: A Workshop on Repair Restrictions. The workshop addressed issues consumers and independent repair shops face when manufacturers restrict repairs or make it impossible to conduct them without voiding a product warranty. The FTC later made all empirical research collected in preparation for the workshop available for the public to view. Additionally, state legislatures and Attorneys General have launched legislative and litigation initiatives in an attempt to bar common restrictions on users’ ability to repair. In the farming context, former presidential candidates Elizabeth Warren and Bernie Sanders both promoted the right to repair.

However, some existing laws could be applied to the right to repair issue without amendment. The Magnuson-Moss Warranty Act—passed by Congress in 1975—requires businesses to provide consumers with detailed information about warranty coverage. This Act protects consumers from unfair or misleading disclaimers, including claims that warranties can be voided if a consumer removes a seal. Furthermore, as of February 2021, right to repair legislation has been introduced in 14 states.

What now?

Consumers should have the opportunity to repair the property they purchase. Under the status quo, it is increasingly difficult for people to buy repairable products. The European Commission may provide a good template for legislative changes the United States could implement: in Europe, right to repair rules are supposed to cover phones, tablets, and laptops by 2021. There has also been some progress during the COVID-19 pandemic, with Senate Democrats introducing a bill to block manufacturers’ limits on medical device repair. However, the problem is present in nearly every other industry as well, and comprehensive legislation is required. If technology truly is the future, repair rights need to be expanded to protect consumers.

Data is the New Oil: How China’s Global Biometric Database Throws the World Power Dynamic into Flux


By: Kelsey Cloud

Imagine receiving a phone call from a Chinese genome sequencing company telling you that you are on the verge of developing heart disease and recommending a cocktail of medications to alleviate your future symptoms. Would you accept the medication?

On the one hand, if a Chinese company can micro-target you based on something identified in your DNA, why wouldn’t you want to be proactive and begin to solve a problem before it even arises? On the other hand, do we as a nation want another country to systematically eliminate our health care services?

Over the last few decades, the Communist Party of China—the ruling political party of the People’s Republic of China (PRC)—has acquired the personally identifiable information (PII) of an estimated 80% of American adults. In February, the National Counterintelligence and Security Center (NCSC) warned that these efforts to obtain healthcare data from countries around the world posed “serious risks, not only to the privacy of Americans, but also to the economic and national security of the U.S.” The race to control the future of health care through the accumulation of biometric data is the modern space race; however, there is more at stake than national pride.

The Plan: Made in China 2025

PRC’s authoritarian government, led by Xi Jinping, has brazenly declared its aspirations to take the world stage as the dominant leader in this biological age. As the U.S. Chamber of Commerce highlighted in its summary of PRC’s published manifesto, Made in China 2025, PRC designates biotech as a “strategic emerging industry” and prioritizes collecting healthcare information both domestically and internationally. By investing $9 billion into collecting and sequencing genomic data, PRC’s communist regime strives to collect and analyze large genomic datasets in order to propel its precision medicine industry globally.

Rather than administering one-drug-suits-all treatments, precision medicine aims to provide customized treatment for  individual patients based on their genetic makeup, lifestyle, and environment. By analyzing how a patient’s genes interact with their environment, precision medicine allows doctors to predict risk of disease and reactions to various medicines.

In the wake of the COVID-19 pandemic, PRC has invested billions of dollars into distributing COVID-19 tests around the world to accumulate genomic data from the global population for precision medicine advances. Propelled by China’s largest genomics company, the Beijing Genomics Institute (BGI), PRC has sold test kits to over 180 countries—establishing its own laboratories in 18—since August 2020.

COVID-19 Laboratories or Modern Day Trojan Horses?

When COVID-19 infections skyrocketed globally, BGI sent a letter to Washington State Governor Jay Inslee, proposing to construct and manage COVID testing laboratories. Promising to provide technical expertise and new equipment, BGI attempted to take advantage of the worldwide crisis with the ulterior motive of using testing to expand its collection of biometric information. Bill Evanina, former NCSC Director and veteran of the FBI and CIA, made a public statement in response to BGI’s letter, warning that “[f]oreign powers can collect, store and exploit biometric information from COVID tests.” Although it remains unclear whether BGI could receive DNA from nasal swabs, BGI has certainly found a way to establish a foothold in countries to start mining data.

The BGI headquarters houses and operates the government-funded China National GeneBank, enabling PRC to map the human genome through a biorepository of 20 million genetic samples taken from humans, animals, and plants. Although BGI frames the GeneBank as a way to foster new medical discoveries and cures that will “advance its Artificial Intelligence and precision medicine industries,” it acts as a modern-day Trojan horse. By obtaining vast troves of foreign countries’ health data, PRC ultimately endeavors to weaponize that data and systematically eliminate foreign health care services, displacing America as a global biotech leader.

The Chase to Control Biodata: A Modern Space Race

In the 21st century, as global superpowers recognize that their future success hinges on acquiring robust amounts of human biometric data, the race to accumulate the largest, most diverse dataset has become the modern space race. Ultimately, the biggest dataset wins—hence PRC’s aggressive efforts to accumulate data from every country in the world.

As PRC rapidly stockpiles U.S. data to support these economic initiatives, it has simultaneously shut the door on access to its own data, creating a one-way street that prevents the U.S. from similarly benefiting from Chinese healthcare data. This inequitable relationship could allow PRC to displace U.S. biotech companies as global biotech leaders. Even though new healthcare treatments produced in PRC could benefit American patients, America would ultimately become increasingly dependent on PRC’s drug industry. America’s dependence on China for personal protective equipment during the COVID pandemic would seem trivial compared to that kind of future dependence. NCSC warns that such a strong reliance on Chinese medicine would likely lead to a transfer of wealth, with the U.S. job market weakening as China’s strengthens.

Data as a Weapon

PRC’s vast accumulation of DNA, PII, and personal health information could allow it to target specific individuals, including Americans, through extortion and manipulation, such as leveraging someone’s mental illness or addiction for blackmail. Knowledge of top national decision-makers’ DNA could additionally be exploited to bolster PRC’s national defense strategies. By targeting genetic weaknesses, PRC could use genetic information to enhance its own soldiers’ strengths and engineer pathogens to exploit American soldiers’ weaknesses. For instance, BGI’s latest research centered on how medicine could interact with genetic makeup to protect a soldier from brain injury or stop altitude sickness from preventing a soldier from performing at maximum strength during wartime.

While BGI claims its collaboration with military researchers was for solely academic purposes, Human Rights Watch says otherwise: more than a million Uyghurs (Chinese citizens who belong to a Muslim minority) have been jailed in camps, in part due to two subsidiaries of BGI that allegedly conducted genetic analyses used to facilitate Muslim repression. Deeming these camps a crime against humanity, the U.S. Department of Commerce placed trade sanctions on BGI. The company responded by stating it was not involved in human rights abuses, attempting to persuade the public that the camps are educational and vocational institutions.

The U.S. Must Protect its Citizens’ Data

While no one chastises a country that conducts medical research to improve treatments and find new cures, PRC’s accumulation of biometric data through BGI poses a substantial risk to the health and safety not just of its own citizens, but of citizens of every country worldwide. With national security, healthcare systems, the economy, and individual privacy on the line, the U.S. must protect its citizens from misuse and theft of their biometric data. America’s $100 billion biotech industry stands to lose its innovative edge in the genomic field to Chinese companies, threatening a long-term cost to the U.S. economy. To remain a global superpower, the U.S. government must exercise an abundance of caution when collaborating with Chinese companies and ensure that its citizens’ data remains safeguarded from the grip of the PRC.

Extra! Extra! Pay Extra for News on Social Media!

By: Mason Hudon

The Australian Dilemma

In February of 2021, Facebook, Google, and the Australian government made headlines as they engaged in a fiery debate over proposed legislation that would force the media giants to pay for the ability to link to news articles published by Australian news companies on their digital platforms. An initial, more aggressive version of this law, entitled Australia’s News Media and Digital Platforms Mandatory Bargaining Code Act of 2021 (NMDPMBC), was met with such reproach that Facebook actually halted news service to its Australian users in mid-February of this year, going “dark” for three to four days in an unprecedented protest against the alleged “assault” on its lucrative business model. Following talks between Australian Treasurer Josh Frydenberg and Facebook Chief Executive Mark Zuckerberg, and in the wake of a wave of negative press associated with the move to cut services, Facebook begrudgingly lifted the restrictions near the end of February. A final law was subsequently passed by the Australian Parliament on February 25, with notable revisions borne of the contentious talks preceding its passage. Importantly, a proposed forced arbitration process, which would likely have strengthened the bargaining power of Australian media companies when legally enforced negotiations reached an impasse, was amended to include an opt-out provision “if [a media company] could convince the government that it had already ‘made a significant contribution to the sustainability of the Australian news industry through reaching commercial agreements with news media businesses.’”

More importantly, the NMDPMBC has globally stoked the fires of debate over the current state of news media, as well as the very nature of the internet. A world first in its scope and breadth, the NMDPMBC could potentially be “a move that[…] unleash[es] more global regulatory action to limit [media giants’] power.” Reactions to this possibility are mixed.

Some, like Timothy Berners-Lee, “the British computer scientist known as the inventor of the World Wide Web, say[…] Australian plans to make digital giants pay for journalism could set a precedent that renders the Internet as we know it unworkable.” Critics of the Australian law argue that by undermining the “free linking” system, whereby the internet is treated as an inherently free space for the sharing of information and links may be hosted by any and all sources free of charge, the legislation opens the floodgates to stifling the free exchange of information that has become a paramount aspect of the internet today. Furthermore, by requiring payment for hosting links to news websites or other sites, such regulations may narrow the field of companies that can fight their way into the market by raising already high startup costs.

In contrast, some see laws like the NMDPMBC as long-overdue arbiters of change that will return power to news media companies in the face of Facebook and Google’s recent, relative hegemony over the free press. Eli Sanders, opining in a University of Washington School of Law election law class on the effects of the internet on traditional news media practices, said “[l]ocal newspapers have seen over 70% of their ad revenue disappear as Facebook and Google have gobbled up the profits. 60% of newsroom employees have been shed by companies [and] the [media] platforms have decimated the underpinnings of journalism.” These numbers are backed up by in-depth data from the Pew Research Center, and it is widely reported that currently, “the Duopoly [Facebook and Google] captures 90 percent of all digital ad revenue growth and approximately 60 percent of total U.S. digital advertising revenue.”

This situation, it is argued, has caused the traditional process to disintegrate: falling ad revenues undermine thorough journalism and make it difficult for news media organizations to facilitate the “special push and pull between editors and writers” that has, for years, produced unbiased, fact-checked, and well-sourced journalism à la Walter Cronkite. The inability of news publishers to profit effectively from their work is understood to contribute not only to job loss and lower-quality journalism, but also to the prevalence of fake news and misinformation due to the erosion of traditional vetting processes. For years, lobbying groups within the United States, like the News Media Alliance, have argued these points to little avail.

With these facts in mind, the Australian NMDPMBC should serve multiple broad goals by giving Australian news media companies the ability to manage their resources more effectively and maintain larger staffs thanks to increased revenue from their reporting. It might also lessen the effects of disinformation and misinformation in the nation by providing a stronger basis for ensuring that well-researched news is promulgated into the marketplace of ideas, and might in the long run lead to more widespread quality journalism if companies are able to rebound from the deleterious financial effects that media giants have had on their bottom lines.

Is America Next?

Currently there is legislation in the works signaling that the United States might be poised to follow in Australia’s footsteps.

On March 10, 2021, House Antitrust Chairman David Cicilline (D-RI), ranking member Ken Buck (R-CO), Senate Antitrust Chairwoman Amy Klobuchar (D-MN), and Senator John Kennedy (R-LA) introduced two identical versions of what is being called the Journalism Competition and Preservation Act (JCPA). “The bill would provide a limited antitrust safe harbor for news publishers to collectively negotiate with Facebook and Google for fair compensation for the use of their content,” provisions that appear to closely mirror those found in Australia’s NMDPMBC. The bill also appears to have clear targets: “[t]he draft defines online content distributors as companies with at least 1 billion monthly active users that display, distribute or direct users to news articles.” This provision strongly suggests that the bill is meant to counter big tech’s power in the marketplace for news, rather than act as a broad blanket measure that seeks to leave news providers across the board flush with cash.

Given the composition of the legislative branch and bipartisan criticism of the power wielded by media giants, paired with their inability and unwillingness to better regulate their platforms, the JCPA may find significant traction with lawmakers moving forward. Even Senate Minority Leader Mitch McConnell (R-KY) has expressed support for the bill.

Perhaps just as telling, two days after the introduction of the JCPA, “[o]n Friday, March 12, the News Media Alliance President & CEO, David Chavern, testified at a House Antitrust, Commercial, and Administrative Law Subcommittee hearing, ‘Reviving Competition, Part 2: Saving the Free and Diverse Press,’ on the need for the dominant tech platforms, such as Facebook and Google, to compensate news publishers fairly for use of their content” in the United States.

In the end, Facebook and Google are entering an era of increased opposition to their dominion over the world wide web. Potential Section 230 reform, widespread pushback against the rollout of digital currencies, privacy concerns, and now the rising influence of the JCPA all look to rein in the unabashed money-making machinations of big tech in the United States. While such criticisms are well-founded, questions remain as to what will become of the internet’s “free market” and its provision of services to the consumer. Who or what will ultimately prevail? Only time will tell.