Electric Soothsayers: The Ethics of Brain-Machine Interfaces

By: Mason Hudon

“Over one’s mind and over one’s body the individual is sovereign.” – John Stuart Mill

            In mid-April of this year, a company called Neuralink released a video of a male macaque monkey playing a version of the Atari classic game, “Pong”. At first glance, the video appears to be nothing more than a cute gimmick… that is, until the viewer realizes that the joystick the monkey is using isn’t even plugged in—the program is being controlled by the creature’s brain by way of a complex, proprietary microchip.

            Neuralink, the brainchild of billionaire tech tycoon Elon Musk (better known as the CEO of Tesla, Inc. and founder of SpaceX), develops “breakthrough technology for the brain” known as brain-machine (or brain-computer) interfaces (BCIs). Essentially, Neuralink and other companies like it are seeking to blur the line between human and machine, introducing computer hardware into human brains to do anything from making the world more accessible for disabled communities, to enhancing the video game experience, to “achiev[ing] a symbiosis with artificial intelligence,” as Mr. Musk puts it. According to Limor Shmerlin Magazanik, Director of the Israel Tech Policy Institute, “a BCI decodes direct brain signals—colloquially known as the firing of neurons—into commands a machine can understand. Using either an invasive method—a chip implanted directly in the brain—or non-invasive neuroimaging tools, letting the machine pull raw data from the brain and translate it to action in the outside world.” While the technological singularity (the merging of human and machine into an inseparable existence) may still be quite far off for the human race, the introduction of BCI technologies that implicate the human brain raises very serious legal and ethical concerns regarding personal autonomy, privacy, and the rights and identities of humans as we currently perceive them.

            It’s true, “[b]asic neurotechnologies have been around for a while—including technologies like cochlear implants and deep brain stimulation and more complicated brain-computer interfaces,” but the technologies that Neuralink and other companies involved in advanced BCI development are seeking to introduce are wholly unprecedented. In fact, Maja Larson, general counsel for the Seattle-based Allen Institute, has observed that this “commercialization” of formerly purely medical applications for BCIs has never been seen before and risks turning “benign research politicized”. When profit margins and the “bottom line” are introduced into an equation that previously sought to solve relatively narrow issues (typically divorced from the idea of revenue generation and solidly situated within the clinical environment), all bets might be off.

Legal regimes and regulations have not been crafted to deal with many of the dilemmas that these technologies will pose, for example: how college admissions should be handled for students who have brain implants that aid them in their schoolwork or allow them to access the internet, how brain data should be protected when a BCI is communicating over a public WiFi network, how advertising will be implicated if companies can detect your needs (like hunger), or, even more complexly, how the regime of intellectual property will be impacted as a whole. Additionally, Scientific American writes, “[o]ne tricky aspect is that most of the neurodata generated by the nervous systems is unconscious. It means it is very possible to unknowingly or unintentionally provide neurotech with information that one otherwise wouldn’t. So, in some applications of neurotech, the presumption of privacy within one’s own mind may simply no longer be a certainty.” The legal community will ultimately be tasked with addressing these deep concerns, and efforts should begin sooner rather than later to develop new laws that preemptively protect against abuses of this technology before it is too late.

            Robert Gomulkiewicz, Charles I. Stone Professor of Law at the University of Washington School of Law, discusses in his Legal Protections of Software class that intellectual property protections for software don’t always work well because lawmakers in the mid-20th century chose to conform existing IP regimes like copyright, patent, and trademark to novel technologies far different from the items and ideas that they protected in the years before the advent of the computer. Instead of creating sui generis laws that might account for all of the nuances and complexities of software, lawmakers opted for the “easy option” by retrofitting copyright, patent, and trademark laws to fit contemporary needs. Such an “easy option” may work adequately for protecting software when financial concerns are the only issues implicated, but when it comes to the human mind and the privacy of one’s own thoughts and emotions, a retrofitted system leaves much to be desired because the stakes are so high. Sui generis laws are thus both a legal and moral imperative for lawmakers seeking to tackle BCI technologies moving forward. While new statutory regimes can and should draw important aspects from intellectual property and existing privacy regimes into their language, it remains clear that crafting brand new policy cannot and should not be avoided.

            Given the complexity of the issues inherent in BCI technologies, it will be critical to involve stakeholders from different backgrounds and paradigms including lawyers, engineers, bioethicists, doctors, and perhaps even philosophers in coalescing competing ideologies of the role of BCIs into workable legal doctrine. Particular focus should be directed towards ensuring privacy, equity, autonomy, and safety for those wishing to partake in BCI technologies. Specifically, discussions should concern: (1) securing the protection of the fundamental autonomy of the human mind, (2) securing the protection of the fundamental autonomy of the human body, (3) allowing BCI users to control third-party access to their data, (4) ensuring accuracy in the interpretive methods used by software that attempts to translate the data from people’s brains, (5) ensuring disclosure of the use of performance-enhancing BCIs in academic and competitive settings, (6) mitigating the effects of hacking and malware on BCIs, (7) elucidating the role and risks of allowing artificial intelligence a role in BCIs as Elon Musk has discussed, and (8) ensuring that people remain psychologically sound after implantation. This list is not exhaustive, but it covers some of the central issues that will underpin the legal framework for BCIs in the future. As with many technical innovations, things move fast, and legal entities need to act now to protect the qualities of human existence that we currently hold dear.

All Bark, No Bite: Washington’s 2021 Facial Recognition Regulation Lacks Enforcement Mechanism

By: Alex Coplan

Taking effect on July 1, 2021, Washington’s new facial recognition (FR) law will regulate state and local government use of FR technologies. But will the new law be effective enough to protect your identity? The law serves as a middle ground between privacy advocates and government officials in favor of using FR programs. However, due to vague wording and a lack of oversight, the bill may not always produce its intended results.

On its face, SB 6280 provides proper safeguards to prevent government use and abuse of FR technology. The law includes significant provisions requiring accountability and limitations on use, and signals the Washington legislature’s belief that FR use creates serious policy issues.

First, SB 6280 will apply to all state and local government agencies. This means that agencies operating in Washington must comply with the new law and be subject to its oversight. There are exceptions, however, including the Department of Licensing (DOL) and the Transportation Security Administration (TSA). While these agencies are not subject to SB 6280, they are required to disclose the use of FR technology if located in Washington.

Second, the law requires agencies to provide a notice of intent and produce an accountability report. In other words, government agencies must file notice to obtain and implement FR services. If approved, those agencies must provide accountability reports every two years. These reports disclose the capabilities of the FR system, the data types the system collects, the training procedures and security for protecting data, and how the system can benefit the community.

Third, and perhaps most importantly, SB 6280 requires “meaningful human review” when FR programs create “legal effects concerning individuals or similarly significant effects concerning individuals.” Meaningful human review requires review or oversight by one or more individuals who are trained in accordance with the act. Training includes coverage of the capabilities and limitations of the FR program, and how to interpret the program’s output. The bill considers “legal” or “similarly significant effects” to be decisions that result in the provision or denial of criminal justice, financial services, housing, education, employment opportunities, and other basic civil rights. Accordingly, if an individual faces a significant outcome following government use of FR, they have the right to meaningful human review.

Fourth, agencies must receive a warrant in order to use FR programs during real-time surveillance. This means government actors may not use FR programs simultaneously with a live video feed. Specifically, SB 6280 prohibits use of FR programs on body-worn cameras used by law enforcement, meaning police may not use FR programs in the field without judicial authorization.

Companies like Microsoft—which create FR technology—favored and lobbied for the passage of SB 6280. Why, in a bill intended to limit FR use, would Microsoft approve of this legislation? Arguably, SB 6280 may be all bark and no bite. While this law sounds effective, it lacks enforcement procedures.

For example, the accountability reports may not provide much accountability. At this point, the reports are not required to be approved by any regulatory or legislative body. As a result, there is no enforcement mechanism for this provision of the bill. If an agency chooses not to follow the procedure, who will stop them?

Further, the “meaningful human review” provision lacks a substantive definition. The subsection defining this phrase is a single sentence, which fails to provide any direction for decision making. Moreover, the provision does not require review by any third party, allowing agencies to review any potential misconduct themselves, with their own employees.

Additionally, SB 6280’s warrant requirement only covers real-time identification, so agencies may freely use FR technology on previously shot footage without obtaining a warrant first. By allowing agencies to engage in this practice without a warrant, SB 6280 subjects the public to unchecked surveillance by law enforcement.

Washington enacted SB 6280 to provide compromise and regulation to an emerging field. However, the law lacks sufficient procedures for enforcing violations. Without those procedures, or a moratorium on FR use, government agencies can abuse these technologies to the detriment of every Washingtonian.

Is Discrimination Fair? The FTC’s Failure to Regulate Big Tech

By: Gabrielle Ayala-Montgomery

In the age of technological innovation, minorities fight to end discrimination both offline and online. Biased outcomes produced by technology have material consequences for minorities, as advancements in technology replicate social inequalities and discrimination. Vulnerable populations are denied fair housing, education, loans, and employment because of biased data; moreover, the commercial use of biased data raises concerns for all consumers.

The FTC should consider using its authority to deliver effective deterrence for harms to the economy when companies unfairly and disproportionately impact minorities. First, the FTC must bring enforcement actions against companies whose “unfair practices” disproportionately affect minority consumers. Second, the FTC should conduct rulemaking to expand its definition of “unfair practices” to encapsulate business practices that unfairly affect minorities.

The FTC Breakdown

The FTC protects consumers from deceptive or unfair business practices and investigates alleged violations of federal laws or FTC regulations. Section 5 of the FTC Act prohibits “unfair or deceptive business practices in or affecting commerce.” The Act vests the FTC with broad authority to bring enforcement actions against businesses to protect consumers against unfair or deceptive practices. Section 18 enables the FTC to promulgate regulations to prevent unfair or deceptive practices. After the FTC issues a rule, it may seek penalties for unfair practices that violate the rule.

The FTC explicitly defined its standards for determining whether a practice is deceptive but left the question of whether a practice is unfair to be interpreted by the courts. In response, courts have developed a three-factor test to identify an unfair practice. The test asks: (1) whether the practice injures consumers; (2) whether it violates established public policy; and (3) whether it is unethical or unscrupulous.

Biased Practices in High Tech

An algorithm is a process or set of rules, often involving mathematical calculations, that transforms input data into outputs that help humans make decisions. Algorithmic bias—also called machine learning bias or AI bias—is a systematic error in the coding, collection, or selection of data that produces unintended or unanticipated discriminatory results. Algorithmic bias is perpetuated when programmers train algorithms on patterns found in historical data. Humans then use these biased results to make decisions that are systematically prejudiced against minorities.

Data and surveillance are now big businesses. However, some companies have failed to ensure their AI products are fair to consumers and free from impermissible bias. For example, Amazon had to scrap a recruiting tool because the system rated job candidates in a gender-biased manner. The AI models trained themselves on résumé data compiled over the previous ten years, submitted primarily by white men. Thus, Amazon’s recruiting tool taught itself that male candidates were preferable.
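The mechanism behind such failures is easy to reproduce in miniature. The sketch below uses entirely invented data (it is not Amazon’s system or any real dataset): a naive model that simply learns hiring frequencies from skewed historical records will reproduce that skew when scoring new candidates.

```python
from collections import Counter

# Invented, illustrative records only: (group label, past hiring outcome).
# The historical decisions favored group "A" over group "B".
history = [
    ("A", "hired"), ("A", "hired"), ("A", "hired"), ("A", "rejected"),
    ("B", "rejected"), ("B", "rejected"), ("B", "hired"), ("B", "rejected"),
]

def train(records):
    """Learn P(hired | group) by simple frequency counting."""
    counts = {}
    for group, outcome in records:
        counts.setdefault(group, Counter())[outcome] += 1
    return {g: c["hired"] / sum(c.values()) for g, c in counts.items()}

model = train(history)

# Two otherwise identical candidates receive different scores purely
# because of the group label embedded in the historical record.
print(model["A"])  # 0.75
print(model["B"])  # 0.25
```

Nothing in the code mentions discrimination, yet the model’s scores encode it, because the only “signal” it was given was past, biased decisions. Real systems are far more complex, but the same dynamic applies when training data reflects historical inequality.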

The National Institute of Standards and Technology (NIST) examined the facial recognition technology (FRT) algorithms of the tech industry’s leading systems, including 189 algorithms voluntarily submitted by 99 companies, academic institutions, and other developers. The algorithms came from tech companies and surveillance contractors, including Idemia, Intel, Microsoft, Panasonic, SenseTime, and Vigilant Solutions. NIST found “empirical evidence” that many FRT algorithms exhibited “demographic differentials” that can worsen their accuracy based on a person’s age, gender, or race. Some algorithms produced no errors, while other software was up to 100 times more likely to return an error for a person of color than for a white individual. Overall, middle-aged white men generally benefited from the highest FRT accuracy rates, or the least amount of errors. Such bias in algorithms can emanate from unrepresentative or incomplete training data or from reliance on flawed information that reflects historical inequalities. If left unchecked, biased algorithms can lead to decisions that have a collective, disparate impact on specific groups of people even without the programmer’s intention to discriminate.

Companies’ targeted advertisement systems have been used to exclude people of color from seeing ads for homes based on their “ethnic affinity.” For example, Facebook and other tech companies settled with civil rights groups for participating in illegal discrimination in their advertising of housing, employment, and loans. Targeted marketing policies and practices by tech companies permitted advertisers to exclude marginalized groups from seeing specific ads. As a condition of the settlement, Facebook agreed to establish a separate advertising portal for creating housing, employment, and credit (“HEC”) ads on Facebook, Instagram, and Messenger that will not allow advertisers to block consumers based on gender, age, or multicultural affinity. However, research demonstrates Facebook’s ad system can still unintentionally alter ad delivery based on demographics. These advertisement practices contribute to the systematic inequality minorities face in income, housing, and wealth.

Expanding “Unfair Practices”

What if Facebook had not reexamined its practices, or Amazon had kept its biased recruiting tech? No single piece of data protection legislation exists in the U.S. to prevent biased data and surveillance technology. Instead, the country has a patchwork of laws at the federal, state, and municipal levels. Sen. Ron Wyden, D-Ore., plans to update and reintroduce his Algorithmic Accountability Act of 2019, a bill designed to fight AI bias and require tech companies to audit their AI systems for discrimination. The Act, if passed, would direct the FTC to require companies to study and fix flawed algorithms that result in inaccurate, unfair, biased, or discriminatory decisions. Passage of an Algorithmic Accountability Act would reduce decisions based on biased algorithms. However, instead of waiting on Congress, the FTC may use existing laws and policy options to address such violations and apply them to unfair and discriminatory practices in the digital world.

In the absence of federal legislation, the FTC Act could be used to protect consumers against unfair practices that are biased against minorities. For the purposes of Section 5, purposeful or negligent practices that disproportionately impact minorities should be treated as (1) unethical or unscrupulous, (2) violations of established public policy, and (3) actual harm to consumers. The FTC should prioritize enforcement of valid claims of unfair commercial practices that disproportionately impact minorities.

New Rulemaking Group May Provide Hope for Prevention

Following criticism that the FTC had failed to adequately use its authorities to address consumer protection harms, the agency announced a new rulemaking group. This group will allow the FTC to take a strategic and harmonized approach to rulemaking across its different authorities and mission areas. With the new group in place, the FTC is poised to strengthen existing rules by undertaking new rulemakings that further interpret unfair practices. Chairwoman Rebecca Kelly Slaughter stated, “I believe that we can and must use our rulemaking authority to deliver effective deterrence for the novel harms of the digital economy.” Perhaps under this new rulemaking group, the FTC can meaningfully protect and educate consumers about the harms of commercial practices that unfairly impact minorities by expanding its interpretation of “unfair” to include commercial practices biased against minorities.

The FTC needs to protect consumers by protecting minorities. Current laws do not adequately address biased data or practices that produce discriminatory outcomes for minorities. Without legislation or federal administrative action, high-tech companies will continue developing systems biased, intentionally or unintentionally, against minorities like people of color, women, immigrants, the incarcerated and formerly incarcerated, activists, and others. The FTC has the authority to effectively deter business practices that unfairly and disproportionately impact minorities by bringing enforcement actions against such practices. Further, the FTC should explicitly expand its interpretation of “unfair practices” to encapsulate practices disproportionately affecting minorities.

Right to Buy, But Not Repair?

By: Joanna Mrsich

In an increasingly advanced and technological world, gadgets continue to improve people’s lives and make what was once thought impossible, possible. However, as the technology behind these products becomes more advanced, so does the difficulty in fixing them when they malfunction. While companies such as Apple, Tesla, and others all provide arguably accessible product service and repairs, do consumers have the actual ability to fix or repair these products themselves? More importantly, should consumers have the ability, or right, to repair products they own?

What is the Right to Repair?

The “right to repair” movement is based on the belief that the technology in modern equipment has enabled manufacturers to reduce access to repair by claiming that repairs could violate their proprietary rights. Moreover, the movement argues that although people generally have the “right” to repair their own goods, the actual ability to do so can be influenced—or completely controlled—by a product’s original equipment manufacturer (“OEM”). These entities—OEMs—are often the exclusive source of replacement parts and other physical and electronic tools that make repairs feasible. OEMs often bar consumers from using independent repair or service firms by claiming such acts risk voiding their products’ warranties. Manufacturers have often said that “strict repair guidelines protect trade secrets and ensure safety, security and reliability.”

Some people within the right to repair movement argue that, in addition to providing consumers with this necessary right, legislation supporting a right to repair would have a positive environmental impact because more consumers would likely fix, rather than dispose of, their own devices. A 2019 report by the Colorado Public Interest Research Group noted that by making repairs difficult, the consumer electronics industry is creating a culture of disposable electronics. A 2014 Consumer Reports article advised consumers not to spend more than fifty percent of the cost of a new product on repairing an existing one. The executive director of The Repair Association—which advocates for right-to-repair legislation—noted that repairs are “roughly 50% or more of the cost to replace the device,” and that the “holy grail of all of it is to send you to the showroom to buy another product.” Therefore, when OEMs act as gatekeepers for repairs, consumers are more likely to replace damaged products than repair them.

Opposition to the Right to Repair

It is not uncommon for companies to oppose consumers’ right to repair their own products. An example can be found in Apple’s Repair Terms and Conditions. In §1.11.6—Disclosure of Unauthorized Modifications—the contract states that: “During the service ordering process, you must notify Apple of any unauthorized modifications, or any repairs or replacements not performed by Apple or an Apple Authorized Service Provider (“AASP”).” The section goes on to imply that repairs done by non-AASP services will likely void the product’s warranty or result in additional costs for completing the service, “even if the product is covered by warranty or an AppleCare service plan.”

Another pertinent example can be found in the video game industry. According to the Entertainment Software Association (“ESA”), video game consoles are “unique from other devices, appliances and consumer products” because they “rely upon a secure platform to protect users, the integrity of the gaming experience, and the intellectual property of game developers.” In fact, the ESA’s Right to Repair Policy explicitly bars unauthorized parties from bypassing the specialized software that protects a console against security and piracy risks. Moreover, the policy explains that while repair shops likely do not use repairs for illegal purposes, the “publication of a console’s security roadmap would allow bad actors to use this knowledge to undermine the entire console ecosystem.” The policy acknowledges the harm the right to repair could have on the industry and notes that major video game console makers—such as Microsoft, Nintendo, and Sony—all provide “easy, reliable, and affordable repair services…to ensure that their consoles remain in good working order.” Lastly, the ESA ends its repair policy by arguing that the right to repair movement would not contribute to a positive environmental impact because video game companies and retailers—like Microsoft, Nintendo, Sony, and GameStop—have robust recycling programs for consumers looking to dispose of consoles.

Despite the existence of many more technology-based examples of industry opposition to consumers’ right to repair, the last example in this post involves the agricultural industry. The American Farm Bureau Federation, the National Farmers Union, and the National Corn Growers all support the right to repair. The agricultural industry believes it is only fair to require manufacturers to make equipment field-serviceable. Equipment manufacturers, on the other hand, oppose the right to repair. For example, John Deere claims that it is dangerous to allow farmers to repair their own equipment.

Current Discourse

Over the years, important discourse on the issue has taken place, but not enough has been done. In 2019, the Federal Trade Commission (“FTC”) hosted a discussion called Nixing the Fix: A Workshop on Repair Restrictions. The workshop discussed the issues consumers and independent repair shops face when manufacturers restrict repairs or make it impossible to conduct product repairs without voiding a product warranty. The FTC later made all empirical research collected in preparation for the workshop available for the public to view. Additionally, state legislatures and Attorneys General have launched legislative and litigation initiatives in an attempt to bar popular user restrictions. In the farming context, former presidential candidates Elizabeth Warren and Bernie Sanders both promoted the right to repair.

However, some existing laws could be applied to the right to repair issue without amendment. The Magnuson-Moss Warranty Act—passed by Congress in 1975—requires businesses to provide consumers with detailed information about warranty coverage. This Act protects consumers from unfair or misleading disclaimers, including claims that warranties can be voided if a consumer removes a seal. Furthermore, as of February 2021, right to repair legislation has been introduced in 14 states.

What now?

Consumers should have the opportunity to repair the property they purchase. Under the status quo, it is increasingly difficult for people to buy repairable products. The European Commission may provide a good template for legislative changes the United States could implement; in Europe, right to repair rules are supposed to cover phones, tablets, and laptops by 2021. There has also been some progress during the COVID-19 pandemic, with Senate Democrats introducing a bill to block manufacturers’ limits on repairing medical devices. However, the problem is present in nearly every other industry as well, and comprehensive legislation is required. If technology truly is the future, repair rights need to be expanded to protect consumers.

Data is the New Oil: How China’s Global Biometric Database Throws the World Power Dynamic into Flux


By: Kelsey Cloud

Imagine receiving a phone call from a Chinese genome sequencing company that tells you that you are on the verge of developing heart disease and recommends a cocktail of medications to alleviate your future symptoms. Would you accept the medication?

On the one hand, if a Chinese company can micro-target you based on something identified in your DNA, why wouldn’t you want to be proactive and begin to solve a problem before it even arises? On the other hand, do we as a nation want another country to systematically eliminate our health care services?

Over the last few decades, the Communist Party of China—the majority political party in the People’s Republic of China (PRC)—has acquired the personally identifiable information (PII) of an estimated 80% of American adults. In February, the National Counterintelligence and Security Center (NCSC) warned that these efforts to obtain healthcare data from countries around the world posed “serious risks, not only to the privacy of Americans, but also to the economic and national security of the U.S.” The race to control the future of health care through the accumulation of biometric data is the modern space race; however, there is more at stake than national pride.

The Plan: Made in China 2025

PRC’s authoritarian government, led by Xi Jinping, has brazenly declared its aspirations to take the world stage as the dominant leader in this biological age. As the U.S. Chamber of Commerce highlighted in its summary of PRC’s published manifesto, Made in China 2025, PRC designates biotech as a “strategic emerging industry” and prioritizes collecting healthcare information both domestically and internationally. By investing $9 billion into collecting and sequencing genomic data, PRC’s communist regime strives to collect and analyze large genomic datasets in order to propel its precision medicine industry globally.

Rather than administering one-drug-fits-all treatments, precision medicine aims to provide customized treatment for individual patients based on their genetic makeup, lifestyle, and environment. By analyzing how a patient’s genes interact with their environment, precision medicine allows doctors to predict risk of disease and reactions to various medicines.

In the wake of the COVID-19 pandemic, PRC has invested billions of dollars into distributing COVID-19 tests around the world to accumulate genomic data from the global population for precision medicine advances. Propelled by China’s largest genomics company, the Beijing Genomics Institute (BGI), PRC has sold test kits to over 180 countries—establishing its own laboratories in 18—since August 2020.

COVID-19 Laboratories or Modern Day Trojan Horses?

When COVID-19 infections skyrocketed globally, BGI sent a letter to Washington State Governor Jay Inslee, proposing to construct and manage COVID testing laboratories. Promising to provide technical expertise and new equipment, BGI attempted to take advantage of the worldwide crisis with the ulterior motive of using testing to expand their collection of biometric information. Bill Evanina, former NCSC Director and veteran of the FBI and CIA, made a public statement in response to BGI’s letter, warning that “[f]oreign powers can collect, store and exploit biometric information from COVID tests.” Although it remains unclear whether BGI could receive DNA from nasal swabs, BGI has certainly found a way to establish a foothold in countries to start mining data.

The BGI headquarters house and operate the government-funded China National GeneBank, enabling PRC to map the human genome through a biorepository of 20 million genetic samples taken from humans, animals, and plants. Framed by BGI as a way to foster new medical discoveries and cures that will “advance its Artificial Intelligence and precision medicine industries,” the company’s stated mission acts as a modern day Trojan horse. By obtaining vast troves of foreign countries’ health data, PRC ultimately endeavors to weaponize that data and systematically eliminate foreign health care services, displacing America as a global biotech leader.

The Chase to Control Biodata: A Modern Space Race

In the 21st century, as global superpowers recognize that their future success hinges on acquiring robust amounts of human biometric data, the race to accumulate the largest, most diverse dataset has become the modern space race. Ultimately, the biggest dataset wins—hence PRC’s aggressive efforts to accumulate data from every country in the world.

As PRC rapidly stockpiles U.S. data to support these economic initiatives, it has simultaneously shut the door on access to its own data, creating a one-way street that thwarts the U.S. from similarly benefiting from Chinese healthcare data. This inequitable relationship could allow PRC to displace U.S. biotech companies as global biotech leaders. Even though new healthcare treatments produced in PRC could benefit American patients, America would ultimately become increasingly dependent on PRC’s drug industry. America’s dependence on China for personal protective equipment during the COVID pandemic would seem trivial compared to that kind of future dependence. NCSC warns that such a strong reliance on Chinese medicine would likely lead to a transfer of wealth, with the U.S. job market weakening as China’s strengthens.

Data as a Weapon

PRC’s vast accumulation of DNA, PII, and personal health information could allow it to target specific individuals, including Americans, through extortion and manipulation, such as leveraging someone’s mental illness or addiction for blackmail. Knowledge of top national decision-makers’ DNA could additionally be exploited to bolster PRC’s national defense strategies. By targeting genetic weaknesses, PRC could utilize genetic information to enhance its own soldiers’ strengths and engineer pathogens to exploit American soldiers’ weaknesses. For instance, BGI’s latest research centered on how medicine could interact with genetic makeup to protect a soldier from brain injury or stop altitude sickness from preventing a soldier from performing at maximum strength during wartime.

While BGI claims its collaboration with military researchers was for solely academic purposes, Human Rights Watch says otherwise: more than a million Uyghurs (Chinese citizens who belong to a Muslim minority) have been jailed in camps, in part due to two subsidiaries of BGI that allegedly conducted genetic analyses used to facilitate Muslim repression. Deeming these camps a crime against humanity, the U.S. Department of Commerce placed trade sanctions on BGI. The company responded by stating it was not involved in human rights abuses, attempting to persuade the public that the camps are educational and vocational institutions.

The U.S. Must Protect its Citizens’ Data

While no one chastises a country that conducts medical research to improve treatments and find new cures, PRC’s accumulation of biometric data through BGI poses a substantial risk to the health and safety not just of its own citizens, but of citizens of every country worldwide. With national security, healthcare systems, the economy, and individual privacy on the line, the U.S. must protect its citizens from misuse and theft of their biometric data. America’s $100 billion biotech industry stands to lose its innovative edge in the genomic field to Chinese companies, threatening a long-term cost to the U.S. economy. To remain a global superpower, the U.S. government must exercise an abundance of caution when collaborating with Chinese companies and ensure that its citizens’ data remains safeguarded from the grip of the PRC.