Electric Soothsayers: The Ethics of Brain-Machine Interfaces

By: Mason Hudon

“Over himself, over his own body and mind, the individual is sovereign.” – John Stuart Mill

            In mid-April of this year, a company called Neuralink released a video of a male macaque monkey playing a version of the Atari classic “Pong.” At first glance, the video appears to be nothing more than a cute gimmick, that is, until the viewer realizes that the joystick the monkey is using isn’t even plugged in: the game is being controlled by the creature’s brain by way of a complex, proprietary microchip.

            Neuralink, the brainchild of billionaire tech tycoon Elon Musk (better known as the CEO of Tesla, Inc. and founder of SpaceX), develops “breakthrough technology for the brain” known as brain-machine (or brain-computer) interfaces (BCIs). Essentially, Neuralink and other companies like it are seeking to blur the line between human and machine, introducing computer hardware into human brains to do anything from making the world more accessible for disabled communities, to enhancing the video game experience, to “achiev[ing] a symbiosis with artificial intelligence,” as Mr. Musk puts it. According to Limor Shmerlin Magazanik, Director of the Israel Tech Policy Institute, “a BCI decodes direct brain signals—colloquially known as the firing of neurons—into commands a machine can understand. Using either an invasive method—a chip implanted directly in the brain—or non-invasive neuroimaging tools, letting the machine pull raw data from the brain and translate it to action in the outside world.” While the technological singularity (the merging of human and machine into an inseparable existence) may still be quite far off for the human race, the introduction of BCI technologies that implicate the human brain raises very serious legal and ethical concerns regarding personal autonomy, privacy, and the rights and identities of humans as we currently perceive them.
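
To make the quoted pipeline concrete, here is a deliberately simplified sketch of the loop Magazanik describes: sample neural activity, translate it into a command, act on it. Every name in it (NeuralSample, decode_intent, the threshold rule) is invented for illustration; real decoders are trained statistical models calibrated to each user, and nothing here reflects Neuralink’s actual software.

```python
# A hypothetical sketch of the pipeline Magazanik describes: raw neural
# activity in, a machine-readable command out. Every name here is
# invented for illustration and does not reflect any vendor's actual API.

from dataclasses import dataclass
from typing import List


@dataclass
class NeuralSample:
    """Spike counts read from a set of implanted electrodes in one time window."""
    spike_counts: List[int]


def decode_intent(sample: NeuralSample) -> str:
    """Translate raw firing rates into a cursor command.

    Real decoders are trained statistical models calibrated per user;
    this simple threshold rule is only a stand-in for that step.
    """
    half = len(sample.spike_counts) // 2
    up_activity = sum(sample.spike_counts[:half])
    down_activity = sum(sample.spike_counts[half:])
    if up_activity > down_activity:
        return "MOVE_UP"
    if down_activity > up_activity:
        return "MOVE_DOWN"
    return "HOLD"


# One tick of the loop: the implant samples neurons, the decoder emits a
# command, and the machine (a Pong paddle, say) acts on it.
print(decode_intent(NeuralSample(spike_counts=[12, 9, 4, 3])))  # MOVE_UP
```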

            It’s true, “[b]asic neurotechnologies have been around for a while—including technologies like cochlear implants and deep brain stimulation and more complicated brain-computer interfaces,” but technologies of the kind that Neuralink and other companies involved in advanced BCI development are seeking to introduce are wholly unprecedented. In fact, Maja Larson, general counsel for the Seattle-based Allen Institute, has expressed that this “commercialization” of formerly purely medical applications for BCIs has never been seen before and risks rendering “benign research politicized.” When profit margins and the “bottom line” enter an equation that previously sought to solve relatively narrow clinical problems, largely divorced from the idea of revenue generation, all bets might be off.

Legal regimes and regulations have not been crafted to deal with many of the dilemmas that these technologies will pose: how college admissions should be handled for students whose brain implants aid them in their schoolwork or allow them to access the internet, how brain data should be protected when a BCI is communicating over a public WiFi network, how advertising will be implicated if companies can detect your needs (like hunger), or, even more complexly, how the regime of intellectual property will be impacted as a whole. Additionally, Scientific American writes, “[o]ne tricky aspect is that most of the neurodata generated by the nervous systems is unconscious. It means it is very possible to unknowingly or unintentionally provide neurotech with information that one otherwise wouldn’t. So, in some applications of neurotech, the presumption of privacy within one’s own mind may simply no longer be a certainty.” The legal community will ultimately be tasked with addressing these deep concerns, and efforts to develop new laws that preemptively protect against abuses of this technology should begin sooner rather than later.
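
The public-WiFi worry above is, at bottom, an engineering problem with well-understood partial mitigations. As a hedged illustration only (the packet contents and device name are invented; the Python `cryptography` package and its Fernet recipe are real), neurodata can be encrypted on the user’s device before it ever touches an untrusted network:

```python
# A hedged illustration of the public-WiFi concern: neurodata encrypted on
# the user's device before transmission. The `cryptography` package and its
# Fernet recipe are real; the packet contents and device name are invented.

import json

from cryptography.fernet import Fernet

# In practice the key would live in secure hardware on the user's device,
# never traveling alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

# A hypothetical neurodata packet, including a signal (hunger) the user
# never consciously chose to broadcast.
packet = json.dumps({"device": "bci-demo-01", "hunger_signal": 0.83}).encode()

ciphertext = cipher.encrypt(packet)     # what an open network would see
plaintext = cipher.decrypt(ciphertext)  # what only the keyholder recovers

assert json.loads(plaintext)["hunger_signal"] == 0.83
```

Encryption in transit addresses only one narrow slice of the problem, of course; it says nothing about what the company holding the key may do with the data once it arrives, which is precisely where law must step in.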

            Robert Gomulkiewicz, Charles I. Stone Professor of Law at the University of Washington School of Law, discusses in his Legal Protections of Software class that intellectual property protections for software don’t always work well because lawmakers in the mid-20th century chose to conform existing IP regimes like copyright, patent, and trademark to novel technologies far different from the items and ideas those regimes had protected before the advent of the computer. Instead of creating sui generis laws that might account for all of the nuances and complexities of software, lawmakers opted for the “easy option” of retrofitting copyright, patent, and trademark law to contemporary needs. Such an “easy option” may work adequately for protecting software when only financial concerns are implicated, but when it comes to the human mind and the privacy of one’s own thoughts and emotions, the stakes are too high for a retrofitted system to suffice. Sui generis laws are thus both a legal and a moral imperative for lawmakers seeking to tackle BCI technologies moving forward. While new statutory regimes can and should draw important aspects of intellectual property and existing privacy regimes into their language, crafting brand-new policy cannot and should not be avoided.

            Given the complexity of the issues inherent in BCI technologies, it will be critical to involve stakeholders from different backgrounds and paradigms, including lawyers, engineers, bioethicists, doctors, and perhaps even philosophers, in coalescing competing visions of the role of BCIs into workable legal doctrine. Particular focus should be directed toward ensuring privacy, equity, autonomy, and safety for those wishing to partake in BCI technologies. Specifically, discussions should concern: (1) securing the fundamental autonomy of the human mind, (2) securing the fundamental autonomy of the human body, (3) allowing BCI users to control third-party access to their data (a toy sketch of what such user-controlled access might look like follows below), (4) ensuring accuracy in the interpretive methods used by software that attempts to translate data from people’s brains, (5) ensuring disclosure of the use of performance-enhancing BCIs in academic and competitive settings, (6) mitigating the effects of hacking and malware on BCIs, (7) elucidating the role and risks of giving artificial intelligence a place in BCIs, as Elon Musk has discussed, and (8) ensuring that people remain psychologically sound after implantation. This list is not exhaustive, but it covers some of the central issues that will underpin the legal framework for BCIs in the future. As with many technical innovations, things move fast, and legal entities need to act now to protect the qualities of human existence that we currently hold dear.
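
As a toy model of point (3), and only that, consider a consent ledger in which the user, not the vendor, decides which third parties may read which streams of brain data. All party and stream names below are hypothetical, and a real regime would also need audit trails, granular purposes, and legal teeth that no data structure can supply on its own:

```python
# A toy model of point (3): the user, not the vendor, decides which third
# parties may read which streams of brain data. All parties and stream
# names below are hypothetical.

from dataclasses import dataclass, field
from typing import Dict, Set


@dataclass
class ConsentLedger:
    """Maps each third party to the data streams the user has authorized."""
    grants: Dict[str, Set[str]] = field(default_factory=dict)

    def grant(self, party: str, stream: str) -> None:
        self.grants.setdefault(party, set()).add(stream)

    def revoke(self, party: str, stream: str) -> None:
        self.grants.get(party, set()).discard(stream)

    def may_access(self, party: str, stream: str) -> bool:
        # Deny by default: absent an explicit grant, no one reads anything.
        return stream in self.grants.get(party, set())


ledger = ConsentLedger()
ledger.grant("research_lab", "motor_cortex")

assert ledger.may_access("research_lab", "motor_cortex")
assert not ledger.may_access("ad_network", "hunger_signal")  # never granted

ledger.revoke("research_lab", "motor_cortex")
assert not ledger.may_access("research_lab", "motor_cortex")
```

The instructive detail is the default: access is denied unless the user has affirmatively granted it, the inverse of how most consumer data collection works today.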
