
By: Taylor Dumaine
In Jurassic Park, Dr. Ian Malcolm famously admonished the park’s creator by saying “your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” Technological advancement for its own sake ignores the genuine harms that advancement can cause or contribute to, and those negative externalities have often been overlooked or ignored. There is also often a reliance on federal and state governments to regulate industry rather than on self-regulation or ethics standards, a reliance that has become especially pronounced in the AI and generative AI spaces. Government regulation of AI is far outpaced by the technology’s rapid development, hindering the government’s ability to address ethical issues adequately.
Relying on government regulation is a cop-out for large tech companies. Congress’s record on technology regulation is poor at best: most bills fail to become law, and those that do are insufficient to regulate effectively. The United States still does not have a national privacy law, and there is little political will to pass one. An increasingly octogenarian Congress has a poor track record of understanding even basic technological concepts, let alone the increasingly complicated technology, such as AI, that it is tasked with regulating. During Senate testimony regarding the Cambridge Analytica scandal, Meta CEO Mark Zuckerberg had to explain some fairly rudimentary internet concepts.
Earlier this year, OpenAI CEO Sam Altman called for government regulation of AI in testimony before Congress. Altman also reportedly carries a backpack that would allow him to remotely shut down ChatGPT datacenters should the generative AI go rogue. While by no means a perfect example of ethics in the AI space, Altman at least seems aware of the risks of his technology. Yet he relies on the federal government to regulate that technology rather than engaging in any meaningful self-regulation.
In contrast to Altman, David Holz, founder and CEO of Midjourney, an AI image-generation program, is wary of regulation, saying in an interview with Forbes, “You have to balance the freedom to do something with the freedom to be protected. The technology itself isn’t the problem. It’s like water. Water can be dangerous, you can drown in it. But it’s also essential. We don’t want to ban water just to avoid the dangerous parts.” Holz’s stated goal is to promote imagination; he is less concerned with how pursuing that goal may harm some people so long as others benefit. This thinking is common in tech spaces.
Even the serious issues in generative AI, such as copyright infringement, seem almost mundane when faced with facial recognition tools such as Clearview AI. Dubbed “The Technology Facebook and Google Didn’t Dare Release,” these facial recognition tools have the disturbing ability to match faces across the internet. Clearview AI in particular has raised serious Fourth and Fifth Amendment concerns regarding police use of the software. Surprisingly, the large tech companies Apple, Google, and Facebook served as de facto gatekeepers of facial recognition for over a decade: having acquired facial recognition firms, they recognized the dangers and declined to release the technology. Facebook itself paid $650 million to settle a lawsuit related to its use of facial recognition on the platform. Clearview AI’s CEO, Hoan Ton-That, has no ethical qualms about the technology he is creating and marketing specifically to law enforcement. Clearview AI is backed by Peter Thiel, a co-founder of Palantir, which has its own issues regarding police and government surveillance. The potential integration of the two companies could result in an Orwellian situation. Clearview AI thus represents a worst-case scenario for tech without ethical limits, and its effects have already been disastrous.
Law students, medical students, and Ph.D. students are all required to take an ethics class at some point. Many self-taught programmers, by contrast, never incorporate the study of ethics into their learning, even though technology development raises very real and important ethical concerns. In an age, culture, and society that values advancement without pausing to consider the negative ramifications, society’s concern over ethics in technology is unlikely to change much. In a perfect scenario, government regulation would be swift, well informed, and effective in protecting against the dangers of AI. Given the rate of technological innovation, it is hard to stay proactive on ethics, but that does not mean there should be no attempt to do so. A professional ethics standard in computer science and software engineering would carry serious problems of its own and would be nearly impossible to implement. But by creating a culture in which ethical concerns are not just valued but actually considered in the development of new technology, we can hopefully avoid a Jurassic Park scenario.