By Sean Hyde
Artificial Intelligence (AI) systems emerging today create unintended consequences that raise ethical questions. Unsurprisingly, these concerns have led some to call for ethics regulations in AI development. Regulating ethics is not unique to AI, and a mechanism to enforce the standards proposed by various organizations helps make those regulations effective. That said, enforceability is not strictly necessary: even nonbinding standards can make a significant difference if the industry chooses to follow them.
- Types of Ethical Regulations
The American National Standards Institute (ANSI) recently wrote about the emergence of standards from a committee of the International Organization for Standardization (ISO), whose standards are promulgated in the United States through ANSI. So far, ISO has published only standard use cases and a roadmap for future standards, with several more standards in development. None of these documents, including those in development, appears to focus on ethics; rather, they address basic design topics and data standards.
Leading the charge in developing standards is the Institute of Electrical and Electronics Engineers (IEEE) Global Initiative on Ethics of Autonomous and Intelligent Systems. The IEEE is approaching the problem with two main deliverables: Ethically Aligned Design (EAD), the document that sets out the recommended standards, and a body of recommendations gathered from a community of experts. The goal is to focus developers on the key issues and, hopefully, to foster compliance with the standards as they are released. The IEEE first published EAD in 2016 and released an updated version, EADv2, in December 2017, incorporating the input of roughly 250 AI experts.
As a final note on ethical regulations, some industries already operate under ethical rules, such as the professional conduct rules for lawyers or the AMA Code of Medical Ethics for physicians. These profession-specific forms of regulation are typical of jobs that require a high level of public trust, a height I do not think AI developers have reached. It’s a possibility, but a long shot at best given the breadth of the AI field.
- Can AI Standards Be Binding?
With that background in mind, could these standards become enforceable regulations? It depends, in part, on the organization behind them. ISO standards are recognized by the World Trade Organization (WTO), an organization that promulgates extensive amounts of global regulation, and ISO has issued a wide array of standards, from physical product specifications to management methodologies, including software development processes. However, ISO standards are not legally binding. In the United States, they are recommended by ANSI but are not required for private industry. Federal statute directs federal agencies and departments to use such voluntary consensus standards, but ANSI has no power to push standards directly onto private industry.
Another possible source of regulations in the United States is the National Institute of Standards and Technology (NIST), created to study technology and develop standards that improve the United States’ position in the global technology market. NIST is specifically tasked with working with organizations like ISO and the IEEE in the study and development of technological standards. Like ANSI, however, it cannot impose enforceable regulations on industry.
- Is There a Point to Nonbinding Standards?
In general, regulations are most effective when some enforcement power stands behind them to ensure compliance. The WTO, for example, can authorize tariffs or initiate dispute settlement proceedings against a non-compliant member country, which encourages members to follow the rules. But at the end of the day, the WTO does not regulate ethics; it regulates trade.
Nonbinding regulations are not completely hopeless, however; they have proven effective in a number of fields of transnational law. What’s more, such industry-made rules can harden into binding norms in international courts via customary international law if a sufficient number of countries follow them for a sufficiently long period of time. Until that happens, compliance rests on developers taking up the ethical torch on their own.
For now, there is no clear path to enforceable regulations for the ethical development of AI, in the United States or abroad. Whether there should be such regulations is a question whose answer will likely depend on each regulator’s personal viewpoint, and the standards currently being published are not enforceable without some further action to make them binding.
In the meantime, we will rely on the developers to be mindful of what they are doing, and hope they do not create Skynet.