What Can a Foul-Mouthed Twitter Troll and a Board Game Playing Robot Tell Us About Artificial Intelligence’s Ramifications for the Legal System?

By Jeff Bess

Rapid technological development in the digital age has disrupted countless industries and fundamentally reshaped many aspects of modern life. Many of these technologies also present legal challenges, ranging from constitutional privacy concerns stemming from government surveillance to ongoing employment law disputes over the use of independent contractors by companies like Uber. A perhaps even greater disruptor – to both the law and society in general – is found in the emerging field of Artificial Intelligence. There have been numerous scholarly inquiries into the theoretical challenges of creating a moral and legal framework to govern Artificial Intelligence technologies, but recent accomplishments in the field can provide clues as to how the direction of the technology will inform the legal rules that become necessary.

On March 23, 2016, Microsoft set an AI chatbot named “Tay” – which was modeled after the speaking style and identity of a teenage girl – loose on Twitter. Tay learned words and phrases and composed original tweets based on what it absorbed from the human users who engaged with it. Perhaps all too predictably, after only a few days of talking to Internet trolls, Tay began to profess her love for Hitler, deny the Holocaust, and proclaim many other radical opinions. Microsoft quickly disabled Tay’s Twitter account, and it now appears back intermittently – presumably following some technical modifications.

While the case of Microsoft’s Tay is amusing, it also demonstrates issues that will become increasingly common as Artificial Intelligence grows more widespread and sophisticated. A fundamental question is: Who is responsible for Tay’s actions? Is it Microsoft? The users who taught it to deny the Holocaust? Could there one day be a Tay so sophisticated as to be legally liable for its own actions? Tay’s quick turn toward fascist and racist speech shows that Artificial Intelligence can be steered toward malevolence, even if only because of human inputs. The consequences could be severe if those ideologies were taught to an AI system capable of doing something about them beyond posting to Twitter. Legislatures should prescribe rules governing liability for an AI’s unlawful acts before such acts become a reality and courts are forced to shoehorn existing law into the AI context.

A second recent achievement in Artificial Intelligence demonstrates how close society may be to truly formidable AI. Google’s AlphaGo, a system designed to play the complex strategy game Go, reportedly “def[ied] millennia of basic human instinct” in defeating Lee Sedol, the second-ranked Go player in the world, earlier this month. The same week, Google announced plans to release its Artificial Intelligence engine over the cloud, granting access to the public. Both the strength of Google’s technology and its willingness to spread it widely signal that Artificial Intelligence has come a long way toward reality and that its development is likely to accelerate.

Artificial Intelligence has been the stuff of science fiction for decades, but it may be here sooner than we think. Cognitive technologies have the potential to unlock previously unimaginable scientific and technological breakthroughs, but even early applications demonstrate the risks. A legal framework should be developed now so that it can shape the development of Artificial Intelligence in a way that benefits society, rather than emerging purely as a reaction when something goes wrong. Because truly autonomous AI does not currently exist, creators should be held liable for their creations’ actions, in line with existing doctrines that hold people responsible for their employees, pets, and the like in certain situations. As Artificial Intelligence matures, these questions of liability may need to be revisited. We cannot know exactly what will become of Artificial Intelligence technology, but it is possible that AI-enabled technologies could at some point have the cognitive ability – and therefore the moral responsibility – of human beings. Rather than waiting until then, lawmakers would be well-advised to start crafting a legal framework now that can be refined to meet the needs of emerging technologies.

Image source: commons.wikimedia.org.
