
Posted on Aug 30, 2017 in AI in Our Time, Featured, Science & Technology

Legality and Security: Where Humans and Machines Converge


Editor’s Note: This article is the fourth installment in Kara Anderson’s ongoing series on contemporary artificial intelligence and its implications for humanity.

For Part I: “Unchaining the Mind through AI: What Artificial Intelligence Actually Means for Humanity”

For Part II: “Mom was Right: Creativity Matters”

For Part III: “A Non-Political Solution to Global Healthcare”

Until now, our discussion has focused primarily on the philosophical and technical implications of Artificial Intelligence (AI), as these two areas appear to be the epicenter of public concern. These concerns are natural: any new discovery, whether artistic or scientific, temporarily disorients and threatens humanity’s understanding and perception of the world. With AI, the threat appears slightly more daunting, as AI has compelled us to explicitly define abstract qualities like decision making, learning, and personal choice. But these concerns only scratch the surface of AI and its implications for various facets of society – economics, communication, scientific research, recreation, security, law, and so on.

This series has addressed the convergence of AI with many of these fields (medicine, politics, economics); yet the legality and security concerns of AI have not been explored. How do we regulate the development of this technology, considering that the full potential of AI has yet to be realized? What are the acceptable uses and roles of AI in society? How do we maintain and protect the privacy of individuals as AI machines become more involved in our personal lives (healthcare assistance, home assistance machines, etc.) and increasingly able to “derive the intimate from the available”?

Programmers, developers, and the individuals still in control of the technology are becoming increasingly concerned with these questions, since the agents involved in the development of AI are held accountable to the basic institutions of law. But what happens when these self-sufficient machines deviate from their original purpose or become problematic? Do we hold them to the same basic standards of law as humans, or do we create a subset of laws specifically for AI machines? Could we hold AI machines, which are essentially models of human cognition, up to human standards?

Legality of AI Malfunctions: Supercomputers and Twitter Bots

Here is an extreme hypothetical that puts some of these questions in perspective: imagine that Watson, a supercomputer designed to field and answer questions, were to kill a person. The question is whether Watson itself or its creators, developers, and/or owners are responsible for Watson’s actions and malfunctions. Incarceration and financial compensation tend to be the punishments when humans commit such crimes, but these outcomes would have very different implications for AI machines. This is where it gets complicated: AI machines are somewhat independent, and this autonomy will only increase as the technology improves. Therefore, new and unprecedented procedures and laws must be created to address AI malfunctions, in order to protect users and maintain order and safe development.

Like many hypotheticals, this situation is highly unlikely, but it raises important considerations for cases such as the 2015 Netherlands Twitter bot incident. A Twitter bot – a non-AI script (lines of computer code) that generates speech on social media – posted a death threat on the page of a local fashion show, generating panic. Authorities were notified and traced the bot’s IP address back to a local resident. When they arrived, the bot’s creator had no knowledge of the situation or of what had caused the malfunction. The authorities confiscated and dismantled the bot immediately, but the consequences and implications of such incidents are not that simple. While this may seem like a fairly straightforward case, the legal side of AI is still in its initial phase and thus has no precedents to go by. Currently, courts are adapting previously passed laws to cover cyber and technological issues, as in Yahoo! Inc. v. La Ligue Contre Le Racisme et L’Antisémitisme, in which U.S. federal courts weighed whether the First Amendment protected the corporation’s hosting of auctions for Nazi paraphernalia and the presence of hate speech on its website. These laws, however, are not enough, and they fail to recognize the fundamental differences and challenges that technology poses. AI only complicates and muddles this legal grey area further.
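To make concrete what a “non-AI script that generates speech” looks like, here is a minimal sketch of a word-level Markov-chain text generator – a common design for simple Twitter bots. This is purely illustrative, not the code from the actual case; all names and the sample corpus are invented for the example.

```python
import random
from collections import defaultdict

def build_chain(corpus):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, seed, length=10):
    """Random-walk the chain from a seed word to produce a new sentence.

    Because each step is a random draw over observed follow-ups, the
    script can emit combinations its author never wrote or reviewed --
    the core of the legal problem described above.
    """
    word = seed
    output = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break  # dead end: no word ever followed this one
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Hypothetical corpus; a real bot might train on a user's own tweets.
corpus = "the bot posts what the bot reads and the bot reads tweets"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

A bot like this has no understanding of its output; it only recombines fragments of its training text, which is why its owner can be genuinely surprised by what it posts.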

This seemingly simple case not only exemplifies the vast number of unknowns we face with AI; it also highlights the need for comprehensive regulations and monitoring systems to catch malfunctions and keep such situations from recurring. While these AI machines are not human, they possess many human qualities, and the resemblance is only improving. So the question becomes: how similar should their treatment be to that of humans? While there are many answers to this question, it has become clear that, for legal purposes, human standards are essential to maintaining order and security – but we must explore the extent to which those standards apply.

Security: Regulating AI

At the root of these legal concerns is the issue of security. Like humans, AI machines require regulation, constant monitoring, and a standard to which they can be held accountable. While the malfunctioning Twitter bot appears to have suffered some type of coding or processing error, the outcome is not dissimilar to human “malfunctions” or deviations. Thousands of individuals use Twitter and other social media sites to post inappropriate and threatening messages, which is why these sites are regulated by legal, internal, and external departments. Another key component of this monitoring system, however, is the users of these sites themselves: Twitter and other platforms offer options to report inappropriate comments, pictures, and videos. While such a system will look considerably different in the field of AI, user monitoring is a relatively inexpensive and simple security measure that can help developers detect and correct issues before they escalate. Even in cases where legal or regulatory action cannot be taken for lack of information, users can voice their objection to certain messages and thereby motivate corrections.

Cyber security law provides a loose model for the development of AI and the security concerns that accompany it. In its initial conception, internet regulation was virtually non-existent, and security was not a major concern. During that time, unsupervised deviations and malfunctions mattered less because of the novelty of the technology – the stage AI appears to be in currently. Over time, however, security concerns have multiplied. From viruses, theft, and information fraud to cyberbullying and stalking, society has been forced to confront these issues ex post facto. Bullying, for instance, is nothing new, but the internet has become a largely unregulated platform for it. In response, organizations and individuals have begun lobbying for more comprehensive legislation, increased regulation, and more transparency, all of which has driven the development of the emerging field of cyber security to counteract these unintended negative consequences.

The field of AI is likely to experience a similar progression, with obvious differences. Yet AI has an opportunity to take a proactive stance: to develop guidelines and regulations alongside the technology itself rather than legislating retrospectively. This does not mean societies will be able to anticipate every possible complication or situation, but a proactive stance on issues like legality and security would reduce the chance of malfunction while improving the overall viability of AI for everyone. Regardless of an individual’s stance, AI is here and is only growing. Failing to take proactive measures would be irresponsible, especially when we have seen how a failure to do so plays out time and time again.
