
What will go wrong with AI?

What is it about AI that makes us go overboard when thinking about its possible impact on humanity? Is it the Terminator legacy, which has deeply embedded the idea of a malicious AI trying to kill us all in our cultural and personal mindset? That is probably part of the problem, given the many references to the Terminator and Skynet in AI-related publications. Even without a direct reference to the movie itself, you can often feel its presence.

Take this article by Roman V. Yampolskiy for example:

The threat from dangerous AI systems is vastly underappreciated and under-researched. If we don’t study them, we can’t fight them – or prevent them.


Short as it is, it attempts to give a balanced view of what might go wrong with AI. Balanced, that is, up to the "What might they do" paragraph, where the most horrible things a reasoning AI could do to humans are summed up. Among them "Enslaving humankind(…)" and, worst of them all (quoted in full to let this one sink in):

Abusing and torturing humankind with perfect insight into our physiology to maximize amount of physical or emotional pain, perhaps combining it with a simulated model of us to make the process infinitely long” (sic).

What does this say about the author, the society we live in, and even our species in general if this is what we come up with when thinking about AI?

What makes this nightmarish list even weirder is the preceding paragraph, "Whom should we look out for?", with a comprehensive list of people and groups who are interested in using AI in a bad way or, probably closer to reality, who are already trying to do so. Taking this list as a starting point, it would have been easy to come up with realistic scenarios of how these groups could use AI to achieve their goals. So why leap to the most extreme and unlikely ones? Sure, they make for a provocative article, but this kind of reasoning (and this author is not alone in it) is harmful because it leads the discussion away from the scenarios that matter right now. These don't fall into the endless-torture category but are far more real. And they are likely to be caused by an obvious category missing from the list:

People who, with the best intentions, come up with AIs that have unintended and unexpected negative effects (the book Weapons of Math Destruction by Cathy O'Neil gives some great real-life examples of this).

And this is the most problematic category by far, as these people cannot be identified beforehand but only after the damage has been done.

Before we continue, we should settle on terminology. Terms like AI, machine intelligence, machine learning, deep learning, predictive analytics and algorithms are used freely and loosely, often clouding the real discussion with arguments about whether a term is used correctly in its context. For the sake of the argument I am making here, let's stick to a simple one: machine decision making, which (self-explanatorily) means any decision making originated by a non-human process. Whether that is a convolutional neural network deciding "Yes, this is a cat", a predictive model deciding I shouldn't get a loan, or any other decision made by a non-human.

Although its fundamental concepts are decades old, we are only now at the dawn of applying machine decision making to real-life situations at an unprecedented scale, and as with any "dawn of" situation, this brings a lot of experimenting, uncertainty, trial and error, boundary finding, huge successes and big failures.

The errors and failures at this point are nowhere near endlessly torturing humans. They range from funny:

or mildly annoying:

to things that can have a bigger impact on your life or on society:

An algorithm is using government data to tell those from low socioeconomic backgrounds their likelihood of completing university, but privacy experts say it could be utilised for early intervention instead of discouragement.
Algorithms can dictate whether you get a mortgage or how much you pay for insurance. But sometimes they’re wrong. Lots of algorithms go bad unintentionally. Some of them, however, are made to be criminal. Algorithms are formal rules, usually written in computer code, that make predictions on future events based on historical patterns. To train an algorithm you need to provide historical data as well as a definition of success.

Even if you take the positive stance that such errors and failures are temporary and will be solved along the way, a more fundamental problem remains. Most machine decision making is an extremely efficient discriminatory process. Discrimination in this context does not refer to the racist and prejudiced kind, but to its other meaning: the "recognition and understanding of the difference between one thing and another". Whether that is a simple hotdog/not-hotdog situation or a more serious eligible/not-eligible-for-a-loan decision.
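To make that neutral sense of "discrimination" concrete, here is a minimal sketch (Python with scikit-learn; all feature values and labels are made up for illustration) showing that the very same classification code separates hotdogs from non-hotdogs or eligible from non-eligible applicants. Only the meaning we attach to the labels differs.

```python
# The same classifier code "discriminates" in the neutral sense, whether the
# labels mean hotdog/not-hotdog or eligible/not-eligible for a loan.
# All numbers below are toy values, not real data.
from sklearn.linear_model import LogisticRegression

hotdog_features = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
hotdog_labels   = [1, 1, 0, 0]          # 1 = hotdog, 0 = not hotdog

loan_features = [[45, 2], [80, 0], [30, 5], [35, 4]]  # e.g. income (k), defaults
loan_labels   = [1, 1, 0, 0]            # 1 = eligible, 0 = not eligible

for name, X, y in [("hotdog", hotdog_features, hotdog_labels),
                   ("loan", loan_features, loan_labels)]:
    model = LogisticRegression().fit(X, y)
    print(name, model.predict(X))       # the mechanics are identical either way
```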

At the same time, we are painfully aware of what discrimination (in the non-neutral sense) can lead to.

So on the one hand we have a broad movement in society that is slowly but surely trying to eradicate differences resulting from discriminatory processes, and on the other hand there is this extremely powerful technology introducing perfect discrimination, one that goes beyond gender, age and race and aims at you personally while taking all of these and more into account. From an anthropologist's perspective this is an interesting time, because what will happen when these forces clash? That clash is going to happen at some point, and the more the two diverge (and they are diverging), the harder it will be.

As a thought experiment, let's imagine a young startup called PayFair that aims to disrupt the salary negotiation world by using deep learning to propose a fair salary for a given candidate's profile. Sounds great, doesn't it? And from a technical perspective this is easy to do: you collect data from as many employees as you can, label them with a salary, and train a neural network, resulting in a model that, given the right data, will produce a fair salary for each candidate. It works like a charm, and PayFair attracts a lot of big clients, resulting in a healthy and fast-growing business. Until, after a year, a list of candidates and salaries is published showing that the model systematically produces lower salaries for certain identifiable groups, such as women. This was certainly not PayFair's intention (it wants to live up to its name), nor a programming error or deliberate bias, but simply a result of the data used to train the model. Although imaginary, this is a realistic scenario. If you tried this with data that is currently available, you would find that women get different salary proposals than men, because that difference is embedded in the training data.
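A hedged sketch of how that could play out: the model below is trained on synthetic "historical" salaries that contain an arbitrary pay gap, and it dutifully reproduces that gap for otherwise identical candidates. The column names, the 10% gap and the use of a plain linear model instead of a neural network are all assumptions made purely for illustration.

```python
# Sketch of the hypothetical PayFair scenario: a model trained on historical
# salaries simply reproduces whatever pay gap sits in that data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1_000
experience = rng.uniform(0, 20, n)      # years of experience
group = rng.integers(0, 2, n)           # toy encoding of a protected attribute

# Synthetic historical salaries: equal pay for experience,
# but an arbitrary 10% gap imposed on group 1.
salary = 40_000 + 2_500 * experience
salary = np.where(group == 1, salary * 0.9, salary)

# "PayFair" trains on the historical data as-is.
X = np.column_stack([experience, group])
model = LinearRegression().fit(X, salary)

# Two identical candidates, differing only in the group column:
print(model.predict([[10.0, 0.0], [10.0, 1.0]]))  # the proposals differ
```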

This is a simple example and easily preventable if you pay attention. But what about complex models where possible biases are hard to see? In a business situation it is easy to get caught up in the technical aspects of applying this kind of technology, but if you are not careful you run a high risk of running into legal or reputational problems somewhere along the line. This is even truer for startups, as they usually don't have a legal or compliance department to point this possibility out to them. And it has already happened:

One homeowner is furious after $100,000 was knocked off the value of her home by an algorithm.
A group of journalists have sued the Chicago Police Department to obtain public records about a controversial algorithm the department uses to predict how likely someone is to commit a crime. The journalists, which include the Chicago Sun-Times and an independent journalist whose previous lawsuit against CPD led to the release of the Laquan McDonald shooting video, have filed a Freedom of Information Act lawsuit against the Chicago Police Department. They're claiming that the department has withheld public information about the algorithm, which identifies citizens who land on the department's Strategic Subject List, known as a "heat list." The CPD's "heat list" is a list of hundreds of people the city has determined are likely to be involved in a crime, based on a computer algorithm. The specific variables that the algorithm takes into account are unknown, though officials told The Verge that location, whether someone has been involved in a crime, and if someone is socially connected to a…

So instead of coming up with end-of-humanity scenarios, pay attention to the more realistic ones. Especially if you are a company producing or using this technology, you should really pay attention to the kind of data you use and to the results your models produce, because accountability is already becoming an issue, and considering that even DARPA takes this seriously, you should too!
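One concrete form that attention can take is a routine check of model outputs across identifiable groups before anything ships. Below is a minimal sketch of such a check; the function name, the toy numbers and the 5% threshold are my own assumptions, not an established fairness standard.

```python
# Minimal sanity check: compare a model's average output across groups.
import numpy as np

def group_gap(predictions, group_labels):
    """Return per-group mean predictions and the relative gap between them."""
    predictions = np.asarray(predictions, dtype=float)
    group_labels = np.asarray(group_labels)
    means = {g: predictions[group_labels == g].mean()
             for g in np.unique(group_labels)}
    lo, hi = min(means.values()), max(means.values())
    return means, (hi - lo) / hi

# Toy predictions for two groups; in practice these come from your model.
preds  = [52_000, 55_000, 61_000, 47_000, 49_000, 54_000]
groups = ["a", "a", "a", "b", "b", "b"]

means, gap = group_gap(preds, groups)
print(means)
if gap > 0.05:   # arbitrary 5% threshold, purely illustrative
    print(f"Warning: {gap:.1%} gap between groups, investigate before shipping")
```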
