The Ethics Of Algorithms: Whom Would You Run Over?

23 October 2013 by Martin W. Angler, posted in algorithms, ethics, robotics

Let's try a thought experiment today, shall we?

It is a cold Monday morning in the middle of winter. You are driving to work by car. Even though you are the car's owner and you are sitting in the driver's seat, you are not actually driving. You only fasten your seat belt, enter your destination - and your car does the rest. It is an autonomous car that makes its own decisions. It can read road signs and adjust its speed accordingly. It can measure the distance to the car in front of you and stop when needed. It can avoid obstacles such as the brick the truck in front of you just lost. It can do many things that humans usually do.

But what an autonomous car really does is not just steer, accelerate, brake and avoid obstacles. What it really does is make judgments. Sometimes these judgments are not purely rational but value-laden.

Let's expand a little more on the idea of you driving an autonomous car. Imagine that at some point two little children jump in front of your car. The car detects an obstacle and tries to change lanes. On the other lane, however, an elderly lady is trying to cross the street. The car cannot come to a full stop, because it is already too close to both the children and the lady. Hence, the car needs to make a decision. Who will be run over and potentially killed? Should the old lady die, because she has already lived a long life? Or should the children die?

The Problem With Bias

There are many parameters that could ease this decision. Are the children terminally ill and would die tomorrow anyway? Did the old lady just try to commit suicide? The point is that algorithms such as obstacle-avoidance algorithms base their decisions on facts. These facts are delivered as sensory data and previously stored knowledge. This data is part of the algorithms' inputs. Some facts, however, cannot be measured rationally. Are the children really terminally ill? We cannot know. Neither can our avoidance algorithm. Far more important, though, are the value-laden judgments about who deserves to die. To philosophers, this dilemma is known as the trolley problem. There is no ethically correct answer. We cannot simply compare value-laden parameters. But how can we transform such judgments into algorithms?

Can algorithms implement ethics at all?

The answer is not so simple, because misjudgments may occur at many different levels:

  1. Signal error: Your data could simply be incorrect. Maybe a sensor is broken, maybe previously collected data is faulty. In practice, this could mean your car's sensors don't detect an obstacle or misread a road sign's speed limit, which would cause speeding in a pedestrian area. Not good.
  2. Parameter-based bias: When we compute a solution for a problem, we choose input parameters. Which parameters? How many? What do they express? All of this influences the results. Imagine the car does not take the speed limit as an input parameter but instead uses the number of people currently on the street to adapt its speed (see the sketch after this list).
  3. Method: How does the algorithm work in detail? Different people would solve the same task in different ways.
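
To make the first two points more concrete, here is a minimal, purely hypothetical Python sketch (the function names, parameters and numbers are mine, not taken from any real car): two routines that solve the "same" speed-control task with different input parameters and therefore behave very differently. A broken sensor corrupts the first; the choice of inputs biases the second.

```python
# Hypothetical sketch: which input parameters we pick already encodes a judgment.

def speed_from_sign(sign_limit_kmh):
    """Trust the detected road sign. A misread sign (signal error)
    propagates directly into the car's behaviour."""
    return sign_limit_kmh

def speed_from_crowd(people_on_street, max_kmh=50.0):
    """Ignore the sign and slow down as the street gets busier
    (parameter-based bias: different inputs, a different car)."""
    return max(5.0, max_kmh - 5.0 * people_on_street)

print(speed_from_sign(30.0))   # 30.0 - but only if the sensor read the sign correctly
print(speed_from_crowd(8))     # 10.0 - crowded street, crawl speed
```
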

In 2010, Felicitas Kraemer, Kees van Overveld and Martin Peterson published a paper titled "Is there an ethics of algorithms?" They clearly define what value-laden algorithms are and propose dealing with algorithm ethics already at the design stage. They describe a value-laden algorithm like this:

"An algorithm comprises an essential value-judgment if and only if, everything else being equal, software designers who accept different value-judgments would have a rational reason to design the algorithm differently (or choose different algorithms for solving the same problem)." - Kramer, van Overveld, Peterson, Is there ethics in algorithms?

In simple words: when there is another (rational) way of solving the problem, the algorithm is value-laden. Kraemer, van Overveld and Peterson suggest that software designers develop algorithms in such a way that users can decide which approach to take, according to their own preferences. If that is not possible, they further suggest, the applied method should at least be made transparent to the user.
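
What might that look like in practice? The following is a minimal sketch under my own assumptions (the names and the deliberately simplistic strategies are invented, not taken from the paper): the value-laden part of the obstacle-avoidance routine is chosen by the user, and the chosen method is disclosed rather than hidden.

```python
# Hypothetical sketch, loosely following the 'let the user choose, or at least
# disclose the method' suggestion.

def prefer_full_brake(situation):
    return "brake"        # always try to stop, even if a rear-end collision is likely

def prefer_lane_change(situation):
    return "change_lane"  # always try to steer around the obstacle

AVOIDANCE_STRATEGIES = {
    "full_brake": prefer_full_brake,
    "lane_change": prefer_lane_change,
}

def avoid_obstacle(situation, user_choice="full_brake"):
    strategy = AVOIDANCE_STRATEGIES[user_choice]
    action = strategy(situation)
    # Transparency: the car reports which method produced the action.
    print(f"Avoidance strategy '{user_choice}' chose action: {action}")
    return action

avoid_obstacle({"obstacle": "brick"}, user_choice="lane_change")
```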

[Image: Two methods proposed for solving the same problem. But whom should he listen to? / Twentieth Century Fox Film Corporation]

Transparency Or Control?

Stephan Noller, chairman of the policy committee of the Internet Advertising Bureau, writes in his piece for the Guardian: "Transparency is one of the most important principles when it comes to throwing light on the chaos." I can fully agree with that. Noller goes a step further and demands publishing algorithms' source codes as well as the data they use.

This is where it starts to get tricky.

Firstly, source code provides essential value to its owners. These companies have a business interest in it, and they retain the copyright to these algorithms. No company will publish business-critical algorithms unless it is forced to. Even if the principles of an algorithm are well known, asking Google to publish the crucial source code of its algorithms is like asking Coca-Cola to publish the recipe for its eponymous soft drink.

And secondly, data cannot simply be made public. Data privacy laws prohibit publishing arbitrary person-related data, especially sensitive data; companies would actually commit a crime by publishing it. Other data might well be suitable for publication. We need to distinguish these cases on a case-by-case basis.

Noller expresses another thought that I like very much: he proposes that we, as users, should be able to turn off personalized, algorithm-generated information and gain access to its unbiased, raw counterpart. For search engines, this could mean a button that makes sure everybody who pushes it receives the same results.
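
As a toy illustration only (the names and the ranking step are entirely made up and not how any real search engine works), such a button could boil down to a single flag that skips the personalization step:

```python
# Hypothetical sketch of a 'raw results' switch for a search engine.

def search(query, user_profile, personalized=True):
    results = ["result A", "result B", "result C"]  # placeholder raw results; query is ignored here
    if personalized:
        # Re-rank by the user's recorded interests - the value-laden step.
        results.sort(key=lambda r: user_profile.get(r, 0), reverse=True)
    return results

profile = {"result C": 3, "result A": 1}
print(search("autonomous cars", profile, personalized=True))   # personalized ranking
print(search("autonomous cars", profile, personalized=False))  # the same list for everybody
```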

This way, we could regain control of algorithm-generated, pre-filtered information. Why not do the same with autonomous cars? A manual override.

Algorithms Pull The Trigger

Value-laden judgments are not only a problem for cars and search engines. They also apply to drones. We need to clearly distinguish between remotely piloted drones and autonomous, self-organizing drones. As the former IT consultant and novelist Daniel Suarez says in his TED talk "The Kill Decision Shouldn't Belong To A Robot", there will be a time when autonomous drones become a necessity, if only because the GPS-based steering of human-piloted drones is vulnerable.

If this is true, then soon there will be no pilot pulling the trigger anymore. Military drones will have to decide on their own whether or not to attack a target. To be precise, algorithms will make these decisions. They will base them on parameters that have been selected for them, and on data that is either collected via sensors or comes from databases. Potentially biased data, parameters and methods will be the foundation of algorithms' decisions about who lives and who dies.

Human control over kill decisions could make sense (at least more sense than fully automating them).

The original trolley problem deals with an out-of-control trolley car that hurtles towards five tied-up people who cannot move – but there are options to save them. Moral philosopher Judith Jarvis Thomson of MIT describes her own version of the problem like this in the Yale Law Journal:

“Being an expert on trolleys, you know of one certain way to stop an out-of-control trolley: Drop a really heavy weight in its path. But where to find one? It just so happens that standing next to you on the footbridge is a fat man, a really fat man. He is leaning over the railing, watching the trolley; all you have to do is to give him a little shove, and over the railing he will go, onto the track in the path of the trolley.” - Judith Jarvis Thomson, Yale Law Journal

Now, from a legal perspective, this is a tricky problem. There is no right way of doing it. Both choices are legally wrong:

  1. You shove the fat man down. The trolley is stopped, but the man is dead. You would have committed murder.
  2. You do nothing and watch the five people die. The fat man survives, but you would have failed to render assistance.

If we were to apply the suggestion of Kraemer, van Overveld and Peterson to the trolley problem, we might think of an algorithm that gives us a choice between these two options. A choice between murder and failure to render assistance. But how would this look in practice? Would you like to enter your car and set it up for "murder mode"?

Who Is Liable?

But what if we left the choice entirely to the algorithm? How would the algorithm determine the best outcome? That depends on how it judges and what data it uses. If its main criterion were to harm as few people as possible – to minimize casualties – it would go for the first option. If the algorithm's goal were instead to minimize the penalty faced by its driver, it would go for option two.
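
Here is a minimal sketch of that dependence, with invented and deliberately crude numbers (they are illustrative, not legal or moral arithmetic): the same two options, two objective functions, two different "best" outcomes.

```python
# Hypothetical sketch: the choice depends entirely on the objective we hand the algorithm.

options = [
    {"label": "shove the man", "deaths": 1, "penalty": 10},  # murder
    {"label": "do nothing",    "deaths": 5, "penalty": 1},   # failure to render assistance
]

def choose(options, objective):
    return min(options, key=objective)

print(choose(options, lambda o: o["deaths"])["label"])   # 'shove the man'  (minimize casualties)
print(choose(options, lambda o: o["penalty"])["label"])  # 'do nothing'     (minimize the actor's penalty)
```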

And after all, should you as the driver be held liable for an accident? I am asking because you would not have made any decision; the algorithm would have. Technically speaking, its programmer could be liable as well, because he "taught" the algorithm its rules.

I am aware that we need algorithmic ethics. Just sticking to the law won't be good enough as a strategy: implementing only laws while leaving out human common sense and ethical judgment is not enough to fully automate the world. The law is already failing to keep up with technological advancements. And while implementing laws might seem like the simple part, reality shows that it is not that simple at all. Let me give you another example.

Earlier this year, at the re:publica 2013 conference, Joerg Blumtritt pointed this problem out clearly. He cited Isaac Asimov's First Law of Robotics:

"A robot may not injure a human being or, through inaction, allow a human being to come to harm." - Isaac Asimov, Three Laws of Robotics

Blumtritt then discussed the meaning of the verb injure. The term itself is value-laden. Usually, injuring someone is a bad thing. I think we can all agree that a cut is an injury. In that sense, medical robots performing surgical cuts would technically injure us – which is forbidden by Asimov's First Law. Blumtritt is right: the meaning of injure is not easily measurable but depends on context and interpretation. This is where it gets complicated, especially when we try to develop algorithms that implement this type of reasoning.
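
To see how quickly this breaks down in code, here is a hypothetical sketch (the predicates and fields are mine, not Blumtritt's or Asimov's): a naive, context-free definition of injure forbids surgery, and any refinement merely moves the value judgment into the choice of conditions.

```python
# Hypothetical sketch: encoding 'injure' as a predicate is itself value-laden.

def is_injury(action):
    return action["breaks_skin"]  # naive, context-free definition

def is_injury_in_context(action):
    # One possible refinement: a cut made with consent, for the patient's benefit,
    # does not count. Still value-laden - someone decided these conditions matter.
    return action["breaks_skin"] and not (action["consented"] and action["intended_benefit"])

surgical_cut = {"breaks_skin": True, "consented": True, "intended_benefit": True}
print(is_injury(surgical_cut))             # True  - the naive First Law forbids surgery
print(is_injury_in_context(surgical_cut))  # False - context changes the judgment
```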

It's Not Just Algorithms

Yet giving control back to humans is not necessarily a trouble-free solution. Pre-selection and the resulting bias have been happening in the media for a long, long time: every photograph we see in a newspaper shows a specific perspective, selected by its creator. Swiveling the camera just one centimeter to the left could have changed the entire meaning of the picture. Maybe another photographer would have selected a different perspective for depicting the same conflict. This amounts to the same kind of value-laden judgment described by Kraemer, van Overveld and Peterson. Not to mention the staging of photographs in conflict areas, which seems to be common practice, as war photographer Ruben Salvadori points out. Who questions these unethical practices? Few do.

Stephan Noller's idea of regaining some degree of control goes in the right direction. We cannot make sure our algorithms make ethically correct decisions. Perhaps that is not the point. Maybe we don't want them to make ethical judgments for us. Maybe we just need to be aware of what they decide and gain insight into their methods.

I mean, how can we teach computers to solve ethical dilemmas when we cannot solve them ourselves?

References:

Thomson, J. J. (1985). The Trolley Problem. The Yale Law Journal, Vol. 94, No. 6, pp. 1395–1415. http://philosophyfaculty.ucsd.edu/faculty/rarneson/Courses/thomsonTROLLEY.pdf

Kraemer, F., van Overveld, K., & Peterson, M. (2010). Is there an ethics of algorithms? Ethics and Information Technology. doi:10.1007/s10676-010-9233-7. http://rd.springer.com/article/10.1007%2Fs10676-010-9233-7/fulltext.html


3 Responses to “The Ethics Of Algorithms: Whom Would You Run Over?”

  1. Elizabeth

    Option 3: Swerve off of the street.

    Seriously. Hop up on a sidewalk if it's clear, or someone's yard, or the bus lane, or the road shoulder, or the ditch on the side of the street. You don't have to sacrifice anyone!

    I don't like these sorts of hypotheticals because they almost always ignore a possibility of other options not stated. It's this sort of binary thinking which limits people to such a narrow viewpoint that they can't see any other alternatives.

  2. Martin Holzherr

    But what if we left the choice entirely to the algorithm?

    According to the thought experiment carried out above, an algorithm that touches on ethical questions will have to be approved. There will be ethics committees which judge right from wrong, and the programmer just has to implement their decisions.

    If you think it through to the end, the conclusion is clear: you cannot leave ethical decisions to individuals. Only carefully pondering ethics committees have the right to decide. The autonomous car and the intelligent, coolly thinking machine of tomorrow will do the right thing; it will act as planned. In this future, it will be regarded more and more as irresponsible to let human individuals decide. In the situation outlined above, a human driver could kill the wrong person, the person who should survive according to an ethics committee. It is clear: you cannot let humans drive anymore if there is an alternative. Only certified machines will be trusted tomorrow. Never again will anyone trust a human – a human who could be in a bad mood or even intoxicated by alcohol or love.

  3. Martin Holzherr

    we might think of an algorithm that gives us a choice between these two options [killing one man in order to save 5]. A choice between murder and failure to render assistance. But how would this look in practice? Would you like to enter your car and set it up for "murder mode"?

    Indeed, freedom of choice is something very different from ethics. But if you own a car and this car autonomously decides to kill the "wrong" person, then you are dissatisfied, because you paid for this car and therefore you expect it to act as you would act if you had the choice. You could imagine setting preferences after buying the autonomous car: kill men, not women; old persons, not young ones; and so on. But ethics does not mean following your preferences; it means doing the right thing.
