Should self-driving cars be utilitarian?

Charlotte Weekly discusses the ethical dilemmas behind self-driving cars, and the novel approaches taken to solve them.

A philosopher is probably the last person you would expect to see designing your next car. Yet, as self-driving cars cruise their way into the mainstream market, philosophers and engineers are being brought together to tackle the seemingly impossible problem of algorithm morality – or rather, the question of under what circumstances your shiny new vehicle should be willing to kill you.

Self-driving cars are coming. They have already been trialled in San Francisco, and John Zimmer, co-founder of the ride-sharing service Lyft, expects most rides on his platform to be driverless within five years. That gives us a great deal to worry about. These cars can expertly navigate, change lanes, abide by traffic laws and detect obstacles up to 200 metres away. But what happens when they are presented with a no-win scenario – a collision in which, no matter the outcome, someone will get hurt?

Consider the following: the year is 2020, and you are travelling in a self-driving taxi. Suddenly, five pedestrians step out in front of the car, and the only place it can swerve is into a nearby tree. There is no time to stop, so if the car hits the tree you’ll be fatally injured; if it continues on its current path, it will injure all five pedestrians. Should the car prioritise minimising the loss of life, even if that means sacrificing its passenger, or should it protect the passenger at all costs?
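To make the contrast concrete, here is a minimal, purely illustrative sketch of how the two competing policies might look if written down as code. Every name and number below is invented for this example – it is not drawn from any real vehicle software.

```python
# Illustrative sketch only: two candidate crash policies for the scenario above.
# All names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Outcome:
    action: str             # e.g. "swerve into tree" or "stay on course"
    passenger_deaths: int
    pedestrian_deaths: int

def utilitarian_policy(outcomes):
    """Pick the action that minimises total expected deaths, whoever they are."""
    return min(outcomes, key=lambda o: o.passenger_deaths + o.pedestrian_deaths)

def passenger_first_policy(outcomes):
    """Protect the passenger at all costs; only then minimise other harm."""
    return min(outcomes, key=lambda o: (o.passenger_deaths, o.pedestrian_deaths))

scenario = [
    Outcome("swerve into tree", passenger_deaths=1, pedestrian_deaths=0),
    Outcome("stay on course", passenger_deaths=0, pedestrian_deaths=5),
]

print(utilitarian_policy(scenario).action)      # swerve into tree
print(passenger_first_policy(scenario).action)  # stay on course
```

The two functions differ by a single line, yet they reach opposite verdicts on the same scenario – which is exactly why the choice of objective matters so much.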

Philosophers have debated something similar for years. The infamous ‘trolley problem’ is a classic thought experiment in moral philosophy. First conceived by Philippa Foot in 1967, it imagines a runaway trolley hurtling towards five people standing on the track, facing certain death. You stand beside a lever: pull it, and the trolley is diverted onto a different track where a single person is standing – currently out of harm’s way, but certain to die as a result of your action. The trolley problem explores the conflict between killing and letting die: is it morally acceptable to kill one person in order to save five (thereby intervening in the natural progression of events), or should you allow five to die rather than actively harm one?

It’s easy to dismiss the trolley problem as a laughably implausible exercise in armchair philosophy. However, when thousands of autonomous cars take to the roads in the near future, this extreme scenario could become an everyday occurrence.

A study carried out at the Massachusetts Institute of Technology (MIT) revealed a significant disconnect between the ethical programming we want self-driving cars to have and the cars we actually want to ride in. Respondents broadly approved of utilitarian-minded vehicles, with 76% agreeing that a car should be programmed to sacrifice one passenger if it meant saving the lives of 10 pedestrians. When it came to riding in one of these utilitarian cars themselves, however, participants showed rather less enthusiasm: asked to imagine themselves as the passenger, their favourable rating dropped by an entire third. So self-driving cars programmed to sacrifice their passenger for the greater good are a fine idea – but only for other people. Cue the sound of Bentham rolling in his cabinet.

These results are not exactly shocking: humans acting in the interest of self-preservation is nothing new. But could an answer to algorithm morality lie in simply asking what humans would do?

If we would swerve, then so should the car; if we wouldn’t, neither should the car. You might think this raises an obvious question: how can we know what a human being would do? Our behaviour is famously unpredictable, and stress only exaggerates this, as adrenaline can impair our capacity to make rational decisions. However, in a study published in Frontiers in Behavioral Neuroscience last year, German researchers set out to make self-driving cars behave like humans by putting participants through a virtual-reality experiment. Volunteers wore an Oculus Rift headset that simulated driving a car down a suburban street, where they were forced to make trolley-problem-like ethical decisions: should they swerve to hit a dog if it meant avoiding a child, for example? From the results of this experiment, the researchers were able to devise an algorithm that behaves like a human.
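One way such an algorithm could be built – offered here only as a toy sketch in the spirit of the idea, not the researchers’ actual model or data – is to score each kind of obstacle by how often participants chose to spare it, and then have the car steer towards whatever humans were most willing to hit.

```python
# Toy illustration of deriving a human-like swerving rule from observed choices.
# The records below are hypothetical stand-ins, not the study's real data.

from collections import defaultdict

# Hypothetical VR decisions, recorded as (spared, hit) obstacle categories.
observed_choices = [
    ("child", "dog"),
    ("child", "dog"),
    ("adult", "dog"),
    ("child", "adult"),
]

# Crude "protection score" per category:
# +1 each time it was spared, -1 each time it was sacrificed.
score = defaultdict(int)
for spared, hit in observed_choices:
    score[spared] += 1
    score[hit] -= 1

def human_like_choice(option_a, option_b):
    """Return the obstacle to hit: the one participants were more willing to sacrifice."""
    return option_a if score[option_a] <= score[option_b] else option_b

print(human_like_choice("dog", "child"))  # "dog" – the car spares the child, as most humans did
```

The point of a model like this is not that it is morally optimal, but that its behaviour is recognisably human – and therefore predictable to the humans sharing the road with it.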

This might seem unsatisfactory. Why should computerised cars be subject to the same limitations as human drivers? Should we not strive for something better than humans can manage?

Not necessarily. Driving safely on the road is a co-operative endeavour that relies on our ability to predict and trust one another’s behaviour. Imagine you’re trying to merge into a busy lane. There’s a small opening, and you trust that the driver beside you will wave you through, because that is what drivers do. You merge safely. A driver obeying faultless utilitarian logic, by contrast, might not have waved you through: it might calculate that if you just waited a minute, things would turn out best for everyone. Not knowing the driver is a robot, you try to merge anyway and – crash!

Driving is, at least for the foreseeable future, a human activity, and so if self-driving cars are to fit in on our roads, perhaps they should act like humans, even if that sometimes gives less than perfect results. While we cannot say for certain what self-driving cars should do in trolley-problem cases, we can say that they should do whatever it is humans in fact do. If, and only if, humans behave like utilitarians should our self-driving cars do so too.