The Moral Dilemma of Self-Driving Cars


The ethics problems facing self-driving cars are the same ones we’ve been facing since the Roman Empire.

We’re heading into what may seem like a scary future for some: one where cars drive themselves on public roads. I say “for some” because not everyone is freaked out by this yet, but they probably will be once they see these cars roaming their streets.

The big question on everyone’s mind: how will the car decide who lives and who dies?

But let’s step back a minute and think about this issue in a broader context. Before we even think about answering who lives and who dies in a situation where an accident is imminent, we have to think about what is right and wrong.

The origins of these concepts date back to before recorded history. Basically, a couple of schools of thought play off each other to help us understand this problem. The first is consequentialism, which you can think of as “the ends justify the means”: whatever course of action you take, the “right” one is the one that ends in a positive outcome.

This is important because in our situation someone is going to die; we’re just not sure who. Building on consequentialism, another ethical theory, utilitarianism, holds that the best action is the one that maximizes utility. Utility is defined in various ways, typically in terms of the well-being of sentient beings, such as humans. Jeremy Bentham, the founder of utilitarianism, described utility as the sum of all the pleasure that results from an action, minus the suffering of everyone involved in it.
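To make Bentham’s definition concrete, here’s a minimal sketch of that calculation in Python. The numbers, and the idea that pleasure and suffering come in comparable units at all, are assumptions purely for illustration; Bentham never gave us an actual scale.

```python
# A toy version of Bentham's utility: the total pleasure an action produces,
# minus the total suffering it causes, summed over everyone affected.
def net_utility(pleasures, sufferings):
    """Sum of pleasure minus sum of suffering for everyone involved."""
    return sum(pleasures) - sum(sufferings)

# Hypothetical action: mildly pleases three people, badly hurts one.
print(net_utility(pleasures=[2, 2, 2], sufferings=[5]))  # prints 1, a net positive
```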

// This is where things get interesting

If we believe that the “right” thing to do is to “maximize utility,” then in a situation where a fatal accident is imminent, the “right” choice would be the one that kills the fewest people. As Spock said, “Logic clearly dictates that the needs of the many outweigh the needs of the few.” #LiveLongAndProsper
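Applied to the crash scenario, the utilitarian rule reduces to something brutally simple. The sketch below is a deliberate oversimplification: the maneuver names and fatality counts are hypothetical, and no real self-driving stack makes decisions from a lookup table like this, but it captures what “maximize utility” means when the only thing being counted is lives.

```python
# Utilitarian crash rule, reduced to its crudest form: among the maneuvers
# still available, pick the one expected to kill the fewest people.
def least_harm(options: dict) -> str:
    """Return the maneuver with the fewest expected fatalities."""
    return min(options, key=options.get)

# Hypothetical scenario: stay the course and hit five pedestrians,
# or swerve and kill the car's two passengers.
options = {
    "stay_course": 5,
    "swerve_left": 2,
}
print(least_harm(options))  # "swerve_left" -- the needs of the many win
```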

So do you agree with this ethical theory of maximizing utility?

If you were driving and you knew you were about to be in a fatal crash, what would you do? Would you “maximize utility” by avoiding the people in the crosswalk, killing yourself and your passengers in the process?

If not, you’re not alone, but on a strictly utilitarian reading, you’re letting your ego get in the way. This problem isn’t new, and it doesn’t matter whether we’re talking about a self-driven car or a human-driven one: the same moral dilemma exists. What is “right” versus “wrong”? The answer isn’t cut and dried. Philosophers have been arguing over right and wrong since before recorded human history, and we’re still looking for the answer.

MIT is even doing a study on this using a tool they built called the Moral Machine. It shows you moral dilemmas in which a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians. You’re the judge in each scenario, and at the end it shows you how your answers compare to everyone else’s. What could go wrong?!

See how you compare - visit https://teslanomics.co/moralmachine