The Philosophy of Autonomous Cars

I saw something interesting today. Apparently, some scientists at some university (I know, very official, but I forgot where they’re from) have created a game where players decide who lives and who dies in an autonomous car accident.

Let me back up – see, there’s a little bit of a problem with driverless cars. Human beings, whether they like it or not, make decisions about who lives and who dies all the time when they’re in a vehicle. Here’s a scenario: someone drives into your lane. Your only options are to either A) turn right really quick to avoid the car, but hit a concrete divider in the process, killing you and your passenger, or B) swerve to the other side of the road onto the sidewalk and hit a family of four.

What do you do? Who do you kill? Think about it for a minute and then start reading again. I mean really think about it.

Well, your basic human instinct should tell you to kill the four people on the sidewalk. After all, you are the most important human being on the planet (in your mind). But, from a logical perspective, you should be looking out for the greater number of human beings (the four people on the sidewalk) and sacrifice your life and your passenger’s for the good of the species as a whole.

This is a classic utilitarian problem. I mean, we all know what we want to do. We want to save ourselves. It’s basic instinct. We save ourselves so we can live and reproduce. But that shouldn’t be the case, should it? Surely one life is nowhere near as important as four.

This is best illustrated by the Trolley Problem, put forth by Philippa Foot in 1967. Foot gives you a situation like this: you’re standing there on the street corner. You see a trolley going out of control, and it’s about to hit five people on the tracks in front of it. But there’s a lever in front of you that, if pulled, will divert the trolley onto another set of tracks that only has one person on it. Either way, someone will die. But who?

This is, among many, many, many other things, one of the main problems facing driverless car developers. They’re not only programming a vehicle to be safe and efficient for passengers and pedestrians, but also programming morality into an AI. So who decides what the answer to this problem is? Is it Google? Facebook? Apple? Ford? Any of the hundreds of other companies developing autonomous vehicles?
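Just to make that concrete: here’s a toy sketch, in Python, of what a crude utilitarian rule might look like. To be clear, everything in it (the names, the numbers, all of it) is made up for illustration and looks nothing like a real self-driving stack, which deals in sensor noise and probabilities, not clean death counts.

```python
# A toy sketch of a crude "utilitarian" decision rule, purely for
# illustration. Maneuver and predicted_deaths are invented names;
# no real autonomous-vehicle system works on known death counts.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    predicted_deaths: int  # assume some upstream model predicts this

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Naive utilitarian rule: pick whatever kills the fewest people."""
    return min(options, key=lambda m: m.predicted_deaths)

# The scenario from above: the divider (you and your passenger)
# vs. the sidewalk (the family of four).
options = [
    Maneuver("hit the concrete divider", predicted_deaths=2),
    Maneuver("swerve onto the sidewalk", predicted_deaths=4),
]
print(choose_maneuver(options).name)  # -> hit the concrete divider
```

And that one little min() is exactly the part nobody can agree on: whose numbers go in, and who signs off on the rule.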

Well, some are saying it’s the government’s job to decide who dies. And that thought scares a lot of people. Should the government start regulating who lives and who dies? Should the government start getting into moral and philosophical issues? Well, it already has with capital punishment and abortion, but that’s neither here nor there. It’s all about whether or not elected officials have the capacity to regulate death, artificial intelligence, and ethics.

I’m not here to make an argument for or against utilitarianism. As a humanist, I agree 100% that four lives are worth more than one, and that trading one person’s death for those four is, of course, the logical way of thinking. But, as an animal, fuck yes I want my driverless car to kill those four people instead of me. I don’t want to die. That’s not how animals roll.

What I am here to say is that I’m so looking forward to driverless cars – SO looking forward to them. I can’t wait for a light to turn green and all of the cars at that light move in unison. But I think this issue is going to hold them back. I think that as soon as the first group of two pedestrians dies instead of a carload of four, a proverbial Pandora’s box of shit will open up in the world of autonomous vehicles, and we’re going to have to take a good, hard look at the ethics and values of our society.

The problem of life and death isn’t the only thing that’s going to plague autonomous vehicles. There’s also the little problem of wrecks.

When humans are driving, wrecks happen for a reason. Someone messes up by not stopping at a stop sign. Someone’s doing their makeup and rear-ends someone else on the highway. You know, the usual. Someone is at fault. But when a driverless car has a wreck, whose fault is it? Google’s? The driver’s? The software developer who programmed the vehicle?

Who. Knows. Not me. That’s for sure. You can argue both of these points for months on end and come up with a thousand valid points for and against every possible position, and still nobody’s right. It’s philosophy. That’s what makes it fun. No one’s right, everybody’s wrong, and the points don’t matter. But one day we’re going to have to decide what’s right and what’s wrong. Either we, the people, will decide, or Google will. I don’t know which I prefer.

It’s both scary and exciting. We’re living at a very strange point in history where the groundwork for laws regarding AI and robotics will be laid out without us really knowing what the impact will be. I, Robot? 2001: A Space Odyssey? Hopefully we’ll be around to find out what happens. Or, hopefully not.
