This is why Google self-driving car tech needs more work

There's a moral dilemma that won't be resolved anytime soon.

[Image: Google’s self-driving pod car, which has no steering wheel. California is considering regulations that would require a human driver behind every autonomous vehicle’s steering wheel. Credit: Google]

Autonomous cars are in the news again, this time because of new research into the moral implications of what happens in an accident.

In several interesting examples, researchers posed questions about what a car should do when every available choice is a bad one.

In one example, there are three choices. The robotic car has to decide between killing a single passerby (sparing the crowd and the driver), killing people in the crowd, or killing only the driver. It’s a horrific situation, and it reminds me of the incident in Las Vegas in which a human driver plowed into a crowd, killing several people while surviving herself.

The study is really getting at something that isn’t technology-related at all, at least not yet. As humans, we often have to make tough decisions, but we’re not always consistent. We tend to decide things in the moment, especially in a panic, and we’re heavily influenced by a multitude of factors -- our mood, the time of day, how drowsy we are, even whether we’re sober.

The report suggests that we haven’t done a good job of parsing this decision tree down into something manageable, which means we won’t be very good at programming it into an autonomous car, either. That applies even to Tesla and Google. The implication is that we need a “moral algorithm” first, and building one will be a tough task even for the smartest robotics programmers in the world. Part of me thinks that if they really are that smart, maybe they should figure out how to dig more wells in Africa first.
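
To make that concrete, here is a purely hypothetical sketch in Python of what the stub of such a “moral algorithm” might look like. Nothing here reflects Google’s or Tesla’s actual software; the outcomes, the harm scores and the tie-breaking are assumptions I’m inventing to show how much a programmer would have to decide up front.

# Purely illustrative sketch -- not anyone's real system. Even a toy
# "moral algorithm" forces a programmer to rank outcomes explicitly.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_harm: float  # a crude stand-in for "people hurt" -- the hard part

def choose_action(outcomes: list[Outcome]) -> Outcome:
    # The robot can only minimize a number a human decided to write down.
    # Ties are broken by list order, not by any ethical principle.
    return min(outcomes, key=lambda o: o.expected_harm)

if __name__ == "__main__":
    options = [
        Outcome("swerve toward the single passerby", expected_harm=1.0),
        Outcome("continue into the crowd", expected_harm=5.0),
        Outcome("swerve into the barrier, risking the driver", expected_harm=1.0),
    ]
    print(choose_action(options).description)

Notice that the tie between hitting the passerby and risking the driver is settled by nothing more principled than the order of the list -- exactly the kind of decision the research says we haven’t figured out how to make.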

In terms of the robotic technology in a car, there are some clear advantages, and I’ve experienced many of them firsthand. A robot never gets tired or drunk. It can look in every direction at once, and it doesn’t get distracted by a big sign that says “free beer” across the street. Robotic tech in a car like the 2016 Ford Focus I’m testing right now can constantly, relentlessly watch for lane markings and nudge you when you drift without noticing. Humans, not so much. Robotic tech is like having 100 different people in the car with you, all watching for problems around the car, all vigilant about dangers, staring at lane markings and looking for other cars. As the connected car revolution progresses, the 100 robots in my car will connect to the 100 robots in your car. It’s a nirvana state for everyone involved.
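
That lane-keeping nudge is easy to caricature in code. Here’s a minimal sketch, assuming made-up thresholds and a single sensor reading; it is not Ford’s actual system, just an illustration of a check that never gets bored or distracted.

# A minimal sketch of the kind of check a lane-keeping assist might run
# many times per second. Hypothetical thresholds -- not Ford's actual code.
def lane_nudge(distance_to_lane_edge_m: float, turn_signal_on: bool) -> str:
    """Return the assist's reaction to one sensor reading."""
    if turn_signal_on:
        return "no action"  # the driver signaled an intentional lane change
    if distance_to_lane_edge_m < 0.2:
        return "vibrate wheel and apply corrective steering torque"
    if distance_to_lane_edge_m < 0.5:
        return "show lane-departure warning"
    return "no action"

# Unlike a human, this check never gets tired, drunk or distracted --
# but it also only ever does exactly what these few lines say.
print(lane_nudge(0.15, turn_signal_on=False))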

Except that it isn’t. With apologies to Google, autonomous driving is a good thing, but not a great thing. It’s incremental, not groundbreaking. Cars will nudge us and prod us, but they won’t be able to make perfectly acceptable moral decisions anytime soon. And it’s about more than a lack of moral algorithms. My issue with letting robots have full control is that we don’t quite grasp that robots only do what we program them to do. If we don’t program a certain behavior, that behavior won’t occur.

Here’s an example. Let’s say you use a Roomba in your living room. Good job -- you’ve saved countless hours of labor. It’s an intelligent machine, sensing furniture and recharging itself. However, it doesn’t know a lick about the difference between dirt and gravel. Not really. I’ve seen a Roomba grind a batch of fresh mud into carpet, happily rolling over it multiple times as it tried to extract the material. It’s an incremental improvement for vacuuming lint and light dirt, cat hair, and maybe a stray bit of popcorn. The Roomba’s programmers might disagree with me, but the vacuum didn’t react any differently to mud. It didn’t run any soil-sample tests. It’s programmed to stay in a certain area and suck up anything it finds, and that’s it.
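
To put that point about programming in code, here’s a hypothetical sketch of a Roomba-style cleaning loop. It is not iRobot’s firmware; the sensor stub and the behaviors are my assumptions. The point is simply that there is no “is this mud?” branch, so mud gets exactly the same treatment as lint.

# Hypothetical sketch, not iRobot's firmware: the robot's behavior is only
# what was programmed. There is no branch for mud, so mud is treated like lint.
import random

def detect_obstacle() -> bool:
    return random.random() < 0.1  # stand-in for a bump sensor

def clean_one_step() -> str:
    if detect_obstacle():
        return "turn away from obstacle"
    # Everything under the brushes is just "debris" -- dirt, lint or fresh mud.
    return "run brushes and suction"

for _ in range(5):
    print(clean_one_step())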

Now, back to cars. Maybe the goal here is to figure out how to let cars nudge us a little more. As the Tesla Model S videos of people sleeping and reading books demonstrate, that is not how Elon Musk wants people to use the autonomous driving features. Google seems even more aggressive, with plans to eventually build cars that have no steering wheel or brakes. Guess what? Cars will need a steering wheel and brakes for a while. Maybe they can drive around a parking lot with bumpers on every wall, or along a path between two buildings, but I’ve seen how human drivers think on the highway in my area. It’s anything but moral.

This article is published as part of the IDG Contributor Network.