I have a good hunch about what caused the recent Google autonomous car crash.
From all reports, it looks like the Lexus RX 450h that Google uses for testing on public roads made a lane change and didn’t notice that a bus was speeding up slightly. A human driver was behind the wheel, but the car was in autonomous mode. The Google car had let two other cars enter the lane, and there was likely a moment of confusion about who had the right of way. Google admitted some of the blame for the accident.
This happens frequently in traffic, where there’s a misunderstanding about who is doing what. Yet the dirty little secret here is that, while artificial intelligence has many advantages over a human driver (it can look in all directions at once, it can use multiple sensors, it never gets distracted), it could be another 20 years before robots can muster something that humans possess from a very young age.
I’m talking about intuition, of course. It goes by a few other names -- a “feeling,” a vibe, a sixth sense -- an awareness that’s incredibly difficult to program into a robot.
Remember that any robotic action requires a complex set of programming routines. This is what makes the “Terminator scenario” so implausible. Robots only do what we program them to do, at least for now. We barely understand our own intuition -- why the hairs on the back of your neck stand up when you sense imminent danger. There are barely any facts involved. You just know there is a storm coming, or that the guy who wants to date your daughter is a creep, or where you lost your car keys.
One of my most vivid memories related to human intuition came when I was called in for jury duty a few years ago. There was a slight mix-up and I was directed to one room, then another. Somehow, I ended up in the wrong room -- the one where the criminals were waiting for their trial. I didn’t have any facts to go on. No one really looked like a criminal. I just picked up on the fact that this was definitely the wrong room and made a bee-line for the door.
How do we program a computer to know that?
In traffic, a human may not be able to look in all directions at once or scan several hundred feet ahead and calculate the path of another moving vehicle, but we do have an innate ability to sense danger. Maybe it’s a flash of movement in our peripheral vision combined with a weird sound off in the distance; maybe it’s a hundred different indicators all combined into one feeling about a situation. It’s almost impossible to quantify -- you just know something is not right. We’re still in the early stages of AI, where we can program specific actions, but we’re nowhere close to making robots that have feelings or a sixth sense about a situation.
This is one of the reasons Stanford is still testing high-speed autonomous driving. A few years ago, I met with Stanford professor Chris Gerdes and went for a few rides in the Audi TT his team uses for testing around a track in California. I remember how he explained that there are so many “micro” scenarios with autonomous cars -- slight variations in speed, traffic conditions, even weather. A computer can analyze hundreds or even thousands of these scenarios. But what about the bird that flies into your windshield right when the car spots a pylon in the road and has to swerve? Gerdes said it will be possible to quantify many of these scenarios and write routines that help autonomous cars understand road conditions, but the work is still in progress.
It’s not all doom and gloom, though. No autonomous car has to be perfect in every situation -- humans make mistakes on the road constantly. The real secret is that we are not striving for 100% perfection in robotic driving. We just need cars to be a little smarter than we are before we’re ready to hand over the wheel.
This article is published as part of the IDG Contributor Network.