Tesla is about to adjust how its Autopilot self-driving tech works. According to Elon Musk, radar will be used more aggressively to scan for objects, sending out signals in much the same way a weather radar sweeps the sky for darkening clouds.
In terms of a tech discussion, it’s an interesting update in light of the fatality that occurred while a Model S sedan was driving in autonomous mode. The New York Times posted a thorough breakdown of how the crash happened, including how the car swerved suddenly.
Musk has stated several times that the incident was related to the braking system, not the Autopilot mode. However, on a conference call with journalists on Sunday meant to explain the new update, Musk made a bold claim. It's something robotics engineers have been saying for some time, and it's a level-headed statement about where this self-driving tech is all going.
It’s also something I happen to disagree with.
Musk insisted that there will never be a time when there are zero fatalities on the road. That phrasing is a subtle reference to the goal Volvo has set with its Vision 2020 program. Toyota at one time took a different stance, suggesting that humans will always have some control over the vehicle and that turning cars over to the bots is a bad idea in the first place. The company has since changed its mind.
We all know that accidents will happen, and no technology is foolproof. If humans are involved, there will be mishaps. However, my issue with the statement about “never” is that it assumes the transportation industry will never figure this out -- that there will never be an advanced autonomous driving system that prevents fatalities completely on any highway, ever. What about 100 years from now? What about 200 years from now? Saying never means it is not even possible, and that’s dangerous.
I’ve written about this before, but I can imagine a fairly foolproof transportation system with autonomous cars. First, they’d need to connect to one another -- they’d always know where every other car is on the road. Second, the Internet of Things would become so prevalent that every railing, every bridge, and every person would send a signal to the car as well. It's audacious, but in the near future, it could happen.
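To make the "every car knows where every other car is" idea concrete, here's a minimal sketch in Python. It is purely illustrative, not how Tesla or any automaker actually implements vehicle-to-vehicle communication: the `V2VNetwork` class, a toy stand-in for a real radio link, just collects position broadcasts from every car (and, in principle, from IoT beacons on railings and bridges) so each vehicle can query its nearest neighbor.

```python
import math

class V2VNetwork:
    """Toy message bus standing in for a real vehicle-to-vehicle radio link."""

    def __init__(self):
        self.positions = {}  # car_id -> (x, y) position in meters

    def broadcast(self, car_id, x, y):
        # Every car (or roadside beacon) continuously publishes its position.
        self.positions[car_id] = (x, y)

    def nearest_neighbor(self, car_id):
        # Find the closest *other* broadcaster -- the kind of query a
        # collision-avoidance system would run constantly.
        x, y = self.positions[car_id]
        best_id, best_dist = None, float("inf")
        for other, (ox, oy) in self.positions.items():
            if other == car_id:
                continue
            d = math.hypot(ox - x, oy - y)
            if d < best_dist:
                best_id, best_dist = other, d
        return best_id, best_dist
```

In a real deployment the hard problems are radio latency, dropped messages, and spoofed broadcasts -- none of which this sketch addresses.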
And that’s just the technology available today. In the far future, I could see using special roadways (possibly the ones used for trucking in California today) that are designed only for autonomous cars, certified as 100% safe. All of the cars would platoon, and the technology would work so efficiently in keeping the cars a set distance away from each other that mishaps would be incredibly rare.
But here’s my real issue. We have to set the goal at zero. The Vision 2020 program is a good example of this. Maybe that isn’t possible. Maybe it isn’t even realistic. Yet, anything less than perfection means we are accepting that the technology can fail.
Here’s a good, timely example of this. Think of your smartphone. The goal is for your phone to never explode. We know that this can happen, but no one goes around saying “Well, if humans are involved there could be mistakes and a phone could explode.” Instead, Samsung issues a recall and then we state that the phone won’t ever explode. If you bake one in the oven, sure -- bad things can happen. But the goal and the desired condition is for no explosions, ever. Anything less than that is a waste of time.
Autonomous cars can save lives. They can be developed in a way that ensures safety on the road at all times. Someday, even if it is 100 years from now or 200 years from now, we might be able to say that they don’t ever get in accidents. We know satellites can fall out of the sky. We know catastrophes can happen. We know phones can explode.
But the goal should be zero fatalities. Period.
This article is published as part of the IDG Contributor Network.