I’ve followed the autonomous car market for the past decade, and there’s one troubling problem. If you have ever ridden in a car that drives itself, you know the car can be tentative -- and really slow. I remember someone once telling me that, when he went for a drive in a Toyota Prius as part of Google’s autonomous car experiment, the car drove like his grandmother.
Interestingly enough, Toyota just spilled the beans about a new intelligent car program it is working on in conjunction with Stanford and MIT, one designed to help humans drive better. The company is kicking in a whopping $50 million to fund the effort. Gill Pratt, a roboticist who worked at DARPA, will head the program. One of the goals, according to news reports, is to make a car that cannot get into an accident. It’s a departure from the autonomous car model, where you take your hands off the wheel and let a computer drive; instead, it’s technology that augments your driving and protects you.
I imagine this will be like driving with an invisible shield around the car: stopping, swerving, and even speeding up to avoid problems. In fact, there’s a secret buried in the announcement, one that might be a stretch but is fairly easy to infer: Toyota wants driving to be fun, and autonomous cars are not fun. I once drove one of the first Stanford robotic cars, and I’m one of the few who has ridden along in the Cruise autonomous car that’s still under development. Both are cautious and robotic, a bit like going on a ride at Disneyland.
An intelligent car might actually help us drive faster, because the car will watch the road, make adjustments, and keep us safe. It’s a logical extension of what Lexus (part of Toyota), Infiniti, Audi, BMW, Volvo, Cadillac, Mercedes-Benz, and many others have been doing for years. Modern cars can brake for you all the way down to a full stop, then resume. They can stay in the lane (there’s my invisible force field idea) without any driver intervention.
Not to diss the Chrysler 200 too much, but that car is a good example of what happens when the force field doesn’t really work right. If you take your hands off the wheel on the highway, the car starts to drift and then corrects itself in a way that’s a bit disconcerting. (Note that Chrysler does not want you to take your hands off the wheel; the car is meant only to provide assistance.)
Intelligence implies augmentation, not replacement. Pratt told The New York Times there are terms roboticists use for this. Parallel means the car assists the driver; serial means the car does the driving. I prefer a blend. I like the idea of the car doing the mundane robotic driving in stop-and-go traffic for periods of time, as the Volvo XC90 can do. Yet, if the goal is to have the car drive at all speeds in all scenarios, that just seems too boring.
I want a future Lexus (and it will be a Lexus first, not a Prius) to let me legally drive 100 mph on the highway without living in fear of someone cutting into traffic. If someone does, the Lexus would see it long before I do and start slowing down and warning me. If I fail to respond, I want the car to intervene if that means avoiding a collision. I don’t want to hand over the keys just yet. I want to enjoy the ride, not just put up with the journey.
How about you? Is an intelligent car just another way of describing autonomous driving? I’m interested in your opinion, so post in the comments whether you’re for or against autonomous driving.
This article is published as part of the IDG Contributor Network.