We’ve all heard the hype about driverless cars, but where are they? Why aren’t we seeing them everywhere?
To hear Google (whose self-driving car project is now called Waymo) explain it, self-driving cars are already a reality, and they should have been populating our streets and revolutionizing our travel years ago.
In fact, earlier this year, Google hit a major milestone, with over 2 million miles driven by its 60 driverless vehicles in 4 states. There haven’t been any at-fault accidents caused by these vehicles, and the project continues to be refined, so what’s stopping us from buying these things and putting an end to the era of human error on the roads?
There are a host of legal problems associated with driverless cars that I won't get into here -- lawmakers are slow to act, and driverless cars are complicated new legal entities -- but that's still only one part of the problem.
Instead, I want to focus on the complicated artificial intelligence (A.I.) issues keeping Google from trying harder to get its cars into the hands of consumers. Despite how smoothly the project appears to be running, there are still some major A.I. hurdles to overcome.
1. Moral decisions
First, driverless car A.I. needs to address some moral ambiguities. Consider the trolley problem, as described by The Washington Post: “Imagine a trolley hurtling toward a cluster of five people who are standing on the track and facing certain death. By throwing a switch, an observer can divert the trolley to a different track where one person is standing, currently out of harm’s way but certain to die because of the observer’s actions.” If a driverless car could avert the death of several people by running into a wall and killing its own driver, should it? What if there’s an unavoidable accident, but the driverless car has the option of running into a heavy, safe vehicle or a smaller, unsafe vehicle -- which should it choose? Should it prioritize other driverless cars? How should it make these decisions?
These moral ambiguities are further explored by Patrick Lin, but the basic problem is this: How can A.I. make moral decisions for driverless cars, and how should those decisions be coded?
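To make the coding question concrete, here is one deliberately naive way a planner might encode the "minimize expected harm" framing of the trolley problem. Everything in this sketch -- the maneuver names, the probability and severity numbers, the idea of summing probability-weighted harm -- is invented for illustration and is not how any real self-driving stack works.

```python
# Hypothetical sketch: scoring candidate maneuvers by expected harm.
# All names and numbers below are made up for illustration.

def expected_harm(option):
    """Sum probability-weighted harm over everyone a maneuver puts at risk."""
    return sum(p * severity for p, severity in option["risks"])

def choose_maneuver(options):
    """Pick the maneuver with the lowest total expected harm."""
    return min(options, key=expected_harm)

# Two candidate maneuvers in an unavoidable-crash scenario:
options = [
    {"name": "swerve_into_wall",   # risks the occupant, spares pedestrians
     "risks": [(0.9, 1.0)]},       # (probability, harm severity) pairs
    {"name": "stay_on_course",     # endangers five pedestrians
     "risks": [(0.8, 1.0)] * 5},
]

best = choose_maneuver(options)
print(best["name"])  # swerve_into_wall: 0.9 expected harm vs. 4.0
```

The sketch also shows why the question is ultimately moral rather than technical: the choice is entirely determined by whoever picks the severity weights, and there is no engineering answer to whether the occupant's harm should weigh the same as a pedestrian's.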
2. Weather conditions
Self-driving cars "see" the world in much the same way that you or I do -- they use cameras to detect things like traffic lights and lane markers. So what happens when there are two inches of snow on the ground? What if it's foggy?
Currently, driverless cars haven't been extensively tested in these conditions, and there's no universal approach to this algorithmic problem. First, these vehicles need to be able to detect these highly variable, sometimes unpredictable conditions, and then they need to find a way to work around them. The problem is made even more complicated by overlapping conditions, such as fog combined with snow and hail.
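One idea the overlapping-conditions problem suggests is re-weighting sensors as the weather degrades them. The sketch below is purely illustrative: the sensors, conditions, and degradation factors are all hypothetical numbers, not measurements from any real vehicle.

```python
# Illustrative-only: re-weighting sensor trust under weather conditions.
# Factors are invented (0 = useless, 1 = unaffected).
DEGRADATION = {
    "clear": {"camera": 1.0, "radar": 1.0, "lidar": 1.0},
    "fog":   {"camera": 0.3, "radar": 0.9, "lidar": 0.5},
    "snow":  {"camera": 0.4, "radar": 0.8, "lidar": 0.4},
}

def sensor_weights(conditions):
    """Multiply degradation factors for overlapping conditions
    (e.g. fog plus snow), then normalize into fusion weights."""
    factors = {"camera": 1.0, "radar": 1.0, "lidar": 1.0}
    for cond in conditions:
        for sensor, f in DEGRADATION[cond].items():
            factors[sensor] *= f
    total = sum(factors.values())
    return {s: f / total for s, f in factors.items()}

weights = sensor_weights(["fog", "snow"])
# Radar dominates once fog and snow have degraded camera and lidar.
print(max(weights, key=weights.get))  # radar
```

Even this toy version shows where the hard part lives: the weights only help if the car can first detect which conditions it is actually in, which is itself the unsolved perception problem the article describes.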
3. Dark spots
Cameras aren't the only way self-driving cars see the world, though. They also use radar and lasers to scan for obstacles. However, as New York Times writer Neal E. Boudette notes, vehicles currently have a hard time distinguishing between potholes, puddles, oil patches, and even shadows -- basically, any dark spot in the road could be interpreted as one of many different things, each of which requires a different response.
Coding an algorithm to proactively analyze a feature like this requires some kind of new detection technology, combined with a fast, highly accurate decision tree for choosing a response.
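A toy version of that decision tree might cross-check cues from different sensors to guess what a dark patch is. The cues, thresholds, and responses below are all invented for illustration; a real system would need far richer inputs than three numbers.

```python
# Hypothetical sketch: classifying a dark spot on the road by
# cross-checking sensor cues, then mapping the guess to a response.
# All thresholds and names are made up.

def classify_dark_spot(lidar_depth_cm, radar_return, glare_detected):
    """Guess what a dark patch is from (invented) sensor cues."""
    if lidar_depth_cm > 5:      # surface dips below road level
        return "pothole"
    if glare_detected:          # reflective surface suggests water
        return "puddle"
    if radar_return < 0.2:      # almost no radar signature
        return "shadow"
    return "oil_patch"

RESPONSES = {
    "pothole":   "slow_and_steer_around",
    "puddle":    "reduce_speed",
    "shadow":    "continue",
    "oil_patch": "avoid_hard_braking",
}

obstacle = classify_dark_spot(lidar_depth_cm=8, radar_return=0.6,
                              glare_detected=False)
print(obstacle, "->", RESPONSES[obstacle])  # pothole -> slow_and_steer_around
```

The sketch makes the stakes of the article's point visible: misreading a shadow as a pothole merely causes an unnecessary swerve, but misreading a pothole as a shadow means driving straight into it.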
4. Unexpected conditions
Google cars currently use maps that are far more detailed and sophisticated than what's publicly available in Google Maps -- they need to know everything about their environments, including traffic signs and local laws, if they're going to function and remain intact. So what happens when a city closes a road due to intense flooding? What happens when construction shuts down a busy intersection? What if a new traffic sign is put up before the onboard maps have a chance to update?
The A.I. in driverless cars needs to be prepared for literally anything; responses to ideal conditions are basically perfected, but unknown conditions require far more sophisticated programming.
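The stale-map problem above can be made concrete with a small sketch. One conservative policy (invented here for illustration, not drawn from any real system) is to obey whichever input is more restrictive when live perception disagrees with the onboard map, and flag the map as stale.

```python
# Hypothetical sketch: resolving a conflict between the onboard map
# and a speed-limit sign the cameras just observed. Invented policy.

def resolve_speed_limit(map_limit, observed_limit):
    """Return (limit to obey, whether the map should be flagged stale)."""
    if observed_limit is None:        # no sign seen; fall back to the map
        return map_limit, False
    if observed_limit != map_limit:   # e.g. a sign changed after mapping
        return min(map_limit, observed_limit), True
    return map_limit, False

limit, stale = resolve_speed_limit(map_limit=55, observed_limit=35)
print(limit, stale)  # 35 True
```

Even this trivial case shows why "prepared for literally anything" is so hard: the policy only works when the conflict is detectable, whereas a closed road or a shut-down intersection demands replanning the entire route, not just picking the smaller of two numbers.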
When can we expect these A.I. problems to be addressed and have driverless cars hit the streets? It’s tough to say. The scope of these problems, to anyone versed in the complexity of A.I. technology, is massive, and most A.I. coding issues require intuitive breakthroughs to solve completely. We’ll likely see random bursts of forward progress as problem solvers come up with ingenious solutions, rather than steady improvements over time.
Based on that, it seems like we’re many years away from a finished product. But on the other hand, technology almost always develops faster than we expect. We could even see the first commercial driverless cars emerging in the next year or two -- if programmers can address these A.I. hurdles.
This article is published as part of the IDG Contributor Network.