Real Problems with Self-Driving Cars

I’m eager for self-driving cars to arrive, as I’d love to make better use of commuting and errand time. And while I do expect many aspects of the automated driving experience (highway driving, parking) to become mainstream shortly, this Business Insider article (h/t Jon Kennell) misses the mark. BI identifies three aspects of driving that could cause problems for automated cars:

  • Snow
  • Bad maps
  • Construction

This list is extremely superficial and misses the deeper issues. For each of these potential problems, the car has plenty of time to slow down and hand off maneuvering to a person behind the wheel (who likewise would have enough time to put down their cell phone). Google (or another developer of this software) could instruct the car: (1) never self-drive in snow, and (2) look a long distance ahead for road or lane closures (which would take care of both the maps and construction issues).
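
As a sketch of how simple such hand-off rules could be (the sensor interface and the look-ahead distance below are invented for illustration, not any real API):

```python
# Hypothetical sketch of a rule-based hand-off policy. The sensor methods
# and the look-ahead distance are assumptions, purely for illustration.

LOOKAHEAD_METERS = 500  # assumed: far enough to decelerate and alert the driver

def should_hand_off(sensors) -> bool:
    """Return True if the car should slow down and return control to the human."""
    if sensors.snow_detected():                    # rule 1: never self-drive in snow
        return True
    if sensors.closure_within(LOOKAHEAD_METERS):   # rule 2: road or lane closure ahead
        return True                                # covers both bad maps and construction
    return False
```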

Where Business Insider goes wrong is that it’s thinking like a human driver. Humans have trouble with snow, sometimes get lost, and–when distracted–miss upcoming construction or traffic cops. Avoiding trouble or an accident in these situations is dead simple for a computer: in the worst case, slow down and stop.

Computers, unlike humans, don’t get distracted; rather, computers have a lot of trouble with a talent that humans have in abundance: quickly incorporating subtle cues to predict what will happen next. For instance, imagine you’re driving a car from left to right in this picture, with a pedestrian in the middle of the road:

[Image: a pedestrian in the middle of the road]

What do you do? You make sure that the pedestrian sees you (a quick glance up and a slight slowing of her gait, perhaps), and you continue driving at pretty much the same speed, because you believe with very high probability that she will stop at the double yellow line, let you pass, and then continue across the street.

That type of reasoning is very hard for a computer. In fact, there is a good chance that Google–desiring to maintain its perfect automated-driving record–will develop software that is too conservative (video example). Maybe when a pedestrian darts halfway across the street, an automated car will presume that the pedestrian will continue moving in the same direction (not a terrible assumption), putting the jaywalker in the direct path of the car. In that case the car will have no choice but to stop suddenly. I fear a very jerky ride.
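
To make that worry concrete, here is a minimal sketch of constant-velocity prediction with an overly conservative stop rule; the lane geometry, prediction horizon, and all numbers are made up for illustration:

```python
# Minimal sketch: extrapolate a pedestrian's position under a constant-velocity
# assumption and brake if the predicted path ever enters the car's lane.
# All geometry and thresholds are illustrative assumptions.

CAR_LANE = (0.0, 3.5)   # assumed lane boundaries (meters, along the pedestrian's axis)
N_STEPS = 30            # predict 3 seconds ahead in 0.1 s increments
STEP_S = 0.1

def predicted_positions(x0: float, vx: float):
    """Yield predicted pedestrian positions, assuming she never slows or stops."""
    for i in range(N_STEPS + 1):
        yield x0 + vx * i * STEP_S

def must_brake(x0: float, vx: float) -> bool:
    """True if any predicted position falls inside the car's lane."""
    return any(CAR_LANE[0] <= x <= CAR_LANE[1] for x in predicted_positions(x0, vx))

# A pedestrian at the double yellow line (x = -0.5 m) still moving at 1.5 m/s
# is predicted to enter the lane, so the car brakes hard, even though a human
# driver would expect her to stop and wait.
print(must_brake(-0.5, 1.5))  # True: sudden stop, jerky ride
```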

Another example of a potential inconvenience: on my way home there is a right-on-red turn where the cross street is lightly trafficked. Even when there is a car waiting at the intersection in front of me, I often don’t slow down as I approach, because I know with high probability that the car will be gone by the time I get there. Will an automated car be smart enough to do the same? Seems unlikely, at least in the early versions.
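
One way a car could reason about this (a toy model with invented numbers, not how any real planner works) is to estimate the probability that the waiting car clears the intersection before arrival:

```python
# Hypothetical sketch: estimate whether a car waiting to turn right on red
# will be gone by the time I arrive. The per-second departure chance is an
# invented assumption, modeled as a simple geometric process.

def prob_car_clears(distance_m: float, my_speed_mps: float,
                    p_clear_per_second: float = 0.5) -> float:
    """Estimated probability the waiting car turns before I reach it."""
    time_to_arrival = distance_m / my_speed_mps
    return 1.0 - (1.0 - p_clear_per_second) ** time_to_arrival

# 100 m away at 15 m/s leaves about 6.7 s; with a 50% per-second chance of
# turning, the waiting car is ~99% likely to be gone, so I don't slow down.
print(prob_car_clears(100, 15))  # ~0.99
```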

These prediction problems are the real issues for self-driving cars. Undoubtedly, that’s why Volkswagen is starting with “auto-pilot” on the highway, which is one of the easiest places to predict what will happen next. (Although even there, I wonder what Volkswagen’s software does when it sees a deer grazing beside a forested stretch of highway.)

In sum: there is no such thing as a “perfect driver”. We all make trade-offs–balancing an efficient, smooth ride against the possibility that a low-probability event (drunk driver in the other lane, irresponsible pedestrian, deer crossing) will result in an accident. Which trade-off points get programmed into automated driving algorithms is a very intriguing set of decisions.
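
As a toy illustration of that design choice (the threshold and numbers are invented), the whole trade-off can reduce to a single tunable parameter: how improbable must a hazard be before the car ignores it and keeps a smooth ride?

```python
# Toy illustration of a programmed trade-off point. The threshold is the
# design decision: lower it and the ride gets safer but jerkier.

RISK_THRESHOLD = 0.01  # assumed: ignore hazards estimated below 1% probability

def choose_speed(cruise_speed: float, hazard_probability: float) -> float:
    """Keep cruising past sufficiently unlikely hazards; otherwise slow down."""
    if hazard_probability < RISK_THRESHOLD:
        return cruise_speed        # smooth ride, accepting residual risk
    return cruise_speed * 0.5      # hedge against anything plausible

# A human driver effectively runs with a much higher threshold than a
# liability-conscious manufacturer is likely to ship.
```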
