In the long run, this incident won't prevent fully autonomous vehicles from eventually arriving on our streets and highways. However, it is a reminder that these vehicles have a long road ahead, literally and figuratively, before they become a part of our everyday lives. Furthermore, it reinforces the point that, for all their touted benefits, self-driving cars are unlikely to be a perfect solution to all of our traffic-related ills.
SAE International defines six levels of automation for vehicles. Level 0 is no automation; a human driver controls all aspects of steering, accelerating, and braking. Level 1 automation provides "driver assistance" technologies such as adaptive cruise control or parallel parking assist, while more sophisticated driver assistance systems currently available on some vehicles, such as Audi's Traffic Jam Assist or Cadillac's Super Cruise, fall under Level 2.
Level 5 is full automation, wherein a self-driving car controls all aspects of driving, on any road and in any condition, with no human involvement outside of entering a destination (there would be no need for a steering wheel or brake pedal in Level 5 cars). Level 5 automation is the ultimate goal of autonomous vehicle development; Uber, Waymo and other companies are currently testing cars, such as the one involved in last weekend's fatality, with Level 5 autonomy in mind. Level 5 autonomy is also what most people envision when they think of "self-driving cars."
Putting aside for a moment any legal, regulatory or public acceptance obstacles to driverless vehicles, the technology itself is still years away from something that can safely operate on any road, in any condition, with no human assistance whatsoever. Testing is currently occurring in designated areas within a handful of cities - such as Tempe - with backup drivers on hand. But this technology is still in its infancy; a tremendous amount of further testing, coding, mapping and validating is required before driverless cars can be deployed nationwide and worldwide. Last weekend's fatality will only slow that process while the cause of the collision is investigated and solutions developed, e.g. improved sensors or rewritten software. It also calls into question the wisdom of using public streets to "beta test" driverless technologies:
To [Arizona State Professor David] King, whose research focuses on the urban impacts of new transportation technologies, the location of the crash—and how it happened—raises red flags about Uber’s approach to road safety. Since Uber arrived in Tempe in March 2017, he’s often seen Uber vehicles testing in that exact spot, charting details of the roadways to perfect the company’s internal maps. This seemed like familiar territory for them. Based on what is known about Uber’s technology, King said, a pedestrian or other foreign object should have been readily detected by the AV.
“If there is any real-world scenario where it would be seemingly safe to operate in an automated mode, this should have been it,” he said. “Something went seriously wrong.”
Precisely what went wrong may be unlocked by federal and local investigations now underway. Already, though, law enforcement interpreting video footage from the Uber vehicle’s external cameras seem to have placed the blame squarely on the victim: On a multi-lane corridor with scant crosswalks, Herzberg was crossing outside of a crosswalk.
“The driver said it was like a flash, the person walked out in front of them,” Sylvia Moir, the chief of Tempe Police Department, told the San Francisco Chronicle. Viewing the videos, “it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway,” she said.
That video footage has not been made public yet, however, and other observers say it’s too soon to draw conclusions about a situation with no precedent. The fundamental safety promise of autonomous vehicles, after all, is their ability to automatically detect and brake for people, objects, and other vehicles using laser-based LIDAR systems: In darkness and light, they’re supposed to be programmed to drive far more safely than humans. King believes that releasing those videos, as well as the onboard vehicle data, would be a step towards transparency by Uber and law enforcement—as well as a signal to the public that safety is a priority, whether the blame rests on Uber’s software, its employee, or its victim. (Note: since this article was published, some video has been made available.)

To be fair, in order to ensure they can operate on public streets, it seems necessary to test these vehicles on those same public streets. But it's also fair to note that Uber, which is hemorrhaging money and therefore eager to eliminate human labor costs by getting driverless cars on the road as soon as possible, might not be prioritizing safety in their testing regime.
Sunday's incident aside, it's worth reminding ourselves that a key promise of self-driving cars is that they figure to be much safer than their human-driven counterparts: they won't get drunk, drowsy or distracted, they'll obey traffic regulations, they'll even be able to communicate with each other to avoid collisions. But the technology is not completely safe yet, as last weekend's death sadly showed, and it will only be as safe as we want it to be:
I’m enthusiastic about the potential for autonomous vehicles. Their great promise is that they could be safer than fallible human drivers, who kill 37,000 Americans a year. And I do believe AVs will be safer. They will not drive drunk or distracted, and they will not get overwhelmed by more information to process.
How much safer they are, though, will depend on the humans who design them and make the rules. Their driving style, whether aggressive or timid, is something that will be baked into their programming, based on real-world traffic environments and how traffic laws are enforced. The question is the same as always: Are we programming for a world that’s built for humans, or a world that’s built for cars?
The killing of a pedestrian in Tempe, Arizona, on Sunday by a self-driving Uber is showing how autonomous vehicle safety might slide from promise to nightmare.

We gain nothing if we merely replace pedestrian (and cyclist) deaths caused by human-driven cars with deaths caused by computer-driven cars. Furthermore, in order for autonomous vehicles to truly achieve their promise, our roads need to be ready for them:
The rise in cyclists and pedestrians in and along our roads has also led to a rise in pedestrian deaths. While total traffic-related fatalities fell 18 percent from 2006 to 2015, pedestrian fatalities rose by 12 percent during that same period. Thanks to air bags and safe car designs, drivers and passengers are better protected in the case of a collision, but pedestrians involved in accidents don’t reap that benefit. And now that they’re sharing our roads in greater numbers, they’re more often involved. Many American roadways are ill-equipped to handle the influx of walkers and bikers trying to share our towns.

The effect that automated vehicles will have on the overall transportation network, furthermore, is still unclear and likely will not be fully understood until well after they are implemented en masse. On one hand, driverless cars could reduce congestion by operating more efficiently and avoiding accidents caused by human error. On the other hand, they could make congestion worse. Fleets of self-driving cars deployed by Uber or Lyft could clog city streets as they circulate, awaiting their next fare. Retail companies could send out hundreds of automated vehicles to serve as “mobile showrooms” that constantly travel around city streets waiting to be dispatched to potential customers. Personal automated vehicles could continually loop around city blocks while they wait for their owners to get coffee or pick up dry-cleaning. Commutes could grow longer as people replace the stress of driving to work with the pleasure of napping to work.
If we don’t have roads that prevent human drivers from killing cyclists, how can we hope autonomous vehicles will do a better job? Some propose that a bicycle-to-vehicle or vehicle-to-everything sensor network could fix this issue. Everyone and everything, outfitted with sensors, would be detectable (and thus avoidable) by driverless vehicles. But the logistics of distributing and enforcing such a network are staggering. Unfortunately, so are the alternatives—like the idea that we need to rethink the way humans and vehicles interact on our roadways. But that may be necessary before autonomous vehicles can take to the road en masse.
Without a human behind the wheel, we may need to rethink the logistics of our cities, streets, and highways. In a world where driverless cars communicate with one another, there’s no need for streetlights or stop signs—vehicles can maneuver seamlessly around one another in algorithm-fueled choreography. And while we impatiently wait at stops and crosswalks for pedestrians, in an autonomous world, passengers’ attention will be on their mobile devices, their work, or their conversations with other passengers. Such irritating delays for a driver may not be so irritating when the driver becomes the passenger. We could, for example, transform some streets into self-driving minihighways and dedicate others purely to foot and bicycle traffic. With humans out of the equation, the entire design of our transportation grid could be reimagined.
As it stands, bike lanes and pedestrian-friendly intersections are an afterthought—and everyone on the road treats them as such, resulting in avoidable accidents and needless deaths. We need to begin to plan for a world in which our transportation priorities have shifted and safety and efficiency are the primary motivators. Autonomous vehicles can’t be seriously implemented until bike lanes and pedestrian routes in our cities are rethought, but like self-driving-car development itself, it’s going to take time.
Autonomous vehicle technology, furthermore, will not alter basic roadway geometry; while it might use that roadway more efficiently than human-driven cars, there will still be limits as to the number of cars that can fit on any given lane-mile of roadway. Streets and highways that are clogged today with human-driven vehicles are just as likely to be clogged with computer-driven vehicles tomorrow; driverless cars will not "solve" traffic congestion as long as the same trips are being made. (This is why I am highly skeptical of the argument that autonomous vehicles will make public transit obsolete because everybody who rides a bus today will simply summon an on-demand driverless car to take them to their destination in the future. Along densely-traveled corridors or within dense activity centers, there will always be space constraints that will make mass transit, at least during certain times of the day, more efficient in its ability to move people than individual automobiles, whether they be driverless or not.)
The fact is, for all their promise, autonomous vehicles still present a lot of unanswered questions. How do you keep autonomous vehicle systems - be it the computer in the car itself or the communications network that ties all the cars together - from being hacked? What happens if the self-driving software crashes or the communications network goes down while the vehicle is speeding down the road? What do you do about the millions of people - truck drivers, bus drivers, taxi drivers, chauffeurs - who will become unemployed as their jobs are eventually replaced by automated vehicles? How will land uses change in the era of autonomous vehicles? Will we even need parking lots and garages anymore? What other unanticipated consequences of driverless technology might we be missing?
I am not suggesting that autonomous vehicles are impossible or worthless, or that planners should ignore their impacts, but there are good reasons to be cautious and skeptical, and to implement public policies that maximize their benefits and minimize their costs.
If my analysis is correct, autonomous vehicles may become commercially available in the 2020s, but will initially be costly and constrained, adding a few thousand dollars in annualized costs, and able to self-drive only on designated highways in good weather, and so will mainly be purchased by affluent, longer-distance motorists. Like most automated systems, autonomous vehicles will often be frustrating. Like automated vehicle navigation systems, they will sometimes choose sub-optimal routes. Like computers, they will sometimes stop unexpectedly, requiring a reboot or expert intervention. Like automated telephone systems and bank machines, they will often be confusing and require extra time and effort to use.
It will probably be the 2030s or 2040s before autonomous vehicles are sufficiently affordable and reliable that most new vehicle buyers will purchase vehicles with self-driving ability, and the 2050s before most vehicle travel is autonomous. This technology will probably contribute to numerous crashes, resulting in modest net safety benefits. For safety's sake, they will often travel slower than human-driven cars, leading to traffic delays. Self-driving taxis may become affordable and common in urban centers, but in suburban and rural areas most households will continue to own personal rather than shared vehicles. Autonomous vehicles will not displace the need for walking, cycling, and public transit; on the contrary, efficiency and equity require public policies, such as efficient road pricing and High Occupancy Vehicle (HOV) lanes, to favor sharing and prevent autonomous driving from increasing total vehicle travel, traffic congestion, and energy consumption.
My main conclusion: autonomous vehicles will not reduce the importance of good planning.
One important issue that doesn't get much publicity is called the "safety limit." This, at the risk of oversimplification, refers to the relationship between "safe stopping distance from the maximum permitted speed" and "minimum spacing between vehicles." If the minimum spacing is greater than the safe stopping distance, then the "system" operates "within" the safety limit. If the minimum spacing is less than the safe stopping distance, then the system operates "beyond" the safety limit.
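The comparison above can be sketched in a few lines of code. This is a simplified illustration, not an engineering standard: the reaction time and braking deceleration are assumed values chosen for the example, and the stopping-distance formula is the basic reaction-plus-braking model (v·t + v²/2a).

```python
def safe_stopping_distance(speed_ms, reaction_time_s=1.5, decel_ms2=4.5):
    """Distance covered while reacting plus while braking: v*t + v^2 / (2a).

    Parameter values are illustrative assumptions, not regulatory figures.
    """
    return speed_ms * reaction_time_s + speed_ms ** 2 / (2 * decel_ms2)

def within_safety_limit(min_spacing_m, speed_ms):
    """True if vehicles are spaced at least one full stopping distance apart."""
    return min_spacing_m >= safe_stopping_distance(speed_ms)

# A 50 km/h street (about 13.9 m/s) needs roughly 42 m to stop under these
# assumptions; typical urban following gaps are far shorter than that.
speed = 50 / 3.6
print(round(safe_stopping_distance(speed), 1))  # ~42.3 m
print(within_safety_limit(20.0, speed))         # False: beyond the safety limit
```

With a 20-meter gap at 50 km/h, the system is operating well "beyond" the safety limit in the sense described above, which is exactly the condition on today's streets and highways.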
The obvious fact that streets and highways operate beyond the "safety limit" is well known and widely accepted. However, with the exception of "legacy" tramway (streetcar) systems, virtually all fixed-guideway transport systems are required (by regulatory authorities) to operate within the safety limit. Remarkably, the signal systems in the Downtown Seattle Transit Tunnel enforce the safety limit between light-rail trains - and between LRT trains and buses - but not between buses.
Enforcement of the safety limit on road vehicles would cause significant reduction of road capacity, by reducing the number of vehicles that could occupy each lane. However, autonomous-vehicle technology would permit this - and real-world considerations might require it. Consider the dilemma of suppliers, hauled into court as part of a product-liability action, forced to acknowledge either (1) that the technology could not enforce the "safety limit," or (2) that the technology could do this, but "it was decided" that the resultant reduction of highway capacity would be unacceptable.
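The capacity cost of enforcing the safety limit can be roughed out with a simple throughput model: vehicles per lane-hour equals distance traveled per hour divided by the road space each vehicle occupies (its gap plus its length). All parameter values here (reaction time, deceleration, vehicle length, the 2-second headway) are illustrative assumptions.

```python
def stopping_distance(speed_ms, reaction_s=1.5, decel_ms2=4.5):
    # Reaction distance plus braking distance: v*t + v^2 / (2a).
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel_ms2)

def lane_capacity_vph(speed_ms, gap_m, vehicle_len_m=5.0):
    # Vehicles per lane-hour = meters traveled per hour / meters per vehicle.
    return 3600 * speed_ms / (gap_m + vehicle_len_m)

speed = 100 / 3.6  # a 100 km/h expressway
typical_gap = 2.0 * speed            # the common 2-second following headway
safe_gap = stopping_distance(speed)  # a full safe stopping distance

print(round(lane_capacity_vph(speed, typical_gap)))  # ~1650 vehicles/hour
print(round(lane_capacity_vph(speed, safe_gap)))     # ~755 vehicles/hour
```

Under these assumptions, requiring a full stopping distance between vehicles at highway speed cuts lane throughput by more than half, which is the "significant reduction of road capacity" described above.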
At the risk of being labeled a Luddite, I will state my belief that social and political factors will ensure that large-scale implementation of autonomous-vehicle technology on controlled-access highways will not occur for decades - and that large-scale implementation on uncontrolled ("surface") roads and streets will not occur during the foreseeable future.