'Move fast and break things' won't work for autonomous vehicles


The rush to deploy autonomous road vehicles in the United States is understandable: It's a potentially trillion-dollar market. But it must be tempered by safety considerations. Applying the Silicon Valley mantra of “move fast and break things” to autonomous vehicles would be self-defeating.

Recently proposed legislation to authorize a "Highly Automated Systems Safety Center of Excellence" intended to review the safety of automated technologies is a good idea. But other proposals would grant the National Highway Traffic Safety Administration (NHTSA) the power to initially exempt 15,000 self-driving vehicles per manufacturer from safety standards written with human drivers in mind. This would escalate to 80,000 per manufacturer within three years. This is a bad idea.

Suppose that each of the 80,000 autonomous vehicles deployed by a given manufacturer for testing carries an average of 1.5 occupants. That means 120,000 people would be riding in an experiment with this new technology on public roads, not counting the pedestrians, cyclists and others who unwittingly share those roads in this experiment.


This is equivalent to an aircraft manufacturer deploying 600 new mid-range commercial jet aircraft with an average occupancy of 200 passengers per airplane without meeting mandatory, quantitative safety standards, something the Federal Aviation Administration (FAA) would never permit. The FAA regulates an aviation industry that has produced the safest and most highly automated transportation systems in the world.
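The equivalence behind this comparison is simple arithmetic; a quick sketch (using the figures stated above, where the 1.5-occupant average is an illustrative assumption) confirms that both scenarios put the same number of people into the experiment:

```python
# Exposure under the proposed per-manufacturer exemption cap.
vehicles_per_manufacturer = 80_000
avg_occupants_per_vehicle = 1.5  # illustrative assumption from the text
people_in_cars = int(vehicles_per_manufacturer * avg_occupants_per_vehicle)

# The aviation analogy: uncertified mid-range jets at stated occupancy.
jets = 600
passengers_per_jet = 200
people_in_jets = jets * passengers_per_jet

print(people_in_cars)  # 120000
print(people_in_jets)  # 120000
```

Both totals come to 120,000 people exposed to technology that has not met quantitative safety standards.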

Highly automated vehicles should reduce traffic fatalities substantially, just as automated flight control systems have made commercial aviation extremely safe. But we are unlikely to see significant improvements in autonomous vehicle safety without regulation and oversight.   

Appropriate regulation channels innovation; it does not stifle it. Witness the improvements in both crashworthiness and efficiency that today’s automobiles exhibit over those of a few decades ago, stimulated by regulations the industry fought. Setting appropriate, quantifiable safety goals for autonomous vehicles, and ensuring that the vehicles on the road meet those standards, will not hold U.S. industry back. Instead, it will stimulate it to produce vehicles that other countries will want to buy. And it will provide a basis for protecting all of us from unsafe products produced elsewhere.

This is not to say that good regulation is easily achieved. Development of safety standards for autonomous vehicles that can be measured and assured will require focus and imagination. But the attention already garnered by a small number of crashes of the relatively few highly automated cars now on the roads should teach us that people won’t easily accept autonomous vehicles that are as unsafe as an average driver. Replacing unsafe human drivers with equally or just slightly better machines will continue the unacceptable carnage of around 38,000 annual deaths on U.S. roads.

Witness the consequences of the FAA's abdication of its safety certification and governance responsibility: 346 deaths in two 737 MAX crashes, many billions of dollars in damage to Boeing, and severe tarnish on the previously strong reputations of both the manufacturer and the regulator.

In spite of a legislative mandate to do so, NHTSA has yet to issue Federal Motor Vehicle Safety Standards (FMVSS) for Automated Driving System (ADS) levels of vehicle automation. The agency appears more concerned with exempting ADS-equipped vehicles from certain existing human-driver safety standards so as to facilitate their early deployment.

So, let us fund a center tasked to develop the appropriate safety and security standards for autonomous vehicles. But let’s not put those vehicles on the road until they can meet those standards.

Dr. Jaynarayan Lala is a former program manager at the Defense Advanced Research Projects Agency (DARPA). Dr. John Meyer is a professor emeritus of computer science and engineering at the University of Michigan, Ann Arbor. Dr. Carl Landwehr is a visiting professor at the University of Michigan and collaborates with the Cyber Security and Privacy Research Institute at George Washington University. Dr. Charles B. Weinstock is a principal researcher at Carnegie Mellon University.

All of the authors are members of the International Federation for Information Processing’s Working Group 10.4 on Dependable Computing and Fault Tolerance. The opinions expressed by the authors do not necessarily represent those of their affiliated organizations.