Researchers at the University of Michigan have devised a new way to test autonomous vehicles that bypasses the billions of miles that would normally need to be logged.
The process, which was developed using data from more than 25 million miles of real-world driving, can cut the time required to evaluate robotic vehicles’ handling of potentially dangerous situations by a factor of 300 to 100,000. It could also save 99.9% of testing time and costs, researchers say.
The approach has been outlined in a new White Paper published by Mcity.
“Even the most advanced and largest-scale efforts to test automated vehicles today fall woefully short of what is needed to thoroughly test these robotic cars,” said Huei Peng, director of Mcity and the Roger L. McCarthy Professor of Mechanical Engineering at the university.
In essence, the new accelerated evaluation process breaks down difficult real-world driving situations into components that can be tested or simulated repeatedly, exposing automated vehicles to a condensed set of the most challenging driving situations. In this way, just 1,000 miles of testing can yield the equivalent of 300,000 to 100 million miles of real-world driving.
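The white paper’s statistical details are beyond the scope of this article, but the flavor of the idea can be sketched with a toy importance-sampling example (the scenario and every number below are invented for illustration, not taken from the U-M data or models). A rare, dangerous cut-in gap is made common by sampling from a deliberately skewed distribution, and each outcome is then reweighted so the estimate of real-world risk remains unbiased:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cut-in scenario (illustrative only): a human driver merges in front of
# the automated car with a random time gap; an encounter counts as dangerous
# when the gap is shorter than 0.3 seconds.
REAL_MEAN, REAL_STD = 2.0, 0.5   # assumed everyday-driving gap distribution (s)
TEST_MEAN, TEST_STD = 0.5, 0.5   # skewed distribution that over-samples short gaps
DANGER_GAP = 0.3                 # assumed threshold for a dangerous encounter (s)

def normal_pdf(x, mean, std):
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

n = 200_000

# Naive evaluation: draw gaps as they occur in everyday traffic. Dangerous
# gaps are so rare that most samples reveal nothing about them.
naive_gaps = rng.normal(REAL_MEAN, REAL_STD, n)
naive_estimate = np.mean(naive_gaps < DANGER_GAP)

# Accelerated evaluation: draw gaps from the skewed distribution so dangerous
# cut-ins happen constantly, then reweight each outcome by the likelihood
# ratio p_real / p_test so the real-world risk estimate stays unbiased.
test_gaps = rng.normal(TEST_MEAN, TEST_STD, n)
weights = normal_pdf(test_gaps, REAL_MEAN, REAL_STD) / normal_pdf(test_gaps, TEST_MEAN, TEST_STD)
accel_estimate = np.mean((test_gaps < DANGER_GAP) * weights)

print(f"naive estimate:       {naive_estimate:.2e}")
print(f"accelerated estimate: {accel_estimate:.2e}")
```

In this toy, the skewed run encounters dangerous gaps roughly a thousand times more often than everyday driving would, yet both runs converge on the same real-world probability, which is the kind of compression of test mileage the accelerated approach is after.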
For consumers to accept driverless vehicles, researchers say tests will need to prove with 80% confidence that the vehicles are 90% safer than human drivers. To reach that confidence level, test vehicles would need to be driven 11 billion miles in simulated or real-world settings. Yet it would take nearly a decade of round-the-clock testing to reach just 2 million miles in typical urban conditions.
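The scale of that gap can be checked with nothing more than the figures quoted above (the statistical model behind the 11-billion-mile estimate is laid out in the white paper and is not reproduced here):

```python
# Back-of-the-envelope check using only the figures quoted in this article.
MILES_NEEDED = 11e9            # miles required for the stated confidence level
URBAN_MILES_PER_DECADE = 2e6   # miles one vehicle logs in ~10 years of nonstop urban driving

hours_per_decade = 10 * 365.25 * 24
avg_speed_mph = URBAN_MILES_PER_DECADE / hours_per_decade
print(f"implied average speed: {avg_speed_mph:.0f} mph")  # ~23 mph, a plausible urban average

vehicle_decades = MILES_NEEDED / URBAN_MILES_PER_DECADE
print(f"fleet needed: {vehicle_decades:,.0f} vehicle-decades of round-the-clock driving")  # ~5,500
```

Put differently, it would take a fleet of some 5,500 vehicles driving nonstop for a decade to log that mileage the conventional way.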
“Test methods for traditionally driven cars are something like having a doctor take a patient’s blood pressure or heart rate, while testing for automated vehicles is more like giving someone an IQ test,” said Ding Zhao, assistant research scientist in the U-M Department of Mechanical Engineering and co-author of the new White Paper, along with Peng.
To develop their accelerated approach, the U-M researchers analyzed data from 25.2 million miles of real-world driving collected by two U-M Transportation Research Institute projects – Safety Pilot Model Deployment and Integrated Vehicle-Based Safety Systems. Together they involved nearly 3,000 vehicles and volunteers over the course of two years.
The accelerated evaluation process can be performed for different potentially dangerous maneuvers. Researchers evaluated the two most common situations they’d expect to result in serious crashes: an automated car following a human driver and a human driver merging in front of an automated car. The accuracy of the evaluation was determined by conducting and comparing accelerated and real-world simulations. More research is needed involving additional driving situations.
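As a purely hypothetical sketch of how the same skew-and-reweight idea could be applied maneuver by maneuver, the car-following case can be modeled with an invented hard-braking distribution; comparing the accelerated estimate against a straightforward simulation loosely mirrors the accuracy check the researchers describe:

```python
import numpy as np

rng = np.random.default_rng(1)

# Second toy maneuver (again with made-up numbers): the automated car follows
# a human driver whose hardest braking decelerations are random; an encounter
# is dangerous when braking exceeds what the AV's headway can absorb.
REAL_MEAN_DECEL = 1.0   # assumed mean deceleration (m/s^2) in everyday traffic
TEST_MEAN_DECEL = 4.0   # skewed mean so hard-braking events occur constantly
DANGER_DECEL = 8.0      # assumed threshold for a dangerous encounter

n = 200_000

# Straightforward "real-world" style simulation: dangerous braking is rare.
real = rng.exponential(REAL_MEAN_DECEL, n)
realworld_estimate = np.mean(real > DANGER_DECEL)

# Accelerated simulation: sample from the skewed distribution, then reweight
# by the likelihood ratio of the two exponential densities.
test = rng.exponential(TEST_MEAN_DECEL, n)
weights = (np.exp(-test / REAL_MEAN_DECEL) / REAL_MEAN_DECEL) / \
          (np.exp(-test / TEST_MEAN_DECEL) / TEST_MEAN_DECEL)
accelerated_estimate = np.mean((test > DANGER_DECEL) * weights)

# Agreement between the two estimates is the kind of cross-check described above.
print(f"real-world simulation: {realworld_estimate:.2e}")
print(f"accelerated estimate:  {accelerated_estimate:.2e}")
```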
May 31, 2017