Last year, as the result of the £13.5m (US$17.1m) HumanDrive research project, led by Nissan with eight other consortium partners, including testing specialists Horiba MIRA, an autonomous Nissan Leaf successfully completed two trials: a 230-mile ‘Grand Drive’ self-navigated journey on UK roads using an advanced autonomous control system; and test track-based activity that explored human-like driving, using machine learning to enhance the user experience.
During the 30-month study, Horiba MIRA devised a scenario-based simulation testing program designed to interrogate the behavior of the artificial intelligence (AI) within vehicles. According to the company, the research involved the creation of a three-tier scenario system – functional, logical and concrete – to generate a set of cases covering all the complex situations that vehicles encountered. Building on its findings, the company is now calling on the automotive industry, government and academic institutions to use scenario-based testing to standardize, simplify and automate CAV validation in a repeatable and reliable way.
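The three-tier system described above can be sketched in code. The sketch below is illustrative only, not Horiba MIRA's actual tooling: the class and parameter names are hypothetical, and the example assumes a functional scenario is a human-readable description, a logical scenario adds parameter ranges, and a concrete scenario fixes every parameter to a single executable value.

```python
import random
from dataclasses import dataclass

@dataclass
class FunctionalScenario:
    """Human-readable description, e.g. 'ego overtakes a slower cyclist'."""
    description: str

@dataclass
class ConcreteScenario:
    """Fully specified case that can actually be executed in simulation."""
    description: str
    parameters: dict  # name -> fixed value

@dataclass
class LogicalScenario:
    """Functional scenario plus parameter ranges (low, high)."""
    description: str
    parameter_ranges: dict  # name -> (low, high)

    def sample(self, rng: random.Random) -> ConcreteScenario:
        # Concretization: fix every parameter to one value drawn
        # from its range.
        return ConcreteScenario(
            description=self.description,
            parameters={name: rng.uniform(lo, hi)
                        for name, (lo, hi) in self.parameter_ranges.items()},
        )

# Hypothetical logical scenario with invented parameter ranges.
logical = LogicalScenario(
    description="ego overtakes a slower cyclist",
    parameter_ranges={"ego_speed_kph": (40.0, 60.0),
                      "cyclist_speed_kph": (10.0, 20.0),
                      "gap_m": (15.0, 40.0)},
)
concrete = logical.sample(random.Random(0))
```

Sampling the logical scenario repeatedly yields the set of concrete cases that together cover the parameter space a vehicle might encounter.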
Ashley Patton, lead connected and autonomous vehicle (CAV) test engineer, Horiba MIRA, said, “For HumanDrive, we focused on a series of scenarios ranging from basic through to very complex, typical of an autonomous CAV test program. A surrogate test vehicle was used so that we could control it precisely with a full set of industry-leading driving robots, reducing the risks in this development activity.
“In very complex scenarios, we explored other factors, including the test track layout. These make a significant difference to the potential routes the CAV may choose to take, and therefore to the programming complexity and the number of actors we may need to deploy. For example, robotic, crashable actors, representative of other vehicles, pedestrians and cyclists, were used in the majority of these tests, as there is still some inherent risk of collision.
“HumanDrive enabled our CAV test team to develop innovative scenario-based testing processes, concentrating on standardization, simplification and automation. One of the most interesting conclusions is that whether CAV test requirements are specified as a functional, logical or concrete scenario, we must program a concrete scenario for the test to run. We are currently developing tools to help our customers complete testing in an efficient and reproducible way, with confidence that the output of the test will validate the requirement set – a testament to the great progress made during the project.”
Patton concluded, “In terms of CAV testing, what’s clear is that to be efficient and dynamic, the scenario execution rate must increase. We’re working very hard on standardizing the inputs to automate the CAV test processes, which are founded on sound engineering principles to ensure industry-leading best practice. Automating some of the manual test processes will deliver significant time and cost savings, enabling CAV test programs to be completed safely, efficiently and accurately.”
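Increasing the scenario execution rate through automation can be pictured as a batch runner: sample many concrete parameter sets from a logical range and push each through the simulation harness without manual intervention. The sketch below is a minimal illustration, not the company's process; the parameter names, the `run_simulation` stub and its time-to-close pass criterion are all invented for the example.

```python
import random

def run_simulation(params: dict) -> bool:
    # Stand-in for a real simulator run. The pass rule is purely
    # illustrative: the ego must have more than 1.5 s before closing
    # the gap to the cyclist at the sampled speeds.
    closing_kph = params["ego_speed_kph"] - params["cyclist_speed_kph"]
    time_to_close_s = params["gap_m"] * 3.6 / max(closing_kph, 1e-9)
    return time_to_close_s > 1.5

def execute_batch(ranges: dict, n: int, seed: int = 0) -> dict:
    """Sample n concrete scenarios from the given ranges and run each."""
    rng = random.Random(seed)
    results = {"passed": 0, "failed": 0}
    for _ in range(n):
        params = {name: rng.uniform(lo, hi)
                  for name, (lo, hi) in ranges.items()}
        results["passed" if run_simulation(params) else "failed"] += 1
    return results

# Hypothetical logical-scenario parameter ranges.
ranges = {"ego_speed_kph": (40.0, 60.0),
          "cyclist_speed_kph": (10.0, 20.0),
          "gap_m": (15.0, 40.0)}
report = execute_batch(ranges, n=1000)
```

Because the inputs are standardized ranges and the loop needs no human in it, the execution rate is bounded by simulator throughput rather than by manual test programming.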