Hackers, pranksters and sheer chance mean that car developers are entering a long-term phase of trial and error. The future of autonomous vehicles, and all the safety and reliability benefits they are due to bring, will depend on overcoming a whole stream of threats and attacks, writes Dr Stefano Longo, senior lecturer in vehicle control and optimization at the Advanced Vehicle Engineering Centre, Cranfield University, UK.
Performance artist James Bridle caused a flurry of interest and delight with his video clip ‘Autonomous trap 001’. A circle was marked out on a side road using salt: a solid, unbroken line on the inside and a dashed line on the outside. The resulting film showed a car driving into the circle and sitting there, transfixed within the ‘magical’ limits of the unbroken line: a technology trapped by a rule-bound semi-intelligence.
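The stunt works because a purely rule-bound lane-keeping policy treats a dashed marking as crossable and a solid marking as a hard boundary, so a car can enter over the dashed outer line but never leave across the solid inner one. The sketch below is a deliberately simplified illustration of that logic, with hypothetical names; it is not drawn from any production AV stack.

```python
# Minimal sketch of the rule-bound behaviour the salt-circle stunt exploits.
# Hypothetical names and simplified logic for illustration only.

from dataclasses import dataclass


@dataclass
class LaneMarking:
    style: str  # "dashed" or "solid"


def may_cross(marking: LaneMarking) -> bool:
    # Rule: dashed markings may be crossed, solid markings may not.
    return marking.style == "dashed"


# The car enters the circle over the dashed (outer) marking...
print(may_cross(LaneMarking("dashed")))  # True -> entry permitted
# ...but every exit path now crosses the solid (inner) line, so the
# rule-bound controller refuses to leave: the car is "trapped".
print(may_cross(LaneMarking("solid")))   # False -> exit forbidden
```

A system that only ever applies such a rule, without recognizing that the ‘markings’ are salt on an otherwise open road, has no way out of the trap.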
It was an exercise intended to highlight the limitations of technology, and of artificial intelligence in particular. On the one hand, it's an unfair criticism: the car used in the film wasn't even an AV, and AI technology is already available that would enable a car to see and recognize when it was being trapped in this way. On the other hand, there's a darker significance. This was just a cheeky stunt, but it's a signal of what's coming: a battle between autonomous car design and the threats and challenges of a sometimes ugly real world.
Roads are obviously far more complex than mere tracks for transportation carrying regular movement in regular patterns. The unpredictable behavior of human-controlled vehicles is the single most challenging factor: drivers are emotional, tired, irrational, potentially even intentionally malicious. And that's just the start.
The spectrum of weather conditions just in the UK, let alone globally, is very wide, and each will need to be recognized and understood by an AV in terms of how it can ‘read’ the roadway. No matter how much an AV system ‘learns’ about the look and habits of the world, there will always be potential for the unexpected: a burst tire in the road, a cyclist weaving in and out of traffic, pedestrians trying to cross the road in an unusual spot, a rock falling from a hillside.
These are all relatively predictable sets of challenges for an AV. The other area of threats is hackers and pranksters. As AVs slowly evolve and become a more common and accepted feature of our roads, the limitations of AI are going to be exposed.
Most crudely, we know that driverless cars are going to be a target for people wanting to make a joke of them, stepping into the road in front of them to prove the cars won't hurt pedestrians (a highly dangerous trick that could cause a chain reaction of accidents).
But the reality is that, at this stage, we don't know what all the ‘salt circle’ traps might be. In theory, hackers could interfere with the radar systems, sensor networks and inter-vehicle communications that AVs rely on.
Critically, then, we're entering a rigorous phase of testing and learning: the development of banks of sophisticated knowledge that will toughen AVs for the real world, combining efficiency with values-based, ‘human’ decision making.
Part of the frontline of the battle is the new Multi-User Environment for Autonomous Vehicle Innovation (MUEAVI) test site (see the November issue of ATTI). MUEAVI is a mile of smart roadway running through the Cranfield University campus, and the US$11.8m (£9m) development is due to be operational from October this year.
It's unique in being a ‘living lab’: a new arterial road for the campus that is in everyday use by vehicles and pedestrians, yet rigged and ready with AV systems. One of the first test projects will be the Innovate UK-funded HumanDrive, which exposes AVs to threats from both expected and unexpected real-world conditions.
November 15, 2017