Mechanical engineer Raunaq Bose gives an insight into Blink, a revolutionary communications technology being developed by a UK-based team at Humanising Autonomy, which will indicate a vehicle’s intent to pedestrians
How long has the system been in development?
The system has been in development for three months and started as a project in the Innovation Design Engineering double Masters programme at Imperial College London and the Royal College of Art. I was awarded an Industrial Design Studentship from the Royal Commission for the Exhibition of 1851, which provided funding for fees, materials and travel, giving me the space to develop the project further.
Where did the idea come from initially?
The idea behind Blink was initially unrelated to autonomous vehicles. As a group, we were exploring how we could make the urban environment a more pleasant and comfortable place for people to live in, while in parallel researching trends and emerging technologies in that environment. Autonomous vehicles are one such emerging technology that repeatedly came up in our research, and we realized that our position and our values seemed to be at odds with what was being shown by many of the automotive industry’s autonomous car concepts; we thought that this gave us scope for an interesting human-centered project.
We think that a lot of attention has been put on the inner features and passenger experience of autonomous vehicle concepts, and that not enough consideration has gone into how people outside the vehicle feel when interacting with these autonomous vehicles, and how the vehicles can show that they have acknowledged pedestrian presence and intent.
There are some autonomous car concepts which, for example, indicate to pedestrians when it is safe for them to cross or not, but our issue with these solutions is that they are very prescriptive: it is always the car telling the pedestrians that they can cross. With Blink we are developing a language for two-way communication between the pedestrian and the autonomous vehicle, where pedestrian presence and intent is communicated and responded to in turn by autonomous vehicles.
What is this project’s ultimate goal?
We believe that while infrastructure exists to balance the power between pedestrians and vehicles, much of the current infrastructure was built around the needs of the vehicle. Aside from the potential of autonomous vehicles to cause far fewer accidents and fatalities in the urban environment, their arrival also provides an opportunity to rebalance the power dynamics and give pedestrians an equal weighting in the conversation between pedestrians and vehicles on the road.
A major issue with autonomous vehicles is that people are inherently distrustful of autonomous technology. By developing a new common language of trust between people and these vehicles, where pedestrian presence and intent is communicated and acknowledged, we can encourage a more empathetic manifestation of autonomous technology to create a safer and more pleasant urban environment.
From left: The team at Humanising Autonomy comprises electrical engineer Adam Bernstein, architect Leslie Nooteboom, industrial design engineer Maya Pindeus and mechanical engineer Raunaq Bose
What tests have been performed on the system?
We have currently performed tests with pedestrians in front of stationary vehicles to test the intuitiveness of the gesture-based feedback system and evaluate the responses from pedestrians. The next step is to perform these tests with moving vehicles within a controlled setting in order to test key parameters such as time-elapsed between recognizing pedestrians and asking them what they would like to do.
How much of a role has virtual testing played in the system’s development?
All of the gestures and interactions that were developed have been based on observations and insights gained from real-world interactions. We believe that the urban road environment is an extremely complex context to design for, and that any virtual environment that we create for testing these interactions would not accurately reflect this complexity.
However, we have made use of a form of supervised machine learning for the system to learn and respond to the human gestures. The design enables training of hundreds of culture-specific gestures that would reflect the complexity of the road situation across different cultures and localities.
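The supervised-learning step described above could be sketched in miniature as follows. This is purely an illustrative assumption, not the team's implementation: gesture labels, the two hand-position features, and the nearest-centroid classifier are all hypothetical stand-ins for whatever features and model Blink actually uses.

```python
# Hypothetical sketch: labelled examples of pedestrian gestures,
# reduced here to two toy features (hand height, arm extension),
# train a nearest-centroid classifier. All names and numbers are
# illustrative only, not Blink's actual model.
from statistics import mean

def train_centroids(examples):
    """examples: list of (feature_vector, label) pairs."""
    by_label = {}
    for features, label in examples:
        by_label.setdefault(label, []).append(features)
    # Average each feature dimension per label to get one centroid per gesture.
    return {label: tuple(mean(dim) for dim in zip(*vecs))
            for label, vecs in by_label.items()}

def classify(centroids, features):
    """Return the label whose centroid is nearest in squared Euclidean distance."""
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2
                                   for a, b in zip(centroids[lbl], features)))

# Toy training set: (hand_height, arm_extension) -> gesture label
examples = [
    ((0.90, 0.80), "raised_hand"),  # hand up, arm extended
    ((0.85, 0.70), "raised_hand"),
    ((0.20, 0.10), "no_gesture"),   # arms at rest
    ((0.25, 0.15), "no_gesture"),
]
centroids = train_centroids(examples)
print(classify(centroids, (0.88, 0.75)))  # → raised_hand
```

In practice the same training loop would be repeated per locality, which is how a system like this could accumulate the hundreds of culture-specific gestures mentioned above.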
Do you intend to perform any real-world testing (i.e. on public roads)?
We do not intend to perform testing on public roads by ourselves in the current state of the project; for the autonomous vehicle context, we are looking at collaborating with vehicle OEMs to further develop its implementation in such vehicles and follow their appropriate testing procedures.
What challenges have you had to overcome?
As mentioned beforehand, we are aiming to rebalance the power dynamic between vehicles and pedestrians, and the most challenging aspect that we’ve overcome is determining what that balance is. Just because a pedestrian raises their hand, it doesn’t mean that the autonomous vehicle should immediately stop in all situations; that would need to depend on the specific road situation, road regulations, and whether the vehicle would be able to safely stop before it reaches the vicinity of the pedestrian.
For example, if travelling at speed, the autonomous vehicle would visualize a pedestrian on the side of the road automatically as red if it doesn’t have time to ask the pedestrian what they want to do, or if it can’t physically brake in time, in order to communicate directly to the pedestrian that it is unsafe to cross.
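The decision described above, ask the pedestrian if the vehicle can still stop safely, otherwise show red, could be sketched as a simple stopping-distance check. This is an assumption-laden illustration, not Blink's actual logic: the reaction time, deceleration rate, and signal names are all hypothetical.

```python
# Illustrative sketch (not the team's implementation) of the decision
# described above: open a two-way exchange only if the vehicle could
# still brake to a stop before reaching the pedestrian; otherwise
# signal red (unsafe to cross). All constants are assumptions.
def stopping_distance(speed_mps, reaction_s=0.5, decel_mps2=6.0):
    """Distance covered during the reaction time plus braking distance."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

def pedestrian_signal(speed_mps, distance_to_pedestrian_m):
    """Return the indicator state shown to the pedestrian."""
    if stopping_distance(speed_mps) < distance_to_pedestrian_m:
        return "ask_intent"  # safe margin: ask what they want to do
    return "red"             # cannot stop in time: unsafe to cross

print(pedestrian_signal(8.0, 40.0))   # slow approach, far away → ask_intent
print(pedestrian_signal(20.0, 25.0))  # at speed, close by → red
```

A real system would also have to account for the time needed to pose the question and receive a gesture in response, which is presumably why the time elapsed between recognizing a pedestrian and asking them is one of the key parameters the team plans to test.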
Has anything unplanned happened during testing?
We learned that it is really important that individual pedestrians know that the vehicle is communicating directly and specifically to them, and not, for example, to another person near them. Our initial visual indicators were similar to autonomous vehicle concepts shown by other automotive manufacturers, in that they were abstract shapes or lines moving in a particular direction to indicate whether it’s safe to cross; however, we saw from our user testing that this was not intuitive and did not build the sense of trust we are aiming to develop.
As such, we moved toward representing pedestrians on the visual displays by digitally mirroring them; we saw from our testing that people are very good at recognizing their silhouettes and their motion, and that this really helped to promote direct communication and trust between vehicle and pedestrian.
What’s the next step in the project?
We think that technologies interacting with people should adapt to fit what is natural for people to do, and not the other way around, where people are simply forced to deal with useful, but ill-designed, technology.
The system and philosophy developed in this project are applicable in several contexts. Most autonomous systems (cars, planes, delivery drones and so on) do not allow for two-way communication; there is merely a broadcasting of information or direct control over a specific function. By allowing the human to influence the behavior of the system in a more intuitive way, Blink creates a more trusted and natural interaction between human and machine. The Humanising Autonomy team is looking to realize this solution in different ways, using their skills in engineering, design and architecture.
Several automotive manufacturers have expressed an interest in the system, in addition to companies and manufacturers from other industries and applications.
February 28, 2017