How can we build trust in self-driving cars?
Self-driving cars will hit the roads in a few years. One day we will wake up and these new machines will be everywhere around us. Solving the technical challenges is one thing, but how will we deal with the human side? Will we feel safe surrounded by robo-cars?
Autonomous cars are like planes: you know they are safer than normal cars, but you still have a bad feeling about them. Building trust is not easy. When we face a challenge like this at UX studio, we usually start by observing how people behave and testing different ideas with quick prototypes.
People look at the driver's face when they want to make sure that he or she is in control and sees everything. This instinctive human behavior won't disappear because of some new technology, so it's better to adapt to it. We imagine self-driving cars with a single display in the center. Everyone in the car can see it, and it shows the state of the car at all times. This will be the "face" of the driver that you can check whenever you want.
We trust machines when we know how they work and can predict what they will do. Humanoid robots are scary because their bodies suggest more power than they actually have. Big machinery in a factory is scary because you don't know what it will do after you press that big red button (unless, of course, you are one of the few trained professionals). A robot vacuum cleaner, on the other hand, is not scary because you know how it works and what it does.
So we designed a UI where the car always shows what it is doing at that exact moment and what it sees around it. A schematic view of all the surrounding cars, bikes and pedestrians proved easy to understand in our user tests. You can be sure the car is aware of every obstacle. We also display the route, so you can predict the car's moves.
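To make the idea concrete, here is a minimal sketch of how detected objects could be mapped onto icons for such a schematic display. All names (`DetectedObject`, `schematic_labels`, the icon identifiers) are hypothetical illustrations, not part of our actual implementation.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    """One object the car's sensors report (hypothetical data model)."""
    kind: str   # e.g. "car", "bike", "pedestrian"
    x: float    # position relative to our car, in meters
    y: float

def schematic_labels(objects):
    """Map each detected object to the icon shown on the display,
    keeping its relative position so riders can match it to reality."""
    icons = {"car": "car-icon", "bike": "bike-icon", "pedestrian": "person-icon"}
    return [(icons.get(obj.kind, "generic-icon"), obj.x, obj.y) for obj in objects]
```

The point of the mapping is completeness: every object the sensors see gets an icon, so riders can verify at a glance that nothing around the car went unnoticed.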
I don't think people will watch this display all the time. But when a risky situation comes up, it is always there, in the same place in the middle, so you can check it quickly. We experimented with different signs for objects that need special attention, like bikers or people crossing the street.
First we highlighted the important obstacles in red. When we tested this interface, we realized it did more harm than good. Red signals danger, and people want to do something to prevent the problem, which is not possible in a fully autonomous car. So they became stressed. In our next iteration we drew red lines instead of red boxes. The lines separate the car from the people like a fence and reassure you that the car is paying attention to those pedestrians and bikers. People understood the concept, and this design was far more successful in our tests.
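The decision rule behind the two iterations can be sketched as a small function: vulnerable road users get the reassuring "fence" line between themselves and the car, while ordinary traffic gets no special marker. The function name and the returned marker format are hypothetical, for illustration only.

```python
def attention_marker(obj_kind, obj_pos, car_pos=(0.0, 0.0)):
    """Return a display marker for one detected object.

    Pedestrians and bikes get a 'fence' line anchored halfway between
    the car and the object (the full line rendering is omitted here);
    everything else gets no marker at all, to avoid alarming the rider.
    """
    if obj_kind in ("pedestrian", "bike"):
        midpoint = ((car_pos[0] + obj_pos[0]) / 2,
                    (car_pos[1] + obj_pos[1]) / 2)
        return {"style": "fence-line", "anchor": midpoint}
    return None  # ordinary traffic: no special highlight
```

The key design choice is what the marker communicates: not "danger here" (a red box on the object), but "the car has put a fence between you and this person", which is a statement about the car's awareness rather than a call to action.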
In the video below you can see how this works in action.
During the coming weeks we will share more interesting findings from our self-driving lab project. To stay up to date, follow us on Facebook or Twitter.
We also made a Behance project about our self-driving design. You can find all the screens and many beautiful little details there, so go check it out.