
Thoughts on Difference between Interpersonal Trust and Trust in Automated Cars

Self-driving cars are widely discussed these days. While we cannot say when autonomous cars will arrive as products on our streets, it seems to be just a matter of time. Technology develops fast, bringing ever more intelligent driver assistance systems into our vehicles. Maybe the development moves even faster than some people imagine or like to think. Perhaps it is not the technology but the people who need time to adjust to this fast development. Would you, today, hand over the task of carrying you safely from A to B to a self-driving car? How would it react in case of an accident? Trust is the key word here. What makes us trust something or someone?

Before a system can be designed with a characteristic called "trust", it is necessary to understand what trust means, both for a designer and for a user. Most important is to understand the meaning from the user's perspective, because that is the characteristic of the system the designer wants the user to "feel" when interacting with it. A starting point can be interpersonal trust, i.e. the trust humans have in each other. Trust is the basis for long-term relationships with other people: we have no problem confessing to them or asking them for help. We also have expectations of them, e.g. to actually be helped when we ask for help, or a certain level of politeness in conversation, whatever we say. When we interact with computers, it seems we apply certain conventions of conversation to that interaction as well. In the literature, trust is described as a multifaceted concept that depends on a range of factors. Trust requires time to develop, but is easy to lose. There are shortcuts in interpersonal trust: when someone makes a confession and thereby appears vulnerable, they make it easier for the other person to develop trust, and something similar happens when a group of people works towards a shared goal. Neither shortcut is implemented in current automated systems. Systems do not have intentions, so characteristics we attribute to other humans, such as loyalty and benevolence, do not apply.

Research suggests that etiquette in dialogue design can lead to an increased level of trust. If an interaction is polite (when the user is already doing the requested task, the system does not nag with an additional reminder) and non-interruptive (the system asks for one task at a time), that leads to a higher level of trust in the system; a small sketch of these two rules follows below. Other researchers have suggested designing a personality into the system. It does not need to be a 3D-modelled avatar; a personality can be built simply from a voice, a certain accent, and a certain use of language. The more similar accent and language are to the user's own, the better trust develops: people tend to trust things similar to themselves. On a general level, such a personality could not only make the automation's decisions more comprehensible to the user, it could also make the system easier to trust, since people tend to be more forgiving towards other people than towards machines.
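To make those two etiquette rules concrete, here is a minimal sketch in Python. The class and method names are hypothetical and not taken from any of the cited papers; it only illustrates the polite, non-interruptive behaviour described above.

```python
from collections import deque

class PoliteReminderPolicy:
    """Etiquette-aware notification policy (illustrative sketch).

    Polite: never remind the user of a task they are already doing.
    Non-interruptive: announce at most one task at a time.
    """

    def __init__(self):
        self.pending = deque()   # tasks waiting to be announced
        self.announced = None    # the single task currently announced

    def request_task(self, task):
        """The automation wants the user to perform `task`."""
        if task != self.announced and task not in self.pending:
            self.pending.append(task)

    def tick(self, user_activity):
        """Call periodically with what the user is currently doing.

        Returns a reminder string, or None if the system should stay quiet.
        """
        # Politeness: if the user is already on the announced task, drop it.
        if self.announced == user_activity:
            self.announced = None
        # Also drop queued requests the user is already handling.
        self.pending = deque(t for t in self.pending if t != user_activity)

        # Non-interruptiveness: only speak when nothing else is announced.
        if self.announced is None and self.pending:
            self.announced = self.pending.popleft()
            return f"When you have a moment, please {self.announced}."
        return None

policy = PoliteReminderPolicy()
policy.request_task("check the mirrors")
policy.request_task("take over steering")
print(policy.tick(user_activity=None))        # announces the first task only
print(policy.tick("check the mirrors"))       # user complies: no nagging, next task comes up
```

Run periodically, this policy never reminds the user of a task they are already performing and never announces more than one task at once.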

Trust in automation has quality aspects beyond those of interpersonal trust, rooted in the design of the system, such as reliability and a low rate of false alarms. We need to find out how to merge these requirements into a suitable system design for the everyday driver.

A big challenge will be to make the limitations of automation understandable for the everyday user, so that trust settles at the right level. The right level of trust means giving the user an understanding of when to use the system and when not to (boundaries of an automated system could be adverse weather conditions, e.g. snow, fog, or heavy rain). Of course, cars are not the first domain with highly automated systems, but everyday transportation poses a different challenge than simply adapting lessons learned from aviation or power plant design: in those domains, highly trained personnel interact with the automation. The everyday user is not highly trained, and likely does not wish to know technical details.
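As a toy illustration of such boundaries, here is a minimal sketch in Python. The function name, the weather categories, and the visibility threshold are invented for this post and do not come from any real vehicle system:

```python
def automation_available(weather, visibility_m):
    """Toy check of an automated system's operating boundaries.

    The conditions and thresholds are invented for illustration only.
    Returns (available, message) so the driver always learns *why*.
    """
    if weather in ("snow", "fog", "heavy rain"):
        return False, f"Automation unavailable: {weather} is outside its design limits."
    if visibility_m < 150:
        return False, "Automation unavailable: visibility is too low."
    return True, "Automation available."

ok, message = automation_available("fog", visibility_m=80)
print(message)  # the system explains its boundary instead of failing silently
```

The point of the explicit message is the same as in the prose above: a system that states why it is unavailable helps the user calibrate trust, rather than leaving them to guess.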

 

References:

Parasuraman, Raja & Miller, Christopher, "Trust and etiquette in high-criticality automated systems", 2004

Reeves, Byron & Nass, Clifford, "The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places", 1996

Miller, Christopher, "Trust in adaptive automation: the role of etiquette in tuning trust via analogic and affective methods"

Lee, John & See, Katrina, "Trust in automation: designing for appropriate reliance", 2004

Hoffman, Robert, Johnson, Matthew & Bradshaw, Jeffrey, "Trust in automation", 2013

Litman, Todd, "Autonomous vehicle implementation predictions: Implications for transport planning", 2015

Sciencewise Expert Resource Centre, "Automated vehicles: what the public thinks", 2014

 
