
Driving Style of Autonomous Vehicles

February 28, 2016

Jaguar Land Rover is investigating how natural driving styles (meaning the driving styles of everyday drivers on the roads) could be adapted into an autonomous vehicle. The basis for this is data from instrumented vehicles, collected to understand everyday driving styles and then applied to an autonomous vehicle. The research strategy contributes to finding a way to make highly automated or autonomous cars more trustworthy. Trust is a challenge for new technology and is strongly bound to its acceptance. Certainly, as humans, we tend to trust things more if their behaviour is similar to our own (see also my previous blog on automation and trust). However, driving style varies depending on the driver’s personality, experience, driving environment, and purpose of travel. Occasionally a usually calm driver changes his or her driving style if the purpose of travel is urgent, e.g. a family member is sick and awaiting a visit. On another occasion one might just want to enjoy the beautiful landscape without any other specific purpose of travel, which again influences the driving style. How much the driving style changes is a question still to be answered.
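As a rough illustration of how everyday driving style could be summarised from instrumented-vehicle data, here is a minimal sketch; the Sample type and the chosen features are my own assumptions, not Jaguar Land Rover’s actual method:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Sample:
    t: float      # timestamp in seconds
    speed: float  # vehicle speed in m/s

def style_features(samples: list[Sample]) -> dict:
    """Summarise one recorded trip into coarse driving-style features."""
    accels = [(b.speed - a.speed) / (b.t - a.t)
              for a, b in zip(samples, samples[1:])]
    return {
        "mean_speed": mean(s.speed for s in samples),
        "max_accel": max(accels),    # how hard this driver accelerates
        "max_decel": min(accels),    # how hard this driver brakes
        "mean_abs_accel": mean(abs(a) for a in accels),  # smoothness proxy
    }
```

Profiles like these could then serve as targets for an autonomous driving controller to imitate.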

The vehicle could ask for the purpose of travel and try to associate a driving style with it. Asking the user generally helps to understand the intentions behind the task the user wants to accomplish and thus to provide a better service. However, it involves a trade-off between gathering the knowledge needed to deliver that better service and asking too much, which makes the user impatient.
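To make that trade-off concrete, a minimal sketch (the purpose labels and numbers are invented): the vehicle asks once, and when no answer is available it falls back to a default style rather than asking again:

```python
# Hypothetical mapping from a stated purpose of travel to style parameters.
STYLE_BY_PURPOSE = {
    "urgent":  {"speed_factor": 1.00, "accel_limit": 2.5},  # assertive
    "commute": {"speed_factor": 0.95, "accel_limit": 1.8},
    "leisure": {"speed_factor": 0.85, "accel_limit": 1.2},  # relaxed
}

def choose_style(purpose: str | None) -> dict:
    # The purpose comes from a one-off prompt; if the user was not asked
    # (or declined to answer), use a sensible default instead of asking
    # again and making the user impatient.
    if purpose is None:
        purpose = "commute"
    return STYLE_BY_PURPOSE.get(purpose, STYLE_BY_PURPOSE["commute"])
```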

Implementing human-like behaviour in autonomous cars appears to be a trend. I found a report from last year showing that Google is doing research in that area as well. The report reveals another important reason to adopt human-like behaviour in a car: while autonomous cars are great at following rules, such behaviour can literally result in a roadblock in the unpredictable environment of everyday traffic. One rule, for example, is to never cross the double yellow lines marking the edge of the road. Now, if a car is parked in a way that another car cannot pass without crossing that line, the natural driver behaviour would be to simply cross the double yellow lines. An autonomous car wouldn’t do that; for no reason apparent to the driver, it would recalculate the route instead. Another issue is the faster reaction time of autonomous vehicles, which lets them come to an abrupt stop when they sense a pedestrian, a challenge for human drivers behind them who cannot react as quickly. Implementing adaptive, human-like strategies in autonomous cars not only helps them deal with an environment where not everything follows the rules, but also makes their behaviour more understandable to humans and brings their skills to a level where they can interact safely with the human traffic participants around them.
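For illustration only (this is not Google’s or Jaguar Land Rover’s actual logic), the double-yellow-line case could be handled as a bounded rule relaxation instead of a hard constraint:

```python
def may_cross_double_yellow(lane_blocked: bool,
                            oncoming_clear: bool,
                            overlap_m: float) -> bool:
    """Bounded relaxation of the 'never cross double yellow lines' rule:
    only to pass a stationary obstacle, only with a clear oncoming lane,
    and only by a limited margin."""
    MAX_OVERLAP_M = 1.0  # invented bound on how far to straddle the line
    return lane_blocked and oncoming_clear and overlap_m <= MAX_OVERLAP_M
```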

See the Jaguar Land Rover website directly for details.


Thoughts on the Difference between Interpersonal Trust and Trust in Automated Cars

January 27, 2016

Self-driving cars are widely discussed now. While we are not sure when autonomous cars will arrive as a product on our streets, it seems to be just a matter of time. Technology develops fast, continuously bringing more intelligent driver assistance systems into vehicles. Maybe the development is going even faster than some people imagine or like to think. Perhaps it is not the technology but rather people who need time to adjust to this fast development. Would you, today, assign the task of carrying you safely from destination A to B to a self-driving car? How would it react in case of an accident? Trust is an important keyword. What makes us trust something or someone?

Before a system can be designed with a characteristic called “trust”, it is necessary to understand what trust means, for a designer and for a user. Most important is to understand the meaning from the user’s perspective, because that is the characteristic of the system the designer wants the user to “feel” when interacting with it. A starting point can be interpersonal trust, meaning the trust humans have in each other. Trust is the basis for long-term relationships with other people. We have no problem confiding in them or asking them for help. We also have expectations of them, e.g. to actually be helped when we ask for help, or to be met with a certain level of politeness in conversation, whatever we say. When we interact with computers, it seems we apply certain concepts of human conversation to that interaction as well. In the literature, trust is described as a multifaceted concept dependent on a range of factors. Trust requires time to develop, but is easy to lose. There are shortcuts in interpersonal trust: when someone makes a confession and thereby appears vulnerable, he or she makes it easier for another person to develop trust; working in a group towards a shared goal is another. Neither shortcut is implemented in current automated systems. Systems do not have intentions, so characteristics that we attribute to other humans, such as loyalty and benevolence, do not apply.

Research suggests that etiquette in dialogue design could lead to an increased level of trust. If an interaction applies politeness (if the user is already doing the requested task, the system does not need to issue an additional reminder to do it) and is non-interruptive (the system should ask for one task at a time), that leads to a higher level of trust in the system. Other researchers have suggested designing a personality into the system. It does not need to be a 3D-modelled avatar; a personality could simply be built from a voice, a certain accent, and a certain use of language. The more similar the accent and language are to the user’s own, the better trust develops. People like to trust things similar to themselves. On a general level, developing such a personality could not only make the decisions of the automation more comprehensible for a user, but also make it easier to trust the system. People tend to be more forgiving towards other people than towards machines.
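As a toy illustration of those two etiquette rules (my own sketch, not taken from the cited research), a dialogue manager could suppress redundant reminders and raise one request at a time:

```python
from collections import deque

class PoliteDialogue:
    """Toy dialogue manager applying the two etiquette rules above."""

    def __init__(self) -> None:
        self.pending: deque[str] = deque()
        self.in_progress: set[str] = set()

    def request(self, task: str) -> None:
        # Politeness: never remind about a task the user is already doing.
        if task not in self.in_progress and task not in self.pending:
            self.pending.append(task)

    def mark_in_progress(self, task: str) -> None:
        self.in_progress.add(task)
        if task in self.pending:
            self.pending.remove(task)

    def next_prompt(self) -> str | None:
        # Non-interruptive: surface only one request at a time.
        return self.pending[0] if self.pending else None
```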

Trust in automation has additional quality aspects beyond interpersonal trust, rooted in the design of the system, such as reliability and a low rate of false alarms. We need to find out how to merge those requirements into a suitable system design for the everyday driver.

A big challenge will be to make the limitations of automation, and thereby the right level of trust, understandable for the everyday user. The right level of trust means providing the user with an understanding of when to use the system and when not to (boundaries of an automated system could be adverse weather conditions, e.g. snow, fog, or heavy rain). Of course, cars are not the first area with highly automated systems, but everyday transportation poses a different challenge than simply adopting lessons learned from aviation or power plant design, because in those areas highly trained personnel interact with the automated system. The everyday user is not highly trained, and likely does not wish to know technical details.
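For illustration, communicating such boundaries could be as simple as a condition check that always explains its refusal; the conditions and thresholds here are invented:

```python
def automation_available(weather: str, visibility_m: float) -> tuple[bool, str]:
    """Check invented operating boundaries and always explain the outcome,
    so the everyday user learns when to rely on the system and when not."""
    if weather in {"snow", "fog", "heavy rain"}:
        return False, f"Automation unavailable: {weather} is outside its limits."
    if visibility_m < 100:
        return False, "Automation unavailable: visibility is below 100 m."
    return True, "Automation available."
```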


References:

Parasuraman, Raja & Miller, Christopher, “Trust and etiquette in high-criticality automated systems”, 2004

Reeves, Byron & Nass, Clifford, “The media equation – How people treat computers, television, and new media like real people and places”, 1996

Miller, Christopher, “Trust in adaptive automation: the role of etiquette in tuning trust via analogic and affective methods”

Lee, John & See, Katrina, “Trust in automation: designing for appropriate reliance”, 2004

Hoffman, Robert, Johnson, Matthew, & Bradshaw, Jeffrey, “Trust in automation”, 2013

Litman, Todd, “Autonomous vehicle implementation predictions – Implications for transport planning”, 2015

Sciencewise Expert Resource Centre, “Automated vehicles: what the public thinks”, 2014