Moral Decision Making in Fatal Accident Scenarios with Autonomous Driving Cars

Self-driving (autonomous) vehicles appear in conversations everywhere. Part of the discussion concerns ethical aspects of how to make moral decisions in a fatal accident scenario. A typical ethical dilemma scenario is one in which the autonomous driving system needs to decide between two fatal outcomes, with no other option: either crash into a wall, killing all passengers, or crash into a group of pedestrians, killing all pedestrians. The number of people, the animals involved, and the characteristics of the people involved are varied to analyse differences in the rules for moral decision-making, e.g. the people involved can differ in age or weight, or may or may not obey traffic rules. There is no right choice in such a scenario; rather, the decision shows what appears less adverse based on the person’s internal moral judgements. Those internal moral judgements are driven by various factors, e.g. as represented in the model by Bommer, M., Gratto, C., Gravander, J., and Tuttle, M. (1987).

You can think about ethical dilemmas and learn about your own moral choices on an online platform developed at MIT. The platform presents users with ethical dilemmas and analyses the user’s decision-making. Have a go and try it yourself: http://moralmachine.mit.edu/hl/de

The characteristics chosen for the scenarios are interesting. No statistics on the results have been published yet.

The scenarios simplify the outcome (death in all options), the options themselves, and the fatality risk of the two choices of killing either the passengers of a car or the pedestrians. Predicting such a moral dilemma in a real situation is hard (and partly impossible), including the decisions of other traffic participants (source). It needs to be considered that neither a human passenger nor a machine might process all relevant variables in the situation; e.g. humans can only process a certain amount of information in a given time, and the autonomous car might not accurately know the number of its passengers or the age of the pedestrians. Even once the decision is made, we can think of variables that can lead to a result different from the intended one. For example, when the passenger or the autonomous driving car decides to crash into a wall rather than into a group of school children, it might lose control in the course of the steering and braking manoeuvre due to an oily patch on the road and end up steering into the group of children. From our own experience we can remember everyday situations where we expected a certain outcome, acted accordingly, but the situation developed differently from the expectation. Whereas the course of events in such a situation remains a gamble in real life, there are variables that can be influenced beforehand. One of those factors is the design of the autonomous driving car, which can be enhanced to protect its passengers and pedestrians, and so reduce the risk of the dire outcomes of ethical dilemma situations in real life (source, source).


Ancient toe prosthetic

Archaeologists discovered a 3000-year-old prosthetic of a big toe. The prosthetic is wooden and skilfully carved to resemble the anatomical details of the toe, including the toenail. It seems to have been dyed, perhaps to match the user’s skin colour. The user could attach the prosthetic to the foot via an adjustable leather band. The leather band is flexible, to enable the prosthetic to follow the foot’s movement, but also durable.

Source:

University of Basel

Science Daily

Categories: Design for Disability

Everyday Usability 39: How not to design a warning for distracted drivers

Hopefully someone was only joking with this warning for distracted drivers:

Categories: Everyday usability

Self-driving cars

It’s been a while since I last posted an article on this blog, so here we go. Recently I came across an article about self-driving cars. It is a topic that is hard to avoid lately. It appears that every automobile manufacturer is investing in research on self-driving cars. The majority of that research concerns the communication between “driver” and car, and ethical considerations about how to handle safety-critical situations. Toyota recently proposed something like a personal assistant in the car that is meant to communicate with the driver (Source). The communication between a self-driving car and pedestrians or other traffic participants is, at the moment, a sidetrack of research.

However, this communication is an important part of the driving task in city environments. A driver communicates with other traffic participants to clarify each other’s intentions and negotiate a safe passage for everybody. For example, a pedestrian might wish to cross a road without a traffic light when there is a traffic jam or heavy but slowly flowing traffic on the road. To ensure he/she can cross the road safely, the pedestrian communicates with the driver in front of whose car he/she wants to cross. The pedestrian observes the driver’s face and makes sure that the driver is looking at him/her. The driver, in response, indicates that he/she has seen the pedestrian with a smile, a nod or a hand signal, and slows down the car or increases the distance to the car in front. Then the pedestrian crosses the road safely.

With a self-driving car this communication is lost. There is no driver to communicate with. What does a self-driving car need in order to deal with such a situation? One solution would be to leave the decision about the next action up to the pedestrian and let the car handle the situation with its pedestrian recognition and emergency brake system, but that might be unpleasant for the self-driving car’s passengers, and it would leave the pedestrian uncertain whether the car will stop. So communication would resolve the uncertainty and reduce the risk in that situation. To establish communication between a self-driving car and a pedestrian, the car would need the ability to recognise that someone wants to communicate with it, e.g. recognising the pedestrian who wants to cross the road. At best, the car would then have a set of signals that helps the pedestrian see its intentions. The car can then interpret the pedestrian’s answer and adjust its behaviour, e.g. slow down to allow the pedestrian a safe passage.
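To make this a bit more tangible, the negotiation can be thought of as a small state machine: detect a crossing request, acknowledge it, wait for the pedestrian’s response, then yield or resume. The Python sketch below is a hypothetical simplification of mine; the states, events and names are assumptions, not part of any existing system.

from enum import Enum, auto

class State(Enum):
    DRIVING = auto()        # no interaction, normal driving
    ACKNOWLEDGING = auto()  # car has "seen" the pedestrian and signals it
    YIELDING = auto()       # car slows down / holds back so the pedestrian can cross
    RESUMING = auto()       # interaction finished, car continues

class CrossingNegotiation:
    """Toy model of a car-pedestrian crossing negotiation (hypothetical)."""

    def __init__(self):
        self.state = State.DRIVING

    def on_event(self, event):
        if self.state is State.DRIVING and event == "crossing_request_detected":
            self.state = State.ACKNOWLEDGING   # signal "I have seen you"
        elif self.state is State.ACKNOWLEDGING and event == "pedestrian_starts_crossing":
            self.state = State.YIELDING        # slow down, keep the gap open
        elif self.state is State.ACKNOWLEDGING and event == "pedestrian_waves_car_on":
            self.state = State.RESUMING        # the pedestrian yields instead
        elif self.state is State.YIELDING and event == "crossing_complete":
            self.state = State.RESUMING        # continue driving
        return self.state

# One possible run: the pedestrian asks, crosses, and the car resumes.
negotiation = CrossingNegotiation()
for event in ["crossing_request_detected", "pedestrian_starts_crossing", "crossing_complete"]:
    print(event, "->", negotiation.on_event(event).name)

In reality each of those events would have to come from perception and intent-recognition components, which is exactly where the hard problems lie.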

The self-driving concept car from Volkswagen (VW) has an interesting design that pushes towards natural communication. The designers have taken Don Norman’s quote, that the turn signals of cars are their facial expressions, literally. The concept car’s face could be used to communicate with pedestrians in a natural way: signalling that it has recognised the pedestrian (looking at him/her), nodding in response to the pedestrian’s request to cross the road (moving its eyes up and down), indicating where the car is going (e.g. via the movement of its eyes) and whether it is stopping (eyes looking forward and a red brake light).

A difficulty could be that the car’s face appears to be visible from the front only. In a traditional car, it is possible to see where the driver is looking through the windows, at least from both sides. Other signals that help to predict a driver’s intentions are the indicators, the brake lights, and the driver’s reflection in the mirror. It is easy to implement indicators and brake lights in a self-driving car; it becomes more challenging to make its behaviour clearly visible from the sides. Perhaps the car’s eyes could be implemented on the front and the sides, like the indicators of a traditional car. Another challenge lies in the communication itself. Pedestrians can choose from a range of signals to communicate, e.g. a smile, head movements, eye contact, and hand signals. Part of a communication involves understanding each other’s signals. For a natural communication, a car would need to learn a range of signals and how to respond to them. Or do we need to learn a sign language to communicate with a self-driving car?

An interesting aspect is how the communication develops over time. Once pedestrians have learned that self-driving cars always brake for them, does that change their behaviour? Would you, as a pedestrian, be more persistent in your attempts to cross a road? Certainly, self-driving cars are an interesting area of research with manifold aspects to consider, from a technical and from a human-centred side.

Sources:

https://www.theverge.com/2017/3/6/14832354/volkswagen-sedric-self-driving-van-concept-mobility

https://www.theverge.com/2017/1/4/14169960/toyota-concept-i-artificial-intelligence-yui

http://www.jnd.org/dn.mss/the_personality_of_a.html

http://www.jnd.org/dn.mss/chapter_11_turn_sig.html

Nagoya Experiment – drivers are bad followers

October 27, 2016

When we are asked to follow another car at a constant speed, we tend to make small variations in speed rather than driving at a truly constant speed. Those variations can initiate brake reactions in the drivers following us, and then in the drivers following them: a wave-like propagation of the brake reaction that can initiate a traffic jam for no reason. Mathematically, this behaviour can be described similarly to a damper, with two waves influencing a third: one wave describes the acceleration/deceleration of the lead driver, another wave the acceleration/deceleration of the following driver, and both together influence the acceleration/deceleration of the traffic behind those cars (described by the third wave). How easily and quickly that can happen, within a couple of seconds, you can see in the video below. The study behind the video was conducted by Prof. Sugiyama from Nagoya University. Drivers were asked to drive around a circular track at constant speed, following other drivers. Traffic jams occur for no reason after a couple of seconds.
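This kind of stop-and-go wave can be reproduced with very simple car-following models. Below is a minimal sketch of an optimal-velocity model on a ring road, the model family commonly used to explain the Nagoya result; the concrete parameters, units, and the small perturbation are illustrative assumptions of mine, not values from the experiment.

import math

# Minimal optimal-velocity (Bando-type) car-following model on a ring road.
# All numbers below are illustrative assumptions in dimensionless units.
N, L = 22, 50.0                # number of cars, ring length
a, dt, steps = 1.0, 0.1, 5000  # driver sensitivity, time step, iterations

def optimal_velocity(headway):
    # Preferred speed as a function of the gap to the car in front.
    return math.tanh(headway - 2.0) + math.tanh(2.0)

# Cars start almost equally spaced; a tiny perturbation seeds the instability.
x = [i * L / N for i in range(N)]
x[0] += 0.1
v = [optimal_velocity(L / N)] * N

for _ in range(steps):
    headways = [(x[(i + 1) % N] - x[i]) % L for i in range(N)]
    accel = [a * (optimal_velocity(h) - v[i]) for i, h in enumerate(headways)]
    v = [max(0.0, v[i] + accel[i] * dt) for i in range(N)]
    x = [(x[i] + v[i] * dt) % L for i in range(N)]

# A large spread between the slowest and fastest car indicates that a
# stop-and-go wave has formed, even though everyone tried to drive steadily.
print("slowest car: %.2f, fastest car: %.2f" % (min(v), max(v)))

With these assumed parameters the uniform flow is unstable, so the tiny initial offset grows into a backward-travelling jam, mirroring what happens on the ring road in the video.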

Connected car technology might be able to limit the propagation of traffic-jam shockwaves such as the one in the video by informing drivers about variations in traffic seconds before they notice a behaviour change in the car ahead (e.g. Fuchs et al.). Such system feedback might be "too late" to help the driver directly behind the lead vehicle, as the lead car’s system needs time to recognise and communicate the braking event to the other cars, but it could help the driver behind that car. Another variable to consider is how drivers react to such system feedback, and whether the feedback could or should include guidance for the response, e.g. recommending that the driver slow to a certain speed to stay in the flow and avoid braking too harshly. Harsh braking might cause the next shockwave and should be avoided. Research on car-following behaviour is not new, e.g. Ranney discusses car-following models in his paper from 1999, as do Panwai and Dia in 2005; rather, it has been "rediscovered" in the course of highly automated vehicles and connected cars.
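To make the idea of guided responses slightly more concrete, here is a toy sketch of how such a speed suggestion could be computed from connected-car data. The function, its parameters and the numbers are a hypothetical illustration of mine, not the approach of Fuchs et al. or of any real system.

def recommended_speed(own_speed, lead_speed, gap, time_gap=2.0,
                      horizon=5.0, comfort_decel=1.5):
    """Hypothetical advisory: move towards the lead vehicle's speed while
    nudging the gap towards a desired time gap, without suggesting harsher
    deceleration than a comfortable limit. Speeds in m/s, gap in m."""
    desired_gap = time_gap * own_speed             # gap we would like to hold (m)
    correction = (gap - desired_gap) / horizon     # spread the gap correction over `horizon` seconds
    target = lead_speed + correction
    lowest_comfortable = own_speed - comfort_decel * horizon
    return max(0.0, max(target, lowest_comfortable))  # never suggest reversing or harsh braking

# Example: the lead car has dropped to 20 m/s, we drive 25 m/s with a 40 m gap.
print(round(recommended_speed(25.0, 20.0, 40.0), 1), "m/s suggested")

In this example the suggestion (about 18 m/s) is slightly below the lead car’s speed, so the gap is rebuilt gradually instead of through a late, hard brake.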

Another interesting aspect is the personal distance that a driver prefers and keeps to other cars for his/her feeling of safety. It is something individual, depending e.g. on personality and driving style. However, in dense traffic drivers may be forced to keep smaller distances. The driver needs to satisfice, balancing the safety distance he/she wants to keep against keeping up with the traffic; e.g. if the traffic density is high and the driver keeps a longer safety distance to the car in front, that longer gap could be used by other drivers to move in. If the safety distance is reduced, does this influence how drivers react to the braking of a lead vehicle? How would they react if the system provided a suggestion for a certain speed; would they follow the suggestion?

Source:
New Scientist

Fuchs et al.

Ranney
Panwai and Dia

Shower head shows water temperature

October 22, 2016

Everyday Usability 38: coffee break with Don Norman

October 22, 2016

Discussion about conceptual models with Don Norman and Bruce Tognazzini. The video is from 2013, but I just found it on YouTube. You can also find a link to the video on Don’s website: jnd.org

Categories: Everyday usability