Moral Decision Making in Fatal Accident Scenarios with Autonomous Driving Cars

Self-driving, autonomous vehicles appear in conversations everywhere. Part of the discussion concerns the ethical question of how to make moral decisions in a fatal accident scenario. A typical ethical dilemma scenario is one in which the autonomous driving system must decide between two fatal outcomes, with no other option: either crash into a wall, killing all passengers, or crash into a group of pedestrians, killing all pedestrians. The number of people, involved animals, and characteristics of the people involved are varied to analyse differences in the rules for moral decision-making; for example, the people involved can differ in age or weight, or can be obeying traffic rules or not. There is no right choice in such a scenario; rather, the decision shows which outcome appears less adverse based on the person's internal moral judgements. Those internal moral judgements are driven by various factors, e.g. as represented in the model by Bommer, M., Gratto, C., Gravander, J., and Tuttle, M. (1987).

You can think about ethical dilemmas and learn about your own moral choices on an online platform developed at MIT. The platform presents users with ethical dilemmas and analyses the user's decision-making. Have a go and try it yourself: http://moralmachine.mit.edu/hl/de

The characteristics chosen for the scenarios are interesting. No statistics on the results have been published yet.

The scenarios simplify the outcome (death in all options), the available options, and the fatality risk of the two choices of killing either the passengers of the car or the pedestrians. Predicting the course of such a moral dilemma in a real situation is hard (and partly impossible), not least because it depends on the decisions of other traffic participants (source). It also needs to be considered that neither a human passenger nor a machine might process all relevant variables in the situation: humans can only process a certain amount of information in a given time, and the autonomous car might not accurately know the number of its passengers or the age of the pedestrians.

Even once the decision is made, variables can lead to a result different from the intended one. For example, when the passenger or autonomous driving car decides to crash into a wall rather than a group of school children, it might lose control during the steering and braking manoeuvre due to an oily patch on the road and end up steering into the group of children anyway. From our own experience we can recall everyday situations where we expected a certain outcome and acted accordingly, but the situation developed differently than expected. Whereas the course of events in such a situation remains a gamble in real life, there are variables that can be influenced beforehand. One of those factors is the design of the autonomous driving car, which can be enhanced to protect both its passengers and pedestrians, and so reduce the risk of the dire outcomes of ethical dilemma situations (source, source).
