
Archive for the ‘Human Factors history and studies’ Category

Moral Decision Making in Fatal Accident Scenarios with Autonomous Driving Cars

Self-driving (autonomous) vehicles appear in conversations everywhere. Part of the discussion concerns the ethical question of how to make moral decisions in a fatal accident scenario. A typical dilemma scenario is that the autonomous driving system has to decide between two fatal outcomes with no other option: either crash into a wall, killing all passengers, or crash into a group of pedestrians, killing all pedestrians. The number of people, the involvement of animals and the characteristics of the people involved are varied to analyse different rules for moral decision-making, e.g. the people involved can differ in age or weight, or obey traffic rules or not. There is no right choice in such a scenario; rather, the decision shows what appears less adverse based on the person’s internal moral judgements. Those internal moral judgements are driven by various factors, e.g. as represented in the model by Bommer, M., Gratto, C., Gravander, J., and Tuttle, M. (1987).

You can think through such ethical dilemmas and learn about your own moral choices on an online platform developed at MIT. The platform presents users with ethical dilemmas and analyses their decision-making. Have a go and try it yourself: http://moralmachine.mit.edu/hl/de

The characteristics chosen for the scenarios are interesting. No statistics on the results have been published yet.

The scenarios simplify the outcome (death in all options), the available options, and the fatality risk of killing either the car’s passengers or the pedestrians. Predicting such a moral dilemma in a real situation is hard (and partly impossible), including the decisions of other traffic participants (source). It also needs to be considered that neither a human passenger nor the machine may be able to process all relevant variables in the situation: humans can only process a certain amount of information in a given time, and the autonomous car might not accurately know the number of its passengers or the age of the pedestrians. Even once the decision is made, variables can lead to a different result than the intended one. For example, if the passenger or the autonomous car decides to crash into a wall and not into a group of school children, it might lose control during the steering and braking manoeuvre due to an oily patch on the road and end up steering into the group of children after all. From our own experience we can remember everyday situations where we expected a certain outcome and acted accordingly, but the situation developed differently than expected. While the course of events in such a situation remains a gamble in real life, there are variables that can be influenced beforehand. One of those factors is the design of the autonomous car itself, which can be enhanced to protect both its passengers and pedestrians and so reduce the risk of the dire outcomes of such ethical dilemma situations (source, source).
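To make the role of this uncertainty concrete, here is a minimal sketch (my own illustration, not taken from any cited source) that compares the two options by expected fatalities when the wall manoeuvre can fail, for instance on the oily patch described above. All probabilities and casualty numbers are hypothetical.

```python
# Minimal illustrative sketch: comparing two manoeuvres by expected fatalities
# when the outcome of a manoeuvre is uncertain. All numbers are hypothetical.

def expected_fatalities(p_success: float, if_success: int, if_failure: int) -> float:
    """Expected number of fatalities for one manoeuvre."""
    return p_success * if_success + (1 - p_success) * if_failure

# Option A: steer into the wall (2 passengers die if it succeeds; if the car
# loses control, it hits the group of 5 pedestrians as well).
option_wall = expected_fatalities(p_success=0.8, if_success=2, if_failure=7)

# Option B: keep course towards the pedestrians (outcome assumed certain here).
option_straight = expected_fatalities(p_success=1.0, if_success=5, if_failure=5)

print(f"Expected fatalities, wall manoeuvre: {option_wall:.1f}")      # 3.0
print(f"Expected fatalities, keep course:    {option_straight:.1f}")  # 5.0
```

Such a calculation is only as good as its inputs; as argued above, neither a human nor the car may know these probabilities or numbers reliably in the moment.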

How well do you think you remember a well-known logo?

March 13, 2015

There are a number of brands we see every day, just think of Apple, Coca-Cola, BMW and Audi. How well do you think you remember their logos? Does it sound like an easy task?
Companies invest a lot of money in an easily perceivable and memorable logo and aim to spread it into the last corner of the world. Regular exposure to the company’s logo keeps it in the memory of potential customers. Having the logo in memory increases the chance that potential customers buy the company’s product instead of a competing one when they need such a product. This appears to happen even for seldom-needed products – such as a car.

UCLA (University of California, Los Angeles) conducted a study asking people to recall the Apple logo. As it turned out, the Apple logo was not as memorable as one might think. 85 students between 18 and 35 years of age took part in the study; 52 of them were Apple users. Half of the participants did not recognize the correct logo among a set of 8 distractor logos (see the example below). Interestingly, the recognition rate was only marginally better for Apple users than for other PC users.

However, when asked, we seem to be confident about the accuracy of our memory. Overconfidence in one’s own memory accuracy was especially high if participants were asked to rate their memory accuracy before the memory task. When participants were asked to do the memory task first, it made them more aware of the task’s complexity and reduced their overconfidence in their own memory.

Source: Blake, A., Nazarian, M. & Castel, A. "Rapid communication - The Apple of the mind’s eye" http://dx.doi.org/10.1080/17470218.2014.1002798


Despite a huge capacity for visual memory and long-term retention of visual data, we seem to have poor memory for details. The researchers point out that the Apple logo is overrepresented in our daily lives. In reaction to this overrepresentation we pay less attention to it, generalizing the logo’s form and not attending to its details. Naturally, if something appears frequently it is not necessary to memorize it in detail. So we memorize the logo as the generalized form of an apple and add details as we think Apple’s apple should look. Maybe memory is also influenced by the need for detail: I kept thinking of a similar logo that Apple’s apple could potentially be confused with, but I cannot recall anything similar. Maybe people would have shown better recognition of details if they were required to distinguish the Apple logo from something very similar.

If you want to try the test yourself: http://gnodevel.ugent.be/memory-logo/

Source: Memory of the Apple Logo – research conducted at the University of California, Los Angeles

Early “graphic pen” in Sketchpad demo from 1963

October 14, 2014

Already back then they had a, well, kind of graphic pen to draw on the display. Amazing.

 

History of ergonomics – time and motion studies by Frank Gilbreth

September 30, 2014

The field known today as Human Factors Engineering (sometimes called Ergonomics) has a long history. It started out from efficiency: making tools more suitable for people so that they could work more efficiently and productively. The studies used for this are known as time-and-motion studies. Probably the earliest ones were conducted by Taylor, but Taylor was mainly focusing on work rates and worker motivation with the purpose of reducing process time.

The study conducted by Frank B. Gilbreth in 1911 is likely the first one that focused on the relationship between human, environment and tools in order to make work more efficient by reducing motions. One of his famous time-and-motion studies concerned bricklaying. Besides this he was also known for his systematic management style. To conduct the motion studies he coded the work-related movements and actions of the worker into 18 basic motions (http://en.wikipedia.org/wiki/Therblig), which he called “Therbligs” (Gilbreth spelled backwards). During a motion study the therbligs are written down and analysed for optimisation potential such as unnecessary movements.

In the bricklaying study he discovered that the bricklayer had to bend down for each brick, and in some cases a second time to get a bit of water for the brick. An additional effort was that not every brick could be used: if it was of poor quality the worker had to discard it and bend down again. The study resulted in the invention of a height-adjustable scaffold on which the worker arranged a heap of bricks and the mortar. The scaffold could be quickly raised or lowered, enabling bricklayers to work in the most convenient position and to keep up with the height of the wall. By making the motions of handling and inspecting the bricks more efficient, Gilbreth enabled bricklayers to increase the number of bricks they could lay from 120 to 350 per man per hour.

Gilbreth enhanced the method developed by Taylor with what he called micro-motion analysis: the worker’s actions are recorded on film with a chronometer visible in the frame. The film is then analysed with a magnifying glass to discover every motion the worker makes and to suggest improvements based on the length and sequence of those motions.

The studies described here were the first to consider fatigue resulting from a job, and the first to reflect the idea that providing an adequate work environment and regular breaks makes the process more efficient in the end.

However, the important component of cognition had to wait until World War 2 before it was recognised as important and integrated into the field of human factors engineering. Cognition gained importance with the growing application of automation, which specifically took over the heavy physical work from the operator. What remains for the operator are mainly information perception and decision-making tasks. Leading questions were, for example: How much information can be absorbed? How many oxygen mask sizes are required to fit all men? This was also the point where the field of human factors engineering merged with psychology, because the questions could no longer be answered by engineers alone – they needed psychologists, physiologists and physicians.

Sources:

Wikipedia

On the MIT website there is a paper: http://web.mit.edu/allanmc/www/TheGilbreths.pdf

Frank B. Gilbreth’s book about bricklaying is freely available online: http://web.mit.edu/allanmc/www/TheGilbreths.pdf

Hawthorne effect

The Hawthorne effect is also referred to as the observer effect. It was first described by Roethlisberger and Dickson in 1939. They conducted a series of studies at Western Electric’s Hawthorne plant near Chicago to examine how to improve working conditions for increased productivity. The employees concerned worked on an assembly line.

Roethlisberger and Dickson started with a measurement in the baseline condition: they measured the employees’ productivity without changing the working conditions. They then planned to make one change to the work environment at a time and measure again, to compare whether and how productivity changed. First they increased the lighting and found that productivity improved. Then they increased the lighting further and again found that productivity improved. Then they set the lighting back to its original level and, surprisingly, found that productivity improved once more.

The cause of the improvement was none of the changes they made to the work environment. Productivity increased simply because the employees felt valued and respected when asked about their work, and because they knew that every change was made to find the best working conditions, which made them work harder.

For their experiment it really helped that they used an ABA test design. Had they stopped after an AB test (usual working condition followed by the condition with increased lighting), they might have drawn the conclusion that increased lighting increases productivity. Setting the lighting back to its original level and measuring again is what revealed the effect.
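As a toy numerical illustration of this reasoning (the numbers are my own invention, not data from the study), the sketch below shows how an AB comparison alone can mislead, while the return to baseline exposes the effect.

```python
# Hypothetical productivity values (units per hour), invented purely to
# illustrate the ABA reasoning; they are not data from the Hawthorne studies.
baseline = 100        # A: original lighting
brighter = 110        # B: increased lighting
baseline_again = 112  # A: lighting set back to the original level

# Looking only at A -> B, the lighting change seems to explain the improvement.
print("AB view, lighting helped?", brighter > baseline)  # True

# The second A phase shows productivity stayed high without the change,
# pointing to the observation itself as the cause.
print("ABA view, lighting helped?", brighter > baseline and baseline_again < brighter)  # False
```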

The effect was then named after the Hawthorne plant: the Hawthorne effect.

Their study is also described in their book “Management and the Worker”, which you can find as a Google Book online.

However, the effect has not gone uncriticised. Some researchers argue that it is not the awareness of being observed itself that causes a change, but the subjects’ interpretation of the situation: if the experimental condition aligns with the participants’ goals, it can change their behaviour. The desire to “please” the experimenter can also be influential.

Miscited psychology concepts – Yerkes-Dodson law

When I wanted to write an article about the well-known workload scale I came across this interesting story. Usability engineers know and work with the workload scale: task performance has an ideal level of workload. If the workload is higher than the ideal level, the operator compensates by applying more resources. If the workload rises further, the operator can no longer compensate, becomes overloaded, and performance declines. In the other direction, if workload is below the ideal level, the operator has to apply additional resources to keep attention on the task. If workload declines further, attention can no longer be kept at a sufficient level and performance declines. Some literature cites the Yerkes-Dodson law as the origin of this relationship. Let us have a look at what Yerkes and Dodson actually did in their study from 1908.
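To make the shape of this relationship concrete, here is a toy inverted-U model (my own sketch, not taken from the workload literature); the ideal workload level and the width of the curve are made-up parameters.

```python
import math

def performance(workload: float, ideal: float = 0.5, width: float = 0.2) -> float:
    """Toy inverted-U model: performance peaks at the ideal workload level and
    falls off towards underload and overload. The parameters are hypothetical
    and only illustrate the shape of the relationship."""
    return math.exp(-((workload - ideal) ** 2) / (2 * width ** 2))

for w in (0.1, 0.3, 0.5, 0.7, 0.9):  # from underload to overload
    print(f"workload {w:.1f} -> performance {performance(w):.2f}")
```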

The study conducted by Yerkes and Dodson (original paper) evaluated the relation between stimulus strength and habit formation, where habit formation is seen as a type of learning. The experiment was conducted with a set of mice. The mice’s task was to distinguish between a black and a white box and learn to choose the white box; choosing the black box resulted in an electric shock. In the first experiment they presented the electric stimulus at different strengths and assumed that the rate of failures (choices of the black box) would decline with stimulus strength. But the results revealed a rise in failures in the setting with the strongest electric stimulus. So they wondered whether this result could be related to the difficulty of the task (the discrepancy between the black and the white box). To test this, they conducted two further experiments varying how easily the black and white boxes could be distinguished. Indeed, the results showed that for the easiest condition the failure rate declined with rising stimulus strength.

Besides the miscitation in the literature, the study itself has its drawbacks: the sample of mice was very small, the difference between the black and the white box was not measured with a photometer, the task was a comparatively simple perceptual one, and the analysis of the results could be more rigorous; they appear to be mixed across the experimental settings.

Given the study, one would expect it to be cited in the context of stimulus strength and habit formation, or perhaps of a behavioural learning theory. But the study has been cited in the literature as the basis for performance, workload and (physiological) arousal, concepts that were not part of the original experimental setting. A citation in relation to the workload scale also ignores their second finding: if the task is easy, the failure rate declines with increasing stimulus strength. A follow-up question would be: what happens if the task is even easier? Are those effects applicable to humans, with their wider range of abilities and more flexible learning? Furthermore, it is not clear whether learning induced by a negative stimulus, as applied in the experimental setting, is comparable to higher workload, which also means a higher stimulus (rate / variance) but not a negative one… there are many questions rising in my head from this experiment. Well, somehow the misciting started…

A good review of the miscitations is this paper by Karl Halvor Teigen, link. Sometimes it is very hard to get hold of the original paper, or hard to read all the literature on a topic, and if something like this theory is cited so often in psychology books it is simply believed to be true. However, a critical eye is an advantage.

Origin of ergonomics

Wojciech Jastrzębowski (1799–1882) is credited with the oldest known definition of the word ergonomics and is therefore also seen as a founding father of ergonomics. The definition was published in a philosophical narrative in 1857 (An Outline of Ergonomics, or The Science of Work Based upon the Truths Drawn from the Science of Nature). The word ergonomics is formed from two Greek words: ergon = work and nomos = rule / law.
A rough translation of the definition: “Ergonomics is a scientific method so that we are able to gain the most out of this life, with the least effort and the greatest fulfilment for our own and for the public welfare.”
For most of his life, it seems, he concentrated on chemistry and botany. His biographies describe him as scientific but practical, putting everything in his research to practical use. In doing so he must have come up with the idea of not fitting the human to the work, but fitting the work to the human.