Moving to a new website

September 13, 2019

Dear reader, this blog will move to a new website. Please go there for revised posts and new posts:


Implementation of Rapid Serial Presentation Tasks: RSVP, RSAP and a tactile task RSTP

January 12, 2018

Recently I needed three highly attention-capturing tasks for a research project. All are similar detection tasks: a stream of signals is presented to the participant, who has to react to a defined subset of signals by either tapping on the screen of a tablet or clicking the mouse (whichever you decide to use for presenting the tasks). The tasks each capture attention in a different sensory modality: auditory, visual, and tactile. I implemented the tasks OS-independently in JavaScript, HTML, and the Node.js framework. You can find the tasks for download on GitHub:

The visual and the auditory task present a series of rapidly changing numbers and letters. In the literature these tasks are called Rapid Serial Visual Presentation (RSVP) and Rapid Serial Auditory Presentation (RSAP) tasks. Each letter or number appears for a predefined timeframe. When this timeframe elapses, it disappears and a blank screen is presented for another predefined timeframe. Thereafter, the next letter or number appears. Whenever a number (target) is shown, the participant should tap on the screen or click the mouse (depending on the device you use to present the tasks). The tactile task was designed with similar characteristics. For this task, the JavaScript communicates with two motors controlled by an Arduino. The participant holds one motor in the left hand and one in the right hand. During the course of the task the activity of the motors changes: only the left vibrates, only the right vibrates, both vibrate, or neither vibrates. Whenever both motors vibrate (target), the participant should tap on the screen or click the mouse.

To start the visual or the auditory task, open the task's html file. You will see a website in which you can specify the settings for the task: duration of the task, duration for which a cue (letter/digit) is presented, duration of the blank screen, number of targets (digits), and number of letters between targets. When you click start, the next page opens, showing a screen with an underscore. Now you can get the participant ready. When you press the start button, the task will begin after 5 seconds.
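To illustrate, a stream with these settings could be generated and presented roughly as follows. This is a minimal sketch with names of my own choosing, not the repository's actual code; in the browser, show would set the text of a DOM element:

```javascript
// Build a stimulus stream: digits are the targets, separated by a
// fixed number of letter distractors.
function buildStream(numTargets, lettersBetween) {
  const letters = "ABCDEFGHJKLMNPRSTUVWXYZ"; // I, O, Q left out as they resemble digits (my assumption)
  const digits = "23456789";
  const pick = s => s[Math.floor(Math.random() * s.length)];
  const stream = [];
  for (let t = 0; t < numTargets; t++) {
    for (let i = 0; i < lettersBetween; i++) stream.push(pick(letters));
    stream.push(pick(digits)); // target
  }
  return stream;
}

// Present each cue for cueMs, then a blank screen for blankMs, via a
// callback that draws the cue.
function presentStream(stream, cueMs, blankMs, show) {
  let i = 0;
  (function next() {
    if (i >= stream.length) return;
    show(stream[i++]);          // draw the cue
    setTimeout(() => {
      show("");                 // blank screen
      setTimeout(next, blankMs);
    }, cueMs);
  })();
}

console.log(buildStream(3, 5).join(" "));
```

For the auditory task, show would play a sound instead of drawing a character; the timing logic stays the same.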

For the tactile task you need to set up the hardware (an Arduino and two vibrating motors, e.g. LilyPad) and you need to install Node.js and the Johnny-Five framework. To let the Arduino communicate with the JavaScript server you might need to install StandardFirmata on the Arduino. To start the tactile task, first start the JavaScript file from the command line: “node PilotTTask.js”. This starts the JavaScript server. Then open your browser and enter the URL “localhost”. You should now see a website to define the settings for the tactile task, similar to the RSVP and RSAP tasks.
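A minimal sketch of how the sequence of motor states could be generated (function and state names are my own, not the repository's actual code). The mapping to the hardware via Johnny-Five is indicated in comments only, since it needs a connected Arduino:

```javascript
// Build a sequence of motor states; "both" is the target state the
// participant must respond to.
function buildTactileStream(numTargets, nonTargetsBetween) {
  const nonTargets = ["left", "right", "none"];
  const pick = arr => arr[Math.floor(Math.random() * arr.length)];
  const stream = [];
  for (let t = 0; t < numTargets; t++) {
    for (let i = 0; i < nonTargetsBetween; i++) stream.push(pick(nonTargets));
    stream.push("both"); // target: both motors vibrate
  }
  return stream;
}

console.log(buildTactileStream(2, 3).join(", "));

// Driving the motors from such a state could look roughly like this with
// Johnny-Five (pin numbers are assumptions, and a board running
// StandardFirmata must be connected):
//   const { Board, Motor } = require("johnny-five");
//   new Board().on("ready", () => {
//     const left = new Motor(3), right = new Motor(5);
//     const apply = state => {
//       if (state === "left" || state === "both") left.start(); else left.stop();
//       if (state === "right" || state === "both") right.start(); else right.stop();
//     };
//   });
```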

Moral Decision Making in Fatal Accident Scenarios with Autonomous Driving Cars

Self-driving, autonomous vehicles appear in conversations everywhere. Part of the discussion concerns ethical aspects of how to make moral decisions in a fatal accident scenario. A typical ethical dilemma scenario is that the autonomous driving system needs to decide between two fatalities, with no other option: either crash into a wall, resulting in all passengers being dead, or crash into a group of pedestrians, resulting in all pedestrians being dead. The number of people, the animals involved, and the characteristics of the people involved vary in order to analyse differences in the rules for moral decision-making; e.g. the people involved can differ in age or weight, or may or may not obey traffic rules. There is no right choice in such a scenario; rather, the decision shows what appears less adverse based on the person’s internal moral judgements. Those internal moral judgements are driven by various factors, e.g. as represented in the model from Bommer, M., Gratto, C., Gravander, J., and Tuttle, M. (1987).

You can think about ethical dilemmas and learn about your moral choices on an online platform developed at MIT. The platform presents users with ethical dilemmas and analyses the user’s decision-making. Have a go and try it yourself:

The characteristics chosen in the scenarios are interesting. No statistics on the results have been published yet.

The scenarios simplify the outcome (death in all options), the options themselves, and the fatality risk of the two choices of killing either the car’s passengers or the pedestrians. Predicting such a moral dilemma in a real situation is hard (and partly impossible), including the decisions of other traffic participants (source). It needs to be considered that neither a human passenger nor a machine might process all relevant variables in the situation; e.g. humans can only process a certain amount of information in a given time, and the autonomous car might not accurately know the number of its passengers or the age of the pedestrians. Even once the decision is made, we can think of variables that can lead to a result different from the intended one. For example, when the passenger/autonomous driving car decides to crash into a wall and not a group of school children, it might lose control during the steering and braking manoeuvre due to an oily patch on the road and end up steering into the group of children. From our own experience we can remember everyday situations where we expected a certain outcome, acted accordingly, but the situation developed differently than expected. Whereas the course of events in such a situation remains a gamble in real life, there are variables that can be influenced beforehand. One of those factors is the design of the autonomous driving car, which can be enhanced to protect its passengers and pedestrians, and so reduce the risk of the dire outcomes of ethical dilemma situations (source, source).

Self-driving cars

It’s been a while since I last posted an article on this blog; here we go. Recently I came across an article about self-driving cars. It is a topic that is hard to avoid lately. It appears that every automobile manufacturer is investing in research on self-driving cars. The majority of research concerns the communication between “driver” and car, and ethical considerations of how to handle safety-critical situations. Toyota recently proposed something like a personal assistant in the car that is meant to communicate with the driver (Source). The communication between a self-driving car and pedestrians or other traffic participants is, at the moment, on a sidetrack of research.

However, this communication is an important part of the driving task in city environments. A driver communicates with other traffic participants to clarify each other’s intentions and negotiate a safe passage for everybody. For example, a pedestrian might wish to cross a road without a traffic light if there is a traffic jam or heavy but slow-flowing traffic on the road. To ensure he/she can pass the road safely, the pedestrian communicates with the driver in front of whose car he/she wants to cross the road. The pedestrian observes the driver’s face and ensures that the driver is looking at him/her. The driver, in response, indicates that he/she has seen the pedestrian with a smile, a nod or a hand signal, and slows down the car or increases the distance to the car in front. Then the pedestrian crosses the road safely.

With a self-driving car this communication is lost. There is no driver to communicate with. What does a self-driving car need to handle such a situation? A solution would be to leave the decision about the next action up to the pedestrian and let the car handle the situation with its pedestrian recognition and emergency brake system, but that might be unpleasant for the self-driving car’s passengers, and it would leave the pedestrian uncertain whether the car will stop. So communication would resolve the uncertainty and reduce the risk in that situation. To establish communication between a self-driving car and a pedestrian, the car would need the ability to recognise that someone wants to communicate with it, e.g. recognising the pedestrian who wants to cross the road. At best, the car would then have a set of signals that helps the pedestrian see its intentions. The car can then interpret the pedestrian’s answer and adjust its behaviour, e.g. slow down to allow the pedestrian a safe passage.

The self-driving concept car from Volkswagen (VW) has an interesting design that pushes towards natural communication. The designers have taken Don Norman’s quote, that turn signals of cars are their facial expressions, literally. The concept car’s face could be used to communicate with pedestrians in a natural way: signalling that it has recognised the pedestrian (looking at him/her), nodding in response to the pedestrian’s request to cross the road (moving its eyes up and down), indicating where the car is going (e.g. movement of its eyes) and whether it is stopping (eyes looking forward and a red brake light).

A difficulty could be that the car’s face appears to be visible from the front only. In a traditional car, it is possible to see through the windows where the driver is looking, at least from both sides. Other signals that help to predict a driver’s intentions are the indicators, the brake lights, and the driver’s reflection in the mirror. It is easy to implement indicators and brake lights in a self-driving car; it is more challenging to design the car so that its behaviour is clearly visible from the sides. Perhaps the car’s eyes could be implemented on the front and the sides, like the indicators of a traditional car. Another challenge lies in the communication itself. Pedestrians can choose from a range of signals to communicate, e.g. a smile, head movement, eye contact, and hand signals. Part of a communication involves understanding each other’s signals. For natural communication a car would need to learn a range of signals and how to respond to them. Or do we need to learn a sign language to communicate with a self-driving car?

An interesting aspect is how the communication evolves over time. Once pedestrians have learned that self-driving cars always brake for them, does that change their behaviour? Would you, as a pedestrian, be more persistent in your attempts to cross a road? Certainly, self-driving cars are an interesting area of research, with manifold aspects to consider from a technical and from a human-centred side.


Nagoya Experiment – drivers are bad followers

October 27, 2016

When we are asked to follow another car at a constant speed, we tend to make little variations in speed rather than driving at a constant speed. Those variations can trigger brake reactions in the drivers following us, and then in the drivers following them: a wavelike propagation of the brake reaction that can initiate a traffic jam for no reason. Mathematically, this behaviour can be described similarly to a damper, with two waves influencing a third wave. One wave describes the acceleration/deceleration of the lead driver, another the acceleration/deceleration of the following driver, and both waves influence the acceleration/deceleration of the traffic behind those cars (described by the third wave). The video below shows how easily and quickly, within a couple of seconds, this can happen. The study behind the video was conducted by Prof. Sugiyama from Nagoya University. Drivers were asked to drive around a circular track at constant speed, following other drivers. Traffic jams occurred for no reason after a couple of seconds.
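The wave description above can be illustrated with a small simulation. This is my own minimal sketch, not Prof. Sugiyama's model, and all parameter values are assumptions: five cars drive in a line, the leader brakes briefly, and the speed dip travels backwards through the platoon, arriving later for each following car.

```javascript
// Minimal platoon simulation: each follower relaxes its speed towards a
// gap-dependent desired speed; a brief braking phase of the leader
// propagates backwards as a shockwave.
function simulatePlatoon({ cars = 5, dt = 0.1, steps = 250 } = {}) {
  const spacing = 25;  // initial gap in metres (assumed)
  const minGap = 5;    // standstill gap in metres (assumed)
  const a = 1.0;       // driver sensitivity in 1/s (assumed)
  const x = [], v = [];
  for (let i = 0; i < cars; i++) { x.push(-i * spacing); v.push(20); }
  const minV = v.map(() => Infinity); // lowest speed each car reaches
  const tMin = v.map(() => 0);        // time at which it reaches it

  for (let step = 0; step < steps; step++) {
    const t = step * dt;
    // Leader drives 20 m/s, except for a 2 s braking phase.
    v[0] = (t >= 5 && t < 7) ? 10 : 20;
    x[0] += v[0] * dt;
    if (v[0] < minV[0]) { minV[0] = v[0]; tMin[0] = t; }
    for (let i = 1; i < cars; i++) {
      const gap = x[i - 1] - x[i];
      const desired = Math.max(0, gap - minGap); // larger gap, higher desired speed
      v[i] += a * (desired - v[i]) * dt;         // relax towards desired speed
      x[i] += v[i] * dt;
      if (v[i] < minV[i]) { minV[i] = v[i]; tMin[i] = t; }
    }
  }
  return { minV, tMin };
}

const { minV, tMin } = simulatePlatoon();
console.log("minimum speeds:", minV.map(s => s.toFixed(1)));
console.log("reached at t =", tMin);
```

With these assumed parameters the dip does not die out down the platoon, which matches the experiment: small variations of the lead car grow into a jam behind it.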

Connected car technology might be able to limit the propagation of traffic jam shockwaves such as in the video by informing drivers about variations in traffic seconds before they notice a behaviour change in their lead cars (e.g. Fuchs et al.). Such system feedback might be “too late” to help drivers driving directly behind the lead vehicle, as the lead car’s system needs time to recognise and communicate the braking event to the other cars, but it could help the driver behind that car. Another variable to consider is how drivers react to such system feedback, and whether the feedback could/should include guidance for the response, e.g. recommending that the driver decrease to a certain speed to keep in flow and avoid braking too harshly. Harsh braking might trigger the next shockwave and should be avoided. Research into car-following behaviour is not new (e.g. Ranney discusses car-following models in his paper from 1999, as do Panwai et al. in 2005); rather, it has been “rediscovered” in the course of highly automated vehicles and connected cars.

Another interesting aspect is the personal distance that a driver prefers and keeps to other cars for his/her feeling of safety. It is individual, depending e.g. on personality and driving style. However, in dense traffic drivers may be forced to keep smaller distances. The driver needs to balance the safety distance he/she wants to keep against moving with the traffic; e.g. if the traffic density is high and the driver keeps a longer safety distance to the car in front, this longer gap could be used by other drivers to move in. If the safety distance is reduced, does this influence how drivers react to the braking of a lead vehicle? How would they react if the system provides a suggestion for a certain speed; would they follow the suggestion?

New Scientist

Fuchs et al.

Panwai and Dia

Shower head shows water temperature

October 22, 2016

A pen that can draw conductivity

September 8, 2016

The Japanese start-up company Kandenko invented a pen that can draw conductive traces. Watch the impressive video of what the pen can do. The pen uses a silver ink which seems to harden when it is drawn on a surface. So the drawing is actually 3D, and lines drawn with the pen can be picked up as a thin solid layer. When the drawing is picked up, it looks a bit like a children’s pop-up book.

The pen is already available as a product on the Japanese Amazon website.