Moving to a new website

September 13, 2019

Dear reader, this blog will move to a new website. Please go there for revised posts and new posts:

Categories: Common

Implementation of Rapid Serial Presentation Tasks: RSVP, RSAP and a tactile task RSTP

January 12, 2018

Recently I needed three highly attention-capturing tasks for a research project. All are similar detection tasks: a stream of signals is presented to the participant, who has to react to a defined subset of signals by either tapping on the screen of a tablet or clicking the mouse (whichever you decide to use for presentation of the tasks). The tasks each capture attention in a different sensory modality: auditory, visual, and tactile. I implemented the tasks OS-independently in JavaScript, HTML, and the Node.js framework. You can find the tasks for download on GitHub:

The visual and the auditory task present a series of rapidly changing numbers and letters. In the literature, these tasks are called Rapid Serial Visual Presentation (RSVP) and Rapid Serial Auditory Presentation (RSAP) tasks. Each letter or number appears for a predefined timeframe. After this timeframe has elapsed, it disappears and a blank screen is presented, again for a predefined timeframe. Thereafter, the next letter or number appears. Whenever a number is shown (the target), the participant should tap on the screen or click the mouse (depending on the device you use to present the tasks). The tactile task was designed with similar characteristics. For this task, the JavaScript code communicates with two motors controlled by an Arduino. The participant holds one motor in the left hand and one in the right hand. Over the course of the task, the activity of the motors changes: only the left vibrates, only the right vibrates, both vibrate, or neither vibrates. Whenever both motors vibrate (the target), the participant should tap on the screen or click the mouse.

To start the visual or the auditory task, open the desired HTML file of the task. You will see a page in which you can specify the settings for the task: duration of the task, duration for which a cue (letter/digit) is presented, duration of the blank screen, number of targets (digits), and number of letters between the targets. When you click start, the next page opens, showing a screen with an underscore. Now you can get the participant ready. When you press the start button, the task will start after 5 seconds.
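The core of such a task is quite compact: build a stimulus stream from the settings, then schedule cue and blank presentations. The following is a minimal sketch of that idea; function and parameter names are my own, not taken from the actual repository:

```javascript
// Build an RSVP/RSAP stimulus stream: letters as distractors, digits as targets.
// Names and pools are illustrative, not those used in the repository.
function buildStream(numTargets, lettersBetweenTargets) {
  const letters = "ABCDEFGHJKLMNPQRSTUVWXYZ"; // distractor pool
  const digits = "23456789";                  // target pool
  const stream = [];
  for (let t = 0; t < numTargets; t++) {
    for (let i = 0; i < lettersBetweenTargets; i++) {
      stream.push({ cue: letters[Math.floor(Math.random() * letters.length)], target: false });
    }
    stream.push({ cue: digits[Math.floor(Math.random() * digits.length)], target: true });
  }
  return stream;
}

// Present the stream: show each cue for cueMs, then a blank for blankMs.
// `show` would update the page (or play a sound for the auditory task).
function presentStream(stream, cueMs, blankMs, show) {
  let delay = 0;
  for (const item of stream) {
    setTimeout(() => show(item.cue), delay);
    setTimeout(() => show("_"), delay + cueMs); // blank between cues
    delay += cueMs + blankMs;
  }
}
```

With, e.g., `buildStream(5, 10)` you get a stream of 55 cues containing exactly 5 targets, which `presentStream` then plays back at the configured pace.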

For the tactile task you need to set up the hardware (an Arduino and two vibrating motors, e.g. LilyPad), and you need to install the Node.js framework and the Johnny-Five framework. To enable the Arduino to communicate with the JavaScript server, you might need to install StandardFirmata on the Arduino. To start the tactile task, first start the JavaScript file from the command line: “node PilotTTask.js”. This starts the JavaScript server. Then open your browser and type in the URL “localhost”. You should now see a page for defining the settings of the tactile task, similar to the RSVP and RSAP tasks.
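The tactile task follows the same pattern, only with vibration states instead of letters and digits. A minimal sketch of the sequence logic (names are my own, not from the repository; the Johnny-Five hardware side is only indicated in comments, since it needs a connected Arduino running StandardFirmata):

```javascript
// Build a sequence of vibration states for the tactile (RSTP) task.
// "both" is the target state; the participant should tap/click on it.
const STATES = ["left", "right", "both", "none"];

function buildTactileSequence(numTargets, nonTargetsBetween) {
  const nonTargets = STATES.filter(s => s !== "both");
  const seq = [];
  for (let t = 0; t < numTargets; t++) {
    for (let i = 0; i < nonTargetsBetween; i++) {
      seq.push(nonTargets[Math.floor(Math.random() * nonTargets.length)]);
    }
    seq.push("both"); // target: both motors vibrate
  }
  return seq;
}

// On the hardware side, each state maps to the two motors roughly like this
// with Johnny-Five (pin numbers are an assumption):
//
//   const five = require("johnny-five");
//   const board = new five.Board();
//   board.on("ready", () => {
//     const left = new five.Motor(9), right = new five.Motor(10);
//     // "left"  -> left.start(); right.stop();
//     // "right" -> left.stop();  right.start();
//     // "both"  -> left.start(); right.start();
//     // "none"  -> left.stop();  right.stop();
//   });
```

The pure sequence logic can be developed and tested without any hardware attached; only the commented Johnny-Five part needs the Arduino.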

Moral Decision Making in Fatal Accident Scenarios with Autonomous Driving Cars

Self-driving, autonomous vehicles appear in conversations everywhere. Part of the discussion concerns ethical aspects of how to make moral decisions in a fatal accident scenario. A typical ethical dilemma scenario is that the autonomous driving system needs to decide between two fatalities, with no other option: either crash into a wall, resulting in all passengers being dead, or crash into a group of pedestrians, resulting in all pedestrians being dead. The number of people, the animals involved, and the characteristics of the people involved vary across scenarios to analyse different rules for moral decision-making; e.g., the people involved can differ in age or weight, or may or may not obey traffic rules. There is no right choice in such a scenario; rather, the decision shows what appears less adverse based on the person’s internal moral judgements. Those internal moral judgements are driven by various factors, e.g. as represented in the model from Bommer, M., Gratto, C., Gravander, J., and Tuttle, M. (1987).

You can think about ethical dilemmas and learn about your own moral choices on an online platform developed at MIT. The platform presents users with ethical dilemmas and analyses the user’s decision-making. Have a go and try it yourself:

The chosen characteristics in the scenarios are interesting. There are no statistics on the results published yet.

The scenarios simplify the outcome (death in all options), the options themselves, and the fatality risk of the two choices of killing either the passengers of a car or the pedestrians. Predicting the course of such a moral dilemma in a real situation is hard (and partly impossible), including the decisions of other traffic participants (source). It needs to be considered that neither human passenger nor machine might process all relevant variables in the situation; e.g., humans can only process a certain amount of information in a given time, and the autonomous car might not accurately know the number of its passengers or the age of the pedestrians. Even once the decision is made, we can think of variables that can lead to a different result than the intended one. For example, when the passenger/autonomous driving car decides to crash into a wall rather than a group of school children, it might lose control during the steering and braking manoeuvre due to an oily patch on the road and end up steering into the group of children. From our own experience we can remember everyday situations where we expected a certain outcome, acted accordingly, but the situation developed differently than expected. Whereas the course of events in such a situation remains a gamble in real life, there are variables that can be influenced beforehand. One of those factors is the design of the autonomous driving car, which can be enhanced to protect its passengers and pedestrians, and so reduce the risk of the dire outcome of ethical dilemma situations (source, source).

Interface design: The gulf of execution explained in a funny way

Driving Style of Autonomous Vehicles

February 28, 2016

Jaguar Land Rover investigates how natural driving styles (meaning the driving styles of everyday drivers on the roads) could be adopted in an autonomous vehicle. The data comes from instrumented vehicles, collected to understand everyday driving styles and then apply them in an autonomous vehicle. The research strategy contributes to finding a way to make highly automated or autonomous cars more trustworthy. Trust is a challenge for new technology, bound strongly to its acceptance. Certainly, as humans, we tend to trust things more if their behaviour is similar to our own (see also my previous blog post on automation and trust). However, driving style varies depending on the driver’s personality, experience, driving environment, and purpose of travel. Occasionally a usually calm driver changes his/her driving style if the purpose of travel is urgent, e.g., a family member is sick and awaits a visit. On another occasion one might just want to enjoy the beautiful landscape without any other specific purpose of travel. That again influences the driving style. How much the driving style changes is a question still to be answered.

The vehicle could ask for the purpose of travel and try to associate a driving style with it. Asking the user generally helps to understand the intentions behind the task the user wants to do, and so to provide a better service. However, it involves a trade-off between gathering the knowledge to deliver a better service and asking too much, which makes the user impatient.

Implementing human-like behaviour in autonomous cars appears to be a trend. I found a report from last year that Google is doing research in that area as well. The report reveals another important reason to adopt human-like behaviour in a car: whereas autonomous cars are great at following rules, such behaviour can literally result in a roadblock in the unpredictable environment of everyday travel. One rule, for example, is to never cross the double yellow lines marking the edge of the road. Now, if a car parks in a way that another car cannot pass on the street without going over the yellow lines, natural driver behaviour would be to simply cross the double yellow lines. An autonomous car wouldn’t do that. Instead, it would recalculate the route, for no reason understandable to the driver. Another issue is the faster reaction time of autonomous vehicles, which lets them come to an abrupt stop when they sense a pedestrian. This poses a challenge for human drivers behind them, who are not as fast to react. Implementing adaptive human-like strategies in autonomous cars helps them deal with an environment where not everything works according to the rules, but it also makes their behaviour more understandable to humans and brings their skills to a level where they can interact safely with the human traffic participants surrounding them.

See here for details directly at the Jaguar Land Rover website.

Paper prototyping – 2D and 3D

February 21, 2016

What is it

Paper prototyping is a classic method for usability testing, specifically in the early stages of the product development process. Jakob Nielsen described the method in his blog as one of the fastest and cheapest rapid prototyping methods in the design process. All it needs is an idea for the conceptual interface design, paper, scissors, and glue. The conceptual design of an interface is sketched on paper. The paper sketches are shown to a user, who is then asked to fulfil a task on the interface. The user then, e.g., presses a button, and in consequence the designer (playing the “computer”) changes the picture in front of the user. Users can interact with the paper interface as they would with a real product. It is an easy method to compare different conceptual designs without worrying about implementation. Paper prototypes should be simple and should not include a finalised colour concept or high-quality graphics, as that is not what a paper prototype helps to evaluate. At best it is simply a sketch. The following characteristics of an interface can be evaluated with a paper prototype:

  • General Concept
  • Understandability
  • Navigation
  • Information Architecture
  • Functional Requirements (test if complete to fulfil the task)

How does it work in general

The video shows what a paper prototype and interaction with it look like. Different menus, tool tips, and pop-ups can all be designed. Remember to think about which tasks you want to evaluate and how they can be achieved in the interface. Which interactions does the user need to make? For each interaction, a corresponding change of the paper needs to be prepared. Users might take different actions than expected, so prepare paper prototype “reactions” for unexpected interactions as well. That could be a page with “lorem ipsum” or just a blank page with “under construction”. Those preparations let the user explore the interface and help you see where users have difficulties getting along with the interface. Are the actions the user might want to take to fulfil the task clearly presented in the interface? Does the user get appropriate feedback for each interaction he/she performs on the interface?

The advantage of a paper prototype is that it can be easily redesigned. In the next iteration you could, e.g., try a different concept or rename a menu and see if that helps the user to better fulfil the task. You do not need many users for a usability test. According to Jakob Nielsen, about 85% of usability faults can be found with about 5 users (“Why you need to test with 5 users”).

If you want to read more about paper prototyping, Carolyn Snyder wrote a book about it.

Paper prototypes in 3D

I recently came across an article where paper prototyping is applied to a 3D design. The article from Säde et al. (1998) is quite old, but because of the 3D paper prototype I thought it worth mentioning here. They used a paper prototype to test a design for a drink can refund machine. The prototype was built out of foamcore cardboard, glued together with a glue gun. The interface was represented by coloured print-outs. Lights in the interface were represented as coloured paper attached to the panel. For designing 3D paper prototypes, it is worth looking at how industrial designers work. Here is another example of a 3D paper prototype of a toaster (yes, a toaster):

Last but not least, you can find some tools for paper prototypes at the bottom of this website. More information on how to design 3D paper prototypes with cardboard can be found on this website.

Other Sources:

Säde, S., Nieminen, M., and Riihiaho, S. (1998). “Testing usability with 3D paper prototypes – Case Halton System”

Jakob Nielsen (2003). “Paper Prototyping: Getting User Data Before You Code”. (online)

Thoughts on Difference between Interpersonal Trust and Trust in Automated Cars

January 27, 2016

Self-driving cars are widely discussed now. Whereas we are not sure when autonomous cars will arrive as a product on our streets, it seems just a matter of time. Technology develops fast, continuously bringing more intelligent driver assistance systems into vehicles. Maybe the development goes even faster than some people imagine or like to think. Rather than the technology, maybe it is people who need time to adjust to this fast development. Would you today assign the task of carrying you safely from A to B to a self-driving car? How would it react in case of an accident? Trust is an important keyword. What makes us trust something or someone?

Before a system can be designed with a characteristic called “trust”, it is necessary to understand what trust means, for a designer and for a user. Most important is to understand the meaning from a user’s perspective, because that is the characteristic of the system that the designer wants the user to “feel” when interacting with it. A starting point can be interpersonal trust, meaning the trust humans have towards each other. Trust is a base for long-term relationships with other people. We have no problem confessing to them, or asking them for help. We also have expectations of them, e.g., to be helped when we ask for help, or a certain level of politeness in conversation whatever we say. When we are interacting with computers, it seems we apply certain concepts of conversation to that interaction as well. In the literature, trust is described as a multifaceted concept dependent on a range of factors. Trust requires time to develop, but is easy to lose. There are short-cuts in interpersonal trust: e.g., when someone makes a confession and so appears vulnerable, he/she makes it easier for another person to develop trust, or when a group of people works towards a shared goal. Neither shortcut is implemented in current automated systems. Systems do not have intentions, so characteristics that we apply to other humans, such as loyalty and benevolence, do not apply.

Research suggests that etiquette in dialogue design could lead to an increased level of trust. If an interaction applies politeness (if the user is already doing the requested task, the system does not need to additionally ask for the task to be done) and is non-interruptive (the system should ask for one task at a time), this leads to a higher level of trust in the system. Other researchers have suggested designing a personality into the system. It does not need to be a 3D-modelled avatar. A personality could simply be built from a voice, a certain accent, and a certain use of language. The more similar accent and language are to the user’s, the better the trust develops. People like to trust things similar to themselves. On a general level, developing such a personality could not only make decisions of the automation more comprehensible for a user, but also make it easier to trust the system. People tend to be more forgiving of other people than of machines.

Trust in automation has quality aspects beyond interpersonal trust, rooted in the design of the system, such as reliability and a low rate of false alarms. We need to find out how to merge those requirements into a suitable system design for the everyday driver.

A big challenge will be to make the limitations of automation, and thus the right level of trust, understandable for the everyday user. The right level of trust means providing the user with an understanding of when to use the system and when not (boundaries of an automated system could be adverse weather conditions, e.g. snow, fog, or heavy rain). Of course, cars are not the first area with highly automated systems, but everyday transportation involves a different challenge than simply adapting lessons learned from aviation or power plant design, as in those areas highly trained personnel interact with the automated system. The everyday user is not highly trained, and likely does not wish to know technical details.



Parasuraman, Raja & Miller, Christopher, “Trust and etiquette in high-criticality automated systems”, 2004

Reeves, Byron & Nass, Clifford, “The media equation – How people treat computers, television, and new media like real people and places”

Miller, Christopher, “Trust in adaptive automation: the role of etiquette in tuning trust via analogic and affective methods”

Lee, John & See, Katrina, “Trust in automation: designing for appropriate reliance”

Hoffman, Robert, Johnson, Matthew, & Bradshaw, Jeffrey, “Trust in automation”, 2013

Litman, Todd, “Autonomous vehicle implementation predictions – Implications for transport planning”, 2015

Sciencewise Expert Resource Centre “Automated vehicles: what the public thinks”, 2014


Categories: Psychology