Archive

Archive for the ‘Robotics’ Category

Self-driving cars

It’s been a while since I last posted an article on this blog, so here we go. Recently I came across an article about self-driving cars, a topic that is hard to avoid lately. It appears that every automobile manufacturer is investing in research on self-driving cars. The majority of that research concerns the communication between “driver” and car, and ethical considerations about how to handle safety-critical situations. Toyota, for example, recently proposed a kind of personal assistant in the car that communicates with the driver (see sources below). The communication between a self-driving car and pedestrians or other traffic participants is, at the moment, a sidetrack of research.

However, this communication is an important part of the driving task in city environments. A driver communicates with other traffic participants to clarify each other’s intentions and to negotiate a safe passage for everybody. For example, a pedestrian might wish to cross a road without a traffic light when there is a traffic jam or heavy but slow-flowing traffic. To make sure he/she can cross safely, the pedestrian communicates with the driver in front of whose car he/she wants to cross. The pedestrian watches the driver’s face and makes sure the driver is looking at him/her. The driver, in response, indicates that he/she has seen the pedestrian with a smile, a nod or a hand signal, and slows the car down or increases the distance to the car in front. Then the pedestrian crosses the road safely.

With a self-driving car this communication is lost: there is no driver to communicate with. What does a self-driving car need in order to deal with such a situation? One solution would be to leave the decision about the next action up to the pedestrian and let the car handle the situation with its pedestrian recognition and emergency-brake system, but that might be unpleasant for the car’s passengers, and it would leave the pedestrian uncertain whether the car will actually stop. Communication would resolve that uncertainty and reduce the risk in such a situation. To establish communication with a pedestrian, the car would first need the ability to recognise that someone wants to communicate with it, e.g. the pedestrian who wants to cross the road. Ideally, the car would then have a set of signals that lets the pedestrian see its intentions. The car could then interpret the pedestrian’s answer and adjust its behaviour, e.g. slow down to allow the pedestrian a safe passage.
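
To make this concrete, here is a minimal sketch of such a negotiation modelled as a small state machine. All states, conditions and names are hypothetical illustrations of the idea, not an implementation from any manufacturer.

```python
from enum import Enum, auto

class State(Enum):
    DRIVING = auto()     # normal driving, scanning for pedestrians
    SIGNALLING = auto()  # car signals "I have seen you"
    YIELDING = auto()    # car slows down / opens a gap for the crossing
    RESUMING = auto()    # crossing finished, return to normal driving

def step(state, pedestrian_detected, pedestrian_acknowledged, crossing_done):
    """One update of the hypothetical car-pedestrian negotiation."""
    if state is State.DRIVING and pedestrian_detected:
        return State.SIGNALLING
    if state is State.SIGNALLING and pedestrian_acknowledged:
        return State.YIELDING
    if state is State.YIELDING and crossing_done:
        return State.RESUMING
    if state is State.RESUMING:
        return State.DRIVING
    return state

# Example: a pedestrian is detected, acknowledges the signal, then crosses.
s = State.DRIVING
s = step(s, True, False, False)  # -> SIGNALLING
s = step(s, True, True, False)   # -> YIELDING
s = step(s, True, True, True)    # -> RESUMING
```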

The self-driving concept car from Volkswagen (VW) has an interesting design that pushes towards natural communication. The designers have taken Don Norman’s remark that the turn signals of cars are their facial expressions literally. The concept car’s face could be used to communicate with pedestrians in a natural way: signalling that it has recognised the pedestrian (looking at him/her), nodding in response to the pedestrian’s request to cross the road (moving its eyes up and down), indicating where it is going (e.g. by moving its eyes), and showing that it is stopping (eyes looking forward and a red brake light).

A difficulty could be that the car’s face appears to be visible from the front only. In a traditional car it is possible to see where the driver is looking through the windows, at least from both sides. Other signals that help to predict a driver’s intentions are the indicators, the brake lights, and the driver’s reflection in the mirrors. Indicators and brake lights are easy to implement in a self-driving car; making its behaviour clearly visible from the sides is the harder design problem. Perhaps the car’s eyes could be implemented on the front and the sides, like the indicators of a traditional car. Another challenge lies in the communication itself. Pedestrians can choose from a range of signals to communicate, e.g. a smile, head movements, eye contact, and hand signals. Part of a communication is understanding each other’s signals. For natural communication a car would need to learn a range of signals and how to respond to them. Or do we need to learn a sign language to communicate with a self-driving car?

An interesting aspect is how the communication evolves over time. Once pedestrians learn that self-driving cars always brake for them, does that change their behaviour? Would you, as a pedestrian, be more persistent in your attempts to cross a road? Certainly, self-driving cars are an interesting area of research, with manifold aspects to consider from both a technical and a human-centred perspective.

Sources:

https://www.theverge.com/2017/3/6/14832354/volkswagen-sedric-self-driving-van-concept-mobility

https://www.theverge.com/2017/1/4/14169960/toyota-concept-i-artificial-intelligence-yui

http://www.jnd.org/dn.mss/the_personality_of_a.html

http://www.jnd.org/dn.mss/chapter_11_turn_sig.html

Affective computing – android robots

Christian Becker-Asano has written an interesting PDF about affective computing and recently developed android robots. It concentrates on human-like robots, called androids, and how to develop them for a good conversation with humans. Androids such as Geminoid HI-2 and Repliee Q2 look like a human; from a photo alone it can be hard to tell whether you are looking at a human or an android (specifically in the case of Geminoid HI-2). Check out the experiment presented on Vimeo:

For a good conversation there are several hurdles to clear. Considering theories of social intelligence, an android would need to recognise the emotion in the other person’s face, appear attentive to what the other person says and does, mirror their facial expressions and gestures in a synchronised (and seemingly unintentional) way, and, once the conversation partner’s emotions have been analysed, choose the right social strategy to answer and move the conversation forward.
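
As a rough illustration, such a pipeline could be wired together as below. All functions are invented stubs standing in for real perception and behaviour modules, not anything from Becker-Asano’s paper.

```python
import random

EMOTIONS = ["happy", "neutral", "sad", "angry"]

def recognise_emotion(face_image):
    # Stub: a real android would run a facial-expression classifier here.
    return random.choice(EMOTIONS)

def mirror_expression(emotion, intensity=0.6):
    # Mirror with reduced intensity and a slight delay, so it does not
    # come across as rude one-to-one copying.
    print(f"android mirrors '{emotion}' at intensity {intensity}")

def choose_social_strategy(emotion):
    # Map the partner's emotion to a coarse conversational strategy.
    return {"sad": "comfort", "angry": "de-escalate"}.get(emotion, "engage")

def conversation_step(face_image):
    emotion = recognise_emotion(face_image)
    mirror_expression(emotion)
    return choose_social_strategy(emotion)

print(conversation_step(face_image=None))
```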

Mirroring the other person’s facial expressions and gestures is a particularly important step, and it is related to the structure of our brain. The brain contains cells, so-called mirror neurons, that lead us to mirror the expressions and gestures of another person when we are engaged in a focused conversation. This helps us to understand the other person’s emotions, and it also maintains a calm and positive atmosphere. However, it is not simple copying; that would be perceived as rude by the other person.

An android would need to produce this effect in other ways. Perhaps it could help to recognise the small expressions in a face that we usually do not notice. If the android learns to interpret them correctly, that would be a step in the right direction. But even humans who have known each other for a long time find this difficult.

Categories: Robotics

Violent robot makes rats depressed

Did you ever feel depressed? No? Do you want to know how it feels?
Researchers at Waseda University in Japan have developed a robot rat for the sole purpose of making lab rats depressed as quickly as possible. Depressed rats are used for testing antidepressants, tests which would not work with normal lab rats. It does sound strange, and I do not feel morally comfortable with the idea of an animal being made depressed in a lab. Furthermore, it seems to address the wrong end of the problem, since it does not remove the root cause of the depression.

But my point of interest is how the robot rat works. There are three interaction modes the robot uses to make a rat depressed:

  • “chasing”: the robot rat follows the rat around the cage without touching it, which frightens the rat.
  • “continuous attack”: the robot rat continuously bumps into the rat.
  • “interactive attack”: the robot rat moves as soon as the other rat moves; it attacks the other rat for a period of time but takes breaks between the attacks.

The researchers found that continuous attacks are most effective on immature rats, while interactive attacks work best on mature rats.
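
Roughly sketched in code, the mode selection and behaviours might look like this; the names and the control logic are hypothetical, not the project’s actual software.

```python
from enum import Enum, auto

class Mode(Enum):
    CHASING = auto()             # follow the rat without touching it
    CONTINUOUS_ATTACK = auto()   # bump into the rat without pause
    INTERACTIVE_ATTACK = auto()  # attack in bursts whenever the rat moves

def select_mode(rat_is_mature):
    # Reported finding: continuous attacks for immature rats,
    # interactive attacks for mature rats.
    return Mode.INTERACTIVE_ATTACK if rat_is_mature else Mode.CONTINUOUS_ATTACK

def behave(mode, rat_moved):
    if mode is Mode.CHASING:
        return "follow at close distance"
    if mode is Mode.CONTINUOUS_ATTACK:
        return "bump into the rat"
    # Interactive attack: react only when the rat moves, then pause.
    return "attack burst, then pause" if rat_moved else "wait"

print(behave(select_mode(rat_is_mature=True), rat_moved=True))
```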

Comparing this to the behaviour we experience in everyday life, it is conceivable that what makes the lab rat depressed is also true for us, for example in situations at work. If you work on a task but are disturbed by a colleague whenever you make a move, for a period of time (but not long or intensively enough to be noticeable to other colleagues), it sooner or later makes you either aggressive or so disappointed that you are no longer willing to work on the task. Unlike the lab rat, however, we can reflect on our behaviour and identify such annoying events. We can decide how to proceed, for example choosing a different task and avoiding working with that person.

Read more:
Waseda University – the project’s research homepage:
http://www.takanishi.mech.waseda.ac.jp/top/research/rat/robot-WR.html

Original article:
http://www.wired.co.uk/news/archive/2013-02/14/robot-rat-terror

Thanks to my colleague who showed me this article.

Categories: Robotics

Robot – swarm intelligence

October 17, 2012

Robots with swarm intelligence have been developed at the TU Dortmund. They can be used in the event of accidents in chemical plants or fires to get an overview of the situation. Flying autonomously over the terrain, they take photos, which can be stitched together by software, and take measurements. The goal of the project was to develop lightweight sensors, a highly reliable and flexible communication system, the storage and processing of geographical information, and the integration of meteorological data into the robots’ action planning. The routes are predefined and selected by the robots themselves. Collected data is sent to a coordination team and evaluated there. The final phase of the project included a real-world scenario for the flying robots.

Airshield Project at the TU Dortmund
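
As a taste of the stitching step, OpenCV ships a high-level Stitcher that merges overlapping aerial photos into one overview image. A minimal sketch, with made-up file names and no relation to the project’s actual software:

```python
import cv2

# Overlapping aerial photos taken by the flying robots (file names invented).
images = [cv2.imread(name) for name in ("aerial_01.jpg", "aerial_02.jpg", "aerial_03.jpg")]

# SCANS mode is meant for flat, map-like imagery such as aerial shots.
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, overview = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("overview.jpg", overview)  # the stitched situation overview
else:
    print(f"stitching failed with status {status}")
```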

Currently, the tasks of robots in accidents and catastrophes are limited to surveillance and overview, as the robots would need quite complex mechanics to find their way through the often destroyed terrain after an accident. Furthermore, they would need different kinds of tools to move rocks or metal parts, pull handles, … So they can be used to get a first overview and to plan the mission accordingly, but afterwards humans still need to go in to search for survivors and repair destroyed machines.

Categories: Robotics

Eliza next generation

October 7, 2012

The winner of this year’s Turing test contest is a chatbot simulating a 13-year-old kid from Ukraine who has a pet guinea pig. Well, the conversation does not flow as easily as with Eliza: sometimes the answers are not related to the previous question or statement. Try it out yourself: Chatbot Eugene
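
For context, a classic Eliza-style bot matches keywords and keeps no memory of earlier turns, which is one reason answers can seem unrelated to the previous statement. A tiny sketch with invented rules:

```python
import random

# Keyword -> canned responses; no state is kept between turns.
RULES = {
    "guinea pig": ["Tell me more about your pet."],
    "you": ["We were discussing you, not me."],
    "?": ["Why do you ask?", "What do you think?"],
}

def respond(utterance):
    lowered = utterance.lower()
    for keyword, answers in RULES.items():
        if keyword in lowered:
            return random.choice(answers)
    return "Please, go on."  # generic fallback when nothing matches

print(respond("Do you have a guinea pig?"))
```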

Categories: Robotics

New generation of industrial robots

Industrial robots are used in manufacturing (e.g. car manufacturing). Typical tasks are welding, painting, assembly, picking, placement, and product inspection. The first industrial robot, an automated hydraulic arm called Unimate, was invented by George Devol in the 1950s and commercialised by Joseph F. Engelberger; it went to work at a General Motors car manufacturing plant in New Jersey, replacing human workers in handling heavy and extremely hot metal parts too dangerous for the employees. Today’s industrial robots are big, move so fast that they need to be kept in a cage, and can be programmed only with expert knowledge. Rethink Robotics has developed the next generation of industrial robots.

Baxter, the newly developed industrial robot from Rethink Robotics, does not move fast (in comparison): he watches his environment with sonar, and if a person gets within the range of his movements he slows down. Furthermore, his movements are made predictable for the workers by visualising his current state on a display with a pair of eyes: Baxter looks where he will move next and looks surprised if something unexpected happens. In contrast to other industrial robots, new movements (for a new production line or a new production process) can be taught by moving his arm and fingers through the new behaviour; the learning process is shown on the display.
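
This teach-by-guiding approach is known as kinesthetic teaching, and at its core it is record-and-replay of joint trajectories. A minimal sketch with a stand-in arm class; Baxter’s real SDK differs:

```python
import time

class Arm:
    """Stand-in for a compliant robot arm in gravity-compensation mode."""
    def read_joint_angles(self):
        return [0.0] * 7  # placeholder: seven joint angles in radians

    def move_to(self, joint_angles):
        print("moving to", joint_angles)

def record(arm, duration_s=2.0, rate_hz=10):
    """Sample joint angles while a worker guides the arm by hand."""
    trajectory = []
    for _ in range(int(duration_s * rate_hz)):
        trajectory.append(arm.read_joint_angles())
        time.sleep(1.0 / rate_hz)
    return trajectory

def replay(arm, trajectory, rate_hz=10):
    """Replay the demonstrated trajectory waypoint by waypoint."""
    for waypoint in trajectory:
        arm.move_to(waypoint)
        time.sleep(1.0 / rate_hz)

arm = Arm()
replay(arm, record(arm))
```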

Read the full article:
Article in the Economist
Website of the manufacturer:
Rethink Robotics

Categories: Robotics