Source: http://www.independent.co.uk
Facebook is teaching robots to appear more human-like.
The company’s AI lab has developed a bot that can analyse the facial expressions of the person it’s interacting with and adjust its own expressions appropriately. It’s controlled by a deep neural network, which watched 250 video recordings of two-person Skype conversations – where both faces were displayed side-by-side – as part of its training. The researchers identified 68 “facial landmarks” that the system monitored in order to detect subtle responses and micro-expressions.
“Even though the appearances of individuals in our dataset differ, their expressions share similarities which can be extracted from the configuration of their facial landmarks,” they explained in a paper spotted by New Scientist.
“For example, when people cringe the configuration of their eyebrows and mouth is most revealing about their emotional state.
“Indeed, small variations in expression can be very informative.”
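Tracking 68 facial landmarks is a standard technique in computer vision. The short Python sketch below illustrates the general idea of turning a video frame into a landmark configuration; it is not Facebook’s system, and it assumes dlib’s pre-trained “shape_predictor_68_face_landmarks.dat” model file has been downloaded separately.

```python
# Minimal sketch: extract 68 facial landmarks from a video frame with dlib.
# Illustrative only -- not the researchers' actual pipeline. Assumes the
# pre-trained "shape_predictor_68_face_landmarks.dat" model is available locally.
import dlib
import cv2
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmark_vector(frame_bgr):
    """Return a (68, 2) array of landmark coordinates for the first detected face."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    return np.array([(p.x, p.y) for p in shape.parts()], dtype=np.float32)

# Example: grab one frame from a webcam and print the landmark configuration.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    pts = landmark_vector(frame)
    if pts is not None:
        print(pts.shape)  # (68, 2) -- the kind of per-frame input a sequence model could learn from
```

A sequence of such landmark configurations, one per frame, is the sort of low-dimensional signal a neural network can use to recognise and reproduce expressions, regardless of what the individual faces look like.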
They then tested the bot on a group of human judges, who were asked to “look at how the [bot’s] facial expressions are reacting to the user’s, particularly whether it seemed natural, appropriate and socially typical”, and to decide whether it appeared to be engaged in a conversation. The bot passed the test, with the judges finding its facial expressions “natural and consistent” and “qualitatively realistic”.
“Interactive agents are becoming increasingly common in many application domains, such as education, health-care and personal assistance,” said the researchers.
“The success of such embodied agents relies on their ability to have sustained engagement with their human users. Such engagement requires agents to be socially intelligent, equipped with the ability to understand and reciprocate both verbal and non-verbal cues.”
They say the natural next step is to have the bot interact with a real human in the real world.