Affective computing is a recent research domain
which aims to bridge the gap between human emotions and computational technology.
The aim of affective computing is to make machines able to recognize our emotions
and to respond to them in an appropriate way.
There are two complementary interests in affective computing.
The first corresponds to fundamental research,
where we try to better understand the relationships between perceptual cues
- such as facial expressions and speech behaviour - and specific emotions.
The second is applied: we can integrate this technology into systems
that use affective computing to provide a more user-friendly interaction between humans and machines.
There are several challenges in affective computing.
One of them concerns the definition of emotion itself, because the perception of emotion can be very subjective.
A second challenge concerns extracting information from speech or facial behaviour
that is highly relevant to emotions while varying little from person to person.
We have been working for quite a long time - about ten years - on meeting analysis,
producing meeting tools to replay meetings or to assist people during meetings.
What you lose in remote meetings is awareness of the emotions of the people who are remote.
This is particularly important in inter-cultural settings.
The EmotiBoard enriches the video conference with emotional feedback from the meeting participants.
The goal of EmotiBoard is to compensate for the lack of information when we work in remote interaction.
There is a lack of emotional awareness because we are not co-present.
The technology can try to recognize the emotions and display them as feedback.
There are different technological tools used in EmotiBoard.
One is a database that we use to perform emotion recognition in real time.
It is a kind of dictionary of specific behaviours in speech and facial expressions, with their corresponding emotions.
We use that corpus to recognize, in real time, the emotions produced by the participants.
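The corpus-as-dictionary idea can be sketched as a nearest-neighbour lookup: an incoming feature vector is matched against labelled exemplars. The feature names and values below are hypothetical, just to illustrate the lookup; the real system's features and matching method are not specified here.

```python
import math

# Hypothetical mini-corpus: feature vectors (e.g. mean pitch in Hz,
# speech rate in syllables/s, smile intensity 0-1) paired with the
# emotion label annotators assigned to that behaviour.
CORPUS = [
    ((220.0, 5.1, 0.8), "joy"),
    ((180.0, 3.2, 0.1), "sadness"),
    ((260.0, 6.0, 0.2), "anger"),
    ((200.0, 4.0, 0.3), "neutral"),
]

def recognize(features):
    """Return the label of the closest corpus entry (1-nearest-neighbour)."""
    return min(CORPUS, key=lambda entry: math.dist(entry[0], features))[1]

print(recognize((215.0, 5.0, 0.7)))  # closest to the "joy" exemplar
```

A real-time system would of course use richer features and a trained model rather than a raw lookup, but the principle is the same: observed behaviour is matched against annotated examples.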
A second tool concerns the visualization of the information:
once we know which emotion has been produced, it is not easy to determine the best way to represent that information.
We optimize the performance of the automatic emotion recognition system by analysing existing databases.
Then, when the system performs well enough, we integrate it into a real setup
to see whether it can be enhanced further for remote interaction.
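The "optimize on databases, then deploy" step amounts to a standard evaluate-then-gate loop. The sketch below uses synthetic data and a trivial majority-class baseline as the classifier; all names and the accuracy threshold are assumptions for illustration only.

```python
import random

# Hypothetical labelled database: (feature_vector, emotion) pairs.
random.seed(0)
LABELS = ["joy", "sadness", "anger", "neutral"]
database = [((random.random(), random.random()), random.choice(LABELS))
            for _ in range(200)]

def evaluate(classifier, samples):
    """Fraction of held-out samples the classifier labels correctly."""
    correct = sum(1 for feats, label in samples if classifier(feats) == label)
    return correct / len(samples)

# Hold out part of the database for testing, fit a trivial
# majority-class baseline on the rest, and measure its accuracy.
train, test = database[:150], database[150:]
majority = max(LABELS, key=lambda l: sum(1 for _, lab in train if lab == l))
baseline = lambda feats: majority

accuracy = evaluate(baseline, test)
# Only move to a real meeting setup once accuracy clears a chosen bar
# (the threshold here is purely illustrative).
ready_for_real_setup = accuracy >= 0.2
```

Swapping the baseline for a real recognizer leaves the loop unchanged: the held-out evaluation decides when the system is good enough to test in live remote interaction.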
Currently, we are working to make the automatic emotion recognition more reliable across varied recording conditions
and for people of different ages, genders, and cultural backgrounds.
We are also investigating appropriate ways to represent an emotion as feedback,
because it has to be understood very quickly so as not to interfere with the communication.
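One minimal way to make such feedback glanceable is to map each recognized emotion to a fixed colour and symbol shown next to a participant's video tile. This mapping is entirely hypothetical, a sketch of the design constraint (fast to read, hard to misread) rather than EmotiBoard's actual display.

```python
# Hypothetical mapping from a recognized emotion to a glanceable cue:
# a colour swatch plus a short symbol, readable at a glance so the
# feedback does not interrupt the conversation itself.
FEEDBACK = {
    "joy":     ("#2ecc71", ":)"),
    "sadness": ("#3498db", ":("),
    "anger":   ("#e74c3c", ">:("),
    "neutral": ("#95a5a6", ":|"),
}

def render_feedback(participant, emotion):
    """Format one participant's emotion cue; unknown labels fall back to neutral."""
    colour, symbol = FEEDBACK.get(emotion, FEEDBACK["neutral"])
    return f"[{participant}] {symbol} ({colour})"

print(render_feedback("Alice", "joy"))  # → [Alice] :) (#2ecc71)
```

The fallback to a neutral cue reflects the constraint mentioned above: an ambiguous or unrecognized emotion should degrade to something harmless rather than display a wrong, attention-grabbing signal.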
This research allows us to better understand how human emotions are encoded during spontaneous interactions,
which in turn can give machines the ability to understand human affective behaviour and to know how to react to it.