Kathrin Kaulard

Alumna of the Department Human Perception, Cognition and Action
Alumna of the Group Recognition and Categorization

Main Focus

Facial expressions form one of the most important and powerful communication systems of human social interaction. To date, research has mostly focused on the static, emotional aspect of facial expression processing, using only a limited set of “generic” or “universal” expression photographs, such as a happy or sad face. However, facial expressions carry communicative aspects beyond emotion and change over time in daily life; both aspects have so far been largely neglected.

For my PhD thesis, I will first create a database of natural facial expressions that allows research to investigate both the emotional and the almost unexplored conversational aspect of facial expressions. Moreover, the database allows investigating the importance of temporal information for the recognition process. Using this database, I will examine the general categorization of everyday facial expressions and the roles of temporal and semantic information using classic psychophysical methods. Finally, neuroimaging studies are planned to investigate the brain areas that enable such categorization.

What are the properties underlying similarity judgments of facial expressions?

Introduction

In our everyday interaction with the world, facial expressions are frequently used both to express emotions (emotional expressions) and to convey intentions (conversational expressions) [1,2]. To date, research has mostly focused on the emotional aspect of expressions, although only very few facial expressions reflect emotional content [3,4]. The perception of emotional facial expressions has often been examined by means of their visual similarity. However, the perceptual and cognitive properties (e.g. physical aspects or action tendencies) driving the similarity judgments of facial expressions are largely unknown.

Goals

Here we attempt to map perceptual and cognitive properties of facial expressions onto visual similarity judgments in order to identify the features underlying those judgments. Furthermore, we are interested in whether this mapping differs between emotional and conversational facial expressions.

Methods

We assessed the perceptual and cognitive properties of facial expressions and their visual similarity in two separate experiments. Perceptual and cognitive properties were investigated with 27 semantic-differential questions addressing the emotional (taken from [5]) and conversational content of expressions. Visual similarity was determined by obtaining ratings of perceived similarity for sequentially presented expression pairs. Both experiments used the same set of (conversational and emotional) facial expression videos taken from [6]. We reasoned that if a certain cognitive property drives visual similarity ratings, it should be a good predictor of those ratings.
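
To illustrate this logic, the following minimal Python sketch shows how such a mapping could be computed with multiple regression. It is not the study's analysis code: the variable names, the random placeholder data, and the construction of pairwise predictors as absolute rating differences between pair members are illustrative assumptions.

# Sketch only: predicting pairwise visual similarity from
# semantic-differential ratings via multiple linear regression.
import itertools
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_expressions, n_questions = 12, 27
# Placeholder mean ratings of each expression on each question (1-7 scale assumed).
semantic = rng.uniform(1, 7, size=(n_expressions, n_questions))

pairs = list(itertools.combinations(range(n_expressions), 2))
# One predictor per question: how differently the two expressions were rated.
X = np.array([np.abs(semantic[i] - semantic[j]) for i, j in pairs])
# Placeholder visual similarity ratings for the same expression pairs.
y = rng.uniform(0, 1, size=len(pairs))

model = LinearRegression().fit(X, y)
print(f"R^2 = {model.score(X, y):.2f}")  # variance in similarity explained

Under this logic, a question whose ratings drive visual similarity should contribute substantially to the model's explained variance, and dropping it should noticeably reduce the fit.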

Initial results

The mapping of cognitive properties onto visual similarity was done using multiple regression with the semantic-differential ratings as predictors. The best model for emotional expressions explained 75% of the variation in similarity ratings and consisted of the two emotional questions: “How much are the expectations of the person met?” and “How much is the person under control?”. The same model explained significantly less variation for conversational expressions (38%). Using all questions as predictors explained 72% of the variation for conversational expressions.
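
Figure 2 reports a best subset regression over the question predictors; the sketch below shows one way such a search can be implemented. It is again illustrative, using random placeholder data (X stands for the pairwise question predictors, y for the similarity ratings, both hypothetical); an exhaustive search over all 2^27 subsets of 27 questions would be impractical, so subset size is capped here.

# Sketch only: best subset regression, keeping the highest-R^2 model
# at each subset size.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 6, size=(66, 27))  # placeholder pairwise predictors
y = rng.uniform(0, 1, size=66)        # placeholder similarity ratings

best = {}  # subset size -> (predictor indices, R^2)
for k in range(1, 4):  # cap subset size; full 2**27 search is impractical
    for subset in combinations(range(X.shape[1]), k):
        cols = list(subset)
        r2 = LinearRegression().fit(X[:, cols], y).score(X[:, cols], y)
        if k not in best or r2 > best[k][1]:
            best[k] = (cols, r2)

for k, (cols, r2) in best.items():
    print(f"best {k}-predictor model: questions {cols}, R^2 = {r2:.2f}")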

Initial conclusion

This study demonstrates a relationship between the cognitive properties of facial expressions and their visual perception, and sheds light on which cognitive properties might underlie visual similarity ratings. Emotional questions regarding the self-control and expectations of the actor allow precise prediction of the variation in similarity ratings of emotional expressions. These two properties, however, explain much less of the variation in the similarity ratings of conversational expressions; for these expressions, the cognitive properties underlying visual similarity ratings seem to be more complex. Our results suggest that different perceptual and cognitive properties underlie similarity judgments about emotional and conversational expressions. This study is part of our research line on the detailed investigation of emotional and conversational properties of facial expressions [7] and was done in collaboration with Stephan de la Rosa, Johannes Schultz and Christian Wallraven.

References

1.   Darwin C (1965/1872) The expression of the emotions in man and animals, University of Chicago Press, Chicago.

2.   Nusseck M, Cunningham DW, Wallraven C, Bülthoff HH (2008) The contribution of different facial regions to the recognition of conversational expressions, Journal of Vision, 8, 1-23.

3.   Ekman P (1979) About brows: emotional and conversational signals, in von Cranach M, Foppa K, Lepenies W, and Ploog D, editors, Human ethology: Claims and limits of a new discipline, 169-202, Cambridge University Press, Cambridge.

4.   Reilly J, Seibert L (2009) Language and emotion, in Davidson RJ, Scherer KR, and Goldsmith HH, editors, Handbook of Affective Sciences, 535-559, Oxford University Press, New York.

5.   Fontaine JRJ, Scherer KR, Roesch EB, Ellsworth PC (2007) The world of emotions is not two-dimensional, Psychological Science, 18(12), 1050-1057.

6.   Kaulard K, Cunningham DW, Bülthoff HH, Wallraven C (2011) The MPI facial expression database – a validated database of emotional and conversational facial expressions, submitted.

7.   Kaulard K, Wallraven C, de la Rosa S, Bülthoff HH (2010) Cognitive categories of emotional and conversational facial expressions are influenced by dynamic information, Perception, 39 (ECVP Abstract Supplement), 157.

Figure 1: Snapshots of each of the 12 facial expression videos. Emotional expressions are shown in the upper row, conversational expressions in the lower row. In the experiments, all expressions were shown by each of the 6 models.

Figure 2: Results of the best subset regression for emotional expressions, using the emotional questions (Q) as predictors. Color coding indicates the number of predictors in each model: light green corresponds to a model with a single predictor (Q1) that explains 69% of the variation in similarity ratings; dark green corresponds to a model with 8 predictors that explains 80% of the variation.

Curriculum Vitae

since 2008: PhD student supervised by Prof. Dr. Christian Wallraven, Dr. Stephan de la Rosa and Prof. Dr. Heinrich H. Bülthoff on "The visual representation of emotional and conversational facial expressions"

2007 - 2008: Diploma thesis at the MPI for Biological Cybernetics on "Visual Perception of dynamic facial expressions - Implementation and validation of a database for conversational facial expressions" supervised by Dr. Christian Wallraven
