Ekaterina Volkova

Alumna of the Department Human Perception, Cognition & Action
Alumna of the Group Perception & Action in Virtual Environments
Alumna of the Research Group Body & Space Perception
Former member of the agbuvr group

Main Focus

My main research interest is human emotional perception. It can be applied and studied in machine learning, computer graphics, human-computer interaction, artificial intelligence and neuroscience.

The general aim of my PhD research is (a) to investigate the perception of emotional body language through behavioural studies using reaction times, written reports, video recordings and brain imaging technologies; and (b) to apply this knowledge to the generation of emotional body language in virtual agents.

The two aspects feed into each other. On the one hand, the more we learn about human emotional perception and response, the better we understand the parameters virtual characters must conform to in order to appear appealing and to trigger empathy and emotional responses. On the other hand, because virtual characters can be controlled through many parameters, they are a helpful tool for creating stimuli for more fine-grained and versatile research on human emotional perception.

My current research foci are:

  1. Crucial properties of biological motion in emotional storytelling scenarios: Particular motion characteristics allow us to recognize emotions from body language, and they modulate emotions expressed through meaning, voice and face. Using motion capture, human evaluation and statistical analysis, I implement animation algorithms for various emotional categories, dimensions and contexts.
  2. Empathy towards virtual agents: We easily distinguish between real humans and virtual agents. However, this does not prevent us from experiencing emotional states towards virtual characters if the context demands it (cf. cartoon animations with captivating, emotion-rich plots). By conducting behavioural and brain imaging studies, I want to find out whether, and in which respects, empathy towards real humans differs from empathy towards virtual characters.
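As an illustration of the kind of statistical analysis of biological motion mentioned above, the sketch below computes two simple kinematic cues (mean joint speed and mean acceleration magnitude) from a motion-capture trajectory. The array layout, frame rate, and choice of features are assumptions for this example, not the actual pipeline used in the project.

```python
import numpy as np

def kinematic_features(trajectory, fps=120.0):
    """Compute simple kinematic cues from a motion-capture trajectory.

    trajectory: array of shape (frames, joints, 3) holding joint
    positions in metres (a hypothetical layout chosen for this sketch).
    Returns per-joint mean speed and mean acceleration magnitude,
    two cues often associated with the intensity of expressed emotion.
    """
    dt = 1.0 / fps
    velocity = np.diff(trajectory, axis=0) / dt        # (frames-1, joints, 3)
    acceleration = np.diff(velocity, axis=0) / dt      # (frames-2, joints, 3)
    mean_speed = np.linalg.norm(velocity, axis=2).mean(axis=0)
    mean_accel = np.linalg.norm(acceleration, axis=2).mean(axis=0)
    return mean_speed, mean_accel
```

Features like these could then be correlated with the emotion labels participants assign to the same motion clips.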

A virtual human able to express naturalistic yet controllable emotions could be a powerful tool for studying human emotional perception. To implement a virtual actor, we first need to know what emotions the actor should express and at what point in time [1], and second, how to express them through the avatar. Facial animation and text-to-speech technology are rather advanced [2,3], yet emotional body language and its simulation remain largely understudied.

Goals
The goal of my project is to analyze emotional body language in storytelling scenarios and to identify the features of body motion that cause participants to label an animation with a particular emotion.

Methods
In a series of experiments, we show participants visual (static and animated) and auditory stimuli and ask them to label the emotional expression in these stimuli. Further, we use motion capture techniques to record human motion from amateur actors. In perceptual studies, we use ten emotion categories for text annotation and motion categorization. In planned studies, we will ask participants to view motion primitives and to describe complex motion sequences as combinations of motion primitives. Through these perceptual studies we can determine which features of body motion sequences are salient for specific emotion categories, and we hope to include the labeled motion primitives in a mathematical analysis (similar to the analysis of emotional aspects of walking in [4]).
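The text-annotation step can be illustrated with a minimal keyword-lexicon sketch. The categories and keywords below are hypothetical placeholders for illustration only; they are not the actual ten emotion categories or the annotation method used in the study.

```python
# Hypothetical sketch: sentence-level emotion annotation via a keyword
# lexicon. Categories and keywords are illustrative placeholders, not
# the project's actual annotation scheme.
EMOTION_LEXICON = {
    "joy":     {"happy", "laughed", "delight"},
    "fear":    {"afraid", "trembled", "dark"},
    "sadness": {"wept", "alone", "lost"},
}

def annotate(sentence):
    """Return the emotion whose keywords match the sentence most often,
    or 'neutral' if no keyword matches at all."""
    words = set(sentence.lower().split())
    scores = {emo: len(words & kws) for emo, kws in EMOTION_LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"
```

In a real pipeline such labels would then drive the animation and text-to-speech modules; a lexicon this naive would of course need to be replaced by a proper computational-linguistics model.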

Initial results
We have created a framework for a virtual storyteller [5], in which an input text can be automatically annotated for emotions. These annotations trigger emotionally enhanced facial and body animation synthesis and a text-to-speech module. Though each component of the framework is subject to improvement, this is an important step towards a fully automated yet controllable virtual actor. We further investigated the perception of emotional expressions in stimuli with conflicting emotions expressed through the avatar's facial expressions and speech. For pre-recorded natural speech, the vocalized emotion influenced participants' emotion perception more than the facial expression; for synthesized speech, however, the facial expression influenced participants' emotion perception more than the vocalized emotion [2,3]. Finally, we have prepared the stimuli and designed the experiments for analyzing emotional expression in body language.

Initial conclusion
We have so far taken a computational linguistics approach to automatically labeling emotions in fairy-tale texts. Furthermore, we have investigated technical methods for automatically triggering emotional expression in body, face and voice. Finally, we are now conducting experiments on the perception of emotions in body language.

References
[1] Volkova EP (2010) PETaLS: Perception of Emotions in Text – a Linguistic Simulation. Master's thesis. Universität Tübingen, Tübingen, Germany.
[2] Volkova E, Linkenauger S, Alexandrova I, Bülthoff HH and Mohler B (2011) Integration of visual and auditory stimuli in the perception of emotional expression in virtual characters. 34th European Conference on Visual Perception (ECVP 2011), 34(120).
[3] Volkova E, Mohler B, Linkenauger S, Alexandrova I and Bülthoff HH (2011) Contribution of prosody in audio-visual integration to emotional perception of virtual characters. 12th International Multisensory Research Forum (IMRF 2011), 12(263).
[4] Roether CL, Omlor L and Giese MA (2010) Features in the recognition of emotions from dynamic bodily expression. Dynamics of Visual Motion Processing, pp. 313–340.
[5] Alexandrova IV, Volkova EP, Kloos U, Bülthoff HH and Mohler BJ (2010) Virtual storyteller in immersive virtual environments using fairy tales annotated for emotion states. Joint Virtual Reality Conference of EuroVR – EGVE – VEC (JVRC 2010), Eurographics Association, Goslar, Germany, 65–68.

Curriculum Vitae

Education

  • PhD student at Graduate School of Neural & Behavioural Sciences, International Max Planck Research School, Oct 2010 — present
  • M.A. International Studies in Computational Linguistics (ISCL), University of Tübingen, Tübingen, Germany, Oct 2008 — Oct 2010.
  • B.A. International Studies in Computational Linguistics (ISCL), University of Tübingen, Tübingen, Germany, Oct 2005 — Oct 2008.
  • Theory and Methods of Teaching Foreign Languages and Cultures, Tver State University, Tver, Russian Federation, Sep 2002 — Sep 2005.

Employment

  • Research assistant at the Max Planck Institute for Biological Cybernetics, Human Perception, Cognition and Action Department, Perception and Action in Virtual Environments Group. Supervision: Prof. Dr. Heinrich H. Bülthoff, Dr. Betty J. Mohler. Tübingen, Germany, May 2008 — Oct 2010.
  • Research assistant at the Chair of Descriptive and Theoretical Linguistics, University of Tübingen, Tübingen, Germany. Supervision: Prof. Dr. Sigrid Beck. Oct 2007 — April 2008.
  • Tutor for the Phonetics and Phonology class, University of Tübingen, Tübingen, Germany. Supervision: Prof. Dr. Hubert Truckenbrodt. April 2007 — Oct 2007.