Ekaterina Volkova


Position: PhD Student, Unit: Alumni Bülthoff

My main research interest lies in human emotional perception. This can be studied and applied in machine learning, computer graphics, human-computer interaction, artificial intelligence and neuroscience.

The general aim of my PhD research is (a) to investigate the perception of emotional body language through behavioural studies using reaction times, written reports, video recordings and brain imaging technologies; and (b) to apply this knowledge to the generation of emotional body language in virtual agents.

Both aspects inform each other. On the one hand, the more we learn about human emotional perception and response, the more we know about the parameters virtual characters must satisfy in order to appeal to human observers and to trigger empathy and emotional responses. On the other hand, because virtual characters can be controlled through many parameters, they are a helpful tool for creating stimuli for more fine-grained and versatile research on human emotional perception.

My current research foci are:

  1. Crucial properties of biological motion in emotional storytelling scenarios: particular motion characteristics allow us to recognize emotions through body language, and they interact with emotions expressed through meaning, voice and face. Using motion capture, human evaluation and statistical analysis, I can implement animation algorithms for various emotional categories, dimensions and contexts.
  2. Empathy towards virtual agents: we easily distinguish between real humans and virtual agents. However, this does not prevent us from experiencing emotional states towards virtual characters if the context demands it (cf. cartoon animations with captivating, emotion-rich plots). By conducting behavioural and brain imaging studies, I want to find out whether, and in which aspects, empathy towards real humans differs from empathy towards virtual characters.

A virtual human able to express naturalistic yet controllable emotions could be a powerful tool for studying human emotional perception. To implement a virtual actor, we first need to know what emotions the virtual actor should express and at what point in time [1], and secondly how to express them through the avatar. Facial animation and text-to-speech technology are rather advanced [2,3], yet emotional body language and its simulation remain largely understudied.

The goal of my project is to analyze emotional body language in story-telling scenarios and investigate the features in the body motion that cause participants to label the animation as a particular emotion.

Through a series of experiments we show participants visual (static and animated) and auditory stimuli and ask them to label the emotional expression in these stimuli. Further, we use motion capture techniques to record human motion from amateur actors. In perceptual studies we use ten emotion categories for text annotation and motion categorization. In planned studies, we will ask participants to view motion primitives and to describe complex motion sequences as combinations of motion primitives. Through these perceptual studies we can determine the features of body motion sequences that are salient for specific emotion categories, and we hope to include the labeled motion primitives in a mathematical analysis [similar to the analysis of emotional aspects of walking in 4].
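The categorization step described above can be illustrated with a minimal sketch: given the emotion labels that several participants assigned to a motion clip, assign each clip its majority label and an agreement rate. The data and function names here are hypothetical stand-ins, not the study's actual analysis code.

```python
from collections import Counter

# Hypothetical rater data: each motion clip maps to the emotion labels
# assigned by individual participants (the studies used ten categories).
ratings = {
    "clip_01": ["joy", "joy", "surprise", "joy"],
    "clip_02": ["fear", "anger", "fear", "fear"],
}

def categorize(clip_labels):
    """Return the majority emotion label and the raters' agreement rate."""
    counts = Counter(clip_labels)
    label, n = counts.most_common(1)[0]
    return label, n / len(clip_labels)

for clip, labels in ratings.items():
    label, agreement = categorize(labels)
    print(f"{clip}: {label} (agreement {agreement:.2f})")
```

A clip whose agreement rate stays low across raters would be a candidate for exclusion or for analysis as an ambiguous expression.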

Initial results
We have created a framework for a virtual storyteller [5], in which input text is automatically annotated for emotions. These annotations trigger emotionally enhanced facial and body animation synthesis and a text-to-speech module. Though each component of the framework is subject to improvement, this is an important step towards a fully automated yet controllable virtual actor. We further investigated the perception of emotional expressions in stimuli with conflicting emotions expressed through the avatar's facial expressions and speech. For pre-recorded natural speech, the vocalized emotion influenced participants' emotion perception more than the facial expression; for synthesized speech, however, the facial expression influenced participants' emotion perception more than the vocalized emotion [2,3]. Finally, we have prepared the stimuli and designed the experiments for analyzing emotional expression in body language.
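The storyteller pipeline sketched above (text annotation driving face, body and speech modules) can be outlined as follows. This is a toy illustration under stated assumptions: the keyword lexicon and the module outputs are hypothetical placeholders for the framework's actual annotation, animation and text-to-speech components.

```python
def annotate(sentence):
    """Toy emotion annotation: a keyword lookup standing in for the
    framework's automatic text annotation module."""
    lexicon = {"happily": "joy", "wept": "sadness", "roared": "anger"}
    for word, emotion in lexicon.items():
        if word in sentence:
            return emotion
    return "neutral"

def render(sentence):
    """Route one annotated sentence to placeholder face, body and
    speech modules, tagging each output with the detected emotion."""
    emotion = annotate(sentence)
    return {
        "face": f"face:{emotion}",
        "body": f"body:{emotion}",
        "speech": f"tts[{emotion}]: {sentence}",
    }

out = render("The princess wept at the gate.")
print(out["speech"])  # tts[sadness]: The princess wept at the gate.
```

In the real framework each placeholder would be replaced by a dedicated synthesis component, which is what makes the pipeline both automated and controllable.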

Initial conclusion
We have so far taken a computational linguistics approach to automatically labeling emotions in fairy-tale texts. Further, we have investigated technical methods for automatically triggering emotional expression in body, face and voice. Finally, we are now conducting experiments on perceived emotions in body language.

[1] Volkova EP (2010) PETaLS: Perception of Emotions in Text - a Linguistic Simulation. Master thesis. Tübingen, Germany: Universität Tübingen.
[2] Volkova E, Linkenauger S, Alexandrova I, Bülthoff HH and Mohler B (2011) Integration of Visual and Auditory Stimuli in the Perception of Emotional Expression in Virtual Characters, 34th European Conference on Visual Perception (ECVP 2011), 34(120).
[3] Volkova E, Mohler B, Linkenauger S, Alexandrova I and Bülthoff HH (October 2011) Contribution of Prosody in Audio-visual Integration to Emotional Perception of Virtual Characters, 12th International Multisensory Research Forum (IMRF 2011), 12(263).
[4] Roether CL, Omlor L and Giese MA (2010) Features in the Recognition of Emotions from Dynamic Bodily Expression, Dynamics of Visual Motion Processing, 313-340.
[5] Alexandrova IV, Volkova EP, Kloos U, Bülthoff HH and Mohler BJ (2010) Virtual Storyteller in Immersive Virtual Environments Using Fairy Tales Annotated for Emotion States, 2010 Joint Virtual Reality Conference of EuroVR - EGVE - VEC (JVRC 2010), Eurographics Association, Goslar, Germany, 65-68.


Education

  • PhD student at Graduate School of Neural & Behavioural Sciences, International Max Planck Research School, Oct 2010 — present
  • M.A. International Studies in Computational Linguistics (ISCL), University of Tübingen, Tübingen, Germany, Oct 2008 — Oct 2010.
  • B.A. International Studies in Computational Linguistics (ISCL), University of Tübingen, Tübingen, Germany, Oct 2005 — Oct 2008.
  • Theory and Methods of Teaching Foreign Languages and Cultures, Tver State University, Tver, Russian Federation, Sep 2002 — Sep 2005.



Work experience

  • Research assistant at the Max Planck Institute for Biological Cybernetics, Human Perception, Cognition and Action Department, Perception and Action in Virtual Environments Group. Supervision: Prof. Dr. Heinrich H. Bülthoff, Dr. Betty J. Mohler. Tübingen, Germany, May 2008 — Oct 2010
  • Research assistant at the Chair of Descriptive and Theoretical Linguistics, University of Tübingen. Supervision: Prof. Dr. Sigrid Beck. Tübingen, Germany, Oct 2007 — April 2008
  • Tutor for the Phonetics and Phonology class, University of Tübingen. Supervision: Prof. Dr. Hubert Truckenbrodt. Tübingen, Germany, April 2007 — Oct 2007


Articles (6):

Volkova E, de la Rosa S, Bülthoff HH and Mohler B (December-2014) The MPI Emotional Body Expressions Database for Narrative Scenarios PLoS ONE 9(12) 1-28.
Volkova EP, Mohler BJ, Dodds TJ, Tesch J and Bülthoff HH (June-2014) Emotion categorization of body expressions in narrative scenarios Frontiers in Psychology 5(623) 1-11.
Ruddle RA, Volkova E and Bülthoff HH (May-2013) Learning to Walk in Virtual Reality ACM Transactions on Applied Perception 10(2:11) 1-11.
Ruddle RA, Volkova E, Mohler B and Bülthoff HH (May-2011) The effect of landmark and body-based sensory information on route knowledge Memory & Cognition 39(4) 686-699.
Volkova E (February-2009) Experiment for the color idioms project Slovo i Tekst 9 157-164.
Volkova E (October-2008) Idioms with basic color terms in the English and Russian languages Vestnik Tverskogo Gosudarstvennogo Universiteta 17(13) 202-213.

Conference papers (4):

Volkova E and Mohler BJ (May-27-2014) On-line Annotation System and New Corpora for Fine Grained Sentiment Analysis of Text, 5th International Workshop on Emotion, Social Signals, Sentiment & Linked Open Data (ES³LOD 2014), Satellite of LREC 2014 ELRA, 74-81.
Ruddle RA, Volkova E and Bülthoff HH (June-2011) Walking improves your cognitive map in environments that are large-scale and large in extent, ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2012), ACM Transactions on Computer-Human Interaction, 18(2:10), 1-22.
Alexandrova IV, Volkova EP, Kloos U, Bülthoff HH and Mohler BJ (October-2010) Virtual Storyteller in Immersive Virtual Environments Using Fairy Tales Annotated for Emotion States, In: Virtual Environments 2010, 2010 Joint Virtual Reality Conference of EuroVR - EGVE - VEC (JVRC 2010), Eurographics Association, Goslar, Germany, 65-68.
Volkova EP, Mohler BJ, Meurers D, Gerdemann D and Bülthoff HH (June-2010) Emotional Perception of Fairy Tales: Achieving Agreement in Emotion Annotation of Text, NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, Association for Computational Linguistics, Morristown, NJ, USA, 98-106.

Technical reports (1):

Volkova-Volkmar E: Report on experiment supported by the Visionair project: Effect of Visual and Auditory Feedback Modulation on Embodiment and Emotional State in VR, University College London, (January-2015).

Posters (8):

Volkova EK, Mohler BJ, Dodds T, Tesch J and Bülthoff HH (August-2013): Perception of emotional body expressions in narrative scenarios, ACM Symposium on Applied Perception (SAP '13), Dublin, Ireland.
Wellerdiek AC, Leyrer M, Volkova E, Chang D-S and Mohler B (August-2013): Recognizing your own motions on virtual avatars: is it me or not?, ACM Symposium on Applied Perception (SAP '13), Dublin, Ireland.
Volkova EP, Mohler BJ and Bülthoff HH (May-11-2013): Display size of biological motion stimulus influences performance in a complex emotional categorisation task, 13th Annual Meeting of the Vision Sciences Society (VSS 2013), Naples, FL, USA, Journal of Vision, 13(9) 195.
Volkova E, Mohler B, Linkenauger S, Alexandrova I and Bülthoff HH (October-17-2011): Contribution of Prosody in Audio-visual Integration to Emotional Perception of Virtual Characters, 12th International Multisensory Research Forum (IMRF 2011), Fukuoka, Japan, i-Perception, 2(8) 774.
Volkova E, Linkenauger S, Alexandrova I, Bülthoff HH and Mohler B (September-2011): Integration of Visual and Auditory Stimuli in the Perception of Emotional Expression in Virtual Characters, 34th European Conference on Visual Perception, Toulouse, France, Perception, 40(ECVP Abstract Supplement) 138.
Volkova E (October-2010): Virtual Storytelling of Fairy Tales: Towards Simulation of Emotional Perception of Text, 11th Conference of Junior Neuroscientists of Tübingen (NeNa 2010), Heiligkreuztal, Germany.
Wallraven C, Schultze M, Mohler B, Volkova E, Alexandrova I, Vatakis A and Pastra K (September-2010): Understanding Objects and Actions: a VR Experiment, 2010 Joint Virtual Reality Conference of EuroVR - EGVE - VEC (JVRC 2010), Stuttgart, Germany.
Volkova EP, Alexandrova IV, Bülthoff HH and Mohler BJ (August-2010): Virtual storytelling of fairy tales: Towards simulation of emotional perception of text, 33rd European Conference on Visual Perception, Lausanne, Switzerland, Perception, 39(ECVP Abstract Supplement) 31.

Theses (2):

Volkova E: Perception of Emotional Body Expressions in Narrative Scenarios and Across Cultures, Eberhard-Karls-Universität Tübingen, (November-2014). PhD thesis
Volkova EP: PETaLS: Perception of Emotions in Text - a Linguistic Simulation, Eberhards-Karls-Universität Tübingen, Germany, (October-2010). Diplom thesis

Talks (4):

Volkova EP, Mohler BJ and Bülthoff HH (November-2012) Abstract Talk: Motion Capture of Emotional Body Language in Narrative Scenarios, 13th Conference of the Junior Neuroscientists of Tübingen (NeNA 2012), Schramberg, Germany 9.
Volkova E and Mstislavski A (June-3-2012) Abstract Talk: ePETaLS: Online Annotation Tool for Emotional Text Labelling, 22. Tagung der Computerlinguistik-Studierenden (TaCoS 2012), Trier, Germany.
Volkova EP (June-2011): PETaLS: Perception of Emotions in Text - a Linguistic Simulation, 21. Tagung der Computerlinguistik Studierenden (TaCoS 2011), Giessen, Germany.
Volkova EP (June-2010): Emotional Perception of Fairy Tales: Achieving Agreement in Emotion Annotation of Text, 20. Tagung der Computerlinguistik Studierenden (TaCoS 2010), Zürich, Switzerland.

Last updated: Monday, 22.05.2017