This file was created by the Typo3 extension sevenpack version 0.7.14. Timezone: CEST. Creation date: 2017-05-23. Creation time: 22:41:41. Number of references: 94.

proceedings CunninghamIS2011
D. Cunningham, T. Isenberg, S. N. Spencer (Eds.): Proceedings of the International Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging (CAe 2011). ACM Press, New York, NY, USA, August 2011. 139 pages. Held in Vancouver, BC, Canada, as part of the Joint Symposia on Sketch-Based Interfaces and Modeling, Non-Photorealistic Animation and Computational Aesthetics (SBIM/NPAR/CAe 2011). ISBN 978-1-4503-0908-0. URL: http://dl.acm.org/citation.cfm?id=2030441
Foreword: We would like to welcome all of you to the Seventh Annual Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging (CAe). In addition to being co-located with ACM SIGGRAPH 2011, this year represents a milestone in collaboration: for the first time, CAe is being held jointly with the International ACM Symposium on Non-Photorealistic Animation and Rendering (NPAR) and the Eurographics Symposium on Sketch-Based Interfaces and Modeling (SBIM). This not only increases the audience and the pool of contributors, but also allows for more effective cross-fertilization with researchers who have similar ideas but rather different goals or methods. Computational Aesthetics' core goal remains the bridging of the analytic and the synthetic. This is achieved by integrating aspects of computer science, philosophy, psychology, and the fine, applied, and performing arts. It seeks to facilitate both the analysis and the augmentation of creative behaviors. CAe also investigates the creation of tools that can enhance the expressive power of the fine and applied arts, and furthers our understanding of aesthetic evaluation, perception, and meaning. The symposium brings together individuals with technical experience in developing computer-based tools that solve aesthetic problems and people with artistic/design backgrounds who use these new tools. This year CAe received 39 submissions in total: 24 in the technical papers track and 15 in the art papers track. Due to the limited space in the joint symposium, we were only able to accept 10 of the technical papers, resulting in an acceptance rate of roughly 42%. The technical submissions were distributed to an international program committee comprising 31 experts, resulting in at least 3 reviews for each paper. All of the accepted papers demonstrate both computational and aesthetic contributions.
proceedings 5753
D. W. Cunningham, V. Interrante, P. Brown, J. McCormack (Eds.): Computational Aesthetics 2008: Proceedings of the International Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging (CAe 2008). Eurographics Association, Aire-la-Ville, Switzerland, 2008. 159 pages. Eurographics Workshop on Computational Aesthetics in Graphics, Visualization and Imaging, Lisboa, Portugal. ISBN 978-3-905674-08-8. DOI: 10.1111/j.1467-8659.2008.01294.x. URL: http://www.computational-aesthetics.org/2008/
proceedings 5755
D. W. Cunningham, G. W. Meyer, L. Neumann, A. Dunning, R. Paricio (Eds.): Computational Aesthetics 2007: Proceedings of the Eurographics Workshop on Computational Aesthetics in Graphics, Visualization, and Imaging (CAe 2007). Eurographics Association, Aire-la-Ville, Switzerland, June 2007. 177 pages. Held in Banff, Canada. ISBN 978-3-905673-43-2. URL: http://www.computational-aesthetics.org/2007/
article CastilloWC2014
S. Castillo, C. Wallraven, D. W. Cunningham: The semantic space for facial communication. Computer Animation and Virtual Worlds 25(3-4), 225-233, May 2014. DOI: 10.1002/cav.1593.
Abstract: We can learn a lot about someone by watching their facial expressions and body language. Harnessing these aspects of non-verbal communication can lend artificial communication agents greater depth and realism, but requires a sound understanding of the relationship between cognition and expressive behaviour. Here, we extend the traditional word-based methodology to use actual videos and then extract the semantic/cognitive space of facial expressions. We find that, depending on the specific expressions used, either a four-dimensional or a two-dimensional space is needed to describe the variance in the stimuli. The shape and structure of the 4D and 2D spaces are related to each other and very stable under methodological changes. The results show that there is considerable variance in how different people express the same emotion. The recovered space captures the full range of facial communication well and is very suitable for semantic-driven facial animation.

article KaulardCBW2012
K. Kaulard, D. W. Cunningham, H. H. Bülthoff, C. Wallraven: The MPI Facial Expression Database: A Validated Database of Emotional and Conversational Facial Expressions. PLoS ONE 7(3), e32321, 1-18, March 2012. DOI: 10.1371/journal.pone.0032321.
Abstract: The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on everyday scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions.

article IsenbergC2011
T. Isenberg, D. Cunningham: Computational Aesthetics 2011 in Vancouver, Canada, August 5-7, 2011, Sponsored by Eurographics, in Collaboration with ACM SIGGRAPH. Computer Graphics Forum 30(8), 2457-2458, December 2011. DOI: 10.1111/j.1467-8659.2011.02076.x.

article 5746
T. Stich, C. Linz, C. Wallraven, D. W. Cunningham, M. Magnor: Perception-motivated interpolation of image sequences. ACM Transactions on Applied Perception 8(2), 1-28, January 2011. DOI: 10.1145/1870076.1870079.
Abstract: We present a method for image interpolation that is able to create high-quality, perceptually convincing transitions between recorded images. By implementing concepts derived from human vision, the problem of a physically correct image interpolation is relaxed to that of an image interpolation which is perceived as visually correct by human observers. We find that it suffices to focus on exact edge correspondences, homogeneous regions, and coherent motion to compute convincing results. A user study confirms the visual quality of the proposed image interpolation approach. We show how each aspect of our approach increases the perceived quality of the result. We compare the results to other methods and assess the achievable quality for different types of scenes.

article 5915
D. W. Cunningham, C. Wallraven: Dynamic information for the recognition of conversational expressions. Journal of Vision 9(13):7, 1-17, December 2009. DOI: 10.1167/9.13.7.
Abstract: Communication is critical for normal, everyday life. During a conversation, information is conveyed in a number of ways, including through body, head, and facial changes. While much research has examined these latter forms of communication, the majority of it has focused on static representations of a few, supposedly universal expressions. Normal conversations, however, contain a very wide variety of expressions and are rarely, if ever, static. Here, we report several experiments that show that expressions that use head, eye, and internal facial motion are recognized more easily and accurately than static versions of those expressions. Moreover, we demonstrate conclusively that this dynamic advantage is due to information that is only available over time, and that the temporal integration window for this information is at least 100 ms long.
article 5741
C. Wallraven, R. Fleming, D. W. Cunningham, J. Rigau, M. Feixas, M. Sbert: Categorizing art: Comparing humans and computers. Computers & Graphics 33(4), 484-495, August 2009. DOI: 10.1016/j.cag.2009.04.003.
Abstract: The categorization of art (paintings, literature) into distinct styles such as Expressionism or Surrealism has had a profound influence on how art is presented, marketed, analyzed, and historicized. Here, we present results from human and computational experiments with the goal of determining to what degree such categories can be explained by simple, low-level appearance information in the image. Following experimental methods from perceptual psychology on category formation, naive, non-expert participants were first asked to sort printouts of artworks from different art periods into categories. Converting these data into similarity data and running a multi-dimensional scaling (MDS) analysis, we found distinct categories which corresponded, sometimes surprisingly well, to canonical art periods. The result was cross-validated on two complementary sets of artworks for two different groups of participants, showing the stability of art interpretation. The second focus of this paper was on determining how far computational algorithms would be able to capture human performance, or would be able in general to separate different art categories. Using several state-of-the-art algorithms from computer vision, we found that whereas low-level appearance information can give some clues about category membership, human grouping strategies also included much higher-level concepts.

article 4599
M. Nusseck, D. W. Cunningham, C. Wallraven, H. H. Bülthoff: The contribution of different facial regions to the recognition of conversational expressions. Journal of Vision 8(8):1, 1-23, June 2008. DOI: 10.1167/8.8.1.
Abstract: The human face is an important and complex communication channel. Humans can, however, easily read in a face not only identity information, but also facial expressions, with high accuracy. Here, we present the results of four psychophysical experiments in which we systematically manipulated certain facial areas in video sequences of nine conversational expressions to investigate recognition performance and its dependency on the motions of different facial parts. These studies allowed us to determine what information is necessary and sufficient to recognize the different facial expressions. Subsequent analyses of the face movements and correlation with recognition performance show that, for some expressions, one individual facial region can represent the whole expression. In other cases, the interaction of more than one facial area is needed to clarify the expression. The full set of results is used to develop a systematic description of the roles of different facial parts in the visual perception of conversational facial expressions.

article 5750
D. W. Cunningham: Visual prediction as indicated by perceptual adaptation to temporal delays and discrete stimulation. Behavioral and Brain Sciences 31(2), 203-204, April 2008. DOI: 10.1017/S0140525X08003865.

article 3996
C. Wallraven, M. Breidt, D. W. Cunningham, H. H. Bülthoff: Evaluating the perceptual realism of animated facial expressions. ACM Transactions on Applied Perception 4(4):4, 1-20, January 2008. DOI: 10.1145/1278760.1278764.
Abstract: The human face is capable of producing an astonishing variety of expressions - expressions for which sometimes the smallest difference changes the perceived meaning considerably. Producing realistic-looking facial animations that are able to transport this degree of complexity continues to be a challenging research topic in computer graphics. One important question that remains to be answered is: When are facial animations good enough? Here we present an integrated framework in which psychophysical experiments are used in a first step to systematically evaluate the perceptual quality of several different computer-generated animations with respect to real-world video sequences. The first experiment provides an evaluation of several animation techniques, exposing specific animation parameters that are important to achieve perceptual fidelity. In a second experiment, we then use these benchmarked animation techniques in the context of perceptual research in order to systematically investigate the spatio-temporal characteristics of expressions. A third and final experiment uses the quality measures that were developed in the first two experiments to examine the perceptual impact of changing facial features to improve the animation techniques. Using such an integrated approach, we are able to provide important insights into facial expressions for both the perceptual and computer graphics community.
article 4592
C. Wallraven, H. H. Bülthoff, J. Fischer, D. W. Cunningham, D. Bartz: Evaluation of real-world and computer-generated stylized facial expressions. ACM Transactions on Applied Perception 4(3):16, 1-24, November 2007. DOI: 10.1145/1278387.1278390.
Abstract: The goal of stylization is to provide an abstracted representation of an image that highlights specific types of visual information. Recent advances in computer graphics techniques have made it possible to render many varieties of stylized imagery efficiently, making stylization a useful technique not only for artistic but also for visualization applications. In this paper, we report results from two sets of experiments that aim at characterizing the perceptual impact and effectiveness of three different stylization techniques in the context of dynamic facial expressions. In the first set of experiments, animated facial expressions are stylized using three common techniques (brush, cartoon, and illustrative stylization) and investigated using different experimental measures. Going beyond the usual questionnaire approach, these experiments compare the techniques according to several criteria ranging from subjective preference to task-dependent measures (such as recognizability and intensity), allowing us to compare behavioral and introspective approaches. The second set of experiments uses the same stylization techniques on real-world video sequences in order to compare the effect of stylization on natural and artificial stimuli. Our results shed light on how stylization of image contents affects the perception and subjective evaluation of both real and computer-generated facial expressions.

article 3768
B. E. Riecke, D. W. Cunningham, H. H. Bülthoff: Spatial updating in virtual reality: the sufficiency of visual information. Psychological Research 71(3), 298-313, September 2006. DOI: 10.1007/s00426-006-0085-z. PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/Riecke__06_PsychologicalResearch_onlinePublication__Spatial_Updating_in_Virtual_Reality_-_The_Sufficiency_of_Visual_Information_3768[0].pdf
Abstract: Robust and effortless spatial orientation critically relies on "automatic and obligatory spatial updating", a largely automatized and reflex-like process that transforms our mental egocentric representation of the immediate surroundings during ego-motions. A rapid pointing paradigm was used to assess automatic/obligatory spatial updating after visually displayed upright rotations with or without concomitant physical rotations using a motion platform. Visual stimuli displaying a natural, subject-known scene proved sufficient for enabling automatic and obligatory spatial updating, irrespective of concurrent physical motions. This challenges the prevailing notion that visual cues alone are insufficient for enabling such spatial updating of rotations, and that vestibular/proprioceptive cues are both required and sufficient. Displaying optic flow devoid of landmarks during the motion and pointing phase was insufficient for enabling automatic spatial updating, but could not be entirely ignored either. Interestingly, additional physical motion cues hardly improved performance, and were insufficient for affording automatic spatial updating. The results are discussed in the context of the mental transformation hypothesis and the sensorimotor interference hypothesis, which associates difficulties in imagined perspective switches to interference between the sensorimotor and the cognitive (to-be-imagined) perspective.

article 3540
D. Cunningham, M. Kleiner, C. Wallraven, H. H. Bülthoff: Manipulating video sequences to determine the components of conversational facial expressions. ACM Transactions on Applied Perception 2(3), 251-269, July 2005. DOI: 10.1145/1077399.1077404. PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/cunningham-etal-ACM-tap-2005_3540[0].pdf
Abstract: Communication plays a central role in everyday life. During an average conversation, information is exchanged in a variety of ways, including through facial motion. Here, we employ a custom, model-based image manipulation technique to selectively "freeze" portions of a face in video recordings in order to determine the areas that are sufficient for proper recognition of nine conversational expressions. The results show that most expressions rely primarily on a single facial area to convey meaning, with different expressions using different areas. The results also show that the combination of rigid head, eye, eyebrow, and mouth motions alone is sufficient to produce expressions that are as easy to recognize as the original, unmanipulated recordings. Finally, the results show that the manipulation technique introduced few perceptible artifacts into the altered video sequences. This fusion of psychophysics and computer graphics techniques provides not only fundamental insights into human perception and cognition, but also yields the basis for a systematic description of what needs to move in order to produce realistic, recognizable conversational facial animations.

article 2867
D. Cunningham, M. Nusseck, C. Wallraven, H. H. Bülthoff: The role of image size in the recognition of conversational facial expressions. Computer Animation & Virtual Worlds 15(3-4), 305-310, July 2004. PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2867.pdf
Abstract: Facial expressions can be used to direct the flow of a conversation as well as to improve the clarity of communication. The critical physical differences between expressions can, however, be small and subtle. Clear presentation of facial expressions in applied settings, then, would seem to require a large conversational agent. Given that visual displays are generally limited in size, the usage of a large conversational agent would reduce the amount of space available for the display of other information. Here, we examine the role of image size in the recognition of facial expressions. The results show that conversational facial expressions can be easily recognized at surprisingly small image sizes.

article 2335
J. P. de Ruiter, S. Rossignol, L. Vuurpijl, D. W. Cunningham, W. J. M. Levelt: SLOT: A research platform for investigating multimodal communication. Behavior Research Methods, Instruments, & Computers 35(3), 408-419, 2003.
article 1827
W. O. Readinger, A. Chatziastros, D. W. Cunningham, H. H. Bülthoff, J. E. Cutting: Gaze-eccentricity effects on road position and steering. Journal of Experimental Psychology: Applied 8(4), 247-258, December 2002. DOI: 10.1037/1076-898X.8.4.247. PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf1827.pdf
Abstract: The effects of gaze eccentricity on the steering of an automobile were studied. Drivers performed an attention task while attempting to drive down the middle of a straight road in a simulation. Steering was biased in the direction of fixation, and deviation from the center of the road was proportional to the gaze direction until saturation at approximately 15 degrees of gaze angle from straight ahead. This effect remained when the position of the head was controlled and a reverse-steering task was used. Furthermore, the effect was not dependent upon speed, but reversed when the forward movement of the driver was removed from the simulation. Thus, small deviations in a driver's gaze can lead to significant impairments of the ability to drive a straight course.

article 34
D. W. Cunningham, A. Chatziastros, M. von der Heyde, H. H. Bülthoff: Driving in the future: Temporal visuomotor adaptation and generalization. Journal of Vision 1(2):3, 88-98, November 2001. DOI: 10.1167/1.2.3. PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf34.pdf
Abstract: Rapid and accurate visuomotor coordination requires tight spatial and temporal sensorimotor synchronization. The introduction of a sensorimotor or intersensory misalignment (either spatial or temporal) impairs performance on most tasks. For more than a century, it has been known that a few minutes of exposure to a spatial misalignment can induce a recalibration of sensorimotor spatial relationships, a phenomenon that may be referred to as spatial visuomotor adaptation. Here, we use a high-fidelity driving simulator to demonstrate that the sensorimotor system can adapt to temporal misalignments on very complex tasks, a phenomenon that we refer to as temporal visuomotor adaptation. We demonstrate that adapting on a single street produces an adaptive state that generalizes to other streets. This shows that temporal visuomotor adaptation is not specific to a single visuomotor transformation, but generalizes across a class of transformations. Temporal visuomotor adaptation is strikingly parallel to spatial visuomotor adaptation, and has strong implications for the understanding of visuomotor coordination and intersensory integration.

article 1240
D. W. Cunningham, V. A. Billock, B. H. Tsou: Sensorimotor adaptation to violations of temporal contiguity. Psychological Science 12(6), 532-535, November 2001. URL: http://www.jstor.org/stable/40063683. PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf1240.pdf
Abstract: Most events are processed by a number of neural pathways. These pathways often differ considerably in processing speed. Thus, coherent perception requires some form of synchronization mechanism. Moreover, this mechanism must be flexible, because neural processing speed changes over the life of an organism. Here we provide behavioral evidence that humans can adapt to a new intersensory temporal relationship (which was artificially produced by delaying visual feedback). The conflict between these results and previous work that failed to find such improvements can be explained by considering the present results as a form of sensorimotor adaptation.

article 1238
V. A. Billock, D. W. Cunningham, P. R. Havig, B. H. Tsou: Perception of spatiotemporal random fractals: an extension of colorimetric methods to the study of dynamic texture. Journal of the Optical Society of America A 18(10), 2404-2413, October 2001. DOI: 10.1364/JOSAA.18.002404.
Abstract: Recent work establishes that static and dynamic natural images have fractal-like 1/f spatiotemporal spectra. Artificial textures with randomized phase spectra and 1/f amplitude spectra are also used in studies of texture and noise perception. Influenced by colorimetric principles and motivated by the ubiquity of 1/f spatial and temporal image spectra, we treat the spatial and temporal frequency exponents as the dimensions characterizing a dynamic texture space, and we characterize two key attributes of this space, the spatiotemporal appearance map and the spatiotemporal discrimination function (a map of MacAdam-like just-noticeable-difference contours).

article 1239
D. W. Cunningham: The central role of time in an identity decision theory. Cahiers de Psychologie Cognitive 20(3-4), 209-213, August 2001.

article 1242
D. Field, T. F. Shipley, D. W. Cunningham: Prism adaptation to dynamic events. Perception & Psychophysics 61(1), 161-176, 1999.

article 1244
D. W. Cunningham, T. F. Shipley, P. J. Kellman: Interactions between spatial and spatiotemporal information in spatiotemporal boundary formation. Perception & Psychophysics 60(5), 839-851, 1998.

article 1243
D. W. Cunningham, T. F. Shipley, P. J. Kellman: The dynamic specification of surfaces and boundaries. Perception 27(4), 403-416, 1998.
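Note: entry 1238 above builds its dynamic texture space from textures with random phase spectra and 1/f^alpha amplitude spectra. A minimal sketch of how such a (static) texture can be synthesized, assuming NumPy; the size and exponent values are freely chosen and not taken from the paper:

    import numpy as np

    def fractal_texture(size=256, alpha=1.0, seed=None):
        """Random-phase noise image with a 1/f^alpha amplitude spectrum."""
        rng = np.random.default_rng(seed)
        fy = np.fft.fftfreq(size)[:, None]
        fx = np.fft.fftfreq(size)[None, :]
        freq = np.hypot(fx, fy)
        freq[0, 0] = 1.0                      # avoid division by zero at DC
        amplitude = 1.0 / freq ** alpha       # power-law amplitude falloff
        phase = rng.uniform(0.0, 2.0 * np.pi, (size, size))
        img = np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))
        return (img - img.min()) / (img.max() - img.min())  # scale to [0, 1]

Extending this to spatiotemporal stimuli amounts to drawing a 3D frequency grid and applying separate spatial and temporal exponents, which matches the two-exponent parameterization the paper treats as the dimensions of its texture space.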
inproceedings CunninghamW2013
D. W. Cunningham, C. Wallraven: Understanding and designing perceptual experiments. In: D. Gutierrez, K. Myszkowski (Eds.), Eurographics 2013: Tutorials. European Association for Computer Graphics, Geneve, Switzerland, May 2013. 34th Annual Conference of the European Association for Computer Graphics, Girona, Spain. DOI: 10.2312/conf/EG2013/tutorials/t3. URL: https://diglib.eg.org/handle/10.2312/conf.EG2013.tutorials.t3
Abstract: Humans and computers both have limited resources with which they must process the massive amount of information present in the natural world. For over 150 years, physiologists and psychologists have been performing experiments to elucidate what information humans and animals can detect, as well as how they extract, represent, and process that information. Recently, there has been an increasing trend of computer scientists performing similar experiments, although often with quite different goals. This tutorial will provide a basic background on the design and execution of perceptual experiments for the practicing computer scientist.
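Note: entry CunninghamW2013 above concerns the design and analysis of perceptual experiments. As a small, generic illustration of the kind of analysis such experiments involve (not material taken from the tutorial itself), here is a standard signal-detection sensitivity (d') computation with a log-linear correction; the trial counts are invented:

    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        # The log-linear correction keeps both rates away from 0 and 1,
        # so norm.ppf stays finite even for perfect performance.
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    print(d_prime(45, 5, 10, 40))   # roughly 2.1 for these invented counts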
inproceedings 656
W. Readinger, A. Chatziastros, D. Cunningham, H. H. Bülthoff, J. E. Cutting: Gaze-direction and steering effects while driving. In: A. C. Gale, J. Bloomfield, G. Underwood, J. Wood (Eds.), Vision in Vehicles IX. Applied Vision Research Centre, Loughborough, UK, 2012, 321-326. 9th International Conference Vision in Vehicles (VIV 2001), Brisbane, Australia. ISBN 978-0-9571266-1-9. URL: http://store.lboro.ac.uk/browse/extra_info.asp?compid=1&modid=1&deptid=221&catid=216&prodid=841. PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/VIV-2001-Readinger.pdf
Abstract: Instructions given to novices learning certain tasks of applied navigation often suggest that gaze direction ("line of sight") should preview the path the operator desires to take (e.g., Bondurant & Blakemore, 1998; Motorcycle Safety Foundation, 1992; Morris, 1990), presumably because looking behavior can ultimately affect steering control through hand, arm, or leg movements that could lead to undesired path deviations. Here, we control participants' gaze direction while driving an automobile in virtual reality, and find that gaze eccentricity has a large, systematic effect on steering and lane position. Moreover, even when head position and postural effects of the driver are controlled, there remains a significant bias to drive in the direction of fixation, indicating the existence of a perceptual, and not merely motor, phenomenon.
inproceedings 5944
Z. Salah, D. W. Cunningham, D. Bartz: Perzeptuell motivierte illustrative Darstellungsstile für komplexe Modelle [Perceptually motivated illustrative rendering styles for complex models]. In: H. Wandke, S. Kain, D. Struve (Eds.), Grenzenlos frei? Logos, Berlin, Germany, September 2009, 311-316. Workshop at the Mensch & Computer 2009 conference, Gesellschaft für Informatik, Berlin, Germany. In German. ISBN 978-3-8325-2181-3. URL: http://www2.hu-berlin.de/mc2009/. PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/MenschUndComputer2009-Salah_5944[0].pdf
Abstract (translated from German): Illustrations are successfully used in engineering, the natural sciences, and medicine for the abstracted depiction of objects and situations. Typically, these illustrations are drawings in which the illustrator employs artistic techniques to emphasize the relevant aspects of the objects. In contrast, visualizations produce a direct, non-abstracted visual representation of simulations, scanned objects, or modeled data. Due to the inherent complexity of these datasets, however, interpreting the data often proves difficult. Illustrative visualization, in turn, attempts to combine both approaches into an abstracted depiction of a dataset in which the essential characteristics are emphasized. This approach becomes particularly important for very complex models that may consist of many individual objects, such as the individual parts of a machine or segmented organs from a CT or MRI dataset of a human. While illustrative visualization generally achieves a better emphasis of selected, relevant information than traditional visualization, many closely spaced objects pose a challenge, since they must be clearly separated from one another.
inproceedings 5936
D. W. Cunningham, C. Wallraven: The interaction between motion and form in expression recognition. In: K. Mania, B. E. Riecke, S. N. Spencer, B. Bodenheimer, C. O'Sullivan (Eds.), 6th Symposium on Applied Perception in Graphics and Visualization (APGV 2009). ACM Press, New York, NY, USA, September 2009, 41-44. Chania, Crete, Greece. ISBN 978-1-60558-743-1. DOI: 10.1145/1620993.1621002.
Abstract: Faces are a powerful and versatile communication channel. Physically, facial expressions contain a considerable amount of information, yet it is clear from stylized representations such as cartoons that not all of this information needs to be present for efficient processing of communicative intent. Here, we use a high-fidelity facial animation system to investigate the importance of two forms of spatial information (connectivity and the number of vertices) for the perception of intensity and the recognition of facial expressions. The simplest form of connectivity is point-light faces. Since they show only the vertices, the motion and configuration of features can be seen but the higher-frequency spatial deformations cannot. In wireframe faces, additional information about spatial configuration and deformation is available. Finally, full-surface faces have the highest degree of static information. The results of two experiments are presented. In the first, the presence of motion was manipulated. In the second, the size of the images was varied. Overall, dynamic expressions performed better than static expressions and were largely impervious to the elimination of shape or connectivity information. Decreasing the size of the image had little effect until a critical size was reached. These results add to a growing body of evidence that shows the critical importance of dynamic information for the processing of facial expressions: as long as motion information is present, very little spatial information is required.
inproceedings 5740
C. Wallraven, D. W. Cunningham, J. Rigau, M. Feixas, M. Sbert: Aesthetic appraisal of art: from eye movements to computers. In: O. Deussen, D. W. Fellner, N. A. Dodgson (Eds.), Computational Aesthetics 2009: Eurographics Workshop on Computational Aesthetics in Graphics, Visualization and Imaging. Eurographics, Aire-la-Ville, Switzerland, May 2009, 137-144. Victoria, BC, Canada. ISBN 978-3-905674-17-0. DOI: 10.2312/COMPAESTH/COMPAESTH09/137-144. PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/CAe2009-Wallraven_5740[0].pdf
Abstract: By looking at a work of art, an observer enters into a dialogue. In this work, we attempt to analyze this dialogue with both behavioral and computational tools. In two experiments, observers were asked to look at a large number of paintings from different art periods and to rate their visual complexity or their aesthetic appeal. During these two tasks, their eye movements were recorded. The complexity and aesthetic ratings show clear preferences for certain artistic styles and were based on both low-level and high-level criteria. Eye movements reveal the time course of the aesthetic dialogue as observers try to interpret and understand the painting. Computational analyses of both the ratings (using measures derived from information theory) and the eye tracking data (using two models of saliency) showed that our computational tools are already able to explain some properties of this dialogue.
inproceedings 5164
T. Stich, C. Linz, C. Wallraven, D. W. Cunningham, M. Magnor: Perception-motivated interpolation of image sequences. In: S. H. Creem-Regehr, K. Myszkowski (Eds.), 5th Symposium on Applied Perception in Graphics and Visualization (APGV 2008). ACM Press, New York, NY, USA, August 2008, 97-106. Los Angeles, CA, USA. ISBN 978-1-59593-981-4. DOI: 10.1145/1394281.1394299.
Abstract: We present a method for image interpolation which is able to create high-quality, perceptually convincing transitions between recorded images. By implementing concepts derived from human vision, the problem of a physically correct image interpolation is relaxed to an image interpolation that is perceived as physically correct by human observers. We find that it suffices to focus on exact edge correspondences, homogeneous regions, and coherent motion to compute such solutions. In our user study we confirm the visual quality of the proposed image interpolation approach. We show how each aspect of our approach increases the perceived quality of the interpolation results, compare the results obtained by other methods, and investigate the achieved quality for different types of scenes.
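Note: entries 5746 and 5164 describe a perception-motivated interpolation built on exact edge correspondences, homogeneous regions, and coherent motion; that method is not reproduced here. For contrast, a naive dense-flow blend of the sort such work improves upon can be sketched with OpenCV (all parameter values are arbitrary):

    import cv2
    import numpy as np

    def naive_interpolate(frame_a, frame_b, t=0.5):
        """Crude in-between frame: warp both frames along dense flow, then blend."""
        gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
        gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = gray_a.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        # Approximate warping by sampling each frame along the flow field.
        ax = (grid_x + t * flow[..., 0]).astype(np.float32)
        ay = (grid_y + t * flow[..., 1]).astype(np.float32)
        bx = (grid_x - (1.0 - t) * flow[..., 0]).astype(np.float32)
        by = (grid_y - (1.0 - t) * flow[..., 1]).astype(np.float32)
        warped_a = cv2.remap(frame_a, ax, ay, cv2.INTER_LINEAR)
        warped_b = cv2.remap(frame_b, bx, by, cv2.INTER_LINEAR)
        return cv2.addWeighted(warped_a, 1.0 - t, warped_b, t, 0)

The ghosting and edge artifacts such a baseline tends to produce illustrate why the papers put their emphasis on exact edge correspondences and coherent motion.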
inproceedings 5163
C. Wallraven, D. W. Cunningham, R. Fleming: Perceptual and computational categories in art. In: D. W. Cunningham, V. Interrante, P. Brown, J. McCormack (Eds.), Computational Aesthetics 2008: Eurographics Workshop on Computational Aesthetics (CAe 2008). Eurographics Association, Aire-la-Ville, Switzerland, June 2008, 131-138. Lisboa, Portugal. ISBN 978-3-905674-08-8. DOI: 10.2312/COMPAESTH/COMPAESTH08/131-138. PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/CAE2008-Wallraven_5163[0].pdf
Abstract: The categorization of art (paintings, literature) into distinct styles such as expressionism or surrealism has had a profound influence on how art is presented, marketed, analyzed, and historicized. Here, we present results from several perceptual experiments with the goal of determining whether such categories also have a perceptual foundation. Following experimental methods from perceptual psychology on category formation, naive, non-expert participants were asked to sort printouts of artworks from different art periods into categories. Converting these data into similarity data and running a multi-dimensional scaling (MDS) analysis, we found distinct perceptual categories which did in some cases correspond to canonical art periods. Initial results from a comparison with several computational algorithms for image analysis and scene categorization are also reported.
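Note: entries 5741 and 5163 both convert participants' sorting data into similarities and run multi-dimensional scaling (MDS). A minimal sketch of that analysis step with scikit-learn, using a random stand-in dissimilarity matrix where the papers derive one from how often two artworks were sorted into different piles:

    import numpy as np
    from sklearn.manifold import MDS

    rng = np.random.default_rng(0)
    n_artworks = 8
    D = rng.random((n_artworks, n_artworks))   # stand-in dissimilarities
    D = (D + D.T) / 2.0                        # symmetrize
    np.fill_diagonal(D, 0.0)                   # zero self-dissimilarity

    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(D)              # one 2D point per artwork
    print(coords.shape, round(mds.stress_, 3))

Clusters in the recovered configuration are then compared against canonical art periods, which is the step where the perceptual categories reported in these papers emerge.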
inproceedings 5161
D. Bartz, D. W. Cunningham, J. Fischer, C. Wallraven: The role of perception for computer graphics. In: D. Drettakis (Ed.), Eurographics 2008. Blackwell, Oxford, United Kingdom, April 2008, 65-86. 29th Annual Conference of the European Association for Computer Graphics (EG 2008), Crete, Greece. DOI: 10.2312/egst.20081045. PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/Eurographics08-Bartz_5161[0].pdf
Abstract: Traditionally, computer graphics strived to achieve the technically best representation of the scenario or scene. For rendering, this led to the preeminence of representations based on the physics of light interacting with different media and materials. Research in virtual reality has focused on interactivity and therefore on real-time rendering techniques that improve the immersion of users in the virtual environments. In contrast, visualization has focused on representations that maximize the information content. In most cases, such representations are not physically based, requiring instead more abstract approaches. Recently, the increasing integration of the extensive knowledge and methods from perception research into computer graphics has fundamentally altered both fields, offering not only new research questions, but also new ways of solving existing issues. In rendering, for example, the integration can lead to the targeted allocation of computing resources to aspects of a scene that matter most for human observers. In visualization, the manner in which information is presented is now often driven by knowledge of low-level cues (e.g., pre-attentive features). Assumptions about how to best present information are evaluated by psychophysical experiment. This same trend towards perceptually driven research has perhaps had the longest tradition in virtual reality, where the user's response to specific interaction and rendering techniques is examined using a variety of methods. Against this backdrop of an increasing importance of perceptual research in all areas related to computer-generated imagery, we provide a state-of-the-art report on the current state of perception in computer graphics.
inproceedings 5497
M. Nusseck, D. W. Cunningham, J. P. de Ruiter, H. H. Bülthoff: Perception of prominence intensity in audio-visual speech. In: J. Vroomen, M. Swerts, E. Krahmer (Eds.), International Conference on Auditory-Visual Speech Processing 2007 (AVSP 2007), Hilvarenbeek, Netherlands, September 2007, 1-6. PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/AVSP-2007-Nusseck.pdf
Abstract: Multimodal prosody carries a wide variety of information. Here, we investigated the roles of visual and auditory information in the production and perception of different emphasis intensities. In a series of video recordings, the intensity, location, and syntactic category of the emphasized word were varied. Physical analyses demonstrated that each speaker produced different emphasis intensities, with a high degree of individual variation in information distribution. In the first psychophysical experiment, observers easily distinguished between the different intensities. Interestingly, the pattern of perceived intensity was remarkably similar across speakers, despite the individual variations in the use of the different visual and acoustic modalities. The second experiment presented the recordings visually, acoustically, and audiovisually. Overall, while the audio-only condition was very similar to the audiovisual condition, there was a clear influence of visual information: weak visual information led to a weaker audiovisual intensity, while strong visual information enhanced audiovisual intensity.

inproceedings 4465
R. T. Griesser, D. W. Cunningham, C. Wallraven, H. H. Bülthoff: Psychophysical investigation of facial expressions using computer animated faces. In: C. Wallraven, V. Sundstedt (Eds.), 4th Symposium on Applied Perception in Graphics and Visualization (APGV 2007). ACM Press, New York, NY, USA, July 2007, 11-18. Tübingen, Germany. ISBN 978-1-59593-670-7. DOI: 10.1145/1272582.1272585. PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/apgv07-11_4465[0].pdf
Abstract: The human face is capable of producing a large variety of facial expressions that supply important information for communication. As was shown in previous studies using unmanipulated video sequences, movements of single regions like mouth, eyes, and eyebrows as well as rigid head motion play a decisive role in the recognition of conversational facial expressions. Here, flexible but at the same time realistic computer animated faces were used to investigate the spatiotemporal coaction of facial movements systematically. For three psychophysical experiments, spatiotemporal properties were manipulated in a highly controlled manner. First, single regions (mouth, eyes, and eyebrows) of a computer animated face performing seven basic facial expressions were selected. These single regions, as well as combinations of these regions, were animated for each of the seven chosen facial expressions. Participants were then asked to recognize these animated expressions in the experiments. The findings show that the animated avatar in general is a useful tool for the investigation of facial expressions, although improvements have to be made to reach a higher recognition accuracy of certain expressions. Furthermore, the results shed light on the importance and interplay of individual facial regions for recognition. With this knowledge the perceptual quality of computer animations can be improved in order to reach a higher level of realism and effectiveness.
inproceedings 4467
D. W. Cunningham, C. Wallraven, R. W. Fleming, W. Strasser: Perceptual reparameterization of material properties. In: D. W. Cunningham, G. W. Meyer, L. Neumann, A. Dunning, R. Paricio (Eds.), Computational Aesthetics 2007: Eurographics Workshop on Computational Aesthetics in Graphics, Visualization and Imaging (CAe 2007). Eurographics Association, Aire-la-Ville, Switzerland, June 2007, 89-96. Banff, Alberta, Canada. ISBN 978-3-905673-43-2. DOI: 10.2312/COMPAESTH/COMPAESTH07/089-096.
Abstract: The recent increase in both the range and the subtlety of computer graphics techniques has greatly expanded the possibilities for synthesizing images. In many cases, however, the relationship between the parameters of an algorithm and the resulting perceptual effect is not straightforward. Since the ability to produce specific, intended effects is a natural prerequisite for many scientific and artistic endeavors, this is a strong drawback. Here, we demonstrate a generalized method for determining both the qualitative and quantitative mapping between parameters and perception. Multidimensional scaling extracts the metric structure of perceived similarity between the objects, as well as the transformation between similarity space and parameter space. Factor analysis of semantic differentials is used to determine the aesthetic structure of the stimulus set. Jointly, the results provide a description of how specific parameter changes can produce specific semantic changes. The method is demonstrated using two datasets. The first dataset consisted of glossy objects, which turned out to have a 2D similarity space and five primary semantic factors. The second dataset, transparent objects, can be described with a non-linear, 1D similarity map and six semantic factors. In both cases, roughly half of the factors represented aesthetic aspects of the stimuli, and half the low-level material properties. Perceptual reparameterization of computer graphics algorithms (such as those dealing with the representation of surface properties) offers the potential to improve their accessibility. This will not only allow easier generation of specific effects, but also enable more intuitive exploration of different image properties.
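Note: entry 4467 pairs MDS with a factor analysis of semantic-differential ratings. A sketch of the factor-analysis half with scikit-learn, run on random stand-in ratings (stimuli x adjective scales) rather than the paper's data; the number of factors is likewise illustrative:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(1)
    ratings = rng.normal(size=(40, 12))    # 40 stimuli rated on 12 scales

    fa = FactorAnalysis(n_components=5, random_state=0).fit(ratings)
    loadings = fa.components_              # factors x scales
    print(loadings.shape)                  # (5, 12)

Inspecting which adjective scales load on which factor is what separates the aesthetic factors from the low-level material-property factors reported in the abstract.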
inproceedings 3984
C. Wallraven, J. Fischer, D. W. Cunningham, D. Bartz, H. H. Bülthoff: The evaluation of stylized facial expressions. In: R. W. Fleming, S. Kim (Eds.), 3rd Symposium on Applied Perception in Graphics and Visualization (APGV 2006). ACM Press, New York, NY, USA, July 2006, 85-92. Boston, MA, USA. ISBN 1-59593-429-4. DOI: 10.1145/1140491.1140509. PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/p85-wallraven_3984[0].pdf
Abstract: Stylized rendering aims to abstract the information in an image, making it useful not only for artistic but also for visualization purposes. Recent advances in computer graphics techniques have made it possible to render many varieties of stylized imagery efficiently. So far, however, few attempts have been made to characterize the perceptual impact and effectiveness of stylization. In this paper, we report several experiments that evaluate three different stylization techniques in the context of dynamic facial expressions. Going beyond the usual questionnaire approach, the experiments compare the techniques according to several criteria ranging from introspective measures (subjective preference) to task-dependent measures (recognizability, intensity). Our results shed light on how stylization of image contents affects the perception and subjective evaluation of facial expressions.
inproceedings 3876
J. Fischer, D. Cunningham, D. Bartz, C. Wallraven, H. H. Bülthoff, W. Strasser: Measuring the discernability of virtual objects in conventional and stylized augmented reality. In: R. Hubbold (Ed.), 12th Eurographics Symposium on Virtual Environments (EGVE '06). ACM Press, New York, NY, USA, May 2006, 53-61. Lisboa, Portugal. ISBN 3-905673-33-9. DOI: 10.2312/EGVE/EGVE06/053-061. PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/Fischer-2006-Measuring_3876[0].pdf
Abstract: In augmented reality, virtual graphical objects are overlaid over the real environment of the observer. Conventional augmented reality systems normally use standard real-time rendering methods for generating the graphical representations of virtual objects. These renderings contain the typical artifacts of computer-generated graphics, e.g., aliasing caused by the rasterization process and unrealistic, manually configured illumination models. Due to these artifacts, virtual objects look artificial and can easily be distinguished from the real environment. A different approach to generating augmented reality images is the basis of stylized augmented reality [FBS05c]. Here, similar types of artistic or illustrative stylization are applied to the virtual objects and the camera image of the real environment. Therefore, real and virtual image elements look significantly more similar and are less distinguishable from each other. In this paper, we present the results of a psychophysical study on the effectiveness of stylized augmented reality. In this study, a number of participants were asked to decide whether objects shown in images of augmented reality scenes are virtual or real. Conventionally rendered as well as stylized augmented reality images and short video clips were presented to the participants. The correctness of the participants' responses and their reaction times were recorded. The results of our study show that an equalized level of realism is achieved by using stylized augmented reality, i.e., that it is significantly more difficult to distinguish virtual objects from real objects.
inproceedings 3913
D. W. Cunningham, H.-G. Nusseck, H. Teufel, C. Wallraven, H. H. Bülthoff: A psychophysical examination of swinging rooms, cylindrical virtual reality setups, and characteristic trajectories. In: IEEE Virtual Reality Conference 2006. IEEE, Piscataway, NJ, USA, March 2006, 111-118. Alexandria, VA, USA. ISBN 1-4244-0224-7. DOI: 10.1109/VR.2006.14.
Abstract: Virtual Reality (VR) is increasingly being used in industry, medicine, entertainment, education, and research. It is generally critical that the VR setups produce behavior that closely resembles real world behavior. One part of any task is the ability to control our posture. Since postural control is well studied in the real world and is known to be strongly influenced by visual information, it is an ideal metric for examining the behavioral fidelity of VR setups. Moreover, VR-based experiments on postural control can provide fundamental new insights into human perception and cognition. Here, we employ the "swinging room paradigm" to validate a specific VR setup. Furthermore, we systematically examined a larger range of room oscillations than previously studied in any single setup. We also introduce several new methods and analyses that were specifically designed to optimize the detection of synchronous swinging between the observer and the virtual room. The results show that the VR setup has a very high behavioral fidelity and that increases in swinging room amplitude continue to produce increases in body sway even at very large room displacements (+/- 80 cm). Finally, the combination of new methods proved to be a very robust, reliable, and sensitive way of measuring body sway.
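Note: entry 3913 introduces methods for detecting synchronous swinging between a virtual room and the observer's body sway. One conventional building block for such a detection (not necessarily the paper's own method) is the normalized cross-correlation between the two displacement signals; the signals below are synthetic stand-ins:

    import numpy as np

    fs = 100.0                                        # assumed sample rate (Hz)
    t = np.arange(0.0, 60.0, 1.0 / fs)
    room = 0.8 * np.sin(2.0 * np.pi * 0.2 * t)        # room oscillation (m)
    sway = 0.1 * np.sin(2.0 * np.pi * 0.2 * t - 0.5)  # body sway, phase-shifted

    a = (room - room.mean()) / room.std()
    b = (sway - sway.mean()) / sway.std()
    xcorr = np.correlate(a, b, mode="full") / len(a)
    peak = xcorr.max()                                # strength of coupling
    lag = (np.argmax(xcorr) - (len(a) - 1)) / fs      # lag of the peak (s)
    print(round(peak, 3), lag)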
inproceedings 3541
C. Wallraven, M. Breidt, D. Cunningham, H. H. Bülthoff: Psychophysical evaluation of animated facial expressions. In: H. H. Bülthoff, T. Troscianko (Eds.), 2nd Symposium on Applied Perception in Graphics and Visualization (APGV 2005). ACM Press, New York, NY, USA, August 2005, 17-24. ACM Special Interest Group on Computer Graphics and Interactive Techniques. La Coruña, Spain. ISBN 1-59593-139-2. DOI: 10.1145/1080402.1080405. PDF: http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/wallraven-etal-apgv-2005_[0].pdf
Abstract: The human face is capable of producing an astonishing variety of expressions - expressions for which sometimes the smallest difference changes the perceived meaning noticeably. Producing realistic-looking facial animations that are able to transport this degree of complexity continues to be a challenging research topic in computer graphics. One important question that remains to be answered is: When are facial animations good enough? Here we present an integrated framework in which psychophysical experiments are used in a first step to systematically evaluate the perceptual quality of computer-generated animations with respect to real-world video sequences. The result of the first experiment is an evaluation of several animation techniques in which we expose specific animation parameters that are important for perceptual fidelity. In a second experiment we then use these benchmarked animations in the context of perceptual research in order to systematically investigate the spatio-temporal characteristics of expressions. Using such an integrated approach, we are able to provide insights into facial expressions for both the perceptual and computer graphics community.
inproceedings 5979 Human-Centred Fidelity Metrics for Virtual Environment Simulations 2005 3 308 It is increasingly important to provide fidelity metrics for rendered images and interactive virtual environments (VEs) targeting transfer of training in real-world task situations. Computational metrics which aim to predict the degree of fidelity of a rendered image can be based on psychophysical observations. For interactive simulations, psychophysical investigations can be carried out into the degree of similarity between the original and a synthetic simulation. Psychophysics comprises a collection of methods used to conduct non-invasive experiments on humans, the purpose of which is to study mappings between events in an environment and levels of sensory responses to those events. This tutorial will present the techniques and principles towards conducting psychophysical studies, for assessing image quality as well as fidelity of a VE simulation and how results from such studies contribute to VE system design as well as to computational image quality metrics. Methods based on experiments for measuring the perceptual equivalence between a real scene and a computer simulation of the same scene will be reported. These methods are presented through the study of human vision and include using photorealistic computer graphics to depict complex environments and works of art. In addition, physical and psychophysical fidelity issues in the assessment of virtual environments will be emphasised. Specifications for correct matching between the psychophysical characteristics of the displays and the human users’ sensory and motor systems will be discussed as well as some examples of the consequences when systems fail to be physically well matched to their users. Complete experimental cycles will be described from the initial idea and design, to pilot study, experimental redesign, data collection, analysis and post-experiment lessons learned. Examples will include research on spatial orientation in Virtual Reality, assessing fidelity of flight simulators, fidelity of simulation of humans and clothing and measuring perceptual sensitivity to latency. This tutorial requires no fundamental prerequisites. It would help if the attendee had some knowledge of experimental design, and some personal experience of computer graphics and simulation systems. However, the course will be self-contained. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1492813 Fröhlich, B. IEEE Computer Society
Piscataway, NJ, USA
Biologische Kybernetik Max-Planck-Gesellschaft Bonn, Germany IEEE Conference on Virtual Reality (VR '05) en 0-7803-8929-8 10.1109/VR.2005.1492813 BAdelstein hhbHHBülthoff dwcDWCunningham KMania NMourkoussis ESwan NThalmann TTroscianko
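Entry 5979 above is a tutorial on psychophysical methods for fidelity assessment. As a generic illustration of the kind of measurement such methods produce (not material from the tutorial itself; the data and names below are invented), a cumulative-Gaussian psychometric function can be fit to detection data to estimate a threshold:

# Illustrative psychometric-function fit on invented data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

stimulus = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])       # stimulus intensity
p_detect = np.array([0.05, 0.15, 0.40, 0.70, 0.90, 0.97])  # proportion "seen"

def psychometric(x, mu, sigma):
    # Cumulative Gaussian: mu is the 50% threshold, sigma the slope parameter.
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, stimulus, p_detect, p0=(3.5, 1.0))
print(f"estimated detection threshold (50% point): {mu:.2f}")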
inproceedings 3058 Multi-viewpoint video capture for facial perception research 2004 12 55-60 In order to produce realistic-looking avatars, computer graphics has traditionally relied solely on physical realism. Research on cognitive aspects of face perception, however, can provide insights into how to produce believable and recognizable faces. In this paper, we describe a method for automatically manipulating video recordings of faces. The technique involves the use of a custom-built multi-viewpoint video capture system in combination with head motion tracking and a detailed 3D head shape model. We illustrate how the technique can be employed in studies on dynamic facial expression perception by summarizing the results of two psychophysical studies which provide suggestions for creating recognizable facial expressions. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf3058.pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff Magnenat-Thalmann, N. , D. Thalmann Biologische Kybernetik Max-Planck-Gesellschaft Zermatt, Switzerland Workshop on Modelling and Motion Capture Techniques for Virtual Environments (CAPTECH 2004) en kleinermMKleiner walliCWallraven mbreidtMBreidt dwcDWCunningham hhbHHBülthoff inproceedings 2791 The Perceptual Influence of Spatiotemporal Noise on the Reconstruction of Shape from Dynamic Occlusion 2004 9 407-414 When an object moves, it covers and uncovers texture in the background. This pattern of change is sufficient to define the object’s shape, velocity, relative depth, and degree of transparency, a process called Spatiotemporal Boundary Formation (SBF). We recently proposed a mathematical framework for SBF, where texture transformations are used to recover local edge segments, estimate the figure’s velocity and then reconstruct its shape. The model predicts that SBF should be sensitive to spatiotemporal noise, since the spurious transformations will lead to the recovery of incorrect edge orientations. Here we tested this prediction by adding a patch of dynamic noise (either directly over the figure or a fixed distance away from it). Shape recognition performance in humans decreased to chance levels when noise was placed over the figure but was not affected by noise far away. These results confirm the model’s prediction and also imply that SBF is a local process. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2791.pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://link.springer.com/content/pdf/10.1007%2F978-3-540-28649-3_50.pdf Rasmussen, C.E. , H.H. Bülthoff, M. Giese, B. Schölkopf Springer
Berlin, Germany
Lecture Notes in Computer Science ; 3175 Pattern Recognition Biologische Kybernetik Max-Planck-Gesellschaft Tübingen, Germany 26th Annual Symposium of the German Association for Pattern Recognition (DAGM 2004) 978-3-540-22945-2 10.1007/978-3-540-28649-3_50 tmcookeTCooke dwcDWCunningham hhbHHBülthoff
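The SBF framework summarized in entry 2791 recovers local edge segments from the spatiotemporal locations of texture-element changes. The sketch below shows one plausible way to pose that recovery, assuming a straight edge moving at constant normal speed so that each change event (x, y, t) supplies one linear constraint; this is an illustration of the idea, not the published formulation:

# Illustrative sketch: recover edge orientation and normal speed from
# element-change events, assuming each event satisfies
#   a*x + b*y + c*t + d = 0, with (a, b) the unit edge normal and c = -speed.
import numpy as np

def recover_edge(events):
    """events: (N, 3) array of (x, y, t) change events, N >= 3."""
    events = np.asarray(events, dtype=float)
    A = np.column_stack([events, np.ones(len(events))])  # rows: [x, y, t, 1]
    _, _, vt = np.linalg.svd(A)         # null vector = last right-singular vector
    a, b, c, d = vt[-1]
    scale = np.hypot(a, b)              # normalize so (a, b) is a unit normal
    a, b, c = a / scale, b / scale, c / scale
    theta = np.arctan2(b, a)            # normal direction (up to a sign flip)
    return theta, -c                    # orientation, normal speed

# Example: a vertical edge sweeping rightward at 2 units/s (x = 2*t).
events = [(2.0, 0.0, 1.0), (4.0, 1.0, 2.0), (6.0, -1.0, 3.0), (8.0, 0.5, 4.0)]
print(recover_edge(events))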
inproceedings 2865 The components of conversational facial expressions 2004 8 143-149 Conversing with others is one of the most central of human behaviours. In any conversation, humans use facial motion to help modify what is said, to control the flow of a dialog, or to convey complex intentions without saying a word. Here, we employ a custom, image-based, stereo motion-tracking algorithm to track and selectively "freeze" portions of an actor or actress's face in video recordings in order to determine the necessary and sufficient facial motions for nine conversational expressions. The results show that most expressions rely primarily on a single facial area to convey meaning, with different expressions using different facial areas. The results also show that the combination of rigid head, eye, eyebrow, and mouth motion is sufficient to produce versions of these expressions that are as easy to recognize as the original recordings. Finally, the results show that the manipulation technique introduced few perceptible artifacts into the altered video sequences. The use of advanced computer graphics techniques provided a means to systematically examine real facial expressions. This provides not only fundamental insights into human perception and cognition, but also yields the basis for a systematic description of what needs to be animated in order to produce realistic, recognizable facial expressions. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2865.pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://portal.acm.org/citation.cfm?id=1012578 Interrante, V. , A. McNamara, H.H. Bülthoff, H.E. Rushmeier ACM Press
New York, NY, USA
Biologische Kybernetik Max-Planck-Gesellschaft Los Angeles, CA, USA 1st Symposium on Applied Perception in Graphics and Visualization (APGV 2004) en 1-58113-914-4 10.1145/1012551.1012578 dwcDWCunningham kleinermMKleiner walliCWallraven hhbHHBülthoff
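Entry 2865 above selectively "freezes" portions of a face in video. As a greatly simplified illustration (the published method used image-based stereo motion tracking to compensate for rigid head motion, which this sketch ignores; all names are assumptions), freezing a region amounts to copying masked pixels from a reference frame into every frame:

# Hypothetical sketch of region freezing, without motion compensation.
import numpy as np

def freeze_region(frames, mask, ref_index=0):
    """frames: (T, H, W, 3) uint8 video; mask: (H, W) bool region to freeze."""
    frozen = frames.copy()
    ref = frames[ref_index]
    frozen[:, mask] = ref[mask]   # overwrite the masked region in all frames
    return frozen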
inproceedings 2808 Using facial texture manipulation to study facial motion perception 2004 8 180 Manipulated still images of faces have often been used as stimuli for psychophysical research on human perception of faces and facial expressions. In everyday life, however, humans are usually confronted with moving faces. We describe an automated way of performing manipulations on facial video recordings and how it can be applied to investigate human dynamic face perception. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2808.pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.kyb.tuebingen.mpg.de/bu/people/kleinerm/apgv04/ Interrante, V. , A. McNamara, H.H. Bülthoff, H.E. Rushmeier ACM Press
New York, NY, USA
Biologische Kybernetik Max-Planck-Gesellschaft Los Angeles, CA, USA 1st Symposium on Applied Perception in Graphics and Visualization (APGV 2004) en 1-58113-914-4 10.1145/1012551.1012602 kleinermMKleiner aschwanASchwaninger dwcDWCunningham babsyBKnappmeyer
inproceedings 2866 View dependence of complex versus simple facial motions 2004 8 181 The viewpoint dependency of complex facial expressions versus simple facial motions was analyzed. The MPI Tübingen Facial Expression Database was used for the psychophysical investigation of view dependency. The ANOVA revealed at best marginally significant effects of viewpoint or type of expression on recognition accuracy. It was observed that humans were able to recognize facial motions in a largely viewpoint invariant manner, which supports the theoretical model of face recognition. It was also suggested that in order to be recognized, computer generated facial expressions should 'look good' from all viewpoints. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/apgv04-181_2866[0].pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://portal.acm.org/citation.cfm?id=1012603 Interrante, V. , A. McNamara, H.H. Bülthoff, H.E. Rushmeier ACM Press
New York, NY, USA
Biologische Kybernetik Max-Planck-Gesellschaft Los Angeles, CA, USA 1st Symposium on Applied Perception in Graphics and Visualization (APGV 2004) en 1-58113-914-4 10.1145/1012551.1012603 walliCWallraven dwcDWCunningham mbreidtMBreidt hhbHHBülthoff
inproceedings 2320 Combining 3D Scans and Motion Capture for Realistic Facial Animation 2003 9 63-66 We present ongoing work on the development of new methods for highly realistic facial animation. One of the main contributions is the use of real-world, high-precision data for both the timing of the animation and the deformation of the face geometry. For animation, a set of morph shapes acquired through a 3D scanner is linearly morphed according to timing extracted from point tracking data recorded with an optical motion capture system. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/Eurographics-2003-Breidt.pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://diglib.eg.org/handle/10.2312/egp20031009 Flores, J. , P. Cano Eurographics Association
Aire-la-Ville, Switzerland
Eurographics 2003 Biologische Kybernetik Max-Planck-Gesellschaft Granada, Spain 24th Annual Conference of the European Association for Computer Graphics 10.2312/egp.20031009 mbreidtMBreidt walliCWallraven dwcDWCunningham hhbHHBülthoff
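Entry 2320 above describes linearly morphing 3D-scanned shapes with weights timed from motion-capture data. A minimal sketch of that linear blend-shape idea (the data layout and function names are assumptions, not the authors' pipeline):

# Linear blend-shape morphing: add weighted per-vertex offsets to a neutral mesh.
import numpy as np

def morph(neutral, targets, weights):
    """neutral: (V, 3) vertex array; targets: list of (V, 3) scanned shapes;
    weights: one blend weight per shape for the current animation frame."""
    result = neutral.copy()
    for shape, w in zip(targets, weights):
        result += w * (shape - neutral)
    return result

Per-frame weights could, for instance, be derived from tracked marker trajectories, normalized so that a weight of 1 reproduces a scanned peak expression.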
inproceedings 2096 The inaccuracy and insincerity of real faces 2003 9 7-12 Since conversation is a central human activity, the synthesis of proper conversational behavior for Virtual Humans will become a critical issue. Facial expressions represent a critical part of interpersonal communication. Even with the most sophisticated, photo-realistic head model, an avatar whose behavior is unbelievable or even uninterpretable will be an inefficient or possibly counterproductive conversational partner. Synthesizing expressions can be greatly aided by a detailed description of which facial motions are perceptually necessary and sufficient. Here, we recorded eight core expressions from six trained individuals using a method-acting approach. We then psychophysically determined how recognizable and believable those expressions were. The results show that people can identify these expressions quite well, although there is some systematic confusion between particular expressions. The results also show that people found the expressions to be less than convincing. The pattern of confusions and believability ratings demonstrates that there is considerable variation in natural expressions and that even real facial expressions are not always understood or believed. Moreover, the results provide the groundwork necessary to begin a more fine-grained analysis of the core components of these expressions. As some initial results from a model-based manipulation of the image sequences show, a detailed description of facial expressions can be an invaluable aid in the synthesis of unambiguous and believable Virtual Humans. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2096.pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff Hamza, H.M. Acta Press
Anaheim, CA, USA
Biologische Kybernetik Max-Planck-Gesellschaft Benalmádena, Spain 3rd IASTED International Conference on Visualization, Imaging, and Image Processing (VIIP 2003) 0-88986-382-2 dwcDWCunningham mbreidtMBreidt kleinermMKleiner walliCWallraven hhbHHBülthoff
inproceedings 2022 How believable are real faces? Towards a perceptual basis for conversational animation 2003 5 23-29 Regardless of whether the humans involved are virtual or real, well-developed conversational skills are a necessity. The synthesis of interface agents that are not only understandable but also believable can be greatly aided by knowledge of which facial motions are perceptually necessary and sufficient for clear and believable conversational facial expressions. Here, we recorded several core conversational expressions (agreement, disagreement, happiness, sadness, thinking, and confusion) from several individuals, and then psychophysically determined the perceptual ambiguity and believability of the expressions. The results show that people can identify these expressions quite well, although there are some systematic patterns of confusion. People were also very confident of their identifications and found the expressions to be rather believable. The specific pattern of confusions and confidence ratings have strong implications for conversational animation. Finally, the present results provide the information necessary to begin a more fine-grained analysis of the core components of these expressions. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2022.pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1199300 IEEE
Los Alamitos, CA, USA
Biologische Kybernetik Max-Planck-Gesellschaft New Brunswick, NJ, USA 16th International Conference on Computer Animation and Social Agents (CASA 2003) 0-7695-1934-2 10.1109/CASA.2003.1199300 dwcDWCunningham mbreidtMBreidt kleinermMKleiner walliCWallraven hhbHHBülthoff
inproceedings 2033 Visuomotor adaptation: Dependency on motion trajectory 2002 11 177-182 The present contribution studies the rapid adaptation process of the visuomotor system to optical transformations (here: shifting the image horizontally via prism goggles). It is generally believed that this adaptation consists primarily of recalibrating the transformation between visual and proprioceptive perception. According to such a purely perceptual account of adaptation, the exact path used to reach the object should not be important. If, however, it is the transformation from perception to action that is being altered, then the adaptation should depend on the motion trajectory. In experiments with a variety of different motion trajectories we show that visuomotor adaptation is not merely a perceptual recalibration. The structure of the motion (starting position, trajectory, end position) plays a central role, and even the weight load seems to be important. These results have strong implications for all models of visuomotor adaptation. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/Dynamic-Perception-2002-Cunningham.pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff Würtz, R.P. , M. Lappe AKA
Berlin, Germany
Dynamic Perception Biologische Kybernetik Max-Planck-Gesellschaft Bochum, Germany Workshop of GI Section 1.0.4 "Image Understanding" and the European Networks MUHCI and ECOVISION 3-89838-032-7 CKaernbach LMunka dwcDWCunningham
inproceedings 1245 Spatiotemporal Stereopsis 1993 8 279-283 http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff Valenti, S.S. , J.B. Pittenger Erlbaum
Hillsdale, NJ, USA
Studies in Perception and Action II Biologische Kybernetik Max-Planck-Gesellschaft Vancouver, Canada Seventh International Conference on Event Perception and Action 0-8058-1405-1 TFShipley dwcDWCunningham PJKellman
inbook BulthoffCW2017 Dynamic Aspects of Face Processing in Humans 2011 575-596 In this chapter, we will focus on the role of motion in identity and expression recognition in humans, and on its developmental and neurophysiological aspects. Based on results from the literature, we show that there is some form of characteristic facial information that is only available over time, and that it plays an important role in the recognition of identity, expression, speech, and gender; and that the addition of dynamic information improves the recognizability of expressions and identity, and can compensate for the loss of static information. Moreover, several different types of motion seem to exist and play different roles, and a simple rigid/nonrigid dichotomy is neither sufficient nor appropriate to describe these motions. Additional research is necessary to determine what the dynamic features for face processing are. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://link.springer.com/content/pdf/10.1007%2F978-0-85729-932-1_22.pdf Li, S.Z. , A.K. Jain Springer
London, UK
Handbook of Face Recognition 978-0-85729-931-4 10.1007/978-0-85729-932-1_22 hhbHHBülthoff dwcDWCunningham walliCWallraven
inbook 5749 What visual discrimination of fractal textures can tell us about discrimination of camouflaged targets 2009 12 99-112 Most natural images have 1/f^β Fourier image statistics, a signature which is mimicked by fractals and which forms the basis for recent applications of fractals to camouflage. To distinguish a fractal camouflaged target (with 1/f^β* statistics) from a 1/f^β natural background (or another target), the exponents of target and background (or other target) must differ by a critical amount (dβ = β − β*), which varies depending on experimental circumstances. The same constraint applies for discriminating between friendly and enemy camouflaged targets. Here, we present data for discrimination of both static and dynamic fractal images, and data on how discrimination varies as a function of experimental methods and circumstances. The discrimination function has a minimum near β = 1.6, which typifies images with less high spatial frequency content than the vast majority of natural images (β near 1.1). This implies that discrimination between fractal camouflaged objects is somewhat more difficult when the camouflaged objects are sufficiently similar in statistics to natural images (as any sensible camouflage scheme should be), compared to the less natural β value of 1.6. This applies regardless of the β value of the background, which has implications for fratricide; friendlies and hostiles will be somewhat harder to tell apart for naturalistically camouflaged images, even when friendlies and hostiles are both visible against their backgrounds. The situation is even more perverse for “active camouflage”. Because of perceptual system nonlinearities (stochastic resonance), addition of dynamic noise to targets can actually enhance target detection and identification under some conditions. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.ashgate.com/isbn/9780754677673 Andrews, D. H., T. Hull Ashgate
Farnham, UK
Human Factors in Defense Human Factors Issues in Combat Identification Biologische Kybernetik Max-Planck-Gesellschaft en 978-0-7546-9515-8 VABillock dwcDWCunningham BHTsou
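Entry 5749 characterizes camouflage textures by their 1/f^β amplitude spectra. The standard way to synthesize such a texture, shown below as an illustration (this is not code from the chapter), is to shape white noise in the Fourier domain; a discrimination pair differing by dβ would simply use exponents β and β* = β − dβ:

# Synthesize an n-by-n texture whose amplitude spectrum falls off as 1/f^beta.
import numpy as np

def fractal_texture(n=256, beta=1.1, seed=0):
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)[:, None]
    fy = np.fft.fftfreq(n)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0                                  # avoid dividing by zero at DC
    amplitude = 1.0 / f**beta
    phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))  # random phase spectrum
    img = np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))
    return (img - img.mean()) / img.std()          # zero mean, unit variance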
inbook SchwaningerWCC2006 Processing of facial identity and expression: a psychophysical, physiological, and computational perspective 2006 321–343 A deeper understanding of how the brain processes visual information can be obtained by comparing results from complementary fields such as psychophysics, physiology, and computer science. In this chapter, empirical findings are reviewed with regard to the proposed mechanisms and representations for processing identity and emotion in faces. Results from psychophysics clearly show that faces are processed by analyzing component information (eyes, nose, mouth, etc.) and their spatial relationship (configural information). Results from neuroscience indicate separate neural systems for recognition of identity and facial expression. Computer science offers a deeper understanding of the required algorithms and representations, and provides computational modeling of psychological and physiological accounts. An interdisciplinary approach taking these different perspectives into account provides a promising basis for better understanding and modeling of how the human brain processes visual information for recognition of identity and emotion in faces. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.sciencedirect.com/science/article/pii/S0079612306560182 Anders, S. , G. Ende, M. Junghofer, J. Kissler, D. Wildgruber Elsevier
Amsterdam, The Netherlands
Progress in Brain Research ; 156 Understanding Emotions 978-0-444-52182-8 10.1016/S0079-6123(06)56018-2 aschwanASchwaninger walliCWallraven dwcDWCunningham SDChiller-Glaus
inbook 1241 Perception of occluding and occluded objects over time: Spatiotemporal segmentation and unit formation 2001 557-585 http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.sciencedirect.com/science/article/pii/S0166411501800388 Shipley, T.F. , P.J. Kellman Elsevier
Amsterdam, The Netherlands
Advances in Psychology ; 130 From Fragments to Objects: Segmentation and Grouping in Vision Biologische Kybernetik Max-Planck-Gesellschaft 0-444-50506-7 10.1016/S0166-4115(01)80038-8 TFShipley dwcDWCunningham
techreport 634 Temporal adaptation and the role of temporal contiguity in spatial behavior 2000 12 85 Rapid and accurate interaction with the world requires that proper spatial and temporal alignment between sensory modalities be maintained. The introduction of a misalignment (either spatial or temporal) impairs performance on most spatial tasks. For over a century, it has been known that a few minutes of exposure to a spatial misalignment can induce a recalibration of intersensory spatial relationships, a phenomenon called Spatial Adaptation. Here, we present evidence that the sensorimotor system can also adapt to intersensory temporal misalignments, a phenomenon that we call Temporal Adaptation. Temporal Adaptation is strikingly parallel to Spatial Adaptation, and has strong implications for the understanding of spatial cognition and intersensory integration. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf634.pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff Biologische Kybernetik Max-Planck-Gesellschaft Max Planck Institute for Biological Cybernetics, Tübingen, Germany dwcDWCunningham astrosAChatziastros mvdhMvon der Heyde hhbHHBülthoff techreport 1547 Sensorimotor adaptation to violations of temporal contiguity 2000 10 83 Most events are processed by a number of neural pathways. These pathways often differ considerably in processing speed. Thus, coherent perception requires some form of synchronization mechanism. Moreover, this mechanism must be flexible, since neural processing speed changes over the life of an organism. Here we provide behavioral evidence that humans can adapt to a new intersensory temporal relationship (which was artificially produced by delaying visual feedback). The conflict between these results and previous work that failed to find such improvements can be explained by considering the present results as a form of sensorimotor adaptation. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf1547.pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff Biologische Kybernetik Max-Planck-Gesellschaft Max Planck Institute for Biological Cybernetics, Tübingen, Germany dwcDWCunningham VABillock BHTsou poster AubreyCMRSW2012 Sensitivity to backchannels in conversational expressions 2012 5 22 22 Facial expressions are one of the key modes of inter-personal communication for humans. Current research has almost exclusively focused on the so-called universal expressions (anger, disgust, fear, happy, sad, and surprise). Whereas these expressions are clearly important from an evolutionary point of view, their frequency of occurrence in daily life is rather low. We have recently begun investigating the processing of higher frequency, conversational expressions (e.g., agree, thinking, looking tired), with particular focus on the so-called backchannel expressions, that is, facial expressions a listener makes in reaction to a speaker. These expressions are believed to be crucial for controlling conversational flow. As there is no existing database of conversations, we recorded a large audio-visual corpus of conversations between pairs of people at Cardiff University. Two preliminary experiments have been conducted to empirically determine the sensitivity to changes in the backchannel. In the first experiment, eleven clips from several conversations were extracted.
Each of the eleven main channels (“speaker”) was paired with four plausible alternatives and the real backchannel (“listener”). Participants were asked to choose the most appropriate backchannel. The second experiment examined sensitivity to temporal offsets in backchannel communication (one speaker-listener pair was shown at a time; the correct backchannel was always used, but its temporal onset was systematically varied). Results show that on average participants can identify the correct backchannel sequence (41% correct; chance performance is 20%) and that people can tell if the backchannel is early or late. Interestingly, it seems to be easier to judge lateness than earliness. In summary, the results conclusively show that -- despite the considerable difficulty of the task -- people are remarkably sensitive to the content and the timing of backchannel information. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.theava.net/abstracts/ava2012.pdf Cambridge, UK 2nd Joint AVA/BMVA Meeting on Biological and Machine Vision AAAubrey dwcDWCunningham DMarshall PLRosin ashinAShin walliCWallraven poster 6739 Laying the foundations for an in-depth investigation of the whole space of facial expressions Journal of Vision 2010 5 10 7 606 Facial expressions form one of the most important and powerful communication systems of human social interaction. They express a large range of emotions but also convey more general, communicative signals. To date, research has mostly focused on the static, emotional aspect of facial expression processing, using only a limited set of “generic” or “universal” expression photographs, such as a happy or sad face. That facial expressions carry communicative aspects beyond emotion and that they transport meaning in the temporal domain, however, has so far been largely neglected. In order to enable a deeper understanding of facial expression processing with a focus on both emotional and communicative aspects of facial expressions in a dynamic context, it is essential to first construct a database that contains such material using a well-controlled setup. We here present the novel MPI facial expression database, which contains 20 native German participants performing 58 expressions based on pre-defined context scenarios, making it the most extensive database of its kind to date. Three experiments were performed to investigate the validity of the scenarios and the recognizability of the expressions. In Experiment 1, 10 participants were asked to freely name the facial expressions that would be elicited given the scenarios. The scenarios were effective: 82% of the answers matched the intended expressions. In Experiment 2, 10 participants had to identify 55 expression videos of 10 actors. We found that 34 expressions could be identified reliably without any context. Finally, in Experiment 3, 20 participants had to group the 55 expression videos of 10 actors based on similarity. Out of the 55 expressions, 45 formed consistent groups, which highlights the impressive variety of conversational expression categories we use. Interestingly, none of the experiments found any advantage for the universal expressions, demonstrating the robustness with which we interpret conversational facial expressions.
http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.journalofvision.org/content/10/7/606 Biologische Kybernetik Max-Planck-Gesellschaft Naples, FL, USA 10th Annual Meeting of the Vision Sciences Society (VSS 2010) en 10.1167/10.7.606 kascotKKaulard walliCWallraven dwcDWCunningham hhbHHBülthoff poster 5954 Going beyond universal expressions: investigating the visual perception of dynamic facial expressions Perception 2009 8 38 ECVP Abstract Supplement 83 Investigations of facial expressions have focused almost exclusively on the six so-called universal expressions. During everyday interaction, however, a much larger set of facial expressions is used for communication. To examine this mostly unexplored space, we developed a large video database for emotional and conversational expressions: native German participants performed 58 expressions based on pre-defined context scenarios. Three experiments were performed to investigate the validity of the scenarios and the recognizability of the expressions. In Experiment 1, ten participants were asked to freely name the facial expressions that would be elicited given the scenarios. The scenarios were effective: 82% of the answers matched the intended expressions. In Experiment 2, ten participants had to identify 55 expression videos of ten actors, presented successively. We found that 20 expressions could be identified reliably without any context. Finally, in Experiment 3, twenty participants had to group the 55 expression videos based on similarity while allowing for repeated comparisons. Out of the 55 expressions, 45 formed consistent groups, showing that visual comparison facilitates the recognition of conversational expressions. Interestingly, none of the experiments found any advantage for the universal expressions, demonstrating the robustness with which we interpret conversational facial expressions. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://pec.sagepub.com/content/38/1_suppl.toc Biologische Kybernetik Max-Planck-Gesellschaft Regensburg, Germany 32nd European Conference on Visual Perception en 10.1177/03010066090380S101 kascotKKaulard walliCWallraven dwcDWCunningham hhbHHBülthoff poster 3875 Virtual or Real? Judging The Realism of Objects in Stylized Augmented Environments 2006 3 9 119 In augmented reality, virtual graphical objects are overlaid over the real environment of the observer. Conventional augmented reality systems use standard computer graphics methods for generating the graphical representations of virtual objects. These renderings contain the typical artefacts of computer generated graphics, e.g., aliasing caused by the rasterization process and unrealistic, manually configured illumination models. Due to these artefacts, virtual objects look artificial and can easily be distinguished from the real environment. Recently, a different approach to generating augmented reality images was presented. In stylised augmented reality, similar types of artistic or illustrative stylisation are applied to the virtual objects and the camera image of the real environment [1]. Therefore, real and virtual image elements look more similar and are less distinguishable from each other. In this poster, we describe the results of a psychophysical study on the effectiveness of stylised augmented reality.
A number of participants were asked to decide whether objects shown in images of augmented reality scenes are virtual or real. Conventionally rendered as well as stylised augmented reality images and short video clips were presented to the participants. The correctness of the participants’ responses and their reaction times were recorded. The results of our study clearly show that an equalized level of realism is achieved by using stylised augmented reality, i.e., that it is distinctly more difficult to discriminate virtual objects from real objects. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.twk.tuebingen.mpg.de/twk06/abstract.php?_load_id=fischer01 Biologische Kybernetik Max-Planck-Gesellschaft Tübingen, Germany 9th Tübingen Perception Conference (TWK 2006) en JFischer dwcDCunningham DBartz walliCWallraven hhbHHBülthoff WStrasser poster 3645 Perceptual validation of facial animation: The role of prior experience Perception 2005 8 34 ECVP Abstract Supplement 204 Facial expressions play a complex and important role in communication. A complete investigation of how facial expressions are recognised requires that different expressions be systematically and subtly manipulated. For this purpose, we recently developed a photo-realistic facial animation system that uses a combination of facial motion capture and high-resolution 3-D face scans. In order to determine if the synthetic expressions capture the subtlety of natural facial expressions, we directly compared recognition performance for video sequences of real-world and animated facial expressions (the sequences will be available on our website). Moreover, just as recognition of an incomplete or degraded object can be improved through prior experience with a complete, undistorted version of that object, it is possible that experience with the real-world video sequences may improve recognition of the synthesised expressions. Therefore, we explicitly investigated the effects of presentation order. More specifically, half of the participants saw all of the video sequences followed by the animation sequences, while the other half experienced the opposite order. Recognition of five expressions (agreement, disagreement, confusion, happiness, thinking) was measured with a six-alternative, non-forced-choice task. Overall, recognition performance was significantly higher (p < 0.0001) for the video sequences (93%) than for the animations (73%). A closer look at the data showed that this difference is largely based on a single expression: confusion. As expected, there was an order effect for the animations (p < 0.02): seeing the video sequences improved recognition performance for the animations. Finally, there was no order effect for the real videos (p > 0.14). In conclusion, the synthesised expressions supported recognition performance similarly to real expressions and have proven to be a valuable tool in understanding the perception of facial expressions.
http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://pec.sagepub.com/content/34/1_suppl.toc Biologische Kybernetik Max-Planck-Gesellschaft A Coruña, Spain 28th European Conference on Visual Perception en 10.1177/03010066050340S101 manfredMNusseck dwcDWCunningham walliCWallraven hhbHHBülthoff poster 2535 Local Processing in Spatiotemporal Boundary Formation 2004 2 65 Patterns of abrupt changes in a scene, such as the dynamic occlusion of texture elements (causing their appearance and disappearance), can give rise to the perception of the edges of the occluder via a process called Spatiotemporal Boundary Formation (SBF). It has previously been shown that SBF can be disrupted by very small amounts of dynamic noise spread globally throughout a scene. We recently developed a mathematical model of SBF in which groups of local changes are used to extract edges, which are then combined into a figure and used to estimate the figure's motion. The model implies that SBF relies on local processing and predicts that SBF should be impaired when noise is added near the edges of the figure, but not when it is added far from the edges. This prediction was tested in a shape-identification task in which the location of noise is varied. Indeed, performance was not impaired by noise far from the figure, but was markedly disrupted by noise near the figure, supporting the notion that changes are integrated locally rather than globally during SBF. In the second part of this project, the mathematical model of SBF was implemented in software. Reichardt-based motion detectors were used to filter the experimental stimuli and provide the input to the software implementation. Three simple geometrical figures, similar to those used in the psychophysical experiment, were reconstructed using this method, demonstrating one way in which a mid-level visual mechanism such as SBF could connect low-level mechanisms such as change detection to higher-level mechanisms such as shape detection. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2535.pdf http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.twk.tuebingen.mpg.de/twk04/index.php Heinrich H. Bülthoff, Hanspeter A. Mallot, Rolf D. Ulrich, Felix A. Wichmann Biologische Kybernetik Max-Planck-Gesellschaft Tübingen, Germany 7th Tübingen Perception Conference (TWK 2004) tmcookeTCooke dwcDWCunningham walliCWallraven poster MunkaKC2003 Visuomotor Adaptation: Dependency on Motion Trajectory 2003 2 6 128 In order to pick up an object, its visual location must be converted into the appropriate motor commands. Introducing a discrepancy between the seen and felt locations of the object (e.g., via prism goggles) initially impairs the ability to touch it. The sensory system rapidly adapts to the discrepancy, however, returning perception and performance to near normal. Subsequent removal of the discrepancy leads to a renewed performance decrement - a negative aftereffect (NAE). It is generally believed that the process of adaptation consists primarily of “recalibrating” the transformation between the visual and proprioceptive perception of spatial location (Bedford, The psychology of learning and motivation, 1999). According to such a purely perceptual account of adaptation, the movement to reach the object is not important. If, however, the transformation from perception to action is altered, then it will be dependent on motion - i.e.
changing motion parameters will reduce or eliminate the NAE (see also Martin et al., Brain, 1996). According to our hypothesis, spatial visuomotor information is distributively stored and changed by prism adaptation and is not based on a centrally organized spatial information system. We conducted seven experiments consisting of four blocks each, in which participants had to touch a cross presented at eye level on a touch screen. In the first block the participants were introduced and familiarized with the experiment. Blocks two and four were pre- and post-tests to measure the NAE produced during the different experimental conditions in block 3, in which the participants were wearing prism goggles: we tested the effects of different trajectories, different starting points, weight, vertical generalization and different types of feedback. A total transfer from an adapted to a non-adapted condition did not turn up in any of our experiments, although the trajectories were nearly identical in some of them. It rather seems that newly learned spatial information in prism adaptation experiments is stored and retrieved distributively for different extremities, for different trajectories and for different stress/strain conditions (e.g. weight). Furthermore, transfer seems to become weaker with bigger differences in location. Therefore we conclude that no visual “recalibration” is taking place but rather a relearning of distributively organized parameters of visuomotor coordination. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.twk.tuebingen.mpg.de/twk03/ Tübingen, Germany 6. Tübinger Wahrnehmungskonferenz (TWK 2003) LMunka CKaernbach dwcDWCunningham poster 2014 Moving the Thatcher Illusion 2002 11 21 Inverting the eyes and the mouth within the facial context produces a bizarre facial expression when the face is presented upright but not when the face is inverted (Thatcher illusion, Thompson, 1980). In the present study we investigated whether this illusion is part-based or holistic and whether motion increases bizarreness. Static upright Thatcher faces were rated more bizarre than the eyes and mouth presented in isolation suggesting an important role of context and holistic processing. As expected, inverted facial stimuli were perceived much less bizarre. Interestingly, isolated parts were more bizarre than the whole thatcherized face when inverted. Adding motion to the smiling thatcherized faces increased bizarreness in all conditions (parts vs. whole, upright vs. inverted). These results were replicated in a separate experiment with talking instead of smiling faces and are discussed within an integrative model of face processing. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.opam.net/archive/opam2002/OPAM2002Abstracts.pdf Biologische Kybernetik Max-Planck-Gesellschaft Max Planck Institute for Biological Cybernetics Kansas City, KS, USA 10th Annual Workshop on Object Perception and Memory (OPAM 2002) en aschwanASchwaninger dwcDWCunningham kleinermMKleiner poster 1771 A relative encoding approach to modeling Spatiotemporal Boundary Formation Journal of Vision 2002 11 2 7 704 When a camouflaged animal sits in front of the appropriate background, the animal is effectively invisible. As soon as the animal moves, however, it is easily visible despite the fact that there is still no static shape information.
Its shape is perceived solely by the pattern of changes over time. This process, referred to as Spatiotemporal Boundary Formation (SBF), can be initiated by a wide range of texture transformations, including changes in the visibility, shape, or color of individual texture elements. Shipley and colleagues have gathered a wealth of psychophysical data on SBF, and have presented a mathematical proof of how the orientation of local edge segments (LESs) can be recovered from as few as 3 element changes (Shipley and Kellman, 1997). Here, we extend this proof to the extraction of global form and motion. More specifically, we present a model that recovers the orientation of the LESs from a dataset consisting of the relative spatiotemporal location of the element changes. The recovered orientations of as few as 2 LESs can then be used to extract the global motion, which is then used to determine the relative spatiotemporal location and minimal length of the LESs. Computational simulations show that the model captures the major psychophysical aspects of SBF, including a dependency on the spatiotemporal density of element changes, a sensitivity to spurious changes, an ability to extract more than one figure at a time, and a tolerance for a non-constant global motion. Unlike Shipley and Kellman's earlier proof, which required that pairs of element changes be represented as local motion vectors, the present model merely encodes the relative spatiotemporal locations of the changes. This usage of a relative encoding scheme yields several emergent properties that are strikingly similar to the perception of aperture-viewed figures (anorthoscopic perception), offering the possibility of unifying the two phenomena within a single mathematical model. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.journalofvision.org/content/2/7/704 Biologische Kybernetik Max-Planck-Gesellschaft Sarasota, FL, USA Second Annual Meeting of the Vision Sciences Society (VSS 2002) 10.1167/2.7.704 dwcDWCunningham grafABAGraf hhbHHBülthoff poster 2237 Searching for gender-from-motion Perception 2002 8 31 ECVP Abstract Supplement 120 Biological motion contains many forms of information. Observers are usually able to tell 'what' action is being performed (eg walking versus running), 'how' it is being performed (eg quickly versus slowly), and by 'whom' (eg a young versus an old actor). We used visual search to explore the perception of gender-from-motion. In the first experiment, we used computer-animated, fully rendered human figures in which the structural and dynamic information for gender were factorially combined. In separate blocks, observers were asked to locate a figure walking with a male or female gait among distractors having the same form but opposite motion. In the second experiment, point-light walkers moved along random paths in a 3-D virtual environment. Observers were asked to locate a figure walking with a male or female gait among distractors with the opposite motion. In both experiments, the set size was varied between 1 and 4, and targets were present on 50% of the trials. The results suggest that (a) visual search can be used to explore gender-from-motion, (b) extraction of gender-from-motion is fairly inefficient (search slopes often exceed 500 ms item^-1), and (c) there appears to be an observer-gender by target-gender interaction, with male observers producing lower RTs for female targets and vice versa.
http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://pec.sagepub.com/content/31/1_suppl.toc Biologische Kybernetik Max-Planck-Gesellschaft Glasgow, UK 25th European Conference on Visual Perception 10.1177/03010066020310S101 dwcDWCunningham ianIMThornton nikoNFTroje hhbHHBülthoff poster 1449 Visuell-motorische Adaption unter optischen Transformationen Experimentelle Psychologie 2002 3 44 139 I am sitting at my desk and see a pen on it. I want to pick it up. To do so, I would have to know where it is. "But you can see where it is." It is not that simple. What I have is a distorted retinal image that changes constantly as my gaze wanders across the desk. I have to take gaze direction, head position, and body posture into account in order to carry out a successful grasping movement. This comes to us so naturally that we easily underestimate the difficulty of the task. A classic approach to studying spatial representation is adaptation to optical transformations. We used prism goggles that shift the image laterally by 19°. To address the question of a central versus a distributed spatial representation, a specific movement was practiced and the transfer to other movements was then measured. The results suggest that spatial knowledge is distributed not only across the individual motor systems, but even across different movement trajectories. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de https://www.teap.de/memory/ Baumann, M.; Keinath, A.; Krems, J. F. Biologische Kybernetik Max-Planck-Gesellschaft Chemnitz, Germany 44. Tagung Experimentell Arbeitender Psychologen (TeaP 2002) CKaernbach dwcDWCunningham poster 1007 A relative encoding model of spatiotemporal boundary formation 2002 2 5 77 When a camouflaged animal sits in front of the appropriate background, the animal is effectively invisible. As soon as the animal moves, however, it is easily visible despite the fact that at any given instant, there is no shape information. This process, referred to as Spatiotemporal Boundary Formation (SBF), can be initiated by a wide range of texture transformations, including changes in the visibility, shape, or color of individual texture elements. Shipley and colleagues have gathered a wealth of psychophysical data on SBF, and have presented a local motion vector model for the recovery of the orientation of local edge segments (LESs) from as few as three element changes (Shipley and Kellman, 1997). Here, we improve and extend this model to cover the extraction of global form and motion. The model recovers the orientation of the LESs from a dataset consisting of the relative spatiotemporal location of the element changes. The recovered orientations of as few as two LESs are then used to extract the global motion, which is then used to determine the relative spatiotemporal location and minimal length of the LESs. To complete the global form, the LESs are connected in a manner similar to that used in illusory contours. Unlike Shipley and Kellman’s earlier model, which required that pairs of element changes be represented as local motion vectors, the present model merely encodes the relative spatiotemporal locations of the changes in any arbitrary coordinate system.
Computational simulations of the model show that it captures the major psychophysical aspects of SBF, including a dependency on the spatiotemporal density of element changes and a sensitivity to spurious changes. Interestingly, the relative encoding scheme yields several emergent properties that are strikingly similar to the perception of aperture-viewed figures (anorthoscopic perception). The model captures many of the important qualities of SBF, and offers a framework within which additional aspects of SBF may be modelled. Moreover, the relative encoding approach seems to inherently encapsulate other phenomena, offering the possibility of unifying several phenomena within a single mathematical model. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.twk.tuebingen.mpg.de/twk02/ Biologische Kybernetik Max-Planck-Gesellschaft Tübingen, Germany 5. Tübinger Wahrnehmungskonferenz (TWK 2002) dwcDWCunningham grafABAGraf hhbHHBülthoff poster 1388 Prism adaptation: Dependency on motion trajectory 2002 2 5 142 In order to pick up an object, its visual location must be converted into the appropriate motor commands. Introducing a discrepancy between the seen and felt location of the object (e.g., via prism goggles) initially impairs our ability to touch it. The sensory systems rapidly adapt to the discrepancy, however, returning perception and performance to near normal. Subsequent removal of the discrepancy leads to a renewed performance decrement -- a Negative Aftereffect (NAE). It is generally believed that this adaptation consists primarily of “recalibrating” the transformation between the visual and proprioceptive perception of spatial location (Bedford, 1999). According to such a purely perceptual account of adaptation, the exact path used to reach the object should not be important. If, however, it is the transformation from perception to action that is being altered, then changing the motion trajectory should reduce or eliminate the NAE. Starting with both hands on the desktop, the chin resting on a horizontal bar, participants (N=72) had to touch a cross presented at eye level on a touch screen 30 cm in front of them. Four trajectories were possible: reaching to the cross from below or (swinging the arm backwards) from above the bar, using either their left or their right hand. Reaching accuracy without feedback was determined for all four trajectories before and after adaptation to a single trajectory with prism goggles (19° horizontal displacement). The NAE was 46 mm (8.7°) for the adapted trajectory, 26 mm for the non-adapted trajectory of the same hand, and negligible for both trajectories of the other hand. The NAE was larger for unfamiliar (above bar, or usage of non-preferred hand) than for familiar trajectories. Visuomotor adaptation is not merely a perceptual recalibration. Not only does the structure of the motion trajectory play a central role, but the familiarity of the trajectory also seems to be important. These results have strong implications for all models of visuomotor adaptation. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.twk.tuebingen.mpg.de/twk02/ Bülthoff, H. H.; Gegenfurtner, K. R.; Mallot, H. A.; Ulrich, R. Biologische Kybernetik Max-Planck-Gesellschaft Tübingen, Germany 5.
Tübinger Wahrnehmungskonferenz (TWK 2002) LMunka CKaernbach dwcDWCunningham poster 1410 “You can tell by the way I use my walk …”: New studies of gender and gait Journal of Vision 2001 12 1 3 354 Johansson's (1973) point light walkers remain one of the most compelling demonstrations of how motion can determine the perception of form. Most studies of biological motion perception have presented isolated point-light figures in unstructured environments. Recently we have begun to explore the perception of human motion using more naturalistic displays and tasks. Here, we report new findings on the perception of gender using a visual search paradigm. Three-dimensional walking sequences were captured from human actors (4 male, 4 female) using a 7-camera motion capture system. These sequences were processed to produce point-light computer models which were displayed in a simple virtual environment. The figures appeared in a random location and walked on a random path within the bounds of an invisible virtual arena. Walkers could face and move in all directions, moving in both the frontal parallel plane and in depth. In separate blocks observers searched for a male or a female target among distractors of the opposite gender. Set size was varied between 1 and 4. Targets were present on 50% of trials. Preliminary results suggest that both male and female observers can locate targets of the opposite gender faster than targets of the same gender. http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.journalofvision.org/1/3/354/ Biologische Kybernetik Max-Planck-Gesellschaft Sarasota, FL, USA First Annual Meeting of the Vision Sciences Society (VSS 2001) 10.1167/1.3.354 ianIMThornton dwcDWCunningham nikoNFTroje hhbHHBülthoff poster 640 Do cause and effect need to be temporally continuous? Learning to compensate for delayed vestibular feedback Journal of Vision 2001 12 1 3 135 Delaying the presentation of information to one modality relative to another (an intersensory temporal offset) impairs performance on a wide range of tasks. We have recently shown, however, that a few minutes' exposure to delayed visual feedback induces sensorimotor temporal adaptation, returning performance to normal. Here, we examine whether adaptation to delayed vestibular feedback is possible. Subjects were placed on a motion platform, and were asked to perform a stabilization task. The task was similar to balancing a rod on the tip of your finger. Specifically, the platform acted as if it were on the end of an inverted pendulum, with subjects applying an acceleration to the platform via a joystick. The more difficulty one has in stabilizing the platform, the more it will oscillate, increasing the variability in the platform's position. The experiment was divided into 3 sections. During the Baseline section (5 minutes), subjects performed the task with immediate vestibular feedback. They then were presented with a Training section, consisting of 4 sessions (5 minutes each) during which vestibular feedback was delayed by 500 ms. Finally, subjects were presented with a Post-test (two minutes) with no feedback delay. Subjects performed rather well in the Baseline section (average standard deviation of platform tilt was 1.37 degrees).
The introduction of the delay greatly impaired performance (8.81 degrees standard deviation in the 1st Training session), but performance rapidly showed significant improvement (5.59 degrees standard deviation during the last Training session, p<0.04). Subjects clearly learned to compensate, at least partially, for the delayed vestibular feedback. Performance during the Post-test was worse than during Baseline (2.48 degrees standard deviation in tilt). This negative aftereffect suggests that the improvement seen during training might be the result of intersensory temporal adaptation. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf640.pdf http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.journalofvision.org/1/3/135/ Biologische Kybernetik Max-Planck-Gesellschaft Sarasota, FL, USA First Annual Meeting of the Vision Sciences Society (VSS 2001) 10.1167/1.3.135 dwcDWCunningham bjoernBWKreher mvdhMvon der Heyde hhbHHBülthoff poster 1012 Gaze-eccentricity effects on automobile driving performance, or going where you look Journal of Vision 2001 12 1 3 136 A large portion of current and previous research on locomotor and vehicle navigation tends to assume that people choose a goal or destination in the visual world and then generally look where they are going. There exists, however, considerable anecdotal evidence and observational data suggesting that humans will also tend to go where they are looking. Considering the amount of time a pedestrian or driver spends looking away from his precise heading point, this tendency has received relatively little experimental attention. The goal of the present research is to determine how gaze eccentricity affects drivers' abilities to steer a straight course. A high-performance virtual reality theater was used to simulate the motion of a car through a textured environment with the participant controlling direction of travel via a forced-feedback steering wheel. Participants (n=12) were asked to fixate a Landolt-C figure in one of 7 positions (0, +/- 15, 30, or 45 degrees from center) and drive down the center of a perfectly straight road. The Landolt-C was located just above the horizon in a fixed position on the viewing screen. Throughout each trial, the orientation of the figure varied randomly between 4 possible orientations (0, 90, 180, and 270 degrees) and, in order to ensure fixation, subjects were required to respond to particular orientations. Lateral position of the driver was recorded for each of the different gaze eccentricities. Significant deviations from straight ahead were found for side of presentation when compared to fixation at 0 degrees (p<0.01). That is, when participants fixated to the left, for example, they systematically steered in the same direction. These results are compared to another study in which subjects' performance was measured while their head movements were restricted using a head-strap and chin-rest. The similar pattern of results in both conditions will be discussed in terms of the influence of retinal flow on the control of locomotion.
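The inverted-pendulum stabilization task used in the delayed-vestibular-feedback abstracts above (Cunningham, Kreher, von der Heyde and Bülthoff) is compact enough to sketch in code. The following minimal simulation is illustrative only, not the authors' implementation: a proportional-derivative controller stands in for the subject, the dynamics constant, gains, noise level, and tilt limit are invented for illustration, and the tilt standard deviation serves as the performance measure, as in the abstract. With the feedback delayed by 500 ms, this naive controller oscillates against the platform limits, qualitatively reproducing the reported impairment.

import numpy as np

DT = 0.01          # simulation step (s)
DURATION = 300.0   # one 5-minute session (s)
ALPHA = 3.0        # instability constant of the inverted pendulum (1/s^2), illustrative
LIMIT = 0.35       # mechanical tilt limit of the platform (rad), illustrative
KP, KD = 8.0, 4.0  # gains of a simple PD "subject", illustrative
NOISE = 0.05       # motor noise (rad/s^2), illustrative

def run_session(delay_s, seed=0):
    """Simulate one session; return the tilt standard deviation in degrees."""
    rng = np.random.default_rng(seed)
    n = int(DURATION / DT)
    lag = int(delay_s / DT)
    theta = np.zeros(n)  # platform tilt (rad)
    dtheta = 0.0
    for t in range(1, n):
        # The controller only sees the tilt as it was `lag` steps ago.
        seen = theta[max(t - 1 - lag, 0)]
        seen_prev = theta[max(t - 2 - lag, 0)]
        u = -KP * seen - KD * (seen - seen_prev) / DT + rng.normal(0.0, NOISE)
        # Inverted-pendulum dynamics: tilt grows unless actively counteracted.
        dtheta += (ALPHA * theta[t - 1] + u) * DT
        theta[t] = theta[t - 1] + dtheta * DT
        if abs(theta[t]) > LIMIT:  # the platform hits its mechanical stop
            theta[t] = np.sign(theta[t]) * LIMIT
            dtheta = 0.0
    return np.degrees(theta.std())

print(f"immediate feedback: tilt SD = {run_session(0.0):.2f} deg")
print(f"500 ms delay:       tilt SD = {run_session(0.5):.2f} deg")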
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf1012.pdf http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.journalofvision.org/content/1/3/136 Biologische Kybernetik Max-Planck-Gesellschaft Sarasota, FL, USA First Annual Meeting of the Vision Sciences Society (VSS 2001) 10.1167/1.3.136 wilWOReadinger astrosAChatziastros dwcDWCunningham hhbJECutting hhbHHBülthoff poster 630 No visual dominance for remembered turns - Psychophysical experiments on the integration of visual and vestibular cues in Virtual Reality Journal of Vision 2001 12 1 3 188 In most virtual reality (VR) applications turns are misperceived, which leads to disorientation. Here we focus on two cues providing no absolute spatial reference: optic flow and vestibular cues. We asked (a) whether both visual and vestibular information are stored and can be reproduced later, and (b) whether those modalities are integrated into one coherent percept or the memory is modality-specific. We used a VR setup including a motion simulator (Stewart platform) and a head-mounted display for presenting vestibular and visual stimuli, respectively. Subjects followed an invisible randomly generated path including heading changes between 8.5 and 17 degrees. Heading deviations from this path were presented as vestibular roll rotation. Hence the path was solely defined by vestibular (and proprioceptive) information. The subjects' task was to continuously adjust the roll axis of the platform to level position. They controlled their heading with a joystick and thereby maintained an upright position. After successfully following a vestibularly defined path twice, subjects were asked to reproduce it from memory. During the reproduction phase, the gains between the joystick control and the resulting visual and vestibular turns were varied independently. Subjects learned and memorized curves of the vestibularly defined virtual path and were able to reproduce the amplitudes of the turns. This demonstrates that vestibular signals can be used for spatial orientation in virtual reality. Since the modality with the bigger gain factor had a dominant effect on the reproduced turns, the integration of visual and vestibular information seems to follow a “max rule”, in which the larger signal is responsible for the perceived and memorized heading change. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf630.pdf http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.journalofvision.org/1/3/188/ Biologische Kybernetik Max-Planck-Gesellschaft Sarasota, FL, USA First Annual Meeting of the Vision Sciences Society (VSS 2001) 10.1167/1.3.188 mvdhMvon der Heyde bernieBERiecke dwcDWCunningham hhbHHBülthoff poster 655 Driving effects of retinal flow properties associated with eccentric gaze Perception 2001 8 30 ECVP Abstract Supplement 109 There is growing evidence of the computational (and perhaps practical) difficulty of recovering heading from retinal flow when the observer is looking away from his path. Despite this, it is generally accepted that retinal-flow information plays a significant role in the control of locomotion. The experiments reported here address one meaningful behavioural consequence of this situation. Specifically, we consider eccentric gaze and its effects on automobile steering.
In three conditions, we measured drivers' steering performance on a straight road, located in a textured ground plane, and presented in a 180 deg field-of-view projection theatre. Consistent with earlier findings, at eccentricities from 15 to 45 deg away from heading direction, subjects' lateral position on the road tended significantly towards their direction of gaze (p < 0.001), but eccentricities of as little as 5 deg from heading direction also significantly affected position on the road surface (p < 0.01). Furthermore, this effect was found to scale with small (±5 deg) changes in gaze-movement angle, but not with speed of travel through the environment. We propose, therefore, a model of steering performance in such situations resulting from characteristics of the retinal flow immediately surrounding the point of fixation. http://www.kyb.tuebingen.mpg.de Department Bülthoff http://pec.sagepub.com/content/30/1_suppl/1.full.pdf+html Biologische Kybernetik Max-Planck-Gesellschaft Kuşadasi, Turkey Twenty-fourth European Conference on Visual Perception 10.1177/03010066010300S101 wilWOReadinger astrosAChatziastros dwcDWCunningham hhbHHBülthoff poster 1246 Temporal adaptation with a variable delay Perception 2001 8 30 ECVP Abstract Supplement 102 The consequences of an action almost always occur immediately. Delaying the consequences of an action (eg by delaying visual feedback) drastically impairs performance on a wide range of tasks. A few minutes of exposure to a delay can, however, induce sensorimotor temporal adaptation. Here we ask whether a stable delay is necessary for temporal adaptation. Specifically, we examined performance in a driving simulator (where subjects could control the direction but not the speed of travel). The delay was on average 250 ms, but fluctuated rapidly (36 Hz) and randomly between 50 and 450 ms. Overall, subjects were able to learn to drive a virtual car with a variable delay. In one experiment, we found that the adapted state also improved performance on untrained streets (generalisation). In a second experiment, performance with immediate feedback was measured both before and after delay training. We found a strong negative aftereffect (approximately 50% drop in performance from pre- to post-test). While some behavioural strategies (eg slow gradual changes in steering wheel angle) might mitigate the impact of a variable delay, these strategies do not totally eliminate the variability, particularly for fast speeds and sharp corners. http://www.kyb.tuebingen.mpg.de Department Bülthoff http://pec.sagepub.com/content/30/1_suppl/1.full.pdf+html Biologische Kybernetik Max-Planck-Gesellschaft Kuşadasi, Turkey Twenty-fourth European Conference on Visual Perception 10.1177/03010066010300S101 dwcDWCunningham hhbHHBülthoff poster 67 Gaze-direction effects on drivers' abilities to steer a straight course 2001 3 149 Applied navigation tasks, such as driving a car, present unique opportunities to study the human perception/action system. Traditionally, research into the control of locomotion has assumed that humans choose a destination and then generally look where they go. However, evidence from motorcycle and equitation manuals, for instance, suggests that the reciprocal behavior is also important.
That is, even with a distinct goal in the environment, people tend to navigate in the direction they are looking, only occasionally checking on their progress toward a destination and making adjustments as necessary. Considering the implications for the performance and safety of drivers, the present study is designed to investigate effects of gaze eccentricity on basic steering abilities. Using a 180-degree horizontal FoV projection theater, we simulated a car moving through a textured environment while participants used a forced-feedback steering wheel to control direction of travel. During each trial, participants (n=12) were asked to fixate a Landolt-C figure which was displayed in one of 7 positions (0, +/- 15, 30, or 45 degrees from center) anchored on the screen, and drive down the center of a straight road. In order to ensure fixation, the orientation of the Landolt-C varied randomly between 4 orientations (0, 90, 180, and 270 degrees) and participants were required to respond to particular orientations by pressing a button on the steering wheel. The lateral position of the driver was measured during the trial. In this basic condition, significant deviations from straight ahead were found when conditions of eccentric gaze were compared to fixation at 0 degrees (p<0.001). Specifically, fixation to one side of the street systematically led the driver to steer in that direction. These results are similar to the findings from another condition in which participants' head movements were restricted. The contribution of retinal flow to this pattern of data will be discussed, along with reports of driver experience and confidence. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf67.pdf http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.twk.tuebingen.mpg.de/twk01/Psenso.htm Biologische Kybernetik Max-Planck-Gesellschaft Tübingen, Germany 4. Tübinger Wahrnehmungskonferenz (TWK 2001) wilWOReadinger astrosAChatziastros dwcDWCunningham hhbJECutting hhbHHBülthoff poster 59 Temporal adaptation to delayed vestibular feedback 2001 3 151 In order to rapidly and accurately interact with the world, we need to perceive the consequences of our actions. It should not be surprising, then, that delaying the consequences of our actions, or delaying feedback about our actions, impairs performance on a wide range of tasks. We have recently shown that a few minutes' exposure to delayed visual feedback induces sensorimotor temporal adaptation, returning performance to near-normal levels. While visual feedback plays a large role in many tasks, there are some tasks for which vestibular perception is more critical. Here, we examine whether adaptation to delayed vestibular feedback is possible. To test for vestibular temporal adaptation, subjects were placed on a motion platform and were asked to perform a stabilization task. The task was similar to balancing a rod on the tip of your finger. Specifically, the platform acted as if it were on the end of an inverted pendulum. Subjects moved the platform by applying an acceleration to it via a joystick. The experiment was divided into 3 sections. During the Baseline section, which lasted 5 minutes, subjects performed the task with immediate vestibular feedback. They were then presented with a Training section, which consisted of 4 sessions (5 minutes each) during which vestibular feedback was delayed by 500 ms.
Finally, subjects' performance on the task with immediate feedback was remeasured during a 2-minute Post-test. The more difficulty one has in stabilizing the platform, the more it will oscillate, increasing the variability in the platform's position and orientation. Accordingly, positional variance served as the primary measure of the subjects' performance. Subjects did rather well in the Baseline section (average standard deviation of platform tilt was 1.37 degrees). The introduction of the delay greatly impaired performance (8.81 degrees standard deviation in the 1st Training session), but performance rapidly showed significant improvement (5.59 degrees standard deviation during the last Training session). Subjects clearly learned to compensate, at least partially, for the delayed vestibular feedback. Performance during the Post-test showed a negative aftereffect: performance with immediate feedback was worse during the Post-test than during Baseline (2.48 versus 1.37 degrees standard deviation), suggesting that the improvement seen during training was the result of intersensory temporal adaptation. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf59.pdf http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.twk.tuebingen.mpg.de/twk01/Psenso.htm H.H. Bülthoff, K.R. Gegenfurtner, H.A. Mallot, R. Ulrich Biologische Kybernetik Max-Planck-Gesellschaft Tübingen, Germany 4. Tübinger Wahrnehmungskonferenz (TWK 2001) dwcDWCunningham bjoernBWKreher mvdhMvon der Heyde hhbHHBülthoff poster 63 Visual-vestibular sensor integration follows a max-rule: results from psychophysical experiments in virtual reality 2001 3 142 Perception of ego turns is crucial for navigation and self-localization. Yet in most virtual reality (VR) applications turns are misperceived, which leads to disorientation. Here we focus on two cues providing no absolute spatial reference: optic flow and vestibular cues. We asked (a) whether both visual and vestibular information are stored and can be reproduced later, and (b) whether those modalities are integrated into one coherent percept or the memory is modality-specific. In the following experiment, subjects learned and memorized turns and were able to reproduce them even with different gain factors for the vestibular and visual feedback. We used a VR setup including a motion simulator (Stewart platform) and a head-mounted display for presenting vestibular and visual stimuli, respectively. Subjects followed an invisible randomly generated path including heading changes between 8.5 and 17 degrees. Heading deviations from this path were presented as vestibular roll rotation. Hence the path was solely defined by vestibular (and proprioceptive) information. One group of subjects continuously adjusted the roll axis of the platform to level position. They controlled their heading with a joystick and thereby maintained an upright position. The other group was passively guided through the sequence of heading turns without any roll signal. After successfully following a vestibularly defined path twice, subjects were asked to reproduce it from memory. During the reproduction phase, the gains between the joystick control and the resulting visual and vestibular turns were varied independently by a factor of 1/sqrt(2), 1, or sqrt(2). Subjects from both groups learned and memorized curves of the vestibularly defined virtual path and were able to reproduce the amplitudes of the turns.
This demonstrates that vestibular signals can be used for spatial orientation in virtual reality. Since the modality with the bigger gain factor had a dominant effect on the reproduced turns for both groups, the integration of visual and vestibular information seems to follow a "max rule", in which the larger signal is responsible for the perceived and memorized heading change. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf63.pdf http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.twk.tuebingen.mpg.de/twk01/Psenso.htm H.H. Bülthoff, K.R. Gegenfurtner, H.A. Mallot, R. Ulrich Biologische Kybernetik Max-Planck-Gesellschaft Tübingen, Germany 4. Tübinger Wahrnehmungskonferenz (TWK 2001) mvdhMvon der Heyde bernieBERiecke dwcDWCunningham hhbHHBülthoff poster 1995 Spatiotemporal discrimination thresholds for dynamic random fractal (1/f) textures Perception 2000 8 29 ECVP Abstract Supplement 104 Natural scenes are fractal in space (ie they have 1/f^B spatial frequency spectra) and time (1/f^A temporal spectra), and can be compellingly mimicked by fractal textures. If dynamic fractal texture statistics are used to describe natural scenes, then data on discriminability of such textures are required. The smallest detectable change was measured separately for 10 spatial (0.4 to 2.2) and 8 temporal exponents (static, and 0.2 to 1.4) with an adaptive staircase. Computational constraints limited each fractal to 64 frames (~ 2 s) of 64 × 64 pixel images. Spatial discriminations were easiest when the spatial exponent B was ~ 1.6 and were similar across all temporal exponents. Temporal discriminations were easiest when the temporal exponent A was ~ 0.8, and increased in difficulty as the spatial exponent increased. This similarity in spatial discrimination thresholds for static and dynamic fractals suggests that the spatial and temporal dimensions are independent in dynamic fractals (at least for spatial judgments), as is often assumed. The dependence of temporal judgments on the coarseness of the texture (ie on the spatial exponent) is understandable, as a 1 mm change in position is more noticeable for a 1 mm object than for a 100 m object. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf1995.pdf http://www.kyb.tuebingen.mpg.de Department Bülthoff http://pec.sagepub.com/content/29/1_suppl/1.full.pdf+html Biologische Kybernetik Max-Planck-Gesellschaft Groningen, Netherlands 23rd European Conference on Visual Perception (ECVP 2000) 10.1177/03010066000290S101 dwcDWCunningham VABillock BHTsou poster 114 Learning to drive with delayed visual feedback Investigative Ophthalmology & Visual Science 2000 5 41 4 S246 http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf114.pdf http://www.kyb.tuebingen.mpg.de Department Bülthoff Biologische Kybernetik Max-Planck-Gesellschaft Fort Lauderdale, FL, USA Annual Meeting of the Association for Research in Vision and Ophthalmology (ARVO 2000) dwcDWCunningham mvdhMvon der Heyde hhbHHBülthoff poster 164 Humans can extract distance and velocity from vestibular perceived acceleration Journal of Cognitive Neuroscience 2000 4 12 Supplement 77 Purpose: The vestibular system is known to measure accelerations for linear forward movements. Can humans integrate these vestibular signals to reliably derive distance and velocity estimates?
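Why this question has a positive answer in principle can be made explicit for the Gaussian-shaped velocity profiles used in these studies. The worked note below is ours, not part of the abstract; it assumes only the profile shape stated in the Methods, with peak velocity v_p and time constant sigma:

\[
  v(t) = v_p\, e^{-(t - t_0)^2 / (2\sigma^2)}, \qquad
  a(t) = \dot v(t) = -\frac{v_p\,(t - t_0)}{\sigma^2}\, e^{-(t - t_0)^2 / (2\sigma^2)} .
\]

The acceleration sensed by the vestibular organs peaks at t - t_0 = ±sigma with magnitude

\[
  |a|_{\max} = \frac{v_p}{\sigma}\, e^{-1/2},
  \qquad\text{while the traveled distance is}\qquad
  D = \int_{-\infty}^{\infty} v(t)\, dt = \sqrt{2\pi}\,\sigma\, v_p .
\]

Thus integrating the sensed acceleration once recovers the peak velocity and integrating twice recovers the distance; all three quantities are fixed by the same two parameters, so distance and velocity estimates are in principle recoverable from acceleration alone.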
Methods: Blindfolded naive volunteers participated in a psychophysical experiment using a Stewart platform motion simulator. The vestibular stimuli consisted of Gaussian-shaped translatory velocity profiles with a duration of less than 4 seconds. The full two-factorial design covered 6 peak accelerations above threshold and 5 distances up to 25 cm with 4 repetitions. In three separate blocks, the subjects were asked to verbally judge traveled distance, maximum velocity, and maximum acceleration on a scale from 1 to 100. Results: Subjects perceived distance, velocity and acceleration quite consistently, but with systematic errors. The distance estimates showed a linear scaling towards the mean and were independent of accelerations. The correlation of perceived and real velocity was linear and showed no systematic influence of distances or accelerations. High accelerations were drastically underestimated and accelerations close to threshold were overestimated, showing a logarithmic dependency. Conclusions: Despite the fact that the vestibular system measures acceleration only, one can derive peak velocity and traveled distance from it. Interestingly, even though maximum acceleration was perceived non-linearly, velocity and distance were judged consistently linearly. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf164.pdf http://www.kyb.tuebingen.mpg.de Department Bülthoff http://cognet.mit.edu/library/conferences/paper?paper_id=47341 Biologische Kybernetik Max-Planck-Gesellschaft San Francisco, CA, USA 7th Annual Meeting of the Cognitive Neuroscience Society mvdhMvon der Heyde bernieBERiecke dwcDWCunningham hhbHHBülthoff poster 113 Driving a virtual car with delayed visual feedback 2000 2 164 The consequences of an action usually occur immediately. One of the more important ramifications of this is that delaying visual feedback greatly impairs performance on a wide range of tasks. Cunningham et al. (ARVO 1999) have demonstrated that, with practice, humans can perform equally well with delayed and immediate visual feedback on a simple obstacle avoidance task with abstract stimuli. Here, we examine the effects of training in more detail under more realistic conditions. Naive volunteers maneuvered a virtual car along a convoluted path in a high-fidelity virtual environment, which was projected onto a 180 deg screen. Subjects drove at a constant speed, steering with a forced-feedback steering wheel. In Exp. 1, subjects were presented with 7 speeds in random order 5 times, using immediate visual feedback and a single path. Subsequently, subjects trained with a 280 ms delay, and then were presented with 5 trials at the fastest speed they had successfully completed in the first section. In Exp. 2, subjects were given 15 trials of practice using immediate feedback. Following this, subjects' performance with 5 paths at 3 speeds was measured; they then trained on a new path, and finally were presented with 5 new paths at the 3 speeds. In both experiments, training with delayed feedback improved performance accuracy with delayed feedback, and seemed to reduce the perceptual magnitude of the delay. In Exp. 1, the training also lowered performance with immediate feedback. In Exp. 2, the improved performance generalized to novel paths. These results are the main hallmarks of sensorimotor adaptation, and suggest that humans can adapt to intersensory temporal differences.
Regardless of the underlying mechanism, however, it is clear that accurate control of vehicles at high speeds with delayed feedback can be learned. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf113.pdf http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.twk.tuebingen.mpg.de/twk00/ H.H. Bülthoff, M. Fahle, K.R. Gegenfurtner, H.A. Mallot Biologische Kybernetik Max-Planck-Gesellschaft Tübingen, Germany 3. Tübinger Wahrnehmungskonferenz (TWK 2000) dwcDWCunningham mvdhMvon der Heyde hhbHHBülthoff poster 165 Humans can separately perceive distance, velocity and acceleration from vestibular stimulation 2000 2 148 The vestibular system is known to measure changes in linear and angular position in terms of acceleration. Can humans judge these vestibular signals as acceleration and integrate them to reliably derive distance and velocity estimates? Twelve blindfolded naive volunteers participated in a psychophysical experiment using a Stewart platform motion simulator. The vestibular stimuli consisted of Gaussian-shaped translatory or rotatory velocity profiles with a duration of less than 4 seconds. The full two-factorial design covered 6 peak accelerations above threshold and 5 distances with 4 repetitions. In three separate blocks, the subjects were asked to verbally judge the distance traveled or the angle turned, maximum velocity, and maximum acceleration on a scale from 1 to 100. Subjects judged the distance, velocity and acceleration quite consistently, but with systematic errors. The distance estimates showed a linear scaling towards the mean response and were independent of accelerations. The correlation of perceived and real velocity was linear and showed no systematic influence of distances or accelerations. High accelerations were drastically underestimated and accelerations close to threshold were overestimated, showing a logarithmic dependency. Therefore, the judged acceleration was close to the velocity judgment. There was no significant difference between translational and angular movements. Despite the fact that the vestibular system measures acceleration only, one can derive peak velocity and traveled distance from it. Interestingly, even though maximum acceleration was perceived non-linearly, velocity and distance judgments were linear. http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf165.pdf http://www.kyb.tuebingen.mpg.de Department Bülthoff http://www.twk.tuebingen.mpg.de/twk00/ H.H. Bülthoff, M. Fahle, K.R. Gegenfurtner, H.A. Mallot Biologische Kybernetik Max-Planck-Gesellschaft Tübingen, Germany 3.
Tübinger Wahrnehmungskonferenz (TWK 2000) mvdhMvon der Heyde bernieBERiecke dwcDWCunningham hhbHHBülthoff poster 1247 Sensorimotor adaptation to temporally displaced feedback Investigative Ophthalmology & Visual Science 1999 5 40 4 585 http://www.kyb.tuebingen.mpg.de Biologische Kybernetik Max-Planck-Gesellschaft Fort Lauderdale, FL, USA Annual Meeting of the Association for Research in Vision and Ophthalmology (ARVO 1999) dwcDWCunningham BHTsou poster 1248 Perception of spatiotemporal random fractal textures: Towards a colorimetry of dynamic texture Investigative Ophthalmology & Visual Science 1998 5 39 4 859 http://www.kyb.tuebingen.mpg.de Biologische Kybernetik Max-Planck-Gesellschaft Fort Lauderdale, FL, USA Annual Meeting of the Association for Research in Vision and Ophthalmology (ARVO 1998) dwcDWCunningham PRHavig JSChen VABillock BHTsou poster 1249 The roles of spatial and spatiotemporal surface information in Spatiotemporal Boundary Formation Investigative Ophthalmology & Visual Science Supplement 1997 38(4) S1005 http://www.kyb.tuebingen.mpg.de Biologische Kybernetik Max-Planck-Gesellschaft dwcDWCunningham TFShipley PJKellman poster 1250 Spatiotemporal Boundary Formation: The role of global motion signals Investigative Ophthalmology & Visual Science 1996 4 37 3 172 http://www.kyb.tuebingen.mpg.de Biologische Kybernetik Max-Planck-Gesellschaft Fort Lauderdale, FL, USA Annual Meeting of the Association for Research in Vision and Ophthalmology (ARVO 1996) dwcDWCunningham TFShipley PJKellman thesis 5751 Perceptual Graphics 2007 http://www.kyb.tuebingen.mpg.de Biologische Kybernetik Max-Planck-Gesellschaft University of Tübingen, Tübingen, Germany habilitation en dwcDWCunningham conference KaulardWCB2009 Laying the foundations for an in-depth investigation of the whole space of facial expressions 2009 11 10 11 Compared to other species, humans have developed highly sophisticated communication systems for social interaction. One of the most important communication systems is based on facial expressions, which are used both for expressing emotions and for conveying intentions. From birth, humans are trained to process faces and facial expressions, resulting in a high degree of perceptual expertise for face perception and social communication. To date, research has mostly focused on the emotional aspect of facial expression processing, using only a very limited set of “generic” or “universal” expressions, such as happiness or sadness. The important communicative aspect of facial expressions, however, has so far been largely neglected. Furthermore, the processing of facial expressions is influenced by dynamic information (e.g. Fox et al., 2009). However, almost all studies so far have used static expressions and thus were studying facial expressions in an ecologically less valid context (O’Toole et al., 2004). In order to enable a deeper understanding of facial expression processing, it therefore seems crucial to investigate the emotional and communicative aspects of facial expressions in a dynamic context. For these investigations it is essential to first construct a database that contains such material using a well-controlled setup.
In this talk, we will present the novel MPI facial expression database, which to our knowledge is the most extensive database of this kind to date. Furthermore, we will briefly present psychophysical experiments with which we investigated the validity of our database, as well as the recognizability of a large set of facial expressions. http://www.kyb.tuebingen.mpg.de Department Bülthoff Abstract Talk http://www.neuroschool-tuebingen-nena.de/ Ellwangen, Germany 10th Conference of Junior Neuroscientists of Tübingen (NeNa 2009) kascotKKaulard walliCWallraven dwcDWCunningham hhbHHBülthoff conference 5920 Motion and form interact in expression recognition: Insights from computer animated faces Perception 2009 8 38 ECVP Abstract Supplement 163 Faces are a powerful and versatile communication channel. Yet, little is known about which changes are important for expression recognition (for a review, see Schwaninger et al, 2006 Progress in Brain Research). Here, we investigate at what spatial and temporal scales expressions are recognized, using five different expressions and three animation styles. In point-light faces, the motion and configuration of facial features can be inferred, but the higher-frequency spatial deformations cannot. In wireframe faces, additional information about spatial configuration and deformation is available. Finally, full-surface faces have the highest degree of static information. In our experiment, we also systematically varied the number of vertices and the presence of motion. Recognition accuracy (6AFC with a 'none-of-the-above' option) and perceived intensity (7-point scale) were measured. Overall, dynamic expressions were recognized better than static ones (72% versus 49%) and rated as more intense (4.50 versus 3.94), and they were largely impervious to geometry reduction. Interestingly, in both conditions, wireframe faces suffered the least from geometry reduction. On the one hand, this suggests that more information than the motion of single vertices is necessary for recognition. On the other hand, it shows that geometry reduction affects the full-surface face more than the abstracted versions. http://www.kyb.tuebingen.mpg.de Department Bülthoff Abstract Talk http://pec.sagepub.com/content/38/1_suppl.toc Biologische Kybernetik Max-Planck-Gesellschaft Regensburg, Germany 32nd European Conference on Visual Perception en 10.1177/03010066090380S101 dwcDCunningham walliCWallraven conference 4595 Perception of accentuation in audio-visual speech 2006 5 Introduction: In everyday speech, auditory and visual information are tightly coupled. Consistent with this, previous research has shown that facial and head motion can improve the intelligibility of speech (Massaro et al., 1996; Munhall et al., 2004; Saldana & Pisoni 1996). The multimodal nature of speech is particularly noticeable for emphatic speech, where it can be exceedingly difficult to produce the proper vocal stress patterns without producing the accompanying facial motion. Using a detection task, Swerts and Krahmer (2004) demonstrated that information about which word is emphasized exists in both the visual and acoustic modalities. It remains unclear what the differential roles of visual and auditory information are in the perception of emphasis intensity. Here, we validate a new methodology for acquiring, presenting, and studying verbal emphasis.
Subsequently, we can use the newly established methodology to explore the perception and production of believable accentuation. Experiment: Participants were presented with a series of German sentences, in which a single word was emphasized. For each of the 10 base sentences, two factors were manipulated. First, the semantic category varied -- the accent-bearing word was either a verb, an adjective, or a noun. Second, the intensity of the emphasis was varied (no, low, and high). The participants' task was to rate the intensity of the emphasis using a 7-point Likert scale (with a value of 1 indicating weak and 7 strong). Each of the 70 sentences was recorded from 8 Germans (4 male and 4 female), yielding a total of 560 trials. Results and Conclusion: Overall, the results show that people can produce and recognize different levels of accentuation. All "high" emphasis sentences were ranked as being more intense (5.2, on average) than the "low" emphasis sentences (4.1, on average). Both conditions were rated as more intense than the "no" emphasis sentences (1.9). Interestingly, "verb" sentences were rated as being more intense than either the "noun" or "adjective" sentences, which were remarkably similar. Critically, the pattern of intensity ratings was the same for each of the ten sentences, strongly suggesting that the effect was solely due to the semantic role of the emphasized word. We are currently employing this framework to more closely examine the multimodal production and perception of emphatic speech. http://www.kyb.tuebingen.mpg.de Department Bülthoff Abstract Talk http://www.music.mcgill.ca/musictech/spcl/ENACTIVE/talks.html Biologische Kybernetik Max-Planck-Gesellschaft Montreal, Canada 2nd Enactive Workshop at McGill University en manfredMNusseck dwcDWCunningham walliCWallraven hhbHHBülthoff conference 2319 Facial Animation Based on 3D Scans and Motion Capture 2003 7 One of the applications of realistic facial animation outside the film industry is psychophysical research aimed at understanding the perception of human facial motion. For this, an animation model close to physical reality is important. Through the combination of high-resolution 3D scans and 3D motion capture, we aim for such a model and provide a prototypical example in this sketch. State-of-the-art 3D scanning systems deliver very high spatial resolution but usually are too slow for real-time recording. Motion capture (mocap) systems, on the other hand, have fairly high temporal resolution for a small set of tracking points. The idea presented here is to combine these two in order to get high-resolution data in both domains that is closely based upon real-world properties. While this is similar to previous work, for example [Choe et al. 2001] or [Pighin et al. 2002], the innovation of our approach lies in the combination of precision 3D geometry, high-resolution motion tracking, and photo-realistic textures.
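The sketch abstract above does not spell out how the sparse mocap trajectories drive the dense scan geometry. One standard realization of such marker-driven deformation is scattered-data interpolation of the marker displacements, for example with Gaussian radial basis functions; the NumPy sketch below illustrates that idea under this assumption (the function names, the kernel choice, and sigma are ours, not from the paper).

import numpy as np

def rbf_weights(markers_rest, markers_frame, sigma=0.05):
    # k x k Gaussian kernel over the rest positions of the k mocap markers.
    d = np.linalg.norm(markers_rest[:, None] - markers_rest[None, :], axis=-1)
    A = np.exp(-((d / sigma) ** 2))
    disp = markers_frame - markers_rest        # per-marker displacement, k x 3
    return np.linalg.solve(A, disp)            # RBF weights, k x 3

def deform_scan(vertices, markers_rest, weights, sigma=0.05):
    # Interpolate the marker displacement field to all n scan vertices.
    d = np.linalg.norm(vertices[:, None] - markers_rest[None, :], axis=-1)
    K = np.exp(-((d / sigma) ** 2))            # n x k kernel
    return vertices + K @ weights

For each mocap frame, rbf_weights is solved once and deform_scan displaces the high-resolution scan accordingly; a practical pipeline would add regularization and first express the markers in a head-stabilized coordinate frame.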
http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/pdf2319.pdf http://www.kyb.tuebingen.mpg.de Department Bülthoff Abstract Talk http://www.siggraph.org/s2003/conference/sketches/40.html Biologische Kybernetik Max-Planck-Gesellschaft San Diego, CA, USA 30th International Conference and Exhibition on Computer Graphics and Interactive Techniques (SIGGRAPH 2003) mbreidtMBreidt walliCWallraven dwcDWCunningham hhbHHBülthoff conference 652 Can we be forced off the road by the visual motion of snowflakes? Immediate and longer-term responses to visual perturbations Perception 2000 8 29 ECVP Abstract Supplement 118 Several sources of information have been proposed for the perception of heading. Here, we independently varied two such sources (optic flow and viewing direction) to examine the influence of perceived heading on driving. Participants were asked to stay in the middle of a straight road while driving through a snowstorm in a simulated, naturalistic environment. Subjects steered with a forced-feedback steering wheel in front of a large cylindrical screen. The flow field was varied by translating the snow field perpendicularly to the road, producing a second focus of expansion (FOE) with an offset of 15°, 30°, or 45°. The perceived direction was altered by changing the viewing direction by 5°, 10°, or 15°. The onset time, direction, and magnitude of the two disturbances were pseudo-randomly ordered. The translating snow field caused participants to steer towards the FOE of the snow, resulting in a significant lateral displacement on the road. This might be explained by induced motion. Specifically, the motion of the snow might have been misperceived as a translation of the road. On the other hand, changes in viewing direction resulted in subjects steering towards the road's new vantage point. While the effect of snow persisted over repeated exposures, the viewing-direction effect attenuated. http://www.kyb.tuebingen.mpg.de Department Bülthoff Abstract Talk http://pec.sagepub.com/content/29/1_suppl/1.full.pdf+html Biologische Kybernetik Max-Planck-Gesellschaft Groningen, Netherlands 23rd European Conference on Visual Perception (ECVP 2000) 10.1177/03010066000290S101 astrosAChatziastros dwcDWCunningham hhbHHBülthoff
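The dynamic random fractal (1/f) textures of the Cunningham, Billock and Tsou abstract above are straightforward to synthesize by shaping white noise in the Fourier domain. The sketch below is our own minimal illustration, under the assumption that the exponents shape the amplitude spectrum; the size, exponents, and normalization are free parameters, not values taken from the study.

import numpy as np

def fractal_movie(n=64, frames=64, beta=1.6, alpha=0.8, seed=0):
    """White noise shaped to a 1/f^beta spatial and 1/f^alpha temporal amplitude spectrum."""
    rng = np.random.default_rng(seed)
    spec = np.fft.fftn(rng.standard_normal((frames, n, n)))
    fx = np.fft.fftfreq(n)                        # spatial frequencies
    ft = np.fft.fftfreq(frames)                   # temporal frequencies
    fs = np.sqrt(fx[None, None, :] ** 2 + fx[None, :, None] ** 2)
    fT = np.abs(ft)[:, None, None]
    fs = np.where(fs == 0.0, np.abs(fx[1]), fs)   # avoid dividing by zero at DC
    fT = np.where(fT == 0.0, np.abs(ft[1]), fT)
    spec *= fs ** -beta * fT ** -alpha            # impose the power-law falloff
    movie = np.fft.ifftn(spec).real
    return (movie - movie.mean()) / movie.std()   # zero mean, unit contrast

movie = fractal_movie()   # 64 frames of 64 x 64 pixels, as in the abstract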