Fademrecht2017 1 L Fademrecht Logos Verlag Berlin, Germany 2017-00-00 Humans are social beings that interact with others in their surroundings. In a public space, for example on a train platform, one can observe the wide array of social actions humans express in their daily lives. There are, for instance, people hugging each other, waving to one another, or shaking hands. A large part of our social behavior consists of carrying out such social actions, and the recognition of those actions facilitates our interactions with other people. Therefore, action recognition has become more and more popular as a research topic over the years. Actions appear not only at our point of fixation but also in the peripheral visual field. The current Ph.D. thesis aims at understanding action recognition in human central and peripheral vision. To this end, action recognition processes were investigated under more naturalistic conditions than has been done so far. This thesis extends the knowledge about action recognition processes into more realistic scenarios and the far visual periphery. In four studies, life-size action stimuli were used (I) to examine the action categorization abilities of central and peripheral vision, (II) to investigate the viewpoint-dependency of peripheral action representations, (III) to behaviorally measure the perceptive field sizes of action-sensitive channels and (IV) to investigate the influence of additional actors in the visual scene on action recognition processes. The main results of the different studies can be summarized as follows. Study I showed high categorization performance for social actions throughout the visual field, with a nonlinear performance decline towards the visual periphery. Study II revealed a viewpoint-dependence of action recognition only in the far visual periphery. Study III measured large perceptive fields for action recognition that decrease in size towards the periphery. Study IV showed no influence of a surrounding crowd of people on the recognition of actions in central vision or the visual periphery. In sum, this thesis provides evidence that the abilities of peripheral vision have been underestimated and that peripheral vision might play a more important role in daily life than merely triggering gaze saccades to events in our environment. no notspecified http://www.kyb.tuebingen.mpg.de/ published 143 Action Recognition in the Visual Periphery 15017 15422 Saulton2017 1 A Saulton Logos Verlag Berlin, Germany 2017-00-00 Accurate information about body structure and posture is fundamental for effective control of our actions. It is often assumed that healthy adults have accurate representations of their body. Although people's abilities to visually recognize their own body size and shape are relatively good, the implicit spatial representation of their body is extremely distorted when measured in proprioceptive localization tasks. The aim of this thesis is to understand the nature of the spatial distortions of the body model measured in those localization tasks. We especially investigate the perceptual-cognitive components contributing to distortions of the implicit representation of the human hand and compare those distortions with those found for objects in similar tasks. 
no notspecified http://www.kyb.tuebingen.mpg.de/ published 93 Understanding the nature of the body model underlying position sense 15017 15422 BurchCFSW2016 28 M Burch L Chuang B Fischer A Schmidt D Weiskopf FademrechtBd2017 3 L Fademrecht I Bülthoff S de la Rosa 2017-06-00 135 10–15 Vision Research Recognizing the actions of others across the whole visual field is required for social interactions. In a previous study, we showed that recognition is very good even when life-size avatars facing the observer carried out actions (e.g. waving) very far away from the fovea (Fademrecht, Bülthoff, & de la Rosa, 2016). We explored whether this remarkable performance was due to the avatars facing the observer, which - according to some social cognitive theories (e.g. Schilbach et al., 2013) - could potentially activate different social perceptual processes than profile-facing avatars. Participants therefore viewed a life-size stick-figure avatar that carried out motion-captured social actions (greeting actions: handshake, hugging, waving; attacking actions: slapping, punching and kicking) in frontal and profile view. Participants' task was to identify the actions as 'greeting' or as 'attack' or to assess the emotional valence of the actions. While recognition accuracy for frontal and profile views did not differ, reaction times were in general significantly faster for profile views (i.e. the moving avatar was seen in profile) than for frontal views (i.e. the action was directed toward the observer). Our results suggest that the remarkably good action recognition performance in the visual periphery was not due to a more socially engaging front-facing view. Although action recognition seems to depend on viewpoint, it remains remarkably accurate even far into the visual periphery. no notspecified http://www.kyb.tuebingen.mpg.de/ published -10 Action recognition is viewpoint-dependent in the visual periphery 15017 15422 SpetterMBLvSSPVH2017 3 MS Spetter R Malekshahi N Birbaumer M Lührs AH van der Veer K Scheffler S Spuckti H Preissl R Veit M Hallschmid 2017-05-00 112 188–195 Appetite Obese subjects who achieve weight loss show increased functional connectivity between dorsolateral prefrontal cortex (dlPFC) and ventromedial prefrontal cortex (vmPFC), key areas of executive control and reward processing. We investigated the potential of real-time functional magnetic resonance imaging (rt-fMRI) neurofeedback training to achieve healthier food choices by enhancing self-control of the interplay between these brain areas. We trained eight male individuals with overweight or obesity (age: 31.8 ± 4.4 years, BMI: 29.4 ± 1.4 kg/m²) to up-regulate functional connectivity between the dlPFC and the vmPFC by means of a four-day rt-fMRI neurofeedback protocol including, on each day, three training runs comprising six up-regulation and six passive viewing trials. During the up-regulation runs of the four training days, participants successfully learned to increase functional connectivity between dlPFC and vmPFC. In addition, a trend towards fewer high-calorie food choices emerged from before to after training, which, however, was associated with a trend towards increased covertly assessed snack intake. Findings of this proof-of-concept study indicate that overweight and obese participants can increase functional connectivity between brain areas that orchestrate the top-down control of appetite for high-calorie foods. 
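As an editorial illustration of the kind of connectivity signal such training is built on (not the authors' actual pipeline; function names and parameter values below are assumptions), a feedback value for an up-regulation trial could be computed as a windowed correlation between the two regions' BOLD time courses:

```python
import numpy as np

def connectivity_feedback(dlpfc_ts, vmpfc_ts, window=20):
    """Sliding-window Pearson correlation between two ROI time courses.

    dlpfc_ts, vmpfc_ts: sequences of mean BOLD values per TR for the
    dorsolateral and ventromedial prefrontal ROIs (hypothetical inputs).
    window: number of most recent TRs used to estimate connectivity.
    Returns None until enough samples have accumulated.
    """
    if len(dlpfc_ts) < window:
        return None
    a = np.asarray(dlpfc_ts[-window:], dtype=float)
    b = np.asarray(vmpfc_ts[-window:], dtype=float)
    # Higher correlation = stronger dlPFC-vmPFC coupling; in a real-time
    # protocol this value would drive the feedback display.
    return float(np.corrcoef(a, b)[0, 1])
```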
Neurofeedback training might therefore be a useful tool for achieving and maintaining weight loss. no notspecified http://www.kyb.tuebingen.mpg.de/ published -188 Volitional regulation of brain responses to food stimuli in overweight and obese subjects: a real-time fMRI feedback study 15017 18821 15017 15422 SaultonBdD2017 3 A Saulton HH Bülthoff S de la Rosa TJ Dodds 2017-04-00 4 12 1 12 PLoS One Cultural differences in spatial perception have been little investigated, which gives rise to the impression that spatial cognitive processes might be universal. Contrary to this idea, we demonstrate cultural differences in the spatial volume perception of computer-generated rooms between Germans and South Koreans. We used a psychophysical task in which participants had to judge whether a rectangular room was larger or smaller than a square room of reference. We systematically varied the room rectangularity (depth to width aspect ratio) and the viewpoint (middle of the short wall vs. long wall) from which the room was viewed. South Koreans were significantly less biased by room rectangularity and viewpoint than their German counterparts. These results are in line with previous notions of general cognitive processing strategies being more context dependent in East Asian societies than Western ones. We point to the necessity of considering culturally specific cognitive processing strategies in visual spatial cognition research. no notspecified http://www.kyb.tuebingen.mpg.de/ published 11 Cultural differences in room size perception 15017 15422 NestmeyerRBF2016 3 T Nestmeyer P Robuffo Giordano HH Bülthoff A Franchi 2017-04-00 4 41 989–1011 Autonomous Robots This paper presents a novel decentralized control strategy for a multi-robot system that enables parallel multi-target exploration while ensuring a time-varying connected topology in cluttered 3D environments. Flexible continuous connectivity is guaranteed by building upon a recent connectivity maintenance method, in which limited range, line-of-sight visibility, and collision avoidance are taken into account at the same time. Completeness of the decentralized multi-target exploration algorithm is guaranteed by dynamically assigning the robots with different motion behaviors during the exploration task. One major group is subject to a suitable downscaling of the main traveling force based on the traveling efficiency of the current leader and the direction alignment between traveling and connectivity force. This ensures that the leader always reaches its current target and, on a longer time horizon, that the whole team completes the overall task in finite time. Extensive Monte Carlo simulations with a group of several quadrotor UAVs show the scalability and effectiveness of the proposed method, and experiments validate its practicability. no notspecified http://www.kyb.tuebingen.mpg.de/ published -989 Decentralized simultaneous multi-target exploration using a connected network of multiple robots 15017 15422 NooijPOHB2017 3 SAE Nooij P Pretto D Oberfeld H Hecht HH Bülthoff 2017-04-00 4 12 1 19 PLoS ONE This study investigated the role of vection (i.e., a visually induced sense of self-motion), optokinetic nystagmus (OKN), and inadvertent head movements in visually induced motion sickness (VIMS), evoked by yaw rotation of the visual surround. These three elements have all been proposed as contributing factors in VIMS, as they can be linked to different motion sickness theories. 
However, a full understanding of the role of each factor is still lacking because independent manipulation has proven difficult in the past. We adopted an integrative approach to the problem by obtaining measures of potentially relevant parameters in four experimental conditions and subsequently combining them in a linear mixed regression model. To that end, participants were exposed to visual yaw rotation in four separate sessions. In a full factorial design, the OKN was manipulated via a fixation target (present/absent), and vection strength via a conflict in the motion direction of the central and peripheral field of view (present/absent). In all conditions, head movements were minimized as much as possible. Measured parameters included vection strength, vection variability, OKN slow phase velocity, OKN frequency, the number of inadvertent head movements, and inadvertent head tilt. Results show that VIMS increases with vection strength, but that this relation varies among participants (R² = 0.48). Regression parameters for vection variability and for the head and eye movement parameters were not significant. These results may seem to be in line with the Sensory Conflict theory of motion sickness, but we argue that a more detailed definition of the exact nature of the conflict is required to fully appreciate the relationship between vection and VIMS. no notspecified http://www.kyb.tuebingen.mpg.de/ published 18 Vection is the main contributor to motion sickness induced by visual yaw rotation: Implications for conflict and eye movement theories 15017 15422 BrooksT2017 3 J Brooks A Thaler 2017-04-00 Epub ahead Journal of Neurophysiology A reliable mechanism to predict the heaviness of an object is important for manipulating an object under environmental uncertainty. Recently, Cashaback et al. (J Neurophysiol 117: 260-274, 2017) showed that for object lifting, the sensorimotor system uses a strategy that minimizes prediction error when the object's weight is uncertain. Previous research demonstrates that visually guided reaching is similarly optimized. Although this suggests a unified strategy of the sensorimotor system for object manipulation, the selected strategy appears to be task dependent and subject to change in response to the degree of environmental uncertainty. no notspecified http://www.kyb.tuebingen.mpg.de/ accepted 0 The sensorimotor system minimizes prediction error for object lifting when the object's weight is uncertain 15017 15422 HongLBS2016 3 A Hong DG Lee HH Bülthoff HI Son 2017-03-00 1 11 67–80 Journal on Multimodal User Interfaces Better situational awareness helps operators understand remote environments and achieve better performance in the teleoperation of multiple mobile robots (e.g., a group of unmanned aerial vehicles). Visual and force feedback are the most common ways of perceiving the environment accurately and effectively; however, accurate and adequate sensors for global localization are impractical in outdoor environments. Lack of this information hinders situational awareness and operating performance. In this paper, a visual and force feedback method is proposed for enhancing the situational awareness of human operators in outdoor multi-robot teleoperation. Using only the robots' local information, the global view is fabricated from individual local views, and force feedback is determined by the velocity of individual units. 
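The velocity-to-force mapping just described could take many forms; the sketch below shows one plausible implementation (assumed, not the paper's) in which the operator feels a cue that grows with the group's mean speed and saturates at the device limit:

```python
import numpy as np

def velocity_force_feedback(velocities, v_max=2.0, f_max=3.0):
    """Render a haptic force cue from the robots' local velocity information.

    velocities: (N, 3) array with each robot's velocity in m/s.
    v_max: speed [m/s] at which the cue saturates (assumed value).
    f_max: maximum force [N] the haptic device renders (assumed value).
    Returns a 3-D force vector aligned with the group's mean velocity.
    """
    v_mean = np.mean(np.asarray(velocities, dtype=float), axis=0)
    speed = np.linalg.norm(v_mean)
    if speed < 1e-9:
        return np.zeros(3)
    # Force magnitude grows linearly with mean speed, clipped at f_max.
    magnitude = f_max * min(speed / v_max, 1.0)
    return magnitude * (v_mean / speed)
```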
The proposed feedback method is evaluated via two psychophysical experiments: maneuvering and searching tests using a human/hardware-in-the-loop system with simulated environments. In the tests, several quantitative measures are also proposed to assess the human operator's maneuverability and situational awareness. Results of the two experiments show that the proposed multimodal feedback enhances only the operator's situational awareness. no notspecified http://www.kyb.tuebingen.mpg.de/ published -67 Multimodal feedback for teleoperation of multiple mobile robots in an outdoor environment 15017 15422 ZhaoB2016_4 3 M Zhao I Bülthoff 2017-02-00 Epub ahead Journal of Experimental Psychology: Learning, Memory, and Cognition Humans' face-processing ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces that move most of the time. However, how facial movements affect one core aspect of this ability—holistic face processing—remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based face processing by manipulating the presence of facial motion during study and at test in a composite face task. The results showed that rigidly moving faces were processed as holistically as static faces (Experiment 1). Holistic processing of moving faces persisted whether facial motion was presented during study, at test, or both (Experiment 2). Moreover, when faces were inverted to eliminate the contributions of both an upright face template and observers' expertise with upright faces, rigid facial motion facilitated holistic face processing (Experiment 3). Thus, holistic processing represents a general principle of face perception that applies to both static and dynamic faces, rather than being limited to static faces. These results support an emerging view that both perceiver-based and face-based factors contribute to holistic face processing, and they offer new insights on what underlies holistic face processing, how the different sources of information supporting holistic face processing interact, and why facial motion may affect face recognition and holistic face processing differently. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2017/ZhaoBulthoff_JEPLMC_2016.pdf published 0 Holistic Processing of Static and Moving Faces 15017 15422 NestidB2017 3 A Nesti K de Winkel HH Bülthoff 2017-01-00 1 12 1 14 PLoS ONE While moving through the environment, our central nervous system accumulates sensory information over time to provide an estimate of our self-motion, allowing us to complete crucial tasks such as maintaining balance. However, little is known about how the duration of the motion stimuli influences our performance in a self-motion discrimination task. Here we study the human ability to discriminate intensities of sinusoidal (0.5 Hz) self-rotations around the vertical axis (yaw) for four different stimulus durations (1, 2, 3 and 5 s) in darkness. In a typical trial, participants experienced two consecutive rotations of equal duration and different peak amplitude, and reported the one perceived as stronger. For each stimulus duration, we determined the smallest detectable change in stimulus intensity (differential threshold) for a reference velocity of 15 deg/s. Results indicate that differential thresholds decrease with stimulus duration and asymptotically converge to a constant, positive value. 
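For readers unfamiliar with how such differential thresholds are extracted from two-interval discrimination data, the sketch below fits a cumulative-Gaussian psychometric function and reads off a standard JND; it is a generic textbook procedure, not the authors' exact analysis, and all numbers are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(delta, mu, sigma):
    # Probability of judging the comparison rotation as stronger,
    # modeled as a cumulative Gaussian over the intensity difference.
    return norm.cdf(delta, loc=mu, scale=sigma)

def differential_threshold(deltas, p_stronger):
    """Fit the psychometric function and return the 50%-to-84% distance,
    one common definition of the differential threshold (JND)."""
    (mu, sigma), _ = curve_fit(psychometric, deltas, p_stronger, p0=[0.0, 1.0])
    return norm.ppf(0.84, loc=mu, scale=sigma) - mu

# Illustrative data: intensity differences (deg/s) around a 15 deg/s
# reference, with noise-free simulated response proportions.
deltas = np.array([-4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0])
p = psychometric(deltas, 0.0, 1.5)
print(differential_threshold(deltas, p))  # ~1.5 deg/s
```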
This suggests that the central nervous system accumulates sensory information on self-motion over time, resulting in improved discrimination performance. Observed trends in differential thresholds are consistent with predictions based on a drift diffusion model with leaky integration of sensory evidence. no notspecified http://www.kyb.tuebingen.mpg.de/ published 13 Accumulation of Inertial Sensory Information in the Perception of Whole Body Yaw Rotation 15017 15422 deWinkelKB2017 3 KN de Winkel M Katliar HH Bülthoff 2017-01-00 1 12 1 20 PLoS ONE A large body of research shows that the Central Nervous System (CNS) integrates multisensory information. However, this strategy should only apply to multisensory signals that have a common cause; independent signals should be segregated. Causal Inference (CI) models account for this notion. Surprisingly, previous findings suggested that visual and inertial cues on the heading of self-motion are integrated regardless of discrepancy. We hypothesized that CI does occur, but that characteristics of the motion profiles affect multisensory processing. Participants estimated the heading of visual-inertial motion stimuli with several different motion profiles and a range of intersensory discrepancies. The results support the hypothesis that judgments of signal causality are included in the heading estimation process. Moreover, the data suggest a decreasing tolerance for discrepancies and an increasing reliance on visual cues for longer duration motions. no notspecified http://www.kyb.tuebingen.mpg.de/ published 19 Causal Inference in Multisensory Heading Estimation 15017 15422 Saultond2017 3 A Saulton S de la Rosa 2017-01-00 Journal of Experimental Psychology: Human Perception and Performance no notspecified http://www.kyb.tuebingen.mpg.de/ accepted 0 Conceptual biases explain distortion differences between hand and objects in localization tasks 15017 15422 KatliarFFDTB2017 7 M Katliar J Fischer G Frison M Diehl H Teufel HH Bülthoff Toulouse, France2017-07-12 20th World Congress of the International Federation of Automatic Control (IFAC WC 2017) In this paper we present the implementation of a model-predictive controller (MPC) for real-time control of a cable-robot-based motion simulator. The controller computes control inputs such that desired acceleration and rotational velocity references at a defined point in the simulator's cabin are tracked while satisfying constraints on the robot's working space and admissible cable forces. Reference tracking performance and computation time of the algorithm are investigated in computer simulations. Furthermore, we investigate the maximum possible improvement of motion simulation fidelity that could potentially be achieved by employing a reference prediction algorithm. no notspecified http://www.kyb.tuebingen.mpg.de/ accepted 0 Nonlinear Model Predictive Control of a Cable-Robot-Based Motion Simulator 15017 15422 TognonYBF2017 7 M Tognon B Yüksel G Buondonno A Franchi Singapore2017-06-00 1 8 IEEE International Conference on Robotics and Automation (ICRA 2017) We present a control methodology for underactuated aerial manipulators that is both easy to implement on real systems and able to achieve highly dynamic behaviours. The method is composed of two parts: a nominal input/state generator that takes into account the full-body nonlinear and coupled dynamics of the system, and a decentralized feedback controller acting on the actuated degrees of freedom that confers the needed robustness to the closed-loop system. 
We show how to apply the method to Protocentric Aerial Manipulators (PAM) by first using their differential flatness property on the vertical 2D plane in order to generate dynamical input/state trajectories, then statically extending the 2D structure to 3D, and finally closing the loop with a decentralized controller having the dual task of both ensuring the preservation of the proper static 3D immersion and tracking the dynamic trajectory on the vertical plane. We demonstrate that the proposed controller is able to precisely track dynamic trajectories when implemented on standard hardware composed of a quadrotor and a robotic arm with servo-controlled joints (even if no torque control is available). Comparative experiments clearly show the benefit of using the nominal input/state generator, and also the fact that using just static gravity compensation might surprisingly perform worse, in dynamic maneuvers, than no compensation at all. We complement the experiments with additional realistic simulations testing the applicability of the proposed method to slightly non-protocentric aerial manipulators. no notspecified http://www.kyb.tuebingen.mpg.de/ submitted 7 Dynamic Decentralized Control for Protocentric Aerial Manipulators 15017 15422 KarolusWCS2017 7 J Karolus PW Wozniak LL Chuang A Schmidt Denver, CO, USA2017-05-00 2998 3010 35th Annual ACM Conference on Human Factors in Computing Systems (CHI '17) We are often confronted with information interfaces designed in an unfamiliar language, especially in an increasingly globalized world, where the language barrier inhibits interaction with the system. In our work, we explore the design space for building interfaces that can detect the user's language proficiency. Specifically, we look at how a user's gaze properties can be used to detect whether the interface is presented in a language they understand. We report a study (N=21) where participants were presented with questions in multiple languages, whilst being recorded for gaze behavior. We identified fixation and blink durations to be effective indicators of the participants' language proficiencies. Based on these findings, we propose a classification scheme and technical guidelines for enabling language proficiency awareness on information displays using gaze data. no notspecified http://www.kyb.tuebingen.mpg.de/ published 12 Robust Gaze Features for Enabling Language Proficiency Awareness 15017 15422 FladDSBC2016 7 N Flad JC Ditz A Schmidt HH Bülthoff LL Chuang Baltimore, MD, USA2017-02-00 1 5 Second Workshop on Eye Tracking and Visualization (ETVIS 2016) Unrestricted gaze tracking that allows for head and body movements can enable us to understand interactive gaze behavior with large-scale visualizations. Approaches that support this, by simultaneously recording eye and user movements, can be based on either geometric or data-driven regression models. A data-driven approach can be implemented more flexibly, but its performance can suffer with poor-quality training data. In this paper, we introduce a pre-processing procedure to remove training data from periods when the gaze is not fixating the presented target stimuli. Our procedure is based on a velocity-based filter for rapid eye movements (i.e., saccades). Our results show that this additional procedure improved the accuracy of our unrestricted gaze-tracking model by as much as 56%. 
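A minimal version of such a velocity-based saccade filter is sketched below (the threshold value and the two-sided masking rule are assumptions; the paper's filter may differ in detail):

```python
import numpy as np

def fixation_mask(gaze_xy, t, v_thresh=30.0):
    """Mark samples usable for training a data-driven gaze model.

    gaze_xy: (N, 2) gaze positions in degrees of visual angle.
    t: (N,) sample timestamps in seconds.
    v_thresh: angular velocity threshold in deg/s; samples adjacent to
    faster movement are treated as saccadic and excluded.
    """
    gaze_xy = np.asarray(gaze_xy, dtype=float)
    velocity = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1) / np.diff(t)
    keep = np.ones(len(gaze_xy), dtype=bool)
    # A sample survives only if the velocities on both sides are sub-threshold.
    keep[1:] &= velocity < v_thresh
    keep[:-1] &= velocity < v_thresh
    return keep
```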
Future improvements to data-driven approaches for unrestricted gaze-tracking are proposed, in order to allow for more complex dynamic visualizations. no notspecified http://www.kyb.tuebingen.mpg.de/ published 4 Data-driven approaches to unrestricted gaze-tracking benefit from saccade filtering 15017 15422 AllsopGBC2016 7 J Allsop R Gray HH Bülthoff L Chuang Baltimore, MD, USA2017-02-00 55 59 Second Workshop on Eye Tracking and Visualization (ETVIS 2016) Previous research has rarely examined the combined influence of anxiety and cognitive load on gaze behavior and performance whilst undertaking complex perceptual-motor tasks. In the current study, participants performed an aviation instrument landing task in neutral and anxiety conditions, while performing a low or high cognitive load auditory n-back task. Both self-reported anxiety and heart rate increased from neutral conditions, indicating that anxiety was successfully manipulated. Response accuracy and reaction time for the auditory task indicated that cognitive load was also successfully manipulated. Cognitive load negatively impacted flight performance and the frequency of gaze transitions between areas of interest. Performance was maintained in anxious conditions, with a concomitant decrease in n-back reaction time suggesting that this was due to an increase in mental effort. Analyses of individual responses to the anxiety manipulation revealed that changes in anxiety levels from neutral to anxiety conditions were positively correlated with changes in visual scanning entropy, which is a measure of the randomness of gaze behavior, but only when cognitive load was high. This finding lends support for an interactive effect of cognitive anxiety and cognitive load on attentional control. no notspecified http://www.kyb.tuebingen.mpg.de/ published 4 Effects of Anxiety and Cognitive Load on Instrument Scanning Behavior in a Flight Simulation 15017 15422 GerboniGVJFB2017 7 CA Gerboni S Geluardi J Venrooij A Joos W Fichter HH Bülthoff Grapevine, TX, USA2017-01-11 1 16 AIAA Modeling and Simulation Technologies Conference: Held at the AIAA SciTech Forum 2017 no notspecified http://www.kyb.tuebingen.mpg.de/ published 15 Development of model-following control laws for helicopters to achieve personal aerial vehicle's handling qualities 15017 15422 D039IntinoOGVPB2017 7 G D'Intino M Olivari S Geluardi J Venrooij L Pollini HH Bülthoff Grapevine, TX, USA2017-01-11 1 10 AIAA Modeling and Simulation Technologies Conference: Held at the AIAA SciTech Forum 2017 no notspecified http://www.kyb.tuebingen.mpg.de/ published 9 Experimental evaluation of haptic support systems for learning a 2-DoF tracking task 15017 15422 FladFBC2015 7 N Flad T Fomina HH Bülthoff LL Chuang Chicago, IL, USA2017-00-00 151 167 First Workshop on Eye Tracking and Visualization (ETVIS 2015) Eye-movements are typically measured with video cameras and image recognition algorithms. Unfortunately, these systems are susceptible to changes in illumination during measurements. Electrooculography (EOG) is another approach for measuring eye-movements that does not suffer from the same weakness. Here, we introduce and compare two methods that allow us to extract the dwells of our participants from EOG signals under presentation conditions that are too difficult for optical eye tracking. The first method is unsupervised and utilizes density-based clustering. The second method combines the optical eye-tracker's methods to determine fixations and saccades with unsupervised clustering. 
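As a sketch of the first, density-based method (the specific clustering algorithm and parameters here are assumptions, not the chapter's exact implementation): dwells appear as dense clusters of similar EOG amplitudes, while the rapid transitions between them are sparse:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def eog_dwells(eog_h, eog_v, eps=20.0, min_samples=50):
    """Group EOG samples into dwells via density-based clustering.

    eog_h, eog_v: horizontal and vertical EOG channels in microvolts,
    assumed drift-corrected so each dwell sits at a stable level.
    eps, min_samples: DBSCAN parameters (placeholder values).
    Returns one cluster label per sample; -1 marks sparse samples,
    i.e. the saccadic transitions between dwells.
    """
    X = np.column_stack([np.asarray(eog_h, float), np.asarray(eog_v, float)])
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
```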
Our results show that EOG can serve as a sufficiently precise and robust substitute for optical eye tracking, especially in studies with changing lighting conditions. Moreover, EOG can be recorded alongside electroencephalography (EEG) without additional effort. no notspecified http://www.kyb.tuebingen.mpg.de/ published 16 Unsupervised clustering of EOG as a viable substitute for optical eye-tracking 15017 15422 LockenBMGTDGACB2017 2 A Löcken SS Borojeni H Müller TM Gable S Triberti C Diels C Glatz I Alvarez L Chuang S Boll Springer International Publishing Cham, Switzerland 2017-02-00 325 348 Automotive User Interfaces: Creating Interactive Experiences in the Car Informing a driver of a vehicle's changing state and environment is a major challenge that grows with the introduction of in-vehicle assistant and infotainment systems. Even in the age of automation, the human will need to be in the loop for monitoring, taking over control, or making decisions. In these cases, poorly designed systems could impose needless attentional demands on the driver, drawing attention away from the primary driving task. Existing systems offer simple and often unspecific alerts, leaving the human with the demanding task of identifying, localizing, and understanding the problem. Ideally, such systems should communicate information in a way that conveys its relevance and urgency. Specifically, information useful to promote driver safety should be conveyed as effective calls for action, while information not pertaining to safety (therefore less important) should be conveyed in ways that do not jeopardize driver attention. Adaptive ambient displays and peripheral interactions have the potential to provide superior solutions and could serve to unobtrusively present information, to shift the driver's attention according to changing task demands, or to enable a driver to react without losing focus on the primary task. In order to build a common understanding across researchers and practitioners from different fields, we held a “Workshop on Adaptive Ambient In-Vehicle Displays and Interactions” at the AutomotiveUI‘15 conference. In this chapter, we discuss the outcomes of this workshop, provide examples of possible applications now and in the future, and conclude with challenges in developing or using adaptive ambient interactions. 
no notspecified http://www.kyb.tuebingen.mpg.de/ published 23 Towards Adaptive Ambient In-Vehicle Displays and Interactions: Insights and Design Guidelines from the 2015 AutomotiveUI Dedicated Workshop 15017 15422 TognonYBF2017_2 46 M Tognon B Yüksel G Buondonno A Franchi 2017-03-00 2017-03-00 Explicit Computations, Simulations and additional Results for the Dynamic Decentralized Control for Protocentric Aerial Manipulators Technical Attachment to: ”Dynamic Decentralized Control for Protocentric Aerial Manipulators” 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, May 2017 no notspecified Explicit Computations, Simulations and additional Results for the Dynamic Decentralized Control for Protocentric Aerial Manipulators Technical Attachment to: ”Dynamic Decentralized Control for Protocentric Aerial Manipulators” 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, May 2017 15017 15422 KimPYKPK2017 7 J Kim Y Park J Yeon J Kim J-Y Park S-P Kim Vancouver, BC, Canada2017-06-27 23rd Annual Meeting of the Organization for Human Brain Mapping (OHBM 2017) Introduction: Tactile sensation is essential for humans to manipulate objects by hands. During object manipulation, many different physical properties of an object are sensed and processed by the human somatosensory system, supporting exquisite perceptual sensitivities [1]. Tactile sensation of different physical properties can be depicted in the several tactile perceptual dimensions, including roughness, hardness, stickiness and warmth [2]. To date, a number of human neuroimaging studies have unveiled neural mechanisms underlying roughness [3] and warmth perception [4]. Yet, relatively little is known about how the human brain subserves the perception of tactile hardness. Previous studies have suggested that slowly adapting type-1 (SA1) afferents are primarily responsible for perceiving hardness from the surface of an object [5] and the Brodmann areas (BA) 3b and 1 may contribute to perceiving hardness [6]. However, it remains elusive how the different levels of hardness are represented in the human brain during the dexterous manipulation of an object. Therefore, this study aims to investigate neural responses to tactile stimuli with the same shape and surface texture but different levels of hardness when people grip on the object with their hand. Functional magnetic resonance imaging (fMRI) is used to identify brain regions related with tactile hardness. Methods: Twelve right-handed subjects (8 female, mean age 23.1 years old) participated in the study. Experimental protocols were approved by the ethical committee of Ulsan National Institute of Science and Technology (UNISTIRB-15-16-A). Tactile stimuli with the same shape (oval) were prepared and grouped into four sets according to their hardness levels (level 1 to 4). Participants first performed a behavioral task in which they were given a pair of stimuli with eyes closed and asked to report the degree of a difference in hardness between them. Afterward, participants performed the fMRI experimental task in which they repetitively gripped on and released a given object (used in the behavioral task) for fifteen seconds followed by a nine-second rest. There were also trials in which participants executed the same grip-and-release motion without objects as a control task. 
Functional images (T2*-weighted gradient EPI, covering the whole depth of the somatosensory area, TR = 3,000 ms, voxel size = 2.0 × 2.0 × 2.0 mm³) were obtained during the fMRI task using a Siemens 3T scanner (Magnetom TrioTim). The functional image analysis was performed using the general linear model (GLM) in SPM8 with a canonical hemodynamic response function to estimate blood-oxygen-level-dependent (BOLD) responses to each stimulus. Results: The analysis of the behavioral data showed that participants could correctly find differences in hardness levels among stimuli. The GLM analysis for individuals revealed activations in the contralateral postcentral gyrus that were modulated by the level of hardness in most participants (p<0.001 uncorrected). Also, a random-effect group analysis of fMRI data revealed a cluster in the Rolandic operculum activated by the perception of tactile hardness (p<0.001 uncorrected). In addition, the cluster size and the maximum activation peak increased as the hardness level increased. Conclusions: Our study demonstrated that brain regions over the postcentral gyrus (S1) and the Rolandic operculum might be related to the perception of tactile hardness. We also observed that the degree of activation in these regions, reflected by the size of the activated area (cluster size) and the level of activation (maximum peak), was proportional to the level of tactile hardness. Our results suggest that neural assemblies in the contralateral S1 and Rolandic operculum may play a role in sensing tactile hardness during dexterous object manipulation. no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Investigation of cortical activity related to perception of tactile hardness 15017 15422 DelongGARCWN2017 7 P Delong A Giani M Aller T Rohe V Conrad M Watanabe U Noppeney Birmingham, UK2017-04-10 27 BNA 2017 Festival of Neuroscience (British Neuroscience Association) Information integration across the senses is fundamental for effective interactions with our environment. A controversial question is whether signals from different senses can interact in the absence of awareness. Models of the global workspace would predict that unaware signals are confined to processing in low-level sensory areas and are thereby prevented from interacting with signals from other senses in higher-order association areas. Yet, accumulating evidence suggests that multisensory interactions can emerge – at least to some extent – already at the primary cortical level [1]. These low-level interactions may thus potentially mediate interactions between sensory signals in the absence of awareness. Combining the spatial ventriloquist illusion and dynamic continuous flash suppression (dCFS) [2], we investigated whether visual signals that observers did not consciously perceive can influence the spatial perception of sounds. Importantly, dCFS obliterated visual awareness only on a fraction of trials, allowing us to compare spatial ventriloquism for physically identical flashes that were judged visible or invisible. Our results show a stronger ventriloquist effect for visible than invisible flashes. Yet, a robust ventriloquist effect also emerged for flashes judged invisible. This ventriloquist effect for invisible flashes was even preserved in participants who were not better than chance when locating flashes they judged ‘invisible’. Collectively, our findings demonstrate that physically identical visual signals influence the perceived location of concurrent sounds depending on their subjective visibility.
Even visual signals that participants are not aware of can alter sound perception. These results suggest that audiovisual signals are integrated into spatial representations to some extent in the absence of perceptual awareness. no notspecified http://www.kyb.tuebingen.mpg.de/ published -27 The invisible ventriloquist: can unaware flashes alter sound perception? 15017 15422 15017 15421 delaRosa2017_2 7 S de la Rosa Wien, Austria2017-03-24 Second Biennial International Convention of Psychological Science (ICPS 2017) We report a novel high-level adaptation aftereffect: the prolonged viewing of a fist-bump action causes participants to perceive an ambiguous morphed action as a punch. We show evidence that this visual adaptation effect is the result of a change in perception rather than a mere response bias. no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Action adaptation: A new visual illusion that transforms a hug into a push 15017 15422 delaRosa2017 7 S de la Rosa Wien, Austria2017-03-24 Second Biennial International Convention of Psychological Science (ICPS 2017) Psychological experiments often require the recruitment of participants and the coordination of experimental equipment that is shared between experimenters (e.g. an fMRI scanner). Here we present a free online tool that allows the rapid recruitment of participants and manages the booking of required equipment. no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Banto: An online participant recruitment and equipment management tool 15017 15422 KeilmanndCM2017 10 F Keilmann S de la Rosa U Cress T Meilinger FademrechtdB2017 10 L Fademrecht S de la Rosa HH Bülthoff SchultzKPDBFBGB2017 10 J Schultz K Kaulard P Pilz K Dobs I Bülthoff A Fernandez-Cruz B Brockhaus J Gardner HH Bülthoff BulthoffN2017 10 I Bülthoff FN Newell delaRosa2017_3 10 S de la Rosa Meilinger2017 10 T Meilinger Nooij2017 10 SAE Nooij deWinkel2017 10 K de Winkel Hinterecker2017 10 T Hinterecker Chang2016_2 1 D-S Chang Rowohlt Polaris Reinbek bei Hamburg, Germany 2016-09-00 What can our brain reveal? Why do we approach strangers with prejudice? Why does religion play an important role in how we perceive the world? Why do Asians usually look alike to Europeans? Why do we sometimes elect incompetent politicians? Our brain is always searching for explanations: explanations of how the world works, how we ourselves work, and how other people work. Yet every brain finds its own answers - why this is so, and whether we can always trust these answers, is what this book explores. no notspecified http://www.kyb.tuebingen.mpg.de/ published 252 Mein Hirn hat seinen eigenen Kopf: Wie wir andere und uns selbst wahrnehmen 15017 15422 Drop2016 1 FM Drop Logos Verlag Berlin, Germany 2016-00-00 Understanding how humans control a vehicle (cars, aircraft, bicycles, etc.) enables engineers to design faster, safer, more comfortable, more energy-efficient, more versatile, and thus better vehicles. In a typical control task, the Human Controller (HC) gives control inputs to a vehicle such that it follows a particular reference path (e.g., the road) accurately. The HC is simultaneously required to attenuate the effect of disturbances (e.g., turbulence) perturbing the intended path of the vehicle. To do so, the HC can use a control organization that resembles a closed-loop feedback controller, a feedforward controller, or a combination of both. 
Previous research has shown that a purely closed-loop feedback control organization is observed only in specific control tasks that do not resemble realistic control tasks and in which the information presented to the human is very limited. In realistic tasks, a feedforward control strategy is to be expected; yet, almost all previously available HC models describe the human as a pure feedback controller lacking the important feedforward response. Therefore, the goal of the research described in this thesis was to obtain a fundamental understanding of feedforward in human manual control. First, a novel system identification method was developed, which was necessary to identify human control dynamics in control tasks involving realistic reference signals. Second, the novel identification method was used to investigate three important aspects of feedforward through human-in-the-loop experiments, which resulted in a control-theoretical model of feedforward in manual control. The central element of the feedforward model is the inverse of the vehicle dynamics, equal to the theoretically ideal feedforward dynamics. However, it was also found that the HC is not able to apply a feedforward response with these ideal dynamics, and that limitations in the perception, cognition, and action loop need to be modeled by additional model elements: a gain, a time delay, and a low-pass filter. Overall, the thesis demonstrated that feedforward is indeed an essential part of human manual control behavior and should be accounted for in many human-machine applications. no notspecified http://www.kyb.tuebingen.mpg.de/ published 300 Control-Theoretic Models of Feedforward in Manual Control 15017 15422 Geluardi2016 1 S Geluardi Logos Verlag Berlin, Germany 2016-00-00 The research described in this thesis was inspired by the results of the myCopter project, a European project funded by the European Commission in 2011. The myCopter project's aim was to identify new concepts for air transport that could be used to achieve a Personal Aerial Transport (PAT) system in the second half of the 21st century. Although designing a new vehicle was not among the project's goals, it was considered important to assess the vehicle response types and handling qualities that Personal Aerial Vehicles (PAVs) should have to be part of a PAT. In this thesis it is proposed to consider civil light helicopters as possible PAV candidates. The goal of the thesis is to investigate whether it is possible to transform civil light helicopters into PAVs through the use of system identification methods and control techniques. The transformation here is envisaged in terms of vehicle dynamics and handling qualities. To achieve this goal, three main steps are considered. The first step focuses on the identification of a Robinson R44 Raven II helicopter model in hover. The second step consists of augmenting the identified helicopter model to achieve the response types and handling qualities defined for PAVs. The third step consists of assessing the magnitude of the discrepancy between the two implemented augmented systems and the PAV reference model. An experiment is conducted for this purpose, consisting of piloted closed-loop control tasks performed in the MPI CyberMotion Simulator by participants without any prior flight experience. Results, evaluated in terms of objective and subjective workload and performance, show that both augmented control systems are able to resemble PAV handling qualities and response types in piloted closed-loop control tasks. 
This result demonstrates that it is possible to transform helicopter dynamics into PAV dynamics. no notspecified http://www.kyb.tuebingen.mpg.de/ published 198 Identification and augmentation of a civil light helicopter: transforming helicopters into Personal Aerial Vehicles 15017 15422 KonigSKGKWLEBWKNMBWBK2016 3 SU König F Schumann J Keyser C Goeke C Krause S Wache A Lytochkin M Ebert V Brunsch B Wahn K Kaspar SK Nagel T Meilinger HH Bülthoff T Wolbers C Büchel P König 2016-12-00 12 11 1 35 PLoS ONE Theories of embodied cognition propose that perception is shaped by sensory stimuli and by the actions of the organism. Following sensorimotor contingency theory, the mastery of lawful relations between one's own behavior and the resulting changes in sensory signals, called sensorimotor contingencies, is constitutive of conscious perception. Sensorimotor contingency theory predicts that, after training, knowledge relating to new sensorimotor contingencies develops, leading to changes in the activation of sensorimotor systems and concomitant changes in perception. In the present study, we spell out this hypothesis in detail and investigate whether it is possible to learn new sensorimotor contingencies by sensory augmentation. Specifically, we designed an fMRI-compatible sensory augmentation device, the feelSpace belt, which gives orientation information about the direction of magnetic north via vibrotactile stimulation on the waist of participants. In a longitudinal study, participants trained with this belt for seven weeks in a natural environment. Our EEG results indicate that training with the belt leads to changes in sleep architecture early in the training phase, compatible with the consolidation of procedural learning as well as increased sensorimotor processing and motor programming. The fMRI results suggest that training entails activity in sensory as well as higher motor centers and brain areas known to be involved in navigation. These neural changes are accompanied by changes in how space and the belt signal are perceived, as well as by increased trust in navigational ability. Thus, our data on physiological processes and subjective experiences are compatible with the hypothesis that new sensorimotor contingencies can be acquired using sensory augmentation. no notspecified http://www.kyb.tuebingen.mpg.de/ published 34 Learning New Sensorimotor Contingencies: Effects of Long-Term Use of Sensory Augmentation on the Brain and Conscious Perception 15017 15422 KimCCBK2016 3 J Kim YG Chung S-C Chung HH Bülthoff S-P Kim 2016-12-00 4 9 455 464 IEEE Transactions on Haptics As the use of wearable haptic devices with vibrating alert features is commonplace, an understanding of the perceptual categorization of vibrotactile frequencies has become important. This understanding can be substantially enhanced by unveiling how neural activity represents vibrotactile frequency information. Using functional magnetic resonance imaging (fMRI), this study investigated categorical clustering patterns of the frequency-dependent neural activity evoked by vibrotactile stimuli with gradually changing frequencies from 20 to 200 Hz. First, a searchlight multi-voxel pattern analysis (MVPA) was used to find brain regions exhibiting neural activity associated with frequency information. We found that the contralateral postcentral gyrus (S1) and the supramarginal gyrus (SMG) carried frequency-dependent information. 
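The searchlight step named above follows a standard recipe: for every sphere of voxels, train and cross-validate a classifier on the condition labels and map the accuracy back to the sphere's center. The sketch below shows that generic recipe (classifier choice, radius, and variable names are assumptions, not the study's exact settings):

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def searchlight_accuracies(patterns, labels, coords, radius=8.0):
    """Cross-validated decoding accuracy around every voxel.

    patterns: (n_trials, n_voxels) activity estimates per trial.
    labels: (n_trials,) vibrotactile frequency condition per trial.
    coords: (n_voxels, 3) voxel coordinates in mm.
    radius: searchlight sphere radius in mm (assumed value).
    """
    acc = np.zeros(coords.shape[0])
    for center in range(coords.shape[0]):
        # Select all voxels within the sphere around the current center.
        sphere = np.linalg.norm(coords - coords[center], axis=1) <= radius
        acc[center] = cross_val_score(LinearSVC(dual=False),
                                      patterns[:, sphere], labels,
                                      cv=5).mean()
    return acc
```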
Next, we applied multidimensional scaling (MDS) to find low-dimensional neural representations of the different frequencies obtained from the multi-voxel activity patterns within these regions. The clustering analysis on the MDS results showed that neural activity patterns of 20-100 Hz and 120-200 Hz were divided into two distinct groups. Interestingly, this neural grouping conformed to the perceptual frequency categories found in previous behavioral studies. Our findings therefore suggest that neural activity patterns in the somatosensory cortical regions may provide a neural basis for the perceptual categorization of vibrotactile frequency. no notspecified http://www.kyb.tuebingen.mpg.de/ published 9 Neural Categorization of Vibrotactile Frequency in Flutter and Vibration Stimulations: an fMRI Study 15017 15422 KimCCBK2016_2 3 J Kim YG Chung S-C Chung HH Bülthoff S-P Kim 2016-11-00 16 27 1232–1236 NeuroReport In this functional MRI study, we investigated how human brain activity represents tactile location information evoked by pressure stimulation on the fingers. Using searchlight multivoxel pattern analysis, we looked for local activity patterns that could be decoded into one of four stimulated finger locations. The supramarginal gyrus (SMG) and the thalamus were found to contain distinct multivoxel patterns corresponding to the individual stimulated locations. In contrast, a univariate general linear model analysis contrasting stimulation against resting phases for each finger identified activations mainly in the primary somatosensory cortex (S1), but not in the SMG or the thalamus. Our results indicate that S1 might be involved in detecting the presence of pressure stimuli, whereas the SMG and the thalamus might play a role in identifying which finger is stimulated. This finding may provide additional evidence for hierarchical information processing in the human somatosensory areas. no notspecified http://www.kyb.tuebingen.mpg.de/ published -1232 Decoding pressure stimulation locations on the fingers from human neural activation patterns 15017 15422 YukselF2016 3 B Yüksel A Franchi 2016-11-00 - In this paper we present the dynamic Lagrangian modeling, system analysis, and nonlinear control of a robot constituted by a planar-VTOL (PVTOL) underactuated aerial vehicle equipped with a rigid- or an elastic-joint arm, which constitutes an aerial manipulator. For the design of the aerial manipulator, we first consider generic offsets between the center of mass (CoM) of the PVTOL and the attachment point of the joint arm. Later we consider a model in which these two points coincide. It turns out that the choice of this attachment point significantly affects the capabilities of the platform. Furthermore, in both cases we consider the rigid- and elastic-joint arm configurations. For each of the resulting four cases we formally assess the presence of exactly linearizing and differentially flat outputs and the possibility of using a dynamic feedback linearization (DFL) controller. Later we formalize an optimal control problem exploiting the differential flatness property of the systems, which is applied, as an illustrative example, to an aerial throwing task. Finally we provide extensive and realistic simulation results comparing the different robot models in different robotic tasks, such as aerial grasping and aerial throwing, together with a discussion on the applicability of the computationally simpler controllers of the coinciding-point models to generic-point ones. 
Further exhaustive simulations on the trajectory tracking and the high-speed arm swinging capabilities are provided in a technical attachment. no notspecified http://www.kyb.tuebingen.mpg.de/ submitted 0 PVTOL Aerial Manipulators with a Rigid or an Elastic Joint: Analysis, Control, and Comparison 15017 15422 StegagnoCOBF2016 3 P Stegagno M Cognetti G Oriolo HH Bülthoff A Franchi 2016-10-00 5 32 1133 1151 IEEE Transactions on Robotics We present a decentralized algorithm for estimating mutual poses (relative positions and orientations) in a group of mobile robots. The algorithm uses relative-bearing measurements, which, for example, can be obtained from onboard cameras, and information about the motion of the robots, such as inertial measurements. It is assumed that all relative-bearing measurements are anonymous; i.e., each specifies a direction along which another robot is located but not its identity. This situation, which is often ignored in the literature, frequently arises in practice and remarkably increases the complexity of the problem. The proposed solution is based on a two-step approach: in the first step, the most likely unscaled relative configurations with identities are computed from anonymous measurements by using geometric arguments, while in the second step, the scale is determined by numeric Bayesian filtering based on the motion model. The solution is first developed for ground robots in SE(2) and then for aerial robots in SE(3). Experiments using Khepera III ground mobile robots and quadrotor aerial robots confirm that the proposed method is effective and robust w.r.t. false positives and negatives of the relative-bearing measuring process. no notspecified http://www.kyb.tuebingen.mpg.de/ published 18 Ground and Aerial Mutual Localization Using Anonymous Relative-Bearing Measurements 15017 15422 HintereckerKJ2016 3 T Hinterecker M Knauff PN Johnson-Laird 2016-10-00 10 42 1606 1620 Journal of Experimental Psychology: Learning, Memory, and Cognition We report 3 experiments investigating novel sorts of inference, such as: A or B or both. Therefore, possibly (A and B). Where the contents were sensible assertions, for example, Space tourism will achieve widespread popularity in the next 50 years or advances in material science will lead to the development of antigravity materials in the next 50 years, or both. Most participants accepted the inferences as valid, though they are invalid in modal logic and in probabilistic logic too. But, the theory of mental models predicts that individuals should accept them. In contrast, inferences of this sort—A or B but not both. Therefore, A or B or both—are both logically valid and probabilistically valid. Yet, as the model theory also predicts, most reasoners rejected them. The participants' estimates of probabilities showed that their inferences tended not to be based on probabilistic validity, but that they did rate acceptable conclusions as more probable than unacceptable conclusions. We discuss the implications of the results for current theories of reasoning. 
no notspecified http://www.kyb.tuebingen.mpg.de/ published 14 Modality, probability, and mental models 15017 15422 MeilingerSB2016 3 T Meilinger M Strickrodt HH Bülthoff 2016-10-00 155 77–95 Cognition Two classes of space define our everyday experience within our surrounding environment: vista spaces, such as rooms or streets, which can be perceived from one vantage point, and environmental spaces, for example, buildings and towns, which are grasped from multiple views acquired during locomotion. However, theories of spatial representations often treat both spaces as equal. The present experiments show that this assumption cannot be upheld. Participants learned exactly the same layout of objects either within a single room or spread across multiple corridors. By utilizing a pointing and a placement task we tested the acquired configurational memory. In Experiment 1, retrieving memory of the object layout acquired in environmental space was affected by the distance of the traveled path and the order in which the objects were learned. In contrast, memory retrieval of objects learned in vista space was not bound to distance and relied on different ordering schemes (e.g., along the layout structure). Furthermore, spatial memory of both spaces differed with respect to the employed reference frame orientation. Environmental space memory was organized along the learning experience rather than the layout-intrinsic structure. In Experiment 2, participants memorized the object layout presented within the vista space room of Experiment 1 while the learning procedure emulated environmental space learning (movement, successive object presentation). Neither factor reproduced the results found in environmental space learning. This shows that the memory differences between vista and environmental space originated mainly from the spatial compartmentalization, which was unique to environmental space learning. Our results suggest that transferring conclusions from findings obtained in vista space to environmental spaces and vice versa should be done with caution. no notspecified http://www.kyb.tuebingen.mpg.de/ published -77 Qualitative differences in memory for vista and environmental spaces are caused by opaque borders, not movement or successive presentation 15017 15422 VenrooijMMAvvB2016 3 J Venrooij M Mulder M Mulder DA Abbink MM van Paassen FCT van der Helm HH Bülthoff 2016-09-00 Epub ahead IEEE Transactions on Cybernetics Biodynamic feedthrough (BDFT) refers to the feedthrough of vehicle accelerations through the human body, leading to involuntary control device inputs. BDFT impairs control performance in a large range of vehicles under various circumstances. Research shows that BDFT strongly depends on adaptations in the neuromuscular admittance dynamics of the human body. This paper proposes a model-based approach to BDFT mitigation that accounts for these neuromuscular adaptations. The method was tested, as proof of concept, in an experiment where participants inside a motion simulator controlled a simulated vehicle through a virtual tunnel. The effectiveness of the cancellation was evaluated by comparing tracking performance and control effort with and without the motion disturbance active and with and without the cancellation active. Results show that the cancellation approach is successful: the detrimental effects of BDFT were largely removed. 
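In outline, model-based cancellation of this kind subtracts a model's prediction of the involuntary feedthrough from the measured control signal. The sketch below shows that outline only: it uses a fixed discrete-time filter, whereas the paper's contribution is to adapt the model to the operator's neuromuscular admittance, which is omitted here.

```python
import numpy as np
from scipy.signal import lfilter

def cancel_bdft(control_input, platform_accel, b, a):
    """Remove predicted biodynamic feedthrough from a control signal.

    control_input: measured control-device signal (voluntary input plus
    involuntary feedthrough), sampled at a fixed rate.
    platform_accel: simulator/vehicle acceleration driving the feedthrough.
    b, a: numerator/denominator of a discrete-time BDFT model (assumed to
    be identified beforehand; an admittance-adaptive scheme would update
    these coefficients online).
    """
    predicted_feedthrough = lfilter(b, a, platform_accel)
    return np.asarray(control_input) - predicted_feedthrough
```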
no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/Venrooij_2016_IEEETCyb_AdmittanceAdaptiveModelBasedApproachToMitigateBDFT.pdf published 0 Admittance-Adaptive Model-Based Approach to Mitigate Biodynamic Feedthrough 15017 15422 DobsBS2016 3 K Dobs I Bülthoff J Schultz 2016-09-00 34301 6 1 9 Scientific Reports Facial movements convey information about many social cues, including identity. However, how much information about a person's identity is conveyed by different kinds of facial movements is unknown. We addressed this question using a recent motion capture and animation system, with which we animated one avatar head with facial movements of three types: (1) emotional, (2) emotional in social interaction and (3) conversational, all recorded from several actors. In a delayed match-to-sample task, observers were best at matching actor identity across conversational movements, worse with emotional movements in social interactions, and at chance level with emotional facial expressions. Model observers performing this task showed similar performance profiles, indicating that performance variation was due to differences in information content, rather than processing. Our results suggest that conversational facial movements transmit more dynamic identity information than emotional facial expressions, thus suggesting different functional roles and processing mechanisms for different types of facial motion. no notspecified http://www.kyb.tuebingen.mpg.de/ published 8 Identity information content depends on the type of facial movement 15017 15422 Ahmad2016 3 A Ahmad HH Bülthoff 2016-09-00 83 275–286 Robotics and Autonomous Systems In this article we present an online estimator for multirobot cooperative localization and target tracking based on nonlinear least squares minimization. Our method not only makes the rigorous optimization-based approach applicable online but also allows the estimator to be stable and convergent. We do so by applying a moving horizon technique to nonlinear least squares minimization and a novel design of the arrival cost function that ensures stability and convergence of the estimator. Through an extensive set of real robot experiments, we demonstrate the robustness of our method as well as the optimality of the arrival cost function. The experiments include comparisons of our method with (i) an extended Kalman filter-based online estimator and (ii) an offline estimator based on full-trajectory nonlinear least squares. no notspecified http://www.kyb.tuebingen.mpg.de/ published -275 Moving-horizon Nonlinear Least Squares-based Multirobot Cooperative Perception 15017 15422 DropPVMB2016 3 FM Drop DM Pool MM van Paassen M Mulder HH Bülthoff 2016-09-00 Epub ahead IEEE Transactions on Cybernetics Realistic manual control tasks typically involve predictable target signals and random disturbances. The human controller (HC) is hypothesized to use a feedforward control strategy for target-following, in addition to feedback control for disturbance-rejection. Little is known about human feedforward control, partly because common system identification methods have difficulty in identifying whether, and (if so) how, the HC applies a feedforward strategy. In this paper, an identification procedure is presented that aims at objective model selection for identifying the human feedforward response, using linear time-invariant autoregressive with exogenous input models.
A new model selection criterion is proposed to decide on the model order (number of parameters) and the presence of feedforward in addition to feedback. For a range of typical control tasks, it is shown by means of Monte Carlo computer simulations that the classical Bayesian information criterion (BIC) leads to selecting models that contain a feedforward path from data generated by a pure feedback model: "false-positive" feedforward detection. To eliminate these false positives, the modified BIC includes an additional penalty on model complexity. The appropriate weighting is found through computer simulations with a hypothesized HC model prior to performing a tracking experiment. Experimental human-in-the-loop data will be considered in future work. With appropriate weighting, the method correctly identifies the HC dynamics in a wide range of control tasks, without false-positive results. no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Objective Model Selection for Identifying the Human Feedforward Response in Manual Control 15017 15422 NooijNBP2016 3 SAE Nooij A Nesti HH Bülthoff P Pretto 2016-08-00 8 234 2323–2337 Experimental Brain Research When in darkness, humans can perceive the direction and magnitude of rotations and of linear translations in the horizontal plane. The current paper addresses the integrated perception of combined translational and rotational motion, as it occurs when moving along a curved trajectory. We questioned whether the perceived motion through the environment follows the predictions of self-motion perception models (e.g., Merfeld et al. in J Vestib Res 3:141–161, 1993; Newman in A multisensory observer model for human spatial orientation perception, 2009), which assume linear addition of rotational and translational components. For curved motion in darkness, such models predict a non-veridical motion percept, consisting of an underestimation of the perceived rotation, a distortion of the perceived travelled path, and a bias in the perceived heading (i.e., the perceived instantaneous direction of motion with respect to the body). These model predictions were evaluated in two experiments. In Experiment 1, seven participants were moved along a circular trajectory in darkness while facing the motion direction. They indicated perceived yaw rotation using an online tracking task, and perceived travelled path by drawings. In Experiment 2, the heading was systematically varied, and six participants indicated, in a 2-alternative forced-choice task, whether they perceived themselves to be facing inward or outward of the circular path. Overall, we found no evidence for the heading bias predicted by the model. This suggests that the sum of the perceived rotational and translational components alone cannot adequately explain the overall perceived motion through the environment. Possibly, knowledge about motion dynamics and familiar stimuli combinations may play an important additional role in shaping the percept. no notspecified http://www.kyb.tuebingen.mpg.de/ published -2323 Perception of rotation, path, and heading in circular trajectories 15017 15422 GeussMS2016 3 MN Geuss MJ McCardell JK Stefanucci 2016-07-00 7 11 1 19 PLoS ONE Previous research has demonstrated an influence of one's emotional state on estimates of spatial layout. For example, estimates of heights are larger when the viewer is someone typically afraid of heights (trait fear) or someone who, in the moment, is experiencing elevated levels of fear (state fear).
Embodied perception theories have suggested that such a change in perception occurs in order to alter future actions in a manner that reduces the likelihood of injury. However, other work has argued that when acting, it is important to have access to an accurate perception of space and that a change in conscious perception does not necessitate a change in action. No one has yet investigated emotional state, perceptual estimates, and action performance in a single paradigm. The goal of the current paper was to investigate whether fear influences perceptual estimates and action measures similarly or in a dissociable manner. In the current work, participants either estimated gap widths (Experiment 1) or were asked to step over gaps (Experiment 2) in a virtual environment. To induce fear, the gaps were placed at various heights up to 15 meters. Results showed an increase in gap width estimates as participants indicated experiencing more fear. The increase in gap estimates was mirrored in participants' stepping behavior in Experiment 2; participants stepped over fewer gaps when experiencing higher state and trait fear and, when participants actually stepped, they stepped farther over gap widths when experiencing more fear. The magnitude of the influence of fear on both perception and action was also remarkably similar (5.3 and 3.9 cm, respectively). These results lend support to embodied perception claims by demonstrating an influence on action of a similar magnitude as seen on estimates of gap widths. no notspecified http://www.kyb.tuebingen.mpg.de/ published 18 Fear Similarly Alters Perceptual Estimates of and Actions over Gaps 15017 15422 15017 MachullaDE2016 3 T-K Machulla M Di Luca MO Ernst 2016-07-00 7 42 1026 1038 Journal of Experimental Psychology: Human Perception and Performance Crossmodal judgments of relative timing commonly yield a nonzero point of subjective simultaneity (PSS). Here, we test whether subjective simultaneity is coherent across all pairwise combinations of the visual, auditory, and tactile modalities. To this end, we examine PSS estimates for transitivity: If Stimulus A has to be presented x ms before Stimulus B to result in subjective simultaneity, and B y ms before C, then A and C should appear simultaneous when A precedes C by z ms, where z = x + y. We obtained PSS estimates via 2 different timing judgment tasks, temporal order judgments (TOJs) and synchrony judgments (SJs), thus allowing us to examine the relationship between TOJ and SJ. We find that (a) SJ estimates do not violate transitivity, and that (b) TOJ and SJ data are linearly related. Together, these findings suggest that both TOJ and SJ access the same perceptual representation of simultaneity and that this representation is globally coherent across the tested modalities. Furthermore, we find that (c) TOJ estimates are intransitive. This is consistent with the proposal that while the perceptual representation of simultaneity is coherent, relative timing judgments that access this representation can at times be incoherent with each other because of postperceptual response biases.
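A worked instance of the transitivity check may help (the numbers are illustrative, not the paper's data). In LaTeX notation:

    \mathrm{PSS}_{AB} = x, \qquad \mathrm{PSS}_{BC} = y \qquad \Longrightarrow \qquad \mathrm{PSS}_{AC} = z = x + y.

For example, if A must precede B by x = 40 ms and B must precede C by y = 20 ms to appear simultaneous, transitivity predicts that A must precede C by z = 60 ms; a measured PSS_AC that reliably deviates from 60 ms is an intransitivity of the kind reported here for TOJs.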
no notspecified http://www.kyb.tuebingen.mpg.de/ published 12 The consistency of crossmodal synchrony perception across the visual, auditory, and tactile senses 15017 15422 15017 18824 NestiNLBP2016 3 A Nesti SAE Nooij M Losert HH Bülthoff P Pretto 2016-05-00 5 92 417 426 Simulation: Transactions of the Society for Modeling and Simulation International In driving simulation, simulator tilt is used to reproduce sustained linear acceleration. In order to feel realistic, this tilt is performed at a rate below the human tilt rate detection threshold, which is usually assumed constant. However, it is known that many factors affect the threshold, such as visual information, simulator motion in additional directions, or the driver’s active effort required for controlling the vehicle. Here we investigated the effect of these factors on the roll rate detection threshold during simulated curve driving. Ten participants reported whether they detected roll motion in multiple trials during simulated curve driving, while roll rate was varied over trials. Roll rate detection thresholds were measured under four conditions. In the first three conditions, participants were moved passively through a curve with the following: (i) roll only in darkness; (ii) combined roll/sway in darkness; (iii) combined roll/sway and visual information. In the fourth (iv) condition participants actively drove through the curve. The results showed that roll rate thresholds in simulated curve driving increase, that is, sensitivity decreases, when the roll tilt is combined with sway motion. Moreover, an active control task seemed to further increase the detection threshold, that is, impair motion sensitivity, but with large individual differences. We hypothesize that this is related to the level of immersion during the task. no notspecified http://www.kyb.tuebingen.mpg.de/ published 9 Roll rate perceptual thresholds in active and passive curve driving simulation 15017 15422 KrupchankaK2016 3 D Krupchanka M Katliar 2016-05-00 3 42 600 607 Schizophrenia Bulletin Background: There is evidence of a positive association between insight and depression among patients with schizophrenia. Self-stigma was shown to play a mediating role in this association. We attempted to broaden this concept by investigating insight as a potential moderator of the association between depressive symptoms amongst people with schizophrenia and stigmatizing views towards people with mental disorders in their close social environment. Method: In the initial sample of 120 pairs, data were gathered from 96 patients with a diagnosis of “paranoid schizophrenia” and 96 of their nearest relatives (80% response rate). In this cross-sectional study data were collected by clinical interview using the following questionnaires: “The Scale to Assess Unawareness of Mental Disorder,” “Calgary Depression Scale for Schizophrenia,” and “Brief Psychiatric Rating Scale.” The stigmatizing views of patients’ nearest relatives towards people with mental disorders were assessed with the “Mental Health in Public Conscience” scale. Results: Among patients with schizophrenia depressive symptom severity was positively associated with the intensity of nearest relatives’ stigmatizing beliefs (“Nonbiological vision of mental illness,” τ = 0.24; P < .001). The association was moderated by the level of patients’ awareness of presence of mental disorder while controlling for age, sex, duration of illness and psychopathological symptoms. 
Conclusions: The results support the hypothesis that the positive association between patients' depression and their nearest relatives' stigmatizing views is moderated by patients' insight. Directions for further research and practical implications are discussed. no notspecified http://www.kyb.tuebingen.mpg.de/ published 7 The Role of Insight in Moderating the Association Between Depressive Symptoms in People With Schizophrenia and Stigma Among Their Nearest Relatives: A Pilot Study 15017 15422 ZhaoBB2015 3 M Zhao HH Bülthoff I Bülthoff 2016-04-00 4 42 584 597 Journal of Experimental Psychology: Learning, Memory, and Cognition Faces are processed holistically, so selective attention to 1 face part without any influence of the others often fails. In this study, 3 experiments investigated what type of facial information (shape or surface) underlies holistic face processing and whether generalization of holistic processing to nonexperienced faces requires extensive discrimination experience. Results show that facial shape information alone is sufficient to elicit the composite face effect (CFE), 1 of the most convincing demonstrations of holistic processing, whereas facial surface information is unnecessary (Experiment 1). The CFE is eliminated when faces differ only in surface but not shape information, suggesting that variation of facial shape information is necessary to observe holistic face processing (Experiment 2). Removing 3-dimensional (3D) facial shape information also eliminates the CFE, indicating the necessity of 3D shape information for holistic face processing (Experiment 3). Moreover, participants show similar holistic processing for faces with and without extensive discrimination experience (i.e., own- and other-race faces), suggesting that generalization of holistic processing to nonexperienced faces requires facial shape information, but does not necessarily require further individuation experience. These results provide compelling evidence that facial shape information underlies holistic face processing. This shape-based account not only offers a consistent explanation for previous studies of holistic face processing, but also suggests a new ground, in addition to expertise, for the generalization of holistic processing to different types of faces and to nonface objects. no notspecified http://www.kyb.tuebingen.mpg.de/ published 13 A shape-based account for holistic face processing 15017 15422 SaultonMBD2016 3 A Saulton B Mohler HH Bülthoff TJ Dodds 2016-04-00 6:2 16 1 16 Journal of Vision The elongation of a figure or object can induce a perceptual bias regarding its area or volume estimation. This bias is notable in Piagetian experiments in which participants tend to consider elongated cylinders to contain more liquid than shorter cylinders of equal volume. We investigated whether similar perceptual biases could be found in volume judgments of surrounding indoor spaces and whether those judgments were viewpoint dependent. Participants compared a variety of computer-generated rectangular rooms with a square room in a psychophysical task. We found that the elongation bias in figures or objects was also present in volume comparison judgments of indoor spaces. Further, the direction of the bias (larger or smaller) depended on the observer's viewpoint. Similar results were obtained from a monoscopic computer display (Experiment 1) and stereoscopic head-mounted display with head tracking (Experiment 2).
We used generalized linear mixed-effect models to model participants' volume judgments as a function of room depth and width. A good fit to the data was found when the model weighted depth more strongly than width, suggesting that participants' judgments were biased by egocentric properties of the space. We discuss how biases in comparative volume judgments of rooms might reflect the use of simplified strategies, such as anchoring on one salient dimension of the space. no notspecified http://www.kyb.tuebingen.mpg.de/ published 15 Egocentric biases in comparative volume judgments of rooms 15017 15422 15017 MeilingerW2016 3 T Meilinger K Watanabe 2016-04-00 4 11 1 22 PLoS ONE Prior results on the spatial integration of layouts within a room differed regarding the reference frame that participants used for integration. We asked whether these differences also occur when integrating 2D screen views and, if so, what the reasons for this might be. In four experiments we showed that integrating reference frames varied as a function of task familiarity combined with processing time, cues for spatial transformation, and information about action requirements, paralleling results in the 3D case. Participants saw part of an object layout in screen 1, another part in screen 2, and responded based on the integrated layout in screen 3. Layout presentations between two screens coincided or differed in orientation. Aligning misaligned screens for integration is known to increase errors/latencies. The error/latency pattern was thus indicative of the reference frame used for integration. We showed that task familiarity combined with self-paced learning, visual updating, and knowing from where to act prioritized integration within, respectively, the reference frame of the initial presentation, the later updated frame, and the frame from which participants acted. Participants also heavily relied on layout intrinsic frames. The results show how humans flexibly adjust their integration strategy to a wide variety of conditions. no notspecified http://www.kyb.tuebingen.mpg.de/ published 21 Multiple Strategies for Spatial Integration of 2D Layouts within Working Memory 15017 15422 ScheerBC2016 3 M Scheer HH Bülthoff LL Chuang 2016-03-00 73 10 1 15 Frontiers in Human Neuroscience The current study investigates the demands that steering places on mental resources. Instead of a conventional dual-task paradigm, participants of this study were only required to perform a steering task while task-irrelevant auditory distractor probes (environmental sounds and beep tones) were intermittently presented. The event-related potentials (ERPs), which were generated by these probes, were analyzed for their sensitivity to the steering task's demands. The steering task required participants to counteract unpredictable roll disturbances, and difficulty was manipulated either by adjusting the bandwidth of the roll disturbance or by varying the complexity of the control dynamics. A mass univariate analysis revealed that steering selectively diminishes the amplitudes of the early P3, late P3, and re-orientation negativity (RON) elicited by task-irrelevant environmental sounds but not by beep tones. Our findings are in line with a three-stage distraction model, which interprets these ERPs to reflect the post-sensory detection of the task-irrelevant stimulus, engagement, and re-orientation back to the steering task. This interpretation is consistent with our manipulations for steering difficulty.
More participants showed diminished amplitudes for these ERPs in the 'hard' steering condition relative to the 'easy' condition. To sum up, the current work identifies the spatiotemporal ERP components of task-irrelevant auditory probes that are sensitive to steering demands on mental resources. This provides a non-intrusive method for evaluating mental workload in novel steering environments. no notspecified http://www.kyb.tuebingen.mpg.de/ published 14 Steering demands diminish the early-P3, late-P3 and RON components of the event-related potential of task-irrelevant environmental sounds 15017 15422 JungTWdBBM2016 3 E Jung K Takahashi K Watanabe S de la Rosa MV Butz HH Bülthoff T Meilinger 2016-03-00 217 7 1 9 Frontiers in Psychology People maintain larger distances to other people's front than to their back. We investigated whether humans also judge another person as closer when viewing their front than their back. Participants watched animated virtual characters (avatars) and moved a virtual plane towards their location after the avatar was removed. In Experiment 1, participants judged avatars that were facing them as closer than avatars looking away, and made quicker estimates for front-facing avatars. In Experiment 2, avatars were rotated in 30-degree steps around the vertical axis. Observers judged avatars roughly facing them (i.e., looking at most 60 degrees away) as closer than avatars roughly looking away. No particular effect was observed for avatars directly facing and also gazing at the observer. We conclude that body orientation was sufficient to generate the asymmetry. Sensitivity of the orientation effect to gaze and to interpersonal distance would have suggested involvement of social processing, but this was not observed. We discuss social and lower-level processing as potential reasons for the effect. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/Frontiers-Psychol-2016-Jung.pdf published 8 The Influence of Human Body Orientation on Distance Judgments 15017 15422 delaRosa2016 3 S de la Rosa Y Ferstl HH Bülthoff 2016-03-00 23829 6 1 8 Scientific Reports A long-standing debate revolves around the question of whether visual action recognition primarily relies on visual or motor action information. Previous studies mainly examined the contribution of either visual or motor information to action recognition. Yet, the interaction of visual and motor action information is particularly important for understanding action recognition in social interactions, where humans often observe and execute actions at the same time. Here, we behaviourally examined the interaction of visual and motor action recognition processes when participants simultaneously observe and execute actions. We took advantage of behavioural action adaptation effects to investigate behavioural correlates of neural action recognition mechanisms. In line with previous results, we find that prolonged visual exposure (visual adaptation) and prolonged execution of the same action with closed eyes (non-visual motor adaptation) influence action recognition. However, when participants simultaneously adapted visually and motorically (akin to simultaneous execution and observation of actions in social interactions), adaptation effects were modulated only by visual but not motor adaptation. Action recognition, therefore, relies primarily on vision-based action recognition mechanisms in situations that require simultaneous action observation and execution, such as social interactions.
The results suggest caution when associating social behaviour in social interactions with motor-based information. no notspecified http://www.kyb.tuebingen.mpg.de/ published 7 Visual adaptation dominates bimodal visual-motor action adaptation 15017 15422 delaRosaEB2016 3 S de la Rosa M Ekramnia HH Bülthoff 2016-02-00 56 10 1 6 Frontiers in Human Neuroscience The ability to discriminate between different actions is essential for action recognition and social interaction. Surprisingly, previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g., left-right direction discrimination tasks. It is not known to what degree visual processes in direction discrimination tasks are also involved in the discrimination of actions, e.g., when telling a handshake apart from a high-five. Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target action and direction discrimination specific visual processes. In separate conditions participants visually adapted to forward and backward moving handshake and high-five actions. Participants subsequently either categorized the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms. no notspecified http://www.kyb.tuebingen.mpg.de/ published 5 Action Recognition and Movement Direction Discrimination Tasks Are Associated with Different Adaptation Patterns 15017 15422 FademrechtBd2016 3 L Fademrecht I Bülthoff S de la Rosa 2016-02-00 3:33 16 1 14 Journal of Vision Recognizing whether the gestures of somebody mean a greeting or a threat is crucial for social interactions. In real life, action recognition occurs over the entire visual field. In contrast, much of the previous research on action recognition has primarily focused on central vision. Here, our goal is to examine what can be perceived about an action outside of foveal vision. Specifically, we probed the valence as well as first-level and second-level recognition of social actions (handshake, hugging, waving, punching, slapping, and kicking) at 0° (fovea/fixation), 15°, 30°, 45°, and 60° of eccentricity with dynamic (Experiment 1) and dynamic and static (Experiment 2) actions. To assess peripheral vision under conditions of good ecological validity, these actions were carried out by a life-size human stick figure on a large screen. In both experiments, recognition performance was surprisingly high (more than 66% correct) up to 30° of eccentricity for all recognition tasks and followed a nonlinear decline with increasing eccentricities. no notspecified http://www.kyb.tuebingen.mpg.de/ published 13 Action recognition in the visual periphery 15017 15422 ZhaoBB2015_2 3 M Zhao HH Bülthoff I Bülthoff 2016-02-00 2 27 213 222 Psychological Science Holistic processing, the tendency to perceive objects as indecomposable wholes, has long been viewed as a process specific to faces or objects of expertise.
Although current theories differ in what causes holistic processing, they share a fundamental constraint for its generalization: Nonface objects cannot elicit facelike holistic processing in the absence of expertise. Contrary to this prevailing view, here we show that line patterns with salient Gestalt information (i.e., connectedness, closure, and continuity between parts) can be processed as holistically as faces without any training. Moreover, weakening the saliency of Gestalt information in these patterns reduced holistic processing of them, which indicates that Gestalt information plays a crucial role in holistic processing. Therefore, holistic processing can be achieved not only via a top-down route based on expertise, but also via a bottom-up route relying merely on object-based information. The finding that facelike holistic processing can extend beyond the domains of faces and objects of expertise poses a challenge to current dominant theories. no notspecified http://www.kyb.tuebingen.mpg.de/ published 9 Beyond Faces and Expertise: Facelike Holistic Processing of Nonface Objects in the Absence of Expertise 15017 15422 FranchiSO2015 3 A Franchi P Stegagno G Oriolo 2016-02-00 2 40 245 265 Autonomous Robots We present a control framework for achieving encirclement of a target moving in 3D using a multi-robot system. Three variations of a basic control strategy are proposed for different versions of the encirclement problem, and their effectiveness is formally established. An extension ensuring maintenance of a safe inter-robot distance is also discussed. The proposed framework is fully decentralized and only requires local communication among robots; in particular, each robot locally estimates all the relevant global quantities. We validate the proposed strategy through simulations with kinematic point robots and quadrotor UAVs, as well as experiments on differential-drive wheeled mobile robots. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/encirclement.pdf published 20 Decentralized Multi-Robot Encirclement of a 3D Target with Guaranteed Collision Avoidance 15017 15422 delaRosaSBSU2016 3 S de la Rosa FL Schillinger HH Bülthoff J Schultz K Uludag 2016-02-00 78 10 1 10 Frontiers in Human Neuroscience Mirror Neurons (MNs) are considered to be the supporting neural mechanism for action understanding. MNs have been identified in monkeys' area F5. The identification of MNs in the human homologue of monkeys' area F5 (BA 44/45) has proven methodologically difficult. Cross-modal fMRI adaptation studies supporting the existence of MNs restricted their analysis to a priori candidate regions, whereas studies that failed to find evidence used non-object-directed actions. We tackled these limitations by using object-directed actions differing only in terms of their object directedness in combination with a cross-modal adaptation paradigm and a whole-brain analysis. Additionally, we tested voxels' BOLD response patterns for several properties previously reported as typical mirror neuron response properties. Our results revealed 52 voxels in left inferior frontal gyrus (particularly BA 44/45), which respond to both motor and visual stimulation and exhibit cross-modal adaptation between the execution and observation of the same action. These results demonstrate that part of the human inferior frontal gyrus (IFG), specifically BA 44/45, has BOLD response characteristics very similar to those of monkeys' area F5.
no notspecified http://www.kyb.tuebingen.mpg.de/ published 9 fMRI adaptation between action observation and action execution reveals cortical areas with mirror neuron properties in human BA 44/45 15017 15422 15017 18821 DahlRBC2016 3 CD Dahl MJ Rasch I Bülthoff C-C Cheng 2016-02-00 20247 6 1 9 Scientific Reports A face recognition system ought to read out information about the identity, facial expression and invariant properties of faces, such as sex and race. A current debate is whether separate neural units in the brain deal with these face properties individually or whether a single neural unit processes all aspects of faces in parallel. While the focus of studies has been directed toward the processing of identity and facial expression, little research exists on the processing of invariant aspects of faces. In a theoretical framework we tested whether a system can deal with identity in combination with sex, race or facial expression using the same underlying mechanism. We used dimension reduction to describe how the representational face space organizes face properties when trained on different aspects of faces. When trained to learn identities, the system not only successfully recognized identities, but also was immediately able to classify sex and race, suggesting that no additional system for the processing of invariant properties is needed. However, training on identity was insufficient for the recognition of facial expressions and vice versa. We provide a theoretical approach to the interconnection of invariant facial properties and the separation of variant and invariant facial properties. no notspecified http://www.kyb.tuebingen.mpg.de/ published 8 Integration or separation in the processing of facial properties: a computational view 15017 15422 MeilingerFSBB2015 3 T Meilinger J Frankenstein N Simon HH Bülthoff J-P Bresciani 2016-02-00 1 23 246 252 Psychonomic Bulletin & Review Reference frames in spatial memory encoding have been examined intensively in recent years. However, their importance for recall has received considerably less attention. In the present study, passersby used tags to arrange a configuration map of prominent city center landmarks. It has been shown that such configurational knowledge is memorized within a north-up reference frame. However, participants adjusted their maps according to their body orientations. For example, when participants faced south, the maps were likely to face south-up. Participants also constructed maps along their location perspective, that is, the self-to-target direction. If, for instance, they were east of the represented area, their maps were oriented west-up. If the location perspective and body orientation were in opposite directions (i.e., if participants faced away from the city center), participants relied on location perspective. The results indicate that reference frames in spatial recall depend on the current situation rather than on the organization in long-term memory. These results cannot be explained by activation spread within a view graph, which had been used to explain similar results in the recall of city plazas. However, the results are consistent with forming and transforming a spatial image of nonvisible city locations from the current location. Furthermore, prior research has almost exclusively focused on body- and environment-based reference frames.
The strong influence of location perspective in an everyday navigational context indicates that such a reference frame should be considered more often when examining human spatial cognition. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/meilinger_et_al_2016_situated_maps_pre_final_version.pdf published 6 Not all memories are the same: Situational context influences spatial recall within one's city of residency 15017 15422 SaultonLWBd2016 3 A Saulton MR Longo HY Wong HH Bülthoff S de la Rosa 2016-02-00 164 103–111 Acta Psychologica Several studies have shown that the perception of one's own hand size is distorted in proprioceptive localization tasks. It has been suggested that those distortions mirror somatosensory anisotropies. Recent research suggests that non-corporeal items also show some spatial distortions. In order to investigate the psychological processes underlying the localization task, we investigated the influences of visual similarity and memory on distortions observed on corporeal and non-corporeal items. In Experiment 1, participants indicated the location of landmarks on their own hand, a rubber hand (rated as most similar to the real hand), and a rake (rated as least similar to the real hand). Results show no significant differences between rake and rubber hand distortions, but both items were significantly less distorted than the hand. Experiments 2 and 3 explored the role of memory in spatial distance judgments of the hand, the rake and the rubber hand. Spatial representations of items measured in Experiments 2 and 3 were also distorted but tended to be smaller than those found in localization tasks. While memory and visual similarity seem to contribute to explaining qualitative similarities in distortions between the hand and non-corporeal items, those factors cannot explain the larger magnitude observed in hand distortions. no notspecified http://www.kyb.tuebingen.mpg.de/ published -103 The role of visual similarity and memory in body model distortions 15017 15422 EsinsSSKB2016 3 J Esins J Schultz C Stemper I Kennerknecht I Bülthoff 2016-01-00 1 7 1 37 i-Perception Congenital prosopagnosia, the innate impairment in recognizing faces, is a very heterogeneous disorder with different phenotypical manifestations. To investigate the nature of prosopagnosia in more detail, we tested 16 prosopagnosics and 21 controls with an extended test battery addressing various aspects of face recognition. Our results show that prosopagnosics exhibited significant impairments in several face recognition tasks: impaired holistic processing (tested, among other tests, with the Cambridge Face Memory Test (CFMT)) as well as reduced processing of configural information of faces. This test battery also revealed some new findings. While controls recognized moving faces better than static faces, prosopagnosics did not exhibit this effect. Furthermore, prosopagnosics had significantly impaired gender recognition, which our study is the first to show at the group level. There was no difference between groups in the automatic extraction of face identity information or in object recognition as tested with the Cambridge Car Memory Test. In addition, a methodological analysis of the tests revealed reduced reliability for holistic face processing tests in prosopagnosics.
To our knowledge, this is the first study to show a significantly reduced reliability coefficient (Cronbach's alpha) in the CFMT for prosopagnosics compared to controls. We suggest that compensatory strategies employed by the prosopagnosics might be the cause of the wide variety of response patterns revealed by the reduced test reliability. This finding raises the question of whether classical face tests measure the same perceptual processes in controls and prosopagnosics. no notspecified http://www.kyb.tuebingen.mpg.de/ published 36 Face Perception and Test Reliabilities in Congenital Prosopagnosia in Seven Tests 15017 15422 MeilingerSFHLMB2016 3 T Meilinger J Schulte-Pelkum J Frankenstein G Hardiess N Laharnar HA Mallot HH Bülthoff 2016-01-00 76 7 1 7 Frontiers in Psychology Establishing verbal memory traces for non-verbal stimuli was reported to facilitate or inhibit memory for the non-verbal stimuli. We show that these effects are also observed in a domain not examined before: wayfinding. Fifty-three participants followed a guided route in a virtual environment. They were asked to remember half of the intersections by relying on the visual impression only. At the other 50% of the intersections, participants additionally heard a place name, which they were asked to memorize. For testing, participants were teleported to the intersections and were asked to indicate the subsequent direction of the learned route. In Experiment 1, intersections' names were arbitrary (i.e., not related to the visual impression). Here, participants performed more accurately at unnamed intersections. In Experiment 2, intersections' names were descriptive and participants' route memory was more accurate at named intersections. Results have implications for naming places in a city and for wayfinding aids. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/Frontiers-Psychol-2016-Meilinger.pdf published 6 How to best name a place? Facilitation and inhibition of route learning due to descriptive and arbitrary location labels 15017 15422 ReichenbachBBT2015 3 A Reichenbach J-P Bresciani HH Bülthoff A Thielscher 2016-01-00 Part A 124 869–875 NeuroImage The vestibular system constitutes the silent sixth sense: It automatically triggers a variety of vital reflexes to maintain postural and visual stability. Beyond their role in reflexive behavior, vestibular afferents contribute to several perceptual and cognitive functions and also support voluntary control of movements by complementing the other senses to accomplish the movement goal. Investigations into the neural correlates of vestibular contribution to voluntary action in humans are challenging and have progressed far less than research on corresponding visual and proprioceptive involvement. Here, we demonstrate for the first time with event-related TMS that the posterior part of the right medial intraparietal sulcus processes vestibular signals during a goal-directed reaching task with the dominant right hand. This finding suggests a qualitative difference between the processing of vestibular vs. visual and proprioceptive signals for controlling voluntary movements, which are predominantly processed in the left posterior parietal cortex. Furthermore, this study reveals a neural pathway for vestibular input that might be distinct from the processing for reflexive or cognitive functions, and opens a window into their investigation in humans.
no notspecified http://www.kyb.tuebingen.mpg.de/ published -869 Reaching with the sixth sense: Vestibular contributions to voluntary motor control in the human right parietal cortex 15017 15422 15017 18821 BreidtBC2016 7 M Breidt HH Bülthoff C Curio Rio de Janeiro, Brazil2016-11-00 1261 1268 19th International Conference on Intelligent Transportation Systems (ITSC 2016) Reliable and accurate car driver head pose estimation is an important function for the next generation of Advanced Driver Assistance Systems that need to consider the driver state in their analysis. For optimal performance, head pose estimation needs to be non-invasive, calibration-free and accurate for varying driving and illumination conditions. In this pilot study we investigate a 3D head pose estimation system that automatically fits a statistical 3D face model to measurements of a driver's face, acquired with a low-cost depth sensor, on challenging real-world data. We compare the results of our sensor-independent, driver-adaptive approach to those of a state-of-the-art camera-based 2D face tracking system as well as to a non-adaptive 3D model, all relative to our own ground-truth data, and also compare to other 3D benchmarks. We find large accuracy benefits of the adaptive 3D approach. Our system shows a median error of 5.99 mm for position and 2.12° for rotation while delivering a full 6-DOF pose with very little degradation from strong illumination changes or out-of-plane rotations of more than 50°. In terms of accuracy, 95% of all our results have a position error of less than 9.50 mm, and a rotation error of less than 4.41°. Compared to the 2D method, this represents a 59.7% reduction of the 95% rotation accuracy threshold, and a 56.1% reduction of the median rotation error. no notspecified http://www.kyb.tuebingen.mpg.de/ published 7 Accurate 3D Head Pose Estimation under Real-World Driving Conditions: A Pilot Study 15017 15422 RienerJAPMCT2016 7 A Riener MP Jeon I Alvarez B Pfleging A Mirnig M Tschelgli L Chuang Ann Arbor, MI, USA2016-10-00 217 220 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '16) On July 1st 2016, the first automated vehicle fatality became headline news [9] and caused a nationwide wave of concern. Now we have at least one situation in which a controlled automated vehicle system failed to detect a life-threatening situation. The question still remains: How can an autonomous system make ethical decisions that involve human lives? Control negotiation strategies require prior encoding of ethical conventions into decision-making algorithms, which is not at all an easy task, especially considering that coming up with ethically sound decision strategies in the first place is often very difficult, even for human agents. This workshop seeks to provide a forum for experts across different backgrounds to voice and formalize the ethical aspects of automotive user interfaces in the context of automated driving. The goal is to derive working principles that will guide shared decision-making between human drivers and their automated vehicles.
no notspecified http://www.kyb.tuebingen.mpg.de/ published 3 1st Workshop on Ethically Inspired User Interfaces for Automated Driving 15017 15422 McCallBPBAMMTCT2016 7 R McCall M Baumann I Politis SS Borojeni I Alvarez A Mirnig A Meschtcherjakov M Tscheligi L Chuang J Terken Ann Arbor, MI, USA2016-10-00 233 236 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '16) This workshop will focus on the problem of occupant and vehicle situational awareness with respect to automated vehicles when the driver must take over control. It will explore the future of fully automated and mixed traffic situations where vehicles are assumed to be operating at level 3 or above. In this case, all critical driving functions will be handled by the vehicle with the possibility of transitions between manual and automated driving modes at any time. This creates a driver environment where, unlike manual driving, there is no direct intrinsic motivation for the driver to be aware of the traffic situation at all times. Therefore, it is highly likely that when such a transition occurs, the driver will not be able to take over control either safely or within an appropriate period of time. This workshop will address this challenge by inviting experts and practitioners from the automotive and related domains to explore concepts and solutions to increase, maintain and transfer situational awareness in semi-automated vehicles. no notspecified http://www.kyb.tuebingen.mpg.de/ published 3 1st Workshop on Situational Awareness in Semi-Automated Vehicles 15017 15422 YukselSF2016 7 B Yüksel N Staub A Franchi Daejeon, Korea2016-10-00 1667 1672 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016) We present the dynamic modeling, analysis, and control design of a Planar-Vertical Take-Off and Landing (PVTOL) underactuated aerial vehicle equipped either with a rigid- or an elastic-joint arm. We prove that in both cases the system is exactly linearizable with a dynamic feedback and differentially flat for the same set of outputs (but different controllers). We compare the two cases with extensive and realistic simulations, which show that the rigid-joint case outperforms the elastic-joint case for aerial grasping tasks while the converse holds for link-velocity amplification tasks. We present preliminary experimental results using an actuated joint with variable stiffness (VSA) on a quadrotor platform.
Conducting an experiment in a driving simulator, we tested (a) ambient displays as TORs, (b) whether contextual information could be conveyed through ambient TORs, and (c) whether the presentation pattern (static, moving) of the contextual TORs has an effect on take-over behavior. Results showed that conveying contextual information through ambient displays led to shorter reaction times and longer times to collision without increasing the workload. The presentation pattern, however, did not affect take-over performance. no notspecified http://www.kyb.tuebingen.mpg.de/ published 7 Assisting Drivers with Ambient Take-Over Requests in Highly Automated Driving 15017 15422 MasoneBS2016 7 C Masone HH Bülthoff P Stegagno Daejeon, Korea2016-10-00 1623 1630 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016) This paper addresses the problem of cooperative aerial transportation of an object using a team of quadrotors. The approach presented to solve this problem accounts for the full dynamics of the system and is inspired by the literature on reconfigurable cable-driven parallel robots (RCDPRs). Using the modelling conventions of RCDPRs, a direct relation is derived between the motion of the quadrotors and the motion of the payload. This relation makes explicit the available internal motion of the system, which can be used to automatically achieve additional tasks. The proposed method does not require the cable forces to be specified a priori and uses a tension distribution algorithm to distribute them optimally among the robots. The presented framework is also suitable for online teleoperation. Physical simulations with a human-in-the-loop validate the proposed approach. no notspecified http://www.kyb.tuebingen.mpg.de/ published 7 Cooperative transportation of a payload using quadrotors: A reconfigurable cable-driven parallel robot 15017 15422 YukselBF2016 7 B Yüksel G Buondonno A Franchi Daejeon, Korea2016-10-00 561 566 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016) In this paper we introduce a particularly relevant class of aerial manipulators that we name protocentric. These robots are formed by an underactuated aerial vehicle, a planar-Vertical Take-Off and Landing (PVTOL), equipped with any number of different parallel manipulator arms with the only property that all the first joints are attached at the Center of Mass (CoM) of the PVTOL, while the center of actuation of the PVTOL can be anywhere. We prove that protocentric aerial manipulators (PAMs) are differentially flat systems regardless of the number of joints of each arm and of their kinematic and dynamic parameters. The set of flat outputs is constituted by the CoM of the PVTOL and the absolute orientation angles of all the links. The relative degree of each output is equal to four. Remarkably, we prove that PAMs remain differentially flat even when any number of the joints are elastic, regardless of the distribution of elastic and rigid joints. The set of flat outputs is the same but in this case the total relative degree grows quadratically with the number of elastic joints. We validate the theory by simulating object grasping and transportation tasks with unknown mass and parameters and using a controller based on dynamic feedback linearization.
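For orientation, the bare PVTOL model underlying these aerial-manipulator entries is the standard textbook one (shown here as background, without the arms and without rotor-force coupling; this is not the papers' full manipulator model). In LaTeX notation:

    \ddot{x} = -u_1 \sin\theta, \qquad \ddot{z} = u_1 \cos\theta - g, \qquad \ddot{\theta} = u_2,

where (x, z) is the vehicle position in the vertical plane, \theta its attitude angle, u_1 the mass-normalized thrust, and u_2 the angular acceleration input. Already for this bare model the position is a flat output once u_1 is dynamically extended, since \ddot{x} and \ddot{z} determine u_1 and \theta; the papers above extend such flatness arguments to the CoM position plus the absolute link orientations when manipulator arms are attached at the CoM.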
no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/IROS-2016-Yueksel-2.pdf published 5 Differential flatness and control of protocentric aerial manipulators with any number of arms and mixed rigid-/elastic-joints 15017 15422 D039IntinoOGVIBP2016 7 G D'Intino M Olivari S Geluardi J Venrooij M Innocenti HH Bülthoff L Pollini Budapest, Hungary2016-10-00 002169 002174 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2016) Haptic guidance has previously been employed to improve human performance in control tasks. This paper presents an experiment to evaluate whether haptic feedback can be used to help humans learn a compensatory tracking task. In the experiment, participants were divided into two groups: the haptic group and the no-aid group. The haptic group performed a first training phase with haptic feedback and a second evaluation phase without haptic feedback. The no-aid group performed the whole experiment without haptic feedback. Results indicated that the haptic group achieved better performance than the no-aid group during the training phase. Furthermore, the performance of the haptic group did not worsen in the evaluation phase when the haptic feedback was turned off. On the other hand, the no-aid group needed more experimental trials to achieve similar performance to the haptic group. These findings indicate that haptic feedback helped participants learn the task more quickly. no notspecified http://www.kyb.tuebingen.mpg.de/ published 5 Evaluation of Haptic Support System for Training Purposes in a Tracking Task 15017 15422 MiermeisterLBMSTKTPB2016 7 P Miermeister M Lächele R Boss C Masone C Schenk J Tesch M Kerger H Teufel A Pott HH Bülthoff Daejeon, Korea2016-10-00 3024 3029 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016) This paper introduces the CableRobot simulator, which was developed at the Max Planck Institute for Biological Cybernetics in cooperation with the Fraunhofer Institute for Manufacturing Engineering and Automation IPA. The simulator is a completely novel approach to the design of motion simulation platforms insofar as it uses cables and winches for actuation instead of the rigid links known from hexapod simulators. This approach reduces the actuated mass, scales up the workspace significantly, and provides great flexibility for switching between the system configurations in which the robot can be operated. The simulator will be used for studies in the field of human perception research and for virtual reality applications. The paper discusses some of the issues arising from the use of cables and provides a system overview of the kinematics and system dynamics, as well as a brief introduction to possible application use cases. no notspecified http://www.kyb.tuebingen.mpg.de/ published 5 The CableRobot Simulator: Large Scale Motion Platform Based on Cable Robot Technology 15017 15422 KarolusWC2016 7 J Karolus PW Woźniak LL Chuang Göteborg, Sweden2016-10-00 118 9th Nordic Conference on Human-Computer Interaction (NordiCHI '16) Humans are inherently skilled at using subtle physiological cues from other persons, for example gaze direction in a conversation. Personal computers have yet to explore this implicit input modality. In a study with 14 participants, we investigate how a user's gaze can be leveraged in adaptive computer systems. In particular, we examine the impact of different languages on eye movements by presenting simple questions in multiple languages to our participants.
We found that fixation duration is sufficient to ascertain whether a user is highly proficient in a given language. We propose how these findings could be used to implement adaptive visualizations that react implicitly to the user's gaze. no notspecified http://www.kyb.tuebingen.mpg.de/ published -118 Towards Using Gaze Properties to Detect Language Proficiency 15017 15422 BorojeniCLGB2016 7 SS Borojeni L Chuang A Löcken C Glatz S Boll Ann Arbor, MI, USA2016-10-00 213 215 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '16) Managing drivers' distraction and directing their attention has been a challenge for automotive UI researchers both in industry and academia. The objective of this half-day tutorial is to provide an overview of methodologies for design, development, and evaluation of in-vehicle attention-directing user interfaces. The tutorial will introduce specifics and challenges of shifting drivers' attention and managing distractions in semi- and highly automated driving contexts. The participants will be familiarized with methods for requirement elicitation, participatory design, setting up experiments, and evaluation of interaction concepts using tools such as eye trackers and EEG/ERP. no notspecified http://www.kyb.tuebingen.mpg.de/ published 2 Tutorial on Design and Evaluation Methods for Attention Directing Cues 15017 15422 VenrooijCKPBS2016 7 J Venrooij D Cleij M Katliar P Pretto HH Bülthoff D Steffen FW Hoffmeyer H-P Schöner Paris, France2016-09-08 31 38 DSC 2016 Europe: Driving Simulation Conference & Exhibition This paper describes a driving simulation experiment, executed on the Daimler Driving Simulator (DDS), in which a filter-based and an optimization-based motion cueing algorithm (MCA) were compared using a newly developed motion cueing quality rating method. The goal of the comparison was to investigate whether optimization-based MCAs have, compared to filter-based approaches, the potential to improve the quality of motion simulations. The paper describes the two algorithms, discusses their strengths and weaknesses and describes the experimental methods and results. The MCAs were compared in an experiment where 18 participants rated the perceived motion mismatch, i.e., the perceived mismatch between the motion felt in the simulator and the motion one would expect from a drive in a real car. The results show that the quality of the motion cueing was rated better for the optimization-based MCA than for the filter-based MCA, indicating that there exists a potential to improve the quality of the motion simulation with optimization-based methods. Furthermore, it was shown that the rating method provides reliable and repeatable results within and between participants, which further establishes the utility of the method. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/DSC-2016-Venrooij.pdf published 7 Comparison between filter- and optimization-based motion cueing in the Daimler Driving Simulator 15017 15422 VenrooijOB2016 7 J Venrooij M Olivari HH Bülthoff Kyoto, Japan2016-08-00 120–125 13th IFAC/IFIP/IFORS/IEA Symposium on Analysis, Design, and Evaluation of Human-Machine Systems (HMS 2016) Biodynamic feedthrough (BDFT) occurs when vehicle accelerations feed through the body of a human operator, causing involuntary limb motions, which in turn result in involuntary control inputs.
The manual control of many different vehicles, such as helicopters, aircraft, electric wheelchairs and hydraulic excavators, is known to be vulnerable to BDFT effects. This paper provides a brief review of BDFT literature, which serves as a basis for identifying the fundamental challenges that remain to be addressed in future BDFT research. One of these challenges, time-variant BDFT identification, is discussed in more detail. Currently, it is often assumed that BDFT dynamics are (quasi)linear and time-invariant. This assumption can only be justified when measuring BDFT under carefully crafted experimental conditions, which are very different from real-world situations. As BDFT dynamics depend on neuromuscular dynamics, they are typically time-varying. This paper investigates the suitability of a recently developed time-variant identification approach, based on a recursive least-squares algorithm, which has been successfully used to identify time-varying neuromuscular dynamics. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/IFAC-2016-Venrooij.pdf published -120 Biodynamic Feedthrough: Current Status and Open Issues 15017 15422 DropPMB2016 7 FM Drop DM Pool M Mulder HH Bülthoff Kyoto, Japan2016-08-00 7 12 13th IFAC/IFIP/IFORS/IEA Symposium on Analysis, Design, and Evaluation of Human-Machine Systems (HMS 2016) The human controller (HC) can greatly improve target-tracking performance by utilizing a feedforward operation on the target signal, in addition to a feedback response. System identification methods are used to determine the correct HC model structure: purely feedback or a combined feedforward/feedback model. In this paper, we investigate three central issues that complicate this objective. First, the identification method should not require prior assumptions regarding the dynamics of the feedforward and feedback components. Second, severe biases might be introduced by high levels of noise in the data measured under closed-loop conditions. To address the first two issues, we will consider two identification methods that make use of linear ARX models: the classic direct method and the two-stage indirect method of van den Hof and Schrama (1993). Third, model complexity should be considered in the selection of the ‘best’ ARX model to prevent ‘false-positive’ feedforward identification. Various model selection criteria that make an explicit trade-off between model quality and model complexity are considered. Based on computer simulations with an HC model, we conclude that 1) the direct method provides more accurate estimates in the frequency range of interest, and 2) existing model selection criteria do not prevent false-positive feedforward identification. no notspecified http://www.kyb.tuebingen.mpg.de/ published 5 Constraints in Identification of Multi-Loop Feedforward Human Control Models 15017 15422 PolliniROBMPPNIB2016 7 L Pollini M Razzanelli M Olivari A Brandimarti M Maimeri P Pazzaglia G Pittiglio R Nuti M Innocenti HH Bülthoff Kyoto, Japan2016-08-00 78–83 13th IFAC/IFIP/IFORS/IEA Symposium on Analysis, Design, and Evaluation of Human-Machine Systems (HMS 2016) Shared control is becoming widely used in many manual control tasks as a means of improving performance and safety. Designing an effective shared control system requires extensive testing and knowledge of how operators react to the haptic sensations provided by the control device shared with the support system.
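The ARX-based identification in the Drop, Pool, Mulder and Bülthoff entry above lends itself to a compact illustration. The following sketch (a toy plant and invented orders, not the authors' setup or data) fits ARX models of candidate orders by direct least squares and selects the order with the best quality/complexity trade-off via BIC:

import numpy as np

def fit_arx(u, y, na, nb):
    """Estimate ARX coefficients y[k] = sum a_i*y[k-i] + sum b_j*u[k-j] by OLS."""
    n0 = max(na, nb)
    rows = []
    for k in range(n0, len(y)):
        past_y = [y[k - i] for i in range(1, na + 1)]
        past_u = [u[k - j] for j in range(1, nb + 1)]
        rows.append(past_y + past_u)
    Phi = np.asarray(rows)
    Y = y[n0:]
    theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return theta, Y - Phi @ theta

def bic(resid, n_params):
    """Bayesian information criterion: penalizes extra parameters."""
    N = len(resid)
    return N * np.log(np.mean(resid**2)) + n_params * np.log(N)

rng = np.random.default_rng(0)
u = rng.standard_normal(2000)
y = np.zeros_like(u)
for k in range(2, len(u)):  # simulated second-order plant with one input lag
    y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + u[k-1] + 0.05 * rng.standard_normal()

scores = {(na, nb): bic(fit_arx(u, y, na, nb)[1], na + nb)
          for na in range(1, 4) for nb in range(1, 4)}
print(min(scores, key=scores.get))  # expected: (2, 1), the true orders

A criterion of this kind, by charging for every added coefficient, is the sort of guard against spurious extra terms (e.g., 'false-positive' feedforward components) that the entry above evaluates.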
Commercial general-purpose haptic devices may be unfit to reproduce the operational situation typical of the control task under study, such as driving a car or flying an airplane. Thus, specific devices are needed for research on specific tasks; this market niche exists but is characterized by expensive products. This paper presents the development of a complete low-cost haptic stick, its initial characterization, the design of its inner-loop and impedance control systems, and finally an evaluation with two test cases: pilot admittance identification with classical tasks, and a full haptic experiment. In particular, the latter experiment studies what happens when a failure occurs in a pilot support system built using a classical embedded controller, compared to a system built following the haptic shared control paradigm. no notspecified http://www.kyb.tuebingen.mpg.de/ published -78 Design, Realization and Experimental Evaluation of a Haptic Stick for Shared Control Studies 15017 15422 SchenkMMB2016 7 C Schenk C Masone P Miermeister HH Bülthoff Ningbo, China2016-08-00 454 461 IEEE International Conference on Information and Automation (ICIA 2016) In this paper we study whether approximate linear models are accurate enough to predict the vibrations of a cable of a Cable-Driven Parallel Robot (CDPR) for different pretension levels. In two experiments we investigated the damping of a thick steel cable from the Cablerobot simulator [1] and measured the motion of the cable when a sinusoidal force is applied at one end. Using this setup and power spectral density analysis we measured the natural frequencies of the cable and compared these results to the frequencies predicted by two linear models: i) the linearization of the partial differential equations of motion for a distributed cable, and ii) the discretization of the cable using a finite element model. This comparison provides remarkable insights into the limits of approximate linear models as well as important properties of vibrating cables used in CDPRs. no notspecified http://www.kyb.tuebingen.mpg.de/ published 7 Modeling and analysis of cable vibrations for a cable-driven parallel robot 15017 15422 DropDMB2016 7 FM Drop R De Vries M Mulder HH Bülthoff Kyoto, Japan2016-08-00 177–182 13th IFAC/IFIP/IFORS/IEA Symposium on Analysis, Design, and Evaluation of Human-Machine Systems (HMS 2016) In the manual control of a dynamic system, the human controller (HC) is often required to follow a visible and predictable reference path. By exploiting the predictable aspect of a reference signal through feedforward control, the HC can significantly improve performance compared to a purely feedback control strategy. A proper definition of a signal’s predictability, however, is never given in the literature. This paper investigates the predictability of a sum-of-sinusoids target signal, as a function of the number of sinusoid components and of whether the sinusoid frequencies are harmonic or not. A human-in-the-loop experiment was conducted, with target signals varying in these two signal characteristics. A combined feedback-feedforward HC model was identified and parameters were estimated. It was found that for all experimental conditions, subjects used a feedforward strategy. Results further showed that subjects were able to perform better for harmonic signals as compared to non-harmonic signals, for signals with roughly the same frequency content.
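The harmonic versus non-harmonic distinction in the Drop, De Vries, Mulder and Bülthoff entry above is easy to make concrete. A minimal sketch (frequencies, amplitudes and run length invented for illustration, not the experiment's actual signals) generates both kinds of sum-of-sinusoids target:

import numpy as np

t = np.arange(0.0, 81.92, 0.01)          # one measurement run [s], assumed
base = 2 * np.pi / 81.92                 # fundamental frequency [rad/s]
harmonic = [3, 6, 12, 24]                # integer multiples of the base
nonharmonic = [3, 7, 13, 27.5]           # no common period within the run
amps = [1.0, 0.7, 0.4, 0.2]

def sum_of_sines(mults):
    return sum(a * np.sin(m * base * t) for a, m in zip(amps, mults))

target_h = sum_of_sines(harmonic)        # repeats exactly within the run,
target_nh = sum_of_sines(nonharmonic)    # the non-harmonic one does not

Both signals have roughly the same frequency content, but only the harmonic one is strictly periodic over the run, which is one plausible reading of why it would be easier to predict.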
no notspecified http://www.kyb.tuebingen.mpg.de/ published -177 The Predictability of a Target Signal Affects Manual Feedforward Control 15017 15422 OdelgaSB2016 7 M Odelga P Stegagno HH Bülthoff Banff, Alberta, Canada2016-07-13 306 311 IEEE International Conference on Advanced Intelligent Mechatronics (AIM 2016) Equipped with four actuators, quadrotor Unmanned Aerial Vehicles belong to the family of underactuated systems. The lateral motion of such platforms is strongly coupled with their orientation and consequently it is not possible to track an arbitrary 6D trajectory in space. In this paper, we propose a novel quadrotor design in which the tilt angles of the propellers with respect to the quadrotor body are being simultaneously controlled with two additional actuators by employing the parallelogram principle. Since the velocity of the controlled tilt angles of the propellers does not appear directly in the derived dynamic model, the system cannot be static feedback linearized. Nevertheless, the system is linearizable at a higher differential order, leading to a dynamic feedback linearization controller. Simulations confirm the theoretical findings, highlighting the improved motion capabilities with respect to standard quadrotors. no notspecified http://www.kyb.tuebingen.mpg.de/ published 5 A fully actuated quadrotor UAV with a propeller tilting mechanism: Modeling and control 15017 15422 AhmadRB2016 7 A Ahmad E Ruff HH Bülthoff Heidelberg, Germany2016-07-00 1728 1734 19th International Conference on Information Fusion (FUSION 2016) In this article we present a new method for multi-robot cooperative target tracking based on dynamic baseline stereo vision. The core novelty of our approach includes a computationally light-weight scheme to compute the 3D stereo measurements that exactly satisfy the epipolar constraints and a covariance intersection (CI)-based method to fuse the 3D measurements obtained by each individual robot. Using CI we are able to systematically integrate the robot localization uncertainties as well as the uncertainties in the measurements generated by the monocular camera images from each individual robot into the resulting stereo measurements. Through an extensive set of simulation and real robot results we show the robustness and accuracy of our approach with respect to ground truth. The source code related to this article is publicly accessible on our website and the datasets are available on request. no notspecified http://www.kyb.tuebingen.mpg.de/ published 6 Dynamic baseline stereo vision-based cooperative target tracking 15017 15422 SoykaLSFRM2016 7 F Soyka M Leyrer J Smallwood C Ferguson BE Riecke BJ Mohler Anaheim, CA, USA2016-07-00 85 88 ACM Symposium on Applied Perception (SAP '16) Chronic stress is one of the major problems in our current fast paced society. The body reacts to environmental stress with physiological changes (e.g. accelerated heart rate), increasing the activity of the sympathetic nervous system. Normally the parasympathetic nervous system should bring us back to a more balanced state after the stressful event is over. However, nowadays we are often under constant pressure, with a multitude of stressful events per day, which can result in us constantly being out of balance. This highlights the importance of effective stress management techniques that are readily accessible to a wide audience. 
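The covariance intersection step named in the Ahmad, Ruff and Bülthoff entry above admits a compact sketch. The following is a generic, textbook-style CI fusion rule, not the authors' code; the example measurements are invented:

import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(x1, P1, x2, P2):
    """Fuse two Gaussian estimates whose cross-correlation is unknown."""
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)

    def fused_trace(w):
        # CI guarantees consistency for any w in [0, 1]; pick the tightest.
        return np.trace(np.linalg.inv(w * P1i + (1 - w) * P2i))

    w = minimize_scalar(fused_trace, bounds=(0.0, 1.0), method="bounded").x
    P = np.linalg.inv(w * P1i + (1 - w) * P2i)
    x = P @ (w * P1i @ x1 + (1 - w) * P2i @ x2)
    return x, P

# Two robots report the same 3D target position with different uncertainties.
x1, P1 = np.array([1.0, 2.0, 0.5]), np.diag([0.04, 0.09, 0.25])
x2, P2 = np.array([1.1, 1.9, 0.6]), np.diag([0.16, 0.04, 0.09])
x, P = covariance_intersection(x1, P1, x2, P2)
print(x, np.diag(P))

The appeal of CI in a multi-robot setting is exactly the property commented above: it stays consistent even when the two estimates share unknown common information, e.g., correlated localization errors.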
In this paper we present an exploratory study investigating the potential use of immersive virtual reality for relaxation with the purpose of guiding further design decisions, especially about the visual content as well as the interactivity of virtual content. Specifically, we developed an underwater world for head-mounted display virtual reality. We performed an experiment to evaluate the effectiveness of the underwater world environment for relaxation, as well as to evaluate whether the underwater world in combination with breathing techniques for relaxation was preferred to standard breathing techniques for stress management. The underwater world was rated as more fun and more likely to be used at home than a traditional breathing technique, while providing a similar degree of relaxation. no notspecified http://www.kyb.tuebingen.mpg.de/ published 3 Enhancing stress management techniques using virtual reality 15017 15017 15422 RoggenkamperPDvM2016 7 N Roggenkämper DM Pool FM Drop MM van Paassen M Mulder Washington, DC, USA2016-06-16 787 803 AIAA Modeling and Simulation Technologies Conference: Held at the AIAA Aviation Forum 2016 Fundamental research carried out by McRuer et al. [1, 2] in the 1960s still forms the basis for the mathematical representation of pilot-vehicle systems today. Expressing human skills in the same control engineering terms as the vehicle to be controlled enables scientists to quantitatively evaluate human operators' manual control behavior. Decades of research have not only proven the validity of functional models as accurate descriptions of human tracking behavior during compensatory tracking tasks [2–5], but also the suitability of ... no notspecified http://www.kyb.tuebingen.mpg.de/ published 16 Objective ARX Model Order Selection for Multi-Channel Human Operator Identification 15017 15422 RajappaMBS2016 7 S Rajappa C Masone HH Bülthoff P Stegagno Stockholm, Sweden2016-05-00 2971 2977 IEEE International Conference on Robotics and Automation (ICRA 2016) In this paper we present a robust quadrotor controller for tracking a reference trajectory in the presence of uncertainties and disturbances. A Super Twisting controller is implemented using the recently proposed gain adaptation law [1], [2], which has the advantage of not requiring the knowledge of the upper bound of the lumped uncertainties. The controller design is based on the regular form of the quadrotor dynamics, without separation into two nested control loops for position and attitude. The controller is further extended by a feedforward dynamic inversion control that reduces the effort of the sliding mode controller. The higher order quadrotor dynamic model and proposed controller are validated using a SimMechanics physical simulation with initial error, parameter uncertainties, noisy measurements and external perturbations. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/ICRA-2016-Rajappa.pdf published 6 Adaptive Super Twisting Controller for a Quadrotor UAV 15017 15422 LacheleVPB2016 7 J Lächele J Venrooij P Pretto HH Bülthoff West Palm Beach, FL, USA2016-05-00 3310 3316 72nd American Helicopter Society International Annual Forum (AHS 2016) In this paper we present the results of two experiments performed using a teleoperation setup where operators control a simulated quadrotor in a virtual environment while perceiving visual and inertial motion feedback. Participants of this study performed a series of precision hover tasks.
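The super twisting law named in the Rajappa et al. entry above can be illustrated on a toy system. The sketch below uses the basic fixed-gain super-twisting controller on a scalar double integrator; the paper works on the full quadrotor dynamics with an adaptive gain law, and the gains, disturbance and plant here are invented for illustration:

import numpy as np

dt, k1, k2, lam = 1e-3, 1.5, 1.1, 2.0
x = np.array([1.0, 0.0])   # tracking error and its rate
v = 0.0                    # integral term of the super-twisting law

for i in range(int(5.0 / dt)):
    s = x[1] + lam * x[0]                        # sliding variable
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v   # continuous control signal
    v += -k2 * np.sign(s) * dt                   # discontinuity hidden in v-dot
    d = 0.3 * np.sin(np.pi * i * dt)             # bounded matched disturbance
    x = x + np.array([x[1], u + d]) * dt         # double-integrator dynamics

print(x)  # both components near zero despite the persistent disturbance

Because the sign function only enters through the integrator state v, the control signal applied to the plant is continuous, which is the usual motivation for super twisting over first-order sliding mode.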
The experiments focused on how different motion feedback definitions affect operator performance and control effort. In the first experiment the effect of including different components of the quadrotor motion in the motion feedback was studied (referred to as "vehicle-related" motion feedback). In the second experiment, the effect of including task-related information in the motion feedback, in the form of a roll motion representing the offset between the desired and actual quadrotor position, was investigated (referred to as "task-related" motion feedback). In both experiments the effects of degraded visual quality were investigated. For both vehicle-related lateral motion feedback and task-related roll motion feedback, we found a significant increase in operator performance. Vehicle-related roll motion feedback showed no effect on operator performance. Control effort, defined as the overall stick deflection during the trials, decreased in vehicle-related roll motion conditions and increased in task-related motion feedback conditions. The results show the applicability and benefits of providing task-related motion feedback in teleoperation. no notspecified http://www.kyb.tuebingen.mpg.de/ published 6 Effects of vehicle- and task-related motion feedback on operator performance in teleoperation 15017 15422 PicardiGOB2016 7 G Picardi S Geluardi M Olivari HH Bülthoff West Palm Beach, FL, USA2016-05-00 1770 1777 72nd American Helicopter Society International Annual Forum (AHS 2016) The aim of this study is to augment the uncertain dynamics of the helicopter in order to resemble the dynamics of a new kind of vehicle, the so-called Personal Aerial Vehicle. To achieve this goal, a two-step procedure is proposed. First, the helicopter model dynamics are augmented with a PID-based dynamic controller. This controller implements model following on the nominal helicopter model without uncertainties. Then, an L1 adaptive controller is designed to restore the nominal responses of the augmented helicopter when variations in the identified parameters are considered. The performance of the adaptive controller is evaluated via Monte Carlo simulations. The results show that the application of the adaptive controller to the augmented helicopter dynamics can significantly reduce the effects of uncertainty due to the identification of the helicopter model. For implementation reasons, the adaptive controller was applied to a subset of the outputs of the system. However, the underactuation typical of helicopters means that the nominal responses are also tracked well on the outputs that are not directly adapted. no notspecified http://www.kyb.tuebingen.mpg.de/ published 7 L1-based Model Following Control of an Identified Helicopter Model in Hover 15017 15422 OdelgaBS2016 7 M Odelga HH Bülthoff P Stegagno Stockholm, Sweden2016-05-00 2984 2990 IEEE International Conference on Robotics and Automation (ICRA 2016) In this paper, we present a collision-free indoor navigation algorithm for teleoperated multirotor Unmanned Aerial Vehicles (UAVs). Assuming an obstacle-rich environment, the algorithm keeps track of detected obstacles in the local surroundings of the robot. The detection part of the algorithm is based on measurements from an RGB-D camera and a Bin-Occupancy filter capable of tracking an unspecified number of targets. We use the estimate of the robot’s velocity to update the obstacles' states when they leave the direct field of view of the sensor. The avoidance part of the algorithm is based on the Model Predictive Control approach.
By predicting possible future obstacle states, it filters the operator's commands to prevent collisions. The method is validated on a platform equipped with its own computational unit, which makes it self-sufficient in terms of external CPUs. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/ICRA-2016-Odelga.pdf published 6 Obstacle Detection, Tracking and Avoidance for a Teleoperated UAV 15017 15422 FlemingMRBB2016 7 R Fleming BJ Mohler J Romero MJ Black M Breidt Roma, Italy2016-02-00 333 343 11th Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2016) Advances in 3D scanning technology allow us to create realistic virtual avatars from full body 3D scan data. However, negative reactions to some realistic computer-generated humans suggest that this approach might not always provide the most appealing results. Using styles derived from existing popular character designs, we present a novel automatic stylization technique for body shape and colour information based on a statistical 3D model of human bodies. We investigate whether such stylized body shapes result in increased perceived appeal with two different experiments: One focuses on body shape alone, the other investigates the additional role of surface colour and lighting. Our results consistently show that the most appealing avatar is a partially stylized one. Importantly, avatars with high stylization or no stylization at all were rated to have the least appeal. The inclusion of colour information and improvements to render quality had no significant effect on the overall perceived appeal of the avatars, and we observe that the body shape primarily drives the change in appeal ratings. For body scans with colour information, we found that a partially stylized avatar was most effective, increasing average appeal ratings by approximately 34%. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/GRAPP-2016-Fleming.pdf published 10 Appealing female avatars from 3D body scans: Perceptual effects of stylization 15017 15422 15017 GerboniVNJF2016 7 CA Gerboni J Venrooij FM Nieuwenhuizen A Joos W Fichter HH Bülthoff San Diego, CA, USA2016-01-00 1002 1012 AIAA Modeling and Simulation Technologies Conference: Held at the AIAA SciTech Forum 2016 In this paper an augmentation strategy is implemented with the goal of making the behavior of an actual helicopter similar to that of a new class of aerial systems called Personal Aerial Vehicles (PAVs). PAVs are meant to be flown by flight-naïve pilots, i.e., pilots with minimal flight experience. One feature required for achieving this goal is to have a Translation Rate Command (TRC) response type in the hover and low-speed regime. In this paper, a TRC response type is obtained for a UH-60 helicopter simulation model in hover and low-speed regime through the implementation of nonlinear backstepping control. The responses of the rotorcraft with TRC response type are evaluated with the metrics defined in the Aeronautical Design Standard ADS-33E-PRF. Simulations show the efficiency of the control scheme in tracking the reference velocities and the achievement of the requirements for Level 1 Handling Qualities (HQ) for the TRC response type.
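As a rough illustration of the Translation Rate Command response type discussed in the Gerboni et al. entry above, the sketch below simulates its defining behavior: stick deflection commands a translational velocity, which the augmented vehicle tracks with roughly first-order dynamics. The gain and time constant are invented, not values from the paper:

import numpy as np

K, tau, dt = 2.0, 1.2, 0.01        # m/s per unit stick; time constant [s]
t = np.arange(0.0, 10.0, dt)
delta = (t >= 1.0) * 0.5           # step stick input at t = 1 s
v = np.zeros_like(t)
for k in range(1, len(t)):
    # v_dot = (K*delta - v)/tau -> velocity settles at K*delta
    v[k] = v[k-1] + dt * (K * delta[k-1] - v[k-1]) / tau

print(v[-1])  # ~1.0 m/s = K * 0.5, the commanded translation rate

This is only the input-output behavior a TRC augmentation aims for; achieving it on a real helicopter is what the backstepping design in the entry above is about.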
no notspecified http://www.kyb.tuebingen.mpg.de/ published 10 Control Augmentation Strategies for Helicopters used as Personal Aerial Vehicles in Low-speed Regime 15017 15422 OlivariVNPB2016 7 M Olivari J Venrooij FM Nieuwenhuizen L Pollini HH Bülthoff San Diego, CA, USA2016-01-00 385 399 AIAA Modeling and Simulation Technologies Conference: Held at the AIAA SciTech Forum 2016 Methods for identifying pilots' responses commonly assume time-invariant dynamics. However, humans are likely to vary their responses during realistic control scenarios. In this work an identification method is developed for estimating time-varying responses to visual and force feedback during a compensatory tracking task. The method describes the pilot's responses with finite impulse response filters and uses a Regularized Recursive Least-Squares (RegRLS) algorithm to simultaneously estimate the filter coefficients. The method was validated in a Monte-Carlo simulation study with different levels of remnant noise. With low levels of remnant noise, estimates were accurate and tracked the time-varying behaviour of the simulated responses. On the other hand, estimates showed high variability in the case of large remnant noise. However, the parameters of the RegRLS could be further optimized to improve robustness to large remnant noise. Taken together, these findings suggest that the novel RegRLS algorithm could be used to estimate time-varying pilot responses in real human-in-the-loop experiments. no notspecified http://www.kyb.tuebingen.mpg.de/ published 14 Identifying Time-Varying Pilot Responses: A Regularized Recursive Least-Squares Algorithm 15017 15422 GerboniNB2016 7 CA Gerboni FM Nieuwenhuizen HH Bülthoff San Diego, CA, USA2016-01-00 1027 1040 AIAA Modeling and Simulation Technologies Conference: Held at the AIAA SciTech Forum 2016 The paper describes the implementation and validation of a nonlinear model of the UH-60 helicopter. The implemented model is based on a physical vehicle and includes various important subsystems in order to increase the model fidelity. The validation is carried out through a Handling Qualities (HQ) evaluation and a comparison with flight data. Various standardized tests have been performed in the time and frequency domain for hover and forward flight conditions. The results obtained have been analyzed according to the criteria defined by the Aeronautical Design Standard ADS-33E-PRF. The behavior of our helicopter model is very similar to flight test data of the UH-60 in hover and in forward flight, although some coupling effects are not well described. Overall, the model provides a reliable basis for use in motion-base simulators and as a framework for conducting studies on control augmentation systems. no notspecified http://www.kyb.tuebingen.mpg.de/ published 13 Implementation and Validation of a 6 Degrees-of-Freedom Nonlinear Helicopter Model 15017 15422 MaimeriOBP2016 7 M Maimeri M Olivari HH Bülthoff L Pollini San Diego, CA, USA2016-01-00 373 384 AIAA Modeling and Simulation Technologies Conference: Held at the AIAA SciTech Forum 2016 External aids are required to increase safety and performance during the manual control of an aircraft. Automated systems make it possible to surpass the performance usually achieved by pilots. However, they suffer from several issues caused by pilot unawareness of the control command from the automation. Haptic aids can overcome these issues by showing their control command through forces on the control device.
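The recursive least-squares machinery behind the RegRLS estimator in the Olivari et al. entry above can be sketched in a few lines. The code below implements plain exponentially-forgetting RLS tracking a time-varying FIR coefficient; the published algorithm adds regularization, and the simulated "response change" and all parameters here are invented:

import numpy as np

rng = np.random.default_rng(3)
n_taps, lam = 8, 0.995                      # FIR length; forgetting factor
theta = np.zeros(n_taps)                    # coefficient estimates
P = np.eye(n_taps) * 1e3                    # inverse correlation matrix

u = rng.standard_normal(5000)               # input signal
true_taps = np.zeros(n_taps); true_taps[0] = 1.0
for k in range(n_taps, len(u)):
    if k == 2500:
        true_taps[0] = 0.5                  # the "response" changes mid-run
    phi = u[k - n_taps + 1:k + 1][::-1]     # regressor: recent input samples
    y = true_taps @ phi + 0.05 * rng.standard_normal()
    # standard RLS update with exponential forgetting
    g = P @ phi / (lam + phi @ P @ phi)
    theta += g * (y - phi @ theta)
    P = (P - np.outer(g, phi @ P)) / lam

print(theta[0])  # tracks the post-change value, ~0.5

The forgetting factor lam trades tracking speed against noise sensitivity, which mirrors the robustness-versus-variability trade-off reported in the entry above.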
It is possible to design haptic aids that allow pilots to improve performance compared with the baseline condition, even though these are usually outperformed by automation. It is not yet well understood, however, what happens to performance in the event of a failure of the pilot support system. To investigate whether and how a pilot can recover performance after a failure of the haptic or automated support system, a quantitative comparison is needed. An experiment was conducted in which pilots performed a compensatory tracking task with haptic aids and with automation. Half of the runs were affected by a failure of the support system, resulting in complete removal of the support action. The haptic aid and the automation were designed to be equivalent when the pilot was out-of-the-loop, i.e., to provide the same control command. Pilot performance and control effort were then evaluated with pilots in-the-loop and compared to a baseline condition without external aids. As expected, pilot performance is better with the automated support system than with the haptic aid when no failure occurs. When a failure occurs, pilots experience a sudden decrease in performance in both cases, but the loss of performance is much larger in the automation case. In addition, and somewhat surprisingly, after the initial loss of performance, pilots flying with the haptic aid return approximately to the performance level they had just before the failure, while pilots flying with automation cannot regain pre-failure levels of performance, at least within the time span of the experiment. no notspecified http://www.kyb.tuebingen.mpg.de/ published 11 On Effects of Failures in Haptic and Automated Pilot Support Systems 15017 15422 SpedicatoNBF2013 7 S Spedicato G Notarstefano HH Bülthoff A Franchi Singapore2016-00-00 95 112 16th International Symposium on Robotics Research (ISRR 2013) In this paper we design a nonlinear controller for aggressive maneuvering of a quadrotor. We take a maneuver regulation perspective. Unlike the classical trajectory tracking approach, maneuver regulation does not require following a timed reference state, but rather a geometric “path” with a velocity (and possibly orientation) profile assigned along it. The proposed controller relies on three main ideas. Given a desired maneuver, i.e., a set of state trajectories equivalent under time translations, the system dynamics is decomposed into dynamics longitudinal and transverse to the maneuver. A space-dependent version of the transverse dynamics is derived, by using the longitudinal state, i.e., the arc-length of the path, as an independent variable. The controller is then obtained as a function of the arc-length and consists of two terms: a feedforward term, the nominal input to apply when on the path at the current arc-length, and a feedback term that exponentially stabilizes the state-dependent transverse dynamics. Numerical computations are presented to prove the effectiveness of the proposed strategy. The controller's performance is tested in the presence of model parameter uncertainty, input noise and saturations. The controller is also tested in a realistic simulation environment validated against an experimental test-bed.
no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2013/2013n-SpeNotBueFra-preprint.pdf published 17 Aggressive maneuver regulation of a quadrotor UAV 15017 15422 BulthoffWG2016 2 HH Bülthoff C Wallraven MA Giese Springer Berlin, Germany 2016-00-00 2095 2114 Springer Handbook of Robotics: Part G Robots that share their environment with humans need to be able to recognize and manipulate objects and users, perform complex navigation tasks, and interpret and react to human emotional and communicative gestures. In all of these perceptual capabilities, the human brain, however, is still far ahead of robotic systems. Hence, taking cues from the way the human brain solves such complex perceptual tasks will help to design better robots. Similarly, once a robot interacts with humans, its behaviors and reactions will be judged by humans – movements of the robot, for example, should be fluid and graceful, and it should not evoke an eerie feeling when interacting with a user. In this chapter, we present Perceptual Robotics as the field of robotics that takes inspiration from perception research and neuroscience to, first, build better perceptual capabilities into robotic systems and, second, to validate the perceptual impact of robotic systems on the user. no notspecified http://www.kyb.tuebingen.mpg.de/ published 19 Perceptual Robotics 15017 15422 YukselSF2016_2 46 B Yüksel N Staub A Franchi 2016-10-00 1 7 2016-10-00 Explicit Computations and Further Extensive Simulations for Rigid- or Elastic-joint Arm: Technical Attachment to: "Aerial Robots with Rigid/Elastic-joint Arms: Single-joint Controllability Study and Preliminary Experiment" 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, Daejeon, South Korea, October 2016 no notspecified Explicit Computations and Further Extensive Simulations for Rigid- or Elastic-joint Arm: Technical Attachment to: "Aerial Robots with Rigid/Elastic-joint Arms: Single-joint Controllability Study and Preliminary Experiment" 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, Daejeon, South Korea, October 2016 15017 15422 YukselBF2016_2 46 B Yüksel G Buondonno A Franchi 2016-10-00 2016-10-00 Protocentric Aerial Manipulators: Flatness Proofs and Simulations. Technical Attachment to: "Differential Flatness and Control of Protocentric Aerial Manipulators with Any Number of Arms and Mixed Rigid-/Elastic-Joints" 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, Daejeon, South Korea, October 2016 no notspecified Protocentric Aerial Manipulators: Flatness Proofs and Simulations. Technical Attachment to: "Differential Flatness and Control of Protocentric Aerial Manipulators with Any Number of Arms and Mixed Rigid-/Elastic-Joints" 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, Daejeon, South Korea, October 2016 15017 15422 SymeonidouOVBC2016 7 ER Symeonidou M Olivari J Venrooij HH Bülthoff LL Chuang San Diego, CA, USA2016-11-13 46th Annual Meeting of the Society for Neuroscience (Neuroscience 2016) The oscillatory suppression of sensorimotor-mu power (i.e., 10-12 Hz) is a robust EEG correlate of motor control. Simply imagining voluntary limb movement can result in consistent suppression of mu-power, especially in contralateral electrode sites. This is typically exploited by neuroprostheses (e.g., BCI-controlled wheelchairs; Huang et al., 2012) that seek to restore movement to spinal-cord injury patients.
In some examples, levels of mu-suppression have also been treated as an index of motor control effort (e.g., Mann et al., 1996). However, mu-suppression in contralateral sites can also be observed during passive limb movements, namely in the absence of voluntary control effort (Formaggio et al., 2013). In this study, we investigate whether patterns of oscillatory EEG activity across contralateral (C3) and ipsilateral (C4) sites discriminate for voluntary control and limb movement. In our study, EEG measurements were taken of ten participants who were required to either actively follow or resist the deflections of a control-loaded side-stick; this respectively required voluntary control in the presence and absence of limb movement. In contrast, they were also tested in conditions with passive or no limb movements, which respectively required them to simply hold on to a moving or stationary side-stick. A repeated-measures 2 x 2 x 2 ANOVA for the factors of electrode site (contralateral vs. ipsilateral), control (active vs. passive), and movement (movement vs. stationary) revealed the following. To begin, there was a significant main effect of lateralized mu-suppression. Suppression of mu-power is larger in the contralateral site compared to the ipsilateral site (F(1,9)=5.10, p=0.05). More importantly, three significant interactions were found: movement x control (F(1,9)=13.1, p<0.01), electrode x movement (F(1,9)=5.78, p=0.04), and electrode x control (F(1,9)=5.81, p=0.039). Limb movement resulted in selective mu-suppression of only the contralateral electrode. Voluntary control resulted in mu-suppression in both contralateral and ipsilateral electrodes, albeit to a lesser extent in the ipsilateral site. Overall, active resistance against side-stick deflections resulted in the largest levels of mu-suppression. The current results suggest that active voluntary resistance can result in high levels of mu-suppression that do not exhibit strong lateralization. This might go unnoticed in brain-computer-interface and experimental paradigms that estimate control effort by contrasting contralateral to ipsilateral mu-suppression. no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 EEG oscillatory modulations (10-12 Hz) discriminate for voluntary motor control and limb movement 15017 15422 SaultonBd2016 7 A Saulton HH Bülthoff S de la Rosa San Diego, CA, USA2016-11-12 46th Annual Meeting of the Society for Neuroscience (Neuroscience 2016) Stored representations of body size and shape as derived from somatosensation are considered to be critical components of perception and action. Recent research has shown the presence of large hand distortions in proprioceptive localization tasks consisting of an overestimation of hand width and an underestimation of finger length. Those results were interpreted as reflecting specific somatosensory perceptual distortion bound to a body model underlying position sense. One important prerequisite to this interpretation is that measured localization task distortions actually stem from body representation. In this study, we re-examine hand distortions underlying position sense and investigate whether these distortions are body-specific or due to non-perceptual factors, e.g. conceptual knowledge. Participants made localization judgments regarding the spatial position of various landmarks on occluded items including their own hand.
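For the mu-power analyses in the Symeonidou et al. entry above, band power in the 10-12 Hz range is the basic quantity. A minimal sketch (assumed pipeline with a synthetic signal, not the study's actual processing) computes it with Welch's method:

import numpy as np
from scipy.signal import welch

fs = 250.0                                   # sampling rate [Hz], assumed
rng = np.random.default_rng(2)
eeg = rng.standard_normal(int(60 * fs))      # stand-in for one EEG channel
t = np.arange(eeg.size) / fs
eeg += 0.8 * np.sin(2 * np.pi * 11 * t)      # inject an 11 Hz mu rhythm

f, psd = welch(eeg, fs=fs, nperseg=1024)     # power spectral density
band = (f >= 10) & (f <= 12)
mu = psd[band].sum() * (f[1] - f[0])         # integrate PSD over the mu band
print(mu)  # mu-band power; "suppression" is a drop relative to a baseline

Comparing this quantity between contralateral and ipsilateral electrodes, and between conditions, is the kind of contrast the ANOVA in the entry above formalizes.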
Our results show that the large hand distortions in localization tasks are likely to be induced by participants' incorrect conceptual knowledge about hand landmarks rather than by proprioceptive or somatosensory influences. Moreover, we show that once we account for such incorrect conceptual knowledge, hand distortions in localization tasks are statistically similar to those of other objects. These results suggest that localization task distortions are not specific to the hand and call for caution when interpreting localization task distortions in terms of body-specific effects. no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Objects vs. hand: the effect of knuckle misconceptions on localization task distortions 15017 15422 Flad2016 7 N Flad Paris, France2016-10-07 1st Neuroergonomics Conference: The Brain at Work and in Everyday Life no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 When does the Brain Respond to Information during Visual Scanning? 15017 15422 MeilingerFBMSB2016 7 T Meilinger J Frankenstein J-P Bresciani B Mohler N Simon HH Bülthoff Leipzig, Germany2016-09-19 50. Kongress der Deutschen Gesellschaft für Psychologie (DGPs 2016) no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Wie erinnern wir räumliches Wissen unseres Wohnortes? 15017 15422 15017 SrismithZB2016 7 D Srismith M Zhao I Bülthoff Barcelona, Spain2016-09-01 313 39th European Conference on Visual Perception (ECVP 2016) People are good at recognising faces, particularly familiar faces. However, little is known about how precisely familiar faces are represented and how increasing familiarity improves the precision of face representation. Here we investigated the precision of face representation for two types of familiar faces: personally familiar faces (i.e. faces of colleagues) and visually familiar faces (i.e. faces learned from viewing photographs). For each familiar face, participants were asked to select the original face among an array of faces, which varied from highly caricatured (+50%) to highly anti-caricatured (−50%) along the facial shape dimension. We found that for personally familiar faces, participants selected the original faces more often than any other faces. In contrast, for visually familiar faces, the highly anti-caricatured (−50%) faces were selected more often than others, including the original faces. Participants also favoured anti-caricatured faces more than caricatured faces for both types of familiar faces. These results indicate that people form very precise representations for personally familiar faces, but not for visually familiar faces. Moreover, the more familiar a face is, the more its corresponding representation shifts from a region close to the average face (i.e. anti-caricatured) to its veridical location in the face space. no notspecified http://www.kyb.tuebingen.mpg.de/ published -313 Precise Representation of Personally, but not Visually, Familiar Faces 15017 15422 delaRosaFB2016_2 7 S de la Rosa Y Ferstl HH Bülthoff Barcelona, Spain2016-09-01 294 39th European Conference on Visual Perception (ECVP 2016) Accurately associating an action with its actor’s identity is fundamental for many, if not all, social cognitive functions. What are the visual processes supporting this ability? Previous research suggests that separate neural substrates support the recognition of facial identity and actions.
Here we revisited this widely held assumption and examined the sensitivity of neural action recognition processes to facial identity using behavioral adaptation. We reasoned that if action recognition and facial identity were mediated by independent visual processes, then action adaptation effects should not be modulated by the actor’s facial identity. We used action morphing and an augmented reality setup to examine the neural correlates of action recognition processes within an action adaptation paradigm under close-to-natural conditions. Contrary to the hypothesis that action recognition and facial identity are processed independently, we showed in three experiments that action adaptation effects in an action categorization task are modulated by facial identity and not by clothing. These findings strongly suggest that action recognition processes are sensitive to facial identity and thereby indicate a close link between actions and facial identity. Such identity-sensitive action recognition mechanisms might support the fundamental social cognitive skill of associating an action with the actor’s identity. no notspecified http://www.kyb.tuebingen.mpg.de/ published -294 The face of actions: Evidence for neural action recognition processes being sensitive for facial identity 15017 15422 ZhaoB2016_2 7 M Zhao I Bülthoff Barcelona, Spain2016-08-29 27 39th European Conference on Visual Perception (ECVP 2016) Unlike most everyday objects, faces are processed holistically—they tend to be perceived as indecomposable wholes instead of a collection of independent facial parts. While holistic face processing has been demonstrated with a variety of behavioral tasks, it is predominantly observed with static faces. Here we investigated three questions about holistic processing of moving faces: (1) are rigidly moving faces processed holistically? (2) does rigid motion reduce the magnitude of holistic processing? and (3) does holistic processing persist when study and test faces differ in terms of facial motion? Participants completed two composite face tasks (using a complete design), one with static faces and the other with rigidly moving faces. We found that rigidly moving faces are processed holistically. Moreover, the magnitude of the holistic processing effect observed for moving faces is similar to that observed for static faces. Finally, holistic processing still holds even when the study face is static and the test face is moving or vice versa. These results provide convincing evidence that holistic processing is a general face processing mechanism that applies to both static and moving faces. These findings indicate that rigid facial motion neither promotes part-based face processing nor eliminates holistic face processing. Funding: The study was supported by the Max Planck Society. no notspecified http://www.kyb.tuebingen.mpg.de/ published -27 Holistic Processing of Static and Rigidly Moving Faces 15017 15422 FademrechtBd2016_2 7 L Fademrecht HH Bülthoff S de la Rosa Barcelona, Spain2016-08-29 65 39th European Conference on Visual Perception (ECVP 2016) A central question in visual neuroscience concerns the degree to which visual representations of actions are used for action execution. Previously, we have shown that during simultaneous action observation and action execution, visual action recognition relies on visual but not motor processes. This research suggests a primacy of visual processes in social interaction scenarios.
Here, we provide further evidence for visual processes dominating perception and action in social interactions. We examined the influence of visual processes on motor control. Sixteen participants were tested in a 3D virtual environment setup. Participants were visually adapted to an action (fist bump or punch) and subsequently categorized an ambiguous morphed action as either fist bump or punch in three experimental conditions. In the first condition participants responded via key press after having seen the entire test stimulus. In the second, participants responded by carrying out the complementary action after having seen the entire test stimulus. In the third (social interaction condition) participants carried out the complementary action while observing the test stimulus. We found an antagonistic bias of movement trajectories towards the non-adapted action (adaptation aftereffect) only in the social interaction condition. Our results highlight the importance of visual processes in social interactions. no notspecified http://www.kyb.tuebingen.mpg.de/ published -65 Visual processes dominate perception and action during social interactions 15017 15422 O039MalleyBM2016 7 M O'Malley HH Bülthoff T Meilinger Philadelphia, PA, USA2016-08-04 International Conference Spatial Cognition (SC 2016) no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Spatial integration within environmental spaces: Testing predictions from mental walk and mental model 15017 15422 StrickrodtBM2016 7 M Strickrodt HH Bülthoff T Meilinger Philadelphia, PA, USA2016-08-02 International Conference Spatial Cognition (SC 2016) no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Beyond the border: Separation of space influences memory structure of an object layout 15017 15422 HintereckerLZBBM2016 7 T Hinterecker C Leroy M Zhao M Butz HH Bülthoff T Meilinger Philadelphia, PA, USA2016-08-02 International Conference Spatial Cognition (SC 2016) no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Gravity as a universal reference direction? Influences on spatial memory for vertical object locations 15017 15422 MolbertTMSBKZG2016_2 7 S Mölbert A Thaler B Mohler S Streuber MJ Black H-O Karnath S Zipfel KE Giel Tübingen, Germany2016-07-27 Virtual Environments: Current Topics in Psychological Research: VECTOR Workshop Anorexia nervosa (AN) is a serious eating disorder that goes along with underweight and high rates of psychological and physical comorbidity. Body image disturbance is a core symptom of AN, but the distinctive features of this disturbance are as yet unknown. This study uses individual 3D-avatars in virtual reality to investigate the following questions: (1) Do women with AN differ from controls in how accurately they perceive their body weight? (2) Do women with AN generally perceive bodies of their own shape differently than controls or only when viewing their own body? We investigate 25 women with AN and 25 healthy controls. Based on a 3D body scan, we create individual avatars for each participant. The avatar is manipulated to represent changes of ±5%, 10%, 15% and 20% of the participant’s weight. Additionally, for the control task, we manipulate the identity of the avatar using a standard texture. Avatars were presented on a stereoscopic life-size screen. In the two-alternative forced choice (2AFC) task, participants see each avatar 20 times for two seconds. After each presentation, they have to decide whether that was the correct or a manipulated avatar.
In the Method of Adjustment (MoA) task, participants are asked to adjust each avatar to match both the correct size and their ideal size. In the control task, participants memorize the body with standard texture and afterwards perform the same 2AFC and MoA tasks with respect to the memorized body. Additionally, eating pathology, body dissatisfaction and self-esteem are assessed. First results from 19 women with AN and 16 controls show a tendency of patients to be accurate or to underestimate their current body size as compared to controls. In the control task, both groups accurately memorized and estimated the avatar’s weight. Our preliminary results indicate that body image disturbance in AN is not due to a general deficit in body size perception, but is limited to one's own person and influenced by evaluation. no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Investigating Body Image Disturbance in Anorexia Nervosa Using Biometric Self-Avatars in Virtual Reality 15017 15017 15422 MeilingerSHCSFd2016 7 T Meilinger M Strickrodt T Hinterecker D-S Chang A Saulton L Fademrecht S de la Rosa Tübingen, Germany2016-07-27 Virtual Environments: Current Topics in Psychological Research: VECTOR Workshop The goal of social and spatial cognition is the understanding of human behavior when humans interact with their natural social and spatial environment. In contrast to this, many studies in the field examine social and spatial cognition under controlled but artificial conditions in which participants are passive observers rather than active agents. Here we present several projects in which we use virtual reality to increase the naturalness of the experimental testing conditions, while keeping the experimental setup under high experimental control. Due to the use of virtual reality and related techniques, participants are able to naturally interact with their environment (e.g. walk through spaces, high five with an avatar) while we alter the visual stimuli in real time in response to their behavior by means of motion tracking. Using this approach we combine experimental rigor with increased ecological validity to learn about the cognitive processes actually taking place in life. no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Using Virtual Reality to Examine Social and Spatial Cognition 15017 15422 FedorovG2016_2 7 LA Fedorov MA Giese Jeju, South Korea2016-07-00 89 Twenty-Fifth Annual Computational Neuroscience Meeting (CNS*2016) The visual perception of body motion can show interesting multi-stability. For example, a walking body silhouette (bottom inset Fig. 83A) is seen alternately as walking in two different directions [1]. For stimuli with minimal texture information, such as shading, this multi-stability disappears. Existing neural models for body motion perception [2–4] do not reproduce perceptual switching. Extending the model [2], we developed a neurodynamic model that accounts for this multi-stability (Fig. 83A). The core of the model is a two-dimensional neural field that consists of recurrently coupled neurons with selectivity for instantaneous body postures (‘snapshots’). The dimensions of the field encode the keyframe number θ and the view of the walker ϕ. The lateral connectivity of the field stabilizes two competing traveling pulse solutions that encode the perceived temporally changing action patterns (walking in the directions ±45°). The input activity of the field is generated by two visual pathways that recognize body postures from gray-level input movies.
One pathway (‘silhouette pathway’) was adapted from [2] and recognizes shapes, mainly based on the contrast edges between the moving figure and the background. The second pathway is specialized for the analysis of luminance gradients inside the moving figure. Both pathways are hierarchical (deep) architectures, built from detectors that reproduce known properties of cortical neurons. Higher levels of the hierarchies extract more complex features with a higher degree of position/scale invariance. The field activity is read out by two Motion Pattern (MP) neurons, which encode the two possible perceived walking directions. When tested with an unshaded silhouette stimulus, the model produces randomly switching percepts that alternate between the walking directions (±45°) (Fig. 83B, C). Addition of shading cues disambiguates the percept and removes the bistability (Fig. 83D). The developed architecture accounts for the disambiguation by shape-from-shading. no notspecified http://www.kyb.tuebingen.mpg.de/ published -89 A model for multi-stable dynamics in action recognition modulated by integration of silhouette and shading cues 15017 15422 HardcastleSBK2016 7 B Hardcastle DA Schwyn K Bierig HG Krapp London, UK2016-06-00 700 701 AVA Christmas Meeting 2015 The stabilization of gaze may involve multiple sensory systems. In blowflies, two visual pathways provide input to the gaze stabilization system: the high-resolution compound eyes and the simple dorsal ocelli. Individually, the corresponding pathways involved cover different dynamic input ranges, incur different processing delays, and suffer from different levels of sensor and processing noise. Information from multiple sensory pathways must be integrated in order to effect appropriate movements of the head to stabilize gaze; however, it is not entirely clear how this happens. Using high-speed videography, we investigated the combination of information from the two visual pathways at the behavioral output. We measured compensatory rotations of the head in response to a simulated roll rotation of a false horizon around the fly, oscillating at up to 10 Hz. We found that the ocellar input reduces the response delay by an average of 5 ms but does not significantly affect the response gain or bandwidth. Our result suggests a nonlinear integration of compound eye and ocellar information. We are now performing intracellular recordings from elements along the visuomotor pathway likely to be involved in the integration of motion vision and ocellar signals, in response to the same visual stimulus used to evoke head movements in our behavioral experiments. This will allow us to study how signals affected by different processing delays along the two visual pathways are combined to ultimately reduce the delay of the behavioral output. no notspecified http://www.kyb.tuebingen.mpg.de/ published 1 Integration of Multiple Visual Inputs in the Blowfly 15017 15422 YukselSBF2016 7 B Yüksel N Staub G Buondonno A Franchi Stockholm, Sweden2016-05-20 ICRA 2016 Workshop: Aerial Robotics Manipulation: from Simulation to Real-life An aerial manipulator is a flying robot that can manipulate its environment through physical interaction. In our previous works, we have studied the physical interaction of aerial vehicles using IDA-PBC and nonlinear external wrench observers [1, 2]. We also presented the design of a novel lightweight elastic-joint arm for quadrotors [3].
In this poster, we consider the type of aerial manipulator in which the aerial robot is equipped either with a rigid- or elastic-joint arm, or with multiple manipulator arms with elastic or rigid actuators. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/ICRA-Workshop-Poster-2016-Yueksel.pdf published 0 Differential Flatness and Control of the Aerial Manipulators with Mixed Rigid/Elastic Joints: Controllability from Single Joint Arm to the Multiple Arms 15017 15422 ThalerGMGSBM2016 7 A Thaler MN Geuss SC Mölbert KE Giel S Streuber MJ Black BJ Mohler St. Pete Beach, FL, USA2016-05-18 1400 16th Annual Meeting of the Vision Sciences Society (VSS 2016) Previous research has suggested that inaccuracies in own body size estimation can largely be explained by a known error in perceived magnitude, called contraction bias (Cornelissen, Bester, Cairns, Tovée & Cornelissen, 2015). According to this, own body size estimation is biased towards an average reference body, such that individuals with a low body mass index (BMI) should overestimate their body size and high BMI individuals should underestimate their body size. However, previous studies have mainly focused on self-body size evaluation of patients suffering from anorexia nervosa. In this study, we tested healthy females varying in BMI to investigate whether personal body size influences accuracy of body size estimation and sensitivity to weight changes, reproducing a scenario of standing in front of a full-length mirror. We created personalized avatars with a 4D full-body scanning system that records participants’ body geometry and texture, and altered the weight of the avatars based on a statistical body model. In two psychophysical experiments, we presented the stimuli on a stereoscopic, large-screen immersive display, and asked participants to respond to whether the body they saw was their own. Additionally, we used several questionnaires to assess participants’ self-esteem, eating behavior, and their attitudes towards their body shape and weight. Our results show that participants, across the range of BMI, veridically perceived their own body size, contrary to what is suggested by the contraction bias hypothesis. Interestingly, we found that BMI influenced sensitivity to weight changes in the positive direction, such that people with higher BMIs were more willing to accept bigger bodies as their own. BMI did not influence sensitivity to weight changes in the negative direction. no notspecified http://www.kyb.tuebingen.mpg.de/ published -1400 Investigating the influence of personal BMI on own body size perception in females using self-avatars 15017 15422 15017 GeussMTM2016 7 M Geuss SC Mölbert A Thaler BJ Mohler St. Pete Beach, FL, USA2016-05-17 986 16th Annual Meeting of the Vision Sciences Society (VSS 2016) Our perception of our body, and its size, is important for many aspects of everyday life. Using a variety of measures, previous research demonstrated that people typically overestimate the size of their bodies (Longo & Haggard, 2010). Given that self-body size perception is informed from many different experiences, it is surprising that people do not perceive their bodies veridically. Here, we asked whether different visual experiences of our bodies influence how large we estimate our body’s size.
Specifically, participants estimated the width of four different body parts (feet, hips, shoulders, and head) as well as a noncorporeal object with No Visual Access, Self-Observation (1st person visual access), or looking through a Mirror (2nd person visual access) using a visual matching task. If estimates made when given visual access (through a mirror or from a 1st-person perspective) differ from estimates made with no visual access, it would suggest that this method of viewing one’s body has less influence on how we represent the size of our bodies. Consistent with previous research, results demonstrated that in all conditions, each body part was overestimated. Interestingly, in the No Visual Access and Mirror conditions, the degree of overestimation was larger for upper body parts compared to lower body parts, and there were no significant differences between the No Visual Access and Mirror conditions. There was, however, a significant difference between the Self-Observation condition and the other two conditions when estimating one’s shoulder width. In the Self-Observation condition, participants were more accurate in estimating shoulder width. The similarity of results in the No Visual Access and Mirror conditions suggests that our representation of our body size may be partly based on experiences viewing one’s body in reflective surfaces. no notspecified http://www.kyb.tuebingen.mpg.de/ published -986 Body size estimations: the role of visual information from a first-person and mirror perspective 15017 15422 15017 DobsBR2016 7 K Dobs I Bülthoff L Reddy St. Pete Beach, FL, USA2016-05-16 925 16th Annual Meeting of the Vision Sciences Society (VSS 2016) Integration of multiple sensory cues pertaining to the same object is essential for precise and accurate perception. The optimal strategy to estimate an object’s property is to weight sensory cues proportional to their relative reliability (i.e., the inverse of the variance). Recent studies showed that human observers apply this strategy when integrating low-level unisensory and multisensory signals, but evidence for high-level perception remains scarce. Here we asked if human observers optimally integrate high-level visual cues in a socially critical task, namely the recognition of a face. We therefore had subjects identify one of two previously learned synthetic facial identities (“Laura” and “Susan”) using facial form and motion. Five subjects performed a 2AFC identification task (i.e., “Laura or Susan?”) based on dynamic face stimuli that systematically varied in the amount of form and motion information they contained about each identity (10% morph steps from Laura to Susan). In single-cue conditions one cue (e.g., form) was varied while the other (e.g., motion) was kept uninformative (50% morph). In the combined-cue condition both cues varied by the same amount. To assess whether subjects weight facial form and motion proportional to their reliability, we also introduced cue-conflict conditions in which both cues were varied but separated by a small conflict (±10%). We fitted psychometric functions to the proportion of “Susan” choices pooled across subjects (fixed-effects analysis) for each condition. As predicted by optimal cue integration, the empirical combined variance was lower than the single-cue variances (p < 0.001, bootstrap test), and did not differ from the optimal combined variance (p>0.5). Moreover, no difference was found between empirical and optimal form and motion weights (p>0.5).
Our data thus suggest that humans integrate high-level visual cues, such as facial form and motion, in proportion to their reliability to yield a coherent percept of a facial identity. no notspecified http://www.kyb.tuebingen.mpg.de/ published -925 Optimal integration of facial form and motion during face recognition 15017 15422 ZhaoB2016 7 M Zhao I Bülthoff St. Pete Beach, FL, USA2016-05-15 731 16th Annual Meeting of the Vision Sciences Society (VSS 2016) Holistic processing—the tendency to perceive objects as indecomposable wholes—has long been viewed as a process specific to faces or objects-of-expertise. While some researchers argue that holistic processing is unique to the processing of faces (domain-specific hypothesis), others propose that it results from an automatized attention strategy developed with expertise (i.e., expertise hypothesis). While these theories differ in what causes holistic processing, they share a fundamental constraint for its generalization: non-face objects cannot elicit face-like holistic processing in the absence of expertise. Contrary to this prevailing view, here we show that line patterns with salient Gestalt information (i.e., connectedness, closure, and continuity between parts) can be processed as holistically as faces without any training. This face-like holistic processing of non-face objects also occurred when we tested faces and line patterns in different sessions on different days, suggesting that it was not due to the context effect incurred by testing both types of stimuli within a single session. Moreover, weakening the saliency of Gestalt information in line patterns reduced holistic processing for these stimuli, indicating the crucial role of Gestalt information in eliciting holistic processing. Taken together, these results indicate that, besides a top-down route based on expertise, holistic processing can be achieved via a bottom-up route relying merely on object-based information. Therefore, face-like holistic processing can extend beyond the domains of faces and objects-of-expertise, contrary to currently dominant theories. no notspecified http://www.kyb.tuebingen.mpg.de/ published -731 Holistic Processing of Unfamiliar Line Patterns 15017 15422 FademrechtNBBd2016 7 L Fademrecht J Nieuwenhuis I Bülthoff N Barraclough S de la Rosa St. Pete Beach, FL, USA2016-05-14 280 16th Annual Meeting of the Vision Sciences Society (VSS 2016) In real life, humans need to recognize actions even if the actor is surrounded by a crowd of people, but little is known about action recognition in cluttered environments. In the current study, we investigated whether a crowd influences action recognition with an adaptation paradigm. Using life-sized moving stimuli presented on a panoramic display, 16 participants adapted to either a hug or a clap action and subsequently viewed an ambiguous test stimulus (a morph between both adaptors). The task was to categorize the test stimulus as either ‘clap’ or ‘hug’. The change in perception of the ambiguous action due to adaptation is referred to as an ‘adaptation aftereffect’. We tested the influence of a cluttered background (a crowd of people) on the adaptation aftereffect under three experimental conditions: ‘no crowd’, ‘static crowd’ and ‘moving crowd’. Additionally, we tested the adaptation effect at 0° and 40° eccentricity. Participants showed a significant adaptation aftereffect at both eccentricities (p < .001).
The results reveal that the presence of a crowd (static or moving) has no influence on the action adaptation effect (p = .07), either in central vision or in peripheral vision. Our results suggest that action recognition mechanisms and action adaptation aftereffects are robust even in complex and distracting environments. no notspecified http://www.kyb.tuebingen.mpg.de/ published -280 Does action recognition suffer in a crowded environment? 15017 15422 delaRosaFB2016 7 S de la Rosa Y Ferstl HH Bülthoff St. Pete Beach, FL, USA2016-05-14 268 16th Annual Meeting of the Vision Sciences Society (VSS 2016) It has been suggested that the motor system is essential for various social cognitive functions including the perception of actions in social interactions. Typically, the influence of the motor system on action recognition has been addressed in studies in which participants are merely action observers. This is in stark contrast to real social interactions in which humans often execute and observe actions at the same time. To overcome this discrepancy, we investigated the contribution of the motor system to action recognition when participants concurrently observed and executed actions. As a control, participants also observed and executed actions separately (i.e. not concurrently). Specifically, we probed the sensitivity of action recognition mechanisms to motor action information in both unimodal and bimodal motor-visual adaptation conditions. We found that unimodal visual adaptation to an action changed the percept of a subsequently presented ambiguous action away from the adapted action (adaptation aftereffect). We found a similar adaptation aftereffect in the unimodal non-visual motor adaptation condition, confirming that motor action information also contributes to action recognition. However, in bimodal adaptation conditions, in which participants executed and observed actions at the same time, adaptation aftereffects were governed by the visual but not the motor action information. Our results demonstrate that the contribution of the motor system to action recognition is small in conditions of simultaneous action observation and execution. Because humans often concurrently execute and observe actions in social interactions, our results suggest that action recognition in social interaction is mainly based on visual action information. no notspecified http://www.kyb.tuebingen.mpg.de/ published -268 Does the motor system contribute to action recognition in social interactions? 15017 15422 FichtnerHGZABK2016 7 N Fichtner A Henning I Giapitzakis N Zoelch N Avdievich C Boesch R Kreis Bern, Switzerland2016-03-31 36 11th Annual Meeting Brain Connectivity Introduction: Magnetic resonance spectroscopy benefits from using ultrahigh-field scanners, as both the signal-to-noise ratio (SNR) and the separation of peaks improve. Inclusion of the downfield part of the spectrum (left of the water peak) in addition to the generally used upfield part of the 1H MR spectrum is expected to allow for better monitoring of pathologies and metabolism in humans. The downfield part at 5–10 ppm is less well characterized than the upfield spectrum, although some data are available for animal brain at high fields, as well as human brain at 3T. Experiments have been performed to elucidate the downfield spectrum in human brain and to quantify metabolite relaxation times T1 and T2 in grey matter at 7T using series of spectra with variable inversion recovery (IR) and echo time (TE) delays.
Initial downfield experiments have also been performed in humans at 9.4T. Materials and Methods: Acquisition methods at 7T used a Philips 7T whole-body scanner (UZH/ETH Zürich), with a voxel of interest placed in the visual cortex. A series of TEs and IRs was acquired in a total of 22 healthy volunteers. At 9.4T, spectra were acquired in three healthy volunteers on a Siemens whole-body MRI scanner (MPI Tuebingen). Results and Discussion: The spectra acquired at 7T and 9.4T demonstrate significant improvements in SNR and peak separation compared to those at lower field strengths. The averaged data sets from the 7T series were combined to develop a spectral model of partially overlapping signals; this heuristic model describes the experimental data well, and the results for many of the peaks are very consistent across subjects. T1 values found at 7T are mostly higher than those found at 3T, in particular for the NAA peak. Several peaks show a particularly short T1 in comparison to the others, indicating that they predominantly originate from macromolecules. The T2 values are in general much shorter than those found for upfield peaks. no notspecified http://www.kyb.tuebingen.mpg.de/ published -36 Downfield MR Spectroscopy at Ultrahigh Magnetic Fields 15017 15422 BeierholmRSN2016 7 U Beierholm T Rohe O Stegle U Noppeney Salt Lake City, UT, USA2016-02-25 Computational and Systems Neuroscience Meeting (COSYNE 2016) Combining multiple sources of information requires an estimate of the reliability of each source in order to perform optimal information integration. The human brain is faced with this challenge whenever it processes multisensory stimuli; however, how the brain estimates the reliability of each source is unclear, with most studies assuming that the reliability is directly available. In practice, however, the reliability of an information source requires inference too, and may depend on both current and previous information, a problem that can neatly be placed in a Bayesian framework. We performed three audio-visual spatial localization experiments in which we manipulated the uncertainty of the visual stimulus over time. Subjects were presented with simultaneous auditory and visual cues in the horizontal plane and were tasked with locating the auditory cue. Due to the well-known ventriloquist illusion, responses were biased towards the visual cue, depending on its reliability. We found that subjects changed their estimate of the visual reliability not only based on the presented visual stimulus, but were also influenced by the history of visual stimuli. This finding implies that the estimated reliability is governed by a learning process, here operating on a timescale on the order of 10 seconds. Using model comparison, we found for all three experiments that a hierarchical Bayesian model that assumes a slowly varying reliability is best able to explain the data. Together these results indicate that the subjects’ estimated reliability of stimuli changes dynamically and thus that the brain utilizes the temporal dynamics of the environment by combining current and past estimates of reliability.
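The reliability-weighted updating described in this abstract can be made concrete with a small numerical sketch. The snippet below is illustrative only and assumes Gaussian cues; the function names, the exponential-smoothing rule and all parameter values are simplifications chosen for clarity, not the authors' hierarchical Bayesian model (which infers a slowly varying reliability rather than smoothing it).

```python
import numpy as np

def update_reliability(r_prev, visual_scatter, alpha=0.1):
    """Exponentially smoothed estimate of visual reliability (1/variance).
    With roughly one cue per second, alpha = 0.1 integrates over about 10 s,
    the timescale reported in the abstract above."""
    r_sample = 1.0 / max(visual_scatter ** 2, 1e-9)
    return (1 - alpha) * r_prev + alpha * r_sample

def locate_sound(x_aud, sigma_aud, x_vis, r_vis):
    """Reliability-weighted audio-visual position estimate (ventriloquist bias)."""
    r_aud = 1.0 / sigma_aud ** 2
    w_vis = r_vis / (r_vis + r_aud)  # visual weight grows with its reliability
    return w_vis * x_vis + (1 - w_vis) * x_aud

# Visual cues suddenly become blurry (scatter 1 deg -> 8 deg): the bias
# towards the visual cue should fade gradually over trials, not instantly.
r_vis = 1.0 / 1.0 ** 2
for trial in range(20):
    r_vis = update_reliability(r_vis, visual_scatter=8.0)
    estimate = locate_sound(x_aud=0.0, sigma_aud=5.0, x_vis=10.0, r_vis=r_vis)
    print(f"trial {trial + 1:2d}: perceived sound at {estimate:4.1f} deg")
```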
no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Using the past to estimate sensory uncertainty 15017 15422 Nooij2016 7 SAE Nooij Ulm, Germany2016-02-06 26th Oculomotor Meeting no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 The role of eye movements, head movements and vection in visually induced motion sickness 15017 15422 NestidB2016 7 A Nesti K de Winkel HH Bülthoff Ulm, Germany2016-02-05 26th Oculomotor Meeting no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Accumulation of sensory evidence in self-motion perception: long stimulus exposure facilitates discrimination of sinusoidal yaw rotations 15017 15422 D039Intino2016 15 G D'Intino 2016-09-29 no notspecified published Design and Experimental Evaluation of Haptic Support Systems for Pilot Training 15017 15422 Olivari2016 15 M Olivari 2016-06-00 no notspecified published Measuring Pilot Control Behavior in Control Tasks with Haptic Feedback 15017 15422 Bulthoff2016_7 10 HH Bülthoff Nesti2016 10 A Nesti Bulthoff2016_4 10 HH Bülthoff Scheer2016 10 M Scheer Chuang2016 10 LL Chuang Glatz2016 10 C Glatz Nesti2016_2 10 A Nesti Bulthoff2016_5 10 HH Bülthoff GlatzBC2016 10 C Glatz HH Bülthoff L Chuang ZhaoB2016_3 10 M Zhao I Bülthoff ChuangS2016 10 L Chuang M Scheer Chuang2016_3 10 L Chuang Chuang2016_2 10 LL Chuang Pretto2016 10 P Pretto FedorovG2016 10 LA Fedorov MA Giese Meilinger2016 10 C Horeis C Foster K Watanabe HH Bülthoff T Meilinger ChangFGBd2016 10 D-S Chang L Fedorov M Giese HH Bülthoff S de la Rosa DobsR2016 10 K Dobs L Reddy ChangBd2016 10 D-S Chang HH Bülthoff S de la Rosa Thaler2016 10 A Thaler Meilinger2016_2 10 T Meilinger Bulthoff2016_3 10 HH Bülthoff J Venrooij deWinkelB2016 10 KN de Winkel HH Bülthoff BulthoffV2016 10 HH Bülthoff J Venrooij LohmannKMB2016 10 J Lohmann J Kurz T Meilinger MV Butz HintereckerLBBM2016 10 T Hinterecker C Leroy MV Butz HH Bülthoff T Meilinger VenrooijB2016 10 J Venrooij HH Bülthoff Bulthoff2016_2 10 HH Bülthoff MeilingerRHBM2016 10 T Meilinger J Rebane A Henson HH Bülthoff HA Mallot Bulthoff2016 10 HH Bülthoff Dobs2015 1 K Dobs Logos Verlag Berlin, Germany 2015-00-00 Dynamic faces are highly complex, ecologically and socially relevant stimuli which we encounter almost every day. When and what we extract from this rich source of information needs to be well coordinated by the face perception system. The current thesis investigates how this coordination is achieved. Part I comprises two psychophysical experiments examining the mechanisms underlying facial motion processing. Facial motion is represented as high-dimensional spatio-temporal data defining which part of the face is moving in which direction over time. Previous studies suggest that facial motion can be adequately represented using simple approximations. I argue against the use of synthetic facial motion by showing that the face perception system is highly sensitive to manipulations of the natural spatio-temporal characteristics of facial motion. The neural processes coordinating facial motion processing may rely on two mechanisms: first, a sparse but meaningful spatio-temporal code representing facial motion; second, a mechanism that extracts distinctive motion characteristics. Evidence for the latter hypothesis is provided by the observation that facial motion, when performed in unconstrained contexts, aids identity judgments. Part II presents a functional magnetic resonance imaging (fMRI) study investigating the neural processing of expression and identity information in dynamic faces.
Previous studies proposed a distributed neural system for face perception which distinguishes between invariant (e.g., identity) and changeable (e.g., expression) aspects of faces. Attention is a potential candidate mechanism to coordinate the processing of these two facial aspects. Two findings support this hypothesis: first, attention to expression versus identity of dynamic faces dissociates cortical areas assumed to process changeable aspects from those involved in discriminating invariant aspects of faces; second, attention leads to a more precise neural representation of the attended facial feature. Interactions between these two representations may be mediated by a part of the inferior occipital gyrus and the superior temporal sulcus, which is supported by the observation that the latter area represented both expression and identity, while the former represented identity information irrespective of the attended feature. no notspecified http://www.kyb.tuebingen.mpg.de/ published 108 Behavioral and Neural Mechanisms Underlying Dynamic Face Perception 15017 15422 Esins2015 1 J Esins Logos Verlag Berlin, Germany 2015-00-00 Face recognition is one of the most important abilities for everyday social interactions. Congenital prosopagnosia, also referred to as "face blindness", describes the innate, lifelong impairment in recognizing other people by their face. About 2% of the population is affected. This thesis aimed to investigate different aspects of face processing in prosopagnosia in order to gain a clearer picture and a better understanding of this heterogeneous impairment. In a first study, various aspects of face recognition and perception were investigated to allow for a better understanding of the nature of prosopagnosia. The results replicated previous findings and helped to resolve discrepancies between former studies. In addition, it was found that prosopagnosics show an irregular response behavior in tests for holistic face recognition. We propose that prosopagnosics either switch between strategies or respond randomly when performing these tests. In a second study, the general face recognition deficit observed in prosopagnosia was compared to face recognition deficits occurring when dealing with other-race faces. Most humans find it hard to recognize faces of an unfamiliar race, a phenomenon called the "other-race effect". The study served to investigate if there is a possible common mechanism underlying prosopagnosia and the other-race effect, as both are characterized by problems in recognizing faces. The results allowed us to reject this hypothesis, and yielded new insights about similarities and dissimilarities between prosopagnosia and the other-race effect. In the last study, a possible treatment of prosopagnosia was investigated. This was based on a single case in which a prosopagnosic reported a sudden improvement of her face recognition abilities after she started a special diet. The different studies cover diverse aspects of prosopagnosia: the nature of prosopagnosia and measurement of its characteristics, comparison to other face recognition impairments, and treatment options. The results serve to broaden the knowledge about prosopagnosia and to gain a more detailed picture of this impairment. no notspecified http://www.kyb.tuebingen.mpg.de/ published 137 Face processing in congenital prosopagnosia 15017 15422 Venrooij2015 1 J Venrooij Logos Verlag Berlin, Germany 2015-00-00 Vehicle accelerations affect the human body in various ways.
In some cases, accelerations cause involuntary motions of limbs like arms and hands. If someone is engaged in a manual control task at the same time, these involuntary limb motions can lead to involuntary control forces and control inputs. This phenomenon is called biodynamic feedthrough (BDFT). The control of many different vehicles is known to be vulnerable to BDFT effects, such as that of helicopters, aircraft, electric wheelchairs and hydraulic excavators. The fact that BDFT reduces comfort, control performance and safety in a wide variety of vehicles and under many different circumstances has motivated numerous efforts into measuring, modeling and mitigating these effects. It is known that BDFT dynamics depend on vehicle dynamics and control device dynamics, but also on factors such as seating dynamics, disturbance direction, disturbance frequency and the presence of seat belts and arm rests. The most complex and influential factor in BDFT is the human body. It is through the human body dynamics that the vehicle accelerations are transferred into involuntary limb motions and, consequently, into involuntary control inputs. Human body dynamics vary between persons with different body sizes and weights, but also within one person over time. The goal of the research was to increase the understanding of BDFT to allow for effective and efficient mitigation of the BDFT problem. The work dealt with several aspects of biodynamic feedthrough, but focused on the influence of the variable neuromuscular dynamics on BDFT dynamics. The approach of the research consisted of three parts: first, a method was developed to accurately measure BDFT. Then, several BDFT models were developed that describe the BDFT phenomenon based on various different principles. Finally, using the insights from the previous steps, a novel approach to BDFT mitigation was proposed and experimentally validated. no notspecified http://www.kyb.tuebingen.mpg.de/ published 440 Measuring, modeling and mitigating biodynamic feedthrough 15017 15422 Nesti2015 1 A Nesti Logos Verlag Berlin, Germany 2015-00-00 Everyday life requires humans to move through the environment while completing crucial tasks such as retrieving nourishment, avoiding perils or controlling motor vehicles. Success in these tasks largely relies on a correct perception of self-motion, i.e. the continuous estimation of one's body position and its derivatives with respect to the world. The processes underlying self-motion perception have fascinated neuroscientists for more than a century, and a large body of neural, behavioural and physiological studies has been conducted to discover how the central nervous system integrates available sensory information to create an internal representation of the physical motion. The goal of this PhD thesis is to extend current knowledge on self-motion perception by focusing on conditions that closely resemble typical aspects of everyday life. In the works conducted within this thesis, I isolate different components typical of everyday life motion and employ psychophysical methodologies to systematically investigate their effect on human self-motion sensitivity. Particular attention is dedicated to the human ability to discriminate between motions of different intensity. How this is achieved has been a fundamental question in the study of perception since the seminal works of Weber and Fechner. When tested over wide ranges of rotations and translations, participants' sensitivity (i.e.
their ability to detect motion changes) is found to decrease with increasing motion intensities, revealing a nonlinearity in the perception of self-motion that is not present at the level of ocular reflexes or in neural responses of sensory afferents. The relationship between the stimulus intensity and the smallest intensity change perceivable by the participants can be mathematically described by a power law, regardless of the sensory modality investigated (visual or inertial) and of whether visual and inertial cues were presented alone or congruently combined, such as during natural movements. Individual perceptual law parameters were fit to experimental data for upward and downward translations and yaw rotations, based on visual-only, inertial-only and combined visual-inertial motion cues. Besides wide ranges of motion intensities, everyday life scenarios also provide complex motion patterns involving combinations of rotational and translational motion, visual and inertial sensory cues and physical and mental workload. The question of how different combinations of these factors affect motion sensitivity was experimentally addressed within the framework of driving simulation and revealed that sensitivity might strongly decrease in more realistic conditions, where participants do not only focus on perceiving a 'simple' motion stimulus (e.g. a sinusoidal profile at a specific frequency) but are, instead, actively engaged in a dynamic driving simulation. Applied benefits of the present thesis include advances in the field of vehicle motion simulation, where knowledge on human self-motion perception supports the development of state-of-the-art algorithms to control simulator motion. This allows for reproducing, within a safe and controlled environment, driving or flying experiences that are perceptually realistic to the user. Furthermore, the present work will guide future research into the neural basis of perception and action. no notspecified http://www.kyb.tuebingen.mpg.de/ published 211 On the Perception of Self-motion: from Perceptual Laws to Car Driving Simulation 15017 15422 LeeBM2015 1 S-W Lee HH Bülthoff K-R Müller Springer Dordrecht, The Netherlands 2015-00-00 no notspecified http://www.kyb.tuebingen.mpg.de/ published 213 Recent Progress in Brain and Cognitive Engineering 15017 15422 Piryankov2015 1 I Piryankova Logos Verlag Berlin, Germany 2015-00-00 Technological advances in computer graphics, three-dimensional scanning and motion-tracking technologies have contributed to an increased use of self-avatars in immersive virtual reality (VR). Self-avatars are used, for example, in visualization and simulation, but also in clinical applications and for entertainment purposes. It is therefore important to gain new insights into the perception of one's own body, of the self-avatar and of the user's spatial perception, as well as to investigate the influence of the self-avatar on spatial perception in the virtual world. Using modern VR technology, I investigated how changes to the self-avatar alter the perception of one's own body and of space. The results show that self-avatars need not have exactly the same dimensions as the user's body for the user to identify with their self-avatar.
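The perceptual power law referred to in the Nesti thesis above relates the differential threshold to stimulus intensity as dI = k * I^a. As a worked illustration, the sketch below evaluates this law with the modality-specific exponents reported in the visual-inertial yaw-rotation study further below (0.36, 0.62 and 0.49); the constant k and the tested intensities are illustrative values, not fitted parameters.

```python
import numpy as np

def differential_threshold(intensity, k, a):
    """Weber-like power law: the smallest perceivable intensity change
    grows with stimulus intensity as dI = k * I**a. a = 0 would give
    constant thresholds; a = 1 gives Weber's law (dI proportional to I)."""
    return k * intensity ** a

velocities = np.array([15.0, 30.0, 45.0, 60.0])  # deg/s, as in the yaw study
for label, a in [("inertial", 0.36), ("visual", 0.62), ("visual-inertial", 0.49)]:
    # k = 1.0 is a placeholder; only the growth with intensity matters here.
    print(label, np.round(differential_threshold(velocities, k=1.0, a=a), 2))
```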
no notspecified http://www.kyb.tuebingen.mpg.de/ published 143 The influence of a self-avatar on space and body perception in immersive virtual reality 15017 15422 Kaulard2015 1 K Kaulard Logos Verlag Berlin, Germany 2015-00-00 One of the defining attributes of the human species is sophisticated communication, for which facial expressions are crucial. Traditional research has so far mainly investigated a small set of 6 basic emotional expressions displayed as pictures. Despite the important insights of this approach, its ecological validity is limited: facial movements express more than emotions, and facial expressions are more than just pictures. The objective of the present thesis is therefore to improve the understanding of facial expression recognition by investigating the internal representations of a large range of facial expressions, displayed both as static pictures and as dynamic videos. To this end, it was necessary to develop and validate a new facial expression database which includes 20,000 stimuli of 55 expressions (study 1). Perceptual representations of the six basic emotional expressions were found previously to rely on evaluation of valence and arousal; study 2 showed that this evaluation generalises to many more expressions, particularly when displayed as videos. While it is widely accepted that knowledge influences perception, how these are linked is largely unknown; study 3 investigated this question by asking how knowledge about facial expressions, instantiated as conceptual representations, relates to perceptual representations of these expressions. A strong link was found, which changed with the kind of expressions and the type of display. In probably the most extensive behavioural studies (with regard to the number of facial expressions used) to date, this thesis suggests that there are commonalities but also differences in the processing of emotional and of other types of facial expressions. Thus, to understand facial expression processing, one needs to consider more than the 6 basic emotional expressions. These findings outline first steps towards a new domain in facial expression research, which has implications for a number of research and application fields where facial expressions play a role, ranging from social, developmental, and clinical psychology to computer vision and affective computing research. no notspecified http://www.kyb.tuebingen.mpg.de/ published 224 Visual Perception of Emotional and Conversational Facial Expressions 15017 15422 TrutoiuGKSM2015 28 L Trutoiu M Geuss S Kuhl B Sanders R Mantiuk BulthoffKP2015 28 HH Bülthoff A Kemeny P Pretto NestiBPB2015 3 A Nesti KA Beykirch P Pretto HH Bülthoff 2015-12-00 12 233 3553 3564 Experimental Brain Research To successfully perform daily activities such as maintaining posture or running, humans need to be sensitive to self-motion over a large range of motion intensities. Recent studies have shown that the human ability to discriminate self-motion in the presence of either inertial-only motion cues or visual-only motion cues is not constant but rather decreases with motion intensity. However, these results do not yet allow for a quantitative description of how self-motion is discriminated in the presence of combined visual and inertial cues, since little is known about visual–inertial perceptual integration and the resulting self-motion perception over a wide range of motion intensity.
Here we investigate these two questions for head-centred yaw rotations (0.5 Hz) presented either in darkness or combined with visual cues (optical flow with limited lifetime dots). Participants discriminated a reference motion, repeated unchanged for every trial, from a comparison motion, iteratively adjusted in peak velocity so as to measure the participants’ differential threshold, i.e. the smallest perceivable change in stimulus intensity. A total of six participants were tested at four reference velocities (15, 30, 45 and 60 °/s). Results are combined for further analysis with previously published differential thresholds measured for visual-only yaw rotation cues using the same participants and procedure. Overall, differential thresholds increase with stimulus intensity following a trend described well by three power functions with exponents of 0.36, 0.62 and 0.49 for inertial, visual and visual–inertial stimuli, respectively. Despite the different exponents, differential thresholds do not depend significantly on the type of sensory input, suggesting that combining visual and inertial stimuli does not lead to improved discrimination performance over the investigated range of yaw rotations. no notspecified http://www.kyb.tuebingen.mpg.de/ published 11 Human discrimination of head-centred visual–inertial yaw rotations 15017 15422 OlivariNVBP2013 3 M Olivari F Nieuwenhuizen J Venrooij HH Bülthoff L Pollini 2015-12-00 12 45 2780 2791 IEEE Transactions on Cybernetics In this paper, identification methods are proposed to estimate the neuromuscular and visual responses of a multiloop pilot model. A conventional and widely used technique for simultaneous identification of the neuromuscular and visual systems makes use of cross-spectral density estimates. This paper shows that this technique requires a specific noninterference hypothesis, often implicitly assumed, that may be difficult to meet during actual experimental designs. A mathematical justification of the necessity of the noninterference hypothesis is given. Furthermore, two methods are proposed that do not have the same limitations. The first method is based on autoregressive models with exogenous inputs, whereas the second one combines cross-spectral estimators with interpolation in the frequency domain. The two identification methods are validated by offline simulations and contrasted to the classic method. The results reveal that the classic method fails when the noninterference hypothesis is not fulfilled; on the contrary, the two proposed techniques give reliable estimates. Finally, the three identification methods are applied to experimental data from a closed-loop control task with pilots. The two proposed techniques give comparable estimates, different from those obtained by the classic method. The differences match those found with the simulations. Thus, the two identification methods provide a good alternative to the classic method and make it possible to simultaneously estimate a human’s neuromuscular and visual responses in cases where the classic method fails. no notspecified http://www.kyb.tuebingen.mpg.de/ published 11 Methods for Multiloop Identification of Visual and Neuromuscular Pilot Responses 15017 15422 ChangBBd2015_2 3 D-S Chang F Burger HH Bülthoff S de la Rosa 2015-12-00 6 6 1 6 i-Perception Perceiving social information such as the cooperativeness of another person is an important part of human interaction. But can people perceive the cooperativeness of others even without any visual or auditory information?
In a novel experimental setup, we connected two people with a rope and had them complete a point-collecting task together while they could not see or hear each other. We observed a consistently emerging turn-taking behavior in the interactions and introduced a confederate in a subsequent experiment who either minimized or maximized this behavior. Participants experienced this only through the haptic force-feedback of the rope and made evaluations about the confederate after each interaction. We found that perception of cooperativeness was significantly affected only by the manipulation of this turn-taking behavior. Gender- and size-related judgments also significantly differed. Our results suggest that people can perceive social information such as the cooperativeness of other people even in situations where possibilities for communication are minimal. no notspecified http://www.kyb.tuebingen.mpg.de/ published 5 The Perception of Cooperativeness Without Any Visual or Auditory Communication 15017 15422 StrickrodtOW2015 3 M Strickrodt M O'Malley JM Wiener 2015-12-00 1936 6 1 12 Frontiers in Psychology We present two experiments investigating how navigators deal with ambiguous landmark information when learning unfamiliar routes. In the experiments we presented landmark objects repeatedly along a route, which allowed us to manipulate how informative single landmarks were (1) about the navigators' location along the route and (2) about the action navigators had to take at that location. Experiment 1 demonstrated that reducing location informativeness alone did not affect route learning performance. While reducing both location and action informativeness led to decreased route learning performance, participants still performed well above chance level. This demonstrates that they used other information than just the identity of landmark objects at their current position to disambiguate their location along the route. To investigate how navigators distinguish between visually identical intersections, we systematically manipulated the identity of landmark objects and the actions required at preceding intersections in Experiment 2. Results suggest that the direction of turn at the preceding intersections was sufficient to tell two otherwise identical intersections apart. Together, results from Experiments 1 and 2 suggest that route knowledge is more complex than simple stimulus-response associations and that neighboring places are tightly linked. These links not only encompass sequence information but also directional information which is used to identify the correct direction of travel at subsequent locations, but can also be used for self-localization. no notspecified http://www.kyb.tuebingen.mpg.de/ published 11 This Place Looks Familiar: How Navigators Distinguish Places with Ambiguous Landmark Objects When Learning Novel Routes 15017 15422 GianiBOKN2015 3 AS Giani P Belardinelli E Ortiz M Kleiner U Noppeney 2015-11-00 122 203–213 NeuroImage In everyday life, our auditory system is bombarded with many signals in complex auditory scenes. Limited processing capacities allow only a fraction of these signals to enter perceptual awareness. This magnetoencephalography (MEG) study used informational masking to identify the neural mechanisms that enable auditory awareness. On each trial, participants indicated whether they detected a pair of sequentially presented tones (i.e., the target) that were embedded within a multi-tone background.
We analysed MEG activity for ‘hits’ and ‘misses’, separately for the first and second tones within a target pair. Comparing physically identical stimuli that were detected or missed provided insights into the neural processes underlying auditory awareness. While the first tone within a target elicited a stronger early P50m on hit trials, only the second tone evoked a negativity at 150 ms, which may index segregation of the tone pair from the multi-tone background. Notably, a later sustained deflection peaking around 300 and 500 ms (P300m) was the only component that was significantly amplified for both tones when they were detected, pointing towards its key role in perceptual awareness. Additional Dynamic Causal Modelling analyses indicated that the negativity at 150 ms underlying auditory stream segregation is mediated predominantly via changes in intrinsic connectivity within auditory cortices. By contrast, the later P300m response as a signature of perceptual awareness relies on interactions between parietal and auditory cortices. In conclusion, our results suggest that successful detection and hence auditory awareness of a two-tone pair within complex auditory scenes rely on recurrent processing between auditory and higher-order parietal cortices. no notspecified http://www.kyb.tuebingen.mpg.de/ published -203 Detecting tones in complex auditory scenes 15017 15422 15017 18826 GeussSCTM2015 3 MN Geuss JK Stefanucci SH Creem-Regehr WB Thompson BJ Mohler 2015-11-00 7 57 1235 1247 Human Factors Objective: Our goal was to evaluate the degree to which display technologies influence the perception of size in an image. Background: Research suggests that factors such as whether an image is displayed stereoscopically, whether a user’s viewpoint is tracked, and the field of view of a given display can affect users’ perception of scale in the displayed image. Method: Participants directly estimated the size of a gap by matching the distance between their hands to the gap width and judged their ability to pass unimpeded through the gap in one of five common implementations of three display technologies (two head-mounted displays [HMD] and a back-projection screen). Results: Both measures of gap width were similar for the two HMD conditions and the back projection with stereo and tracking. For the displays without tracking, stereo and monocular conditions differed from each other, with monocular viewing showing underestimation of size. Conclusions: Display technologies that are capable of stereoscopic display and tracking of the user’s viewpoint are beneficial, as perceived size does not differ from real-world estimates. Evaluations of different display technologies are necessary as display conditions vary and the availability of different display technologies continues to grow. Applications: The findings are important to those using display technologies for research, commercial, and training purposes when it is important for the displayed image to be perceived at an intended scale. no notspecified http://www.kyb.tuebingen.mpg.de/ published 12 Effect of Display Technology on Perceived Scale of Space 15017 15422 15017 MeilingerFWBH2014_2 3 T Meilinger J Frankenstein K Watanabe HH Bülthoff C Hölscher 2015-11-00 6 79 1000 1008 Psychological Research In everyday life, navigators often consult a map before they navigate to a destination (e.g., a hotel, a room, etc.). However, not much is known about how humans gain spatial knowledge from seeing a map and direct navigation together.
In the present experiments, participants learned a simple multi-corridor space either from a map only, only from walking through the virtual environment, first from the map and then from navigation, or first from navigation and then from the map. Afterwards, they conducted a pointing task from multiple body orientations to infer the underlying reference frames. We constructed the learning experiences in such a way that map-only learning and navigation-only learning triggered spatial memory organized along different reference frame orientations. When learning from maps before and during navigation, participants employed a map- rather than a navigation-based reference frame in the subsequent pointing task. Consequently, maps caused the employment of a map-oriented reference frame found in memory for highly familiar urban environments, ruling out explanations from environmental structure or north preference. When learning from navigation first and then from the map, the pattern of results reversed and participants employed a navigation-based reference frame. The priority of learning order suggests that, despite considerable differences between map and navigation learning, participants did not use the more salient or the generally more useful information, but relied on the reference frame established first. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/Psychol-Res-2015-Meilinger.pdf published 8 Reference frames in learning from maps and navigation 15017 15422 KrimmelBBMDBRK2015 3 M Krimmel M Breidt M Bacher S Müller-Hagedorn K Dietz HH Bülthoff S Reinert S Kluba 2015-10-00 4 136 490e 501e Plastic & Reconstructive Surgery BACKGROUND: With the advent of computer-assisted three-dimensional surface imaging and rapid data processing, oral and maxillofacial surgeons and orthodontists are enabled to analyze facial growth three dimensionally. Normative data, however, are still rare and inconsistent. The aim of the present study was to establish a valid reference system and to give normative data for facial growth. METHODS: Three-dimensional facial surface images were obtained from 344 healthy Caucasian children (aged 0 to 7 years). The images were put in correspondence by means of six landmarks close to the skull base (exocanthion, endocanthion, otobasion inferius). Growth curves for 21 landmarks were estimated in the three dimensions. RESULTS: Facial regions close to the skull base (orbit and ear) showed a biphasic growth pattern, with accelerated growth during the first year of life that subsided to a decreased and linear velocity thereafter. Landmarks on the nose, lips, and chin demonstrated either a curvilinear or a linear growth pattern. CONCLUSIONS: The rapid increase of the orbit and ear region in infancy is a secondary phenomenon to the rapid growth of the neurocranium during the first year of life. Thereafter, maxillary and mandibular growth prevails. The present study gives three-dimensional normative data for an expanded growth span between birth and childhood. no notspecified http://www.kyb.tuebingen.mpg.de/ published 11 Three-Dimensional Normal Facial Growth from Birth to the Age of 7 Years 15017 15422 BiegCBB2015 3 H-J Bieg LL Chuang HH Bülthoff J-P Bresciani 2015-09-00 9 233 2527 2538 Experimental Brain Research Before initiating a saccade to a moving target, the brain must take into account the target’s eccentricity as well as its movement direction and speed.
We tested how the kinematic characteristics of the target influence the time course of this oculomotor response. Participants performed a step-ramp task in which the target object stepped from a central to an eccentric position and moved at constant velocity either to the fixation position (foveopetal) or further to the periphery (foveofugal). The step size and target speed were varied. Of particular interest were trials that exhibited an initial saccade prior to a smooth pursuit eye movement. Measured saccade reaction times were longer in the foveopetal than in the foveofugal condition. In the foveopetal (but not the foveofugal) condition, the occurrence of an initial saccade, its reaction time as well as the strength of the pre-saccadic pursuit response depended on both the target’s speed and the step size. A common explanation for these results may be found in the neural mechanisms that select between oculomotor response alternatives, i.e., a saccadic or smooth response. no notspecified http://www.kyb.tuebingen.mpg.de/ published 11 Asymmetric saccade reaction times to smooth pursuit 15017 15422 ZhaoW2015_2 3 M Zhao WH Warren 2015-09-00 142 96–109 Cognition Path integration has long been thought of as an obligatory process that automatically updates one’s position and orientation during navigation. This has led to the hypotheses that path integration serves as a back-up system in case landmark navigation fails, and as a reference system that detects discrepant landmarks. Three experiments tested these hypotheses in humans, using a homing task with a catch-trial paradigm. Contrary to the back-up system hypothesis, when stable landmarks unexpectedly disappeared on catch trials, participants were completely disoriented, and only then began to rely on path integration in subsequent trials (Experiment 1). Contrary to the reference system hypothesis, when stable landmarks unexpectedly shifted by 115° on catch trials, participants failed to detect the shift and were completely captured by the landmarks (Experiment 2). Conversely, when chronically unstable landmarks unexpectedly remained in place on catch trials, participants failed to notice and continued to navigate by path integration (Experiment 3). In the latter two cases, they gradually sensed the instability (or stability) of landmarks on later catch trials. These results demonstrate that path integration does not automatically serve as a back-up system, and does not function as a reference system on individual sorties, although it may contribute to monitoring environmental stability over time. Rather than being automatic, the roles of path integration and landmark navigation are thus dynamically modulated by the environmental context. no notspecified http://www.kyb.tuebingen.mpg.de/ published -96 Environmental stability modulates the role of path integration in human navigation 15017 15422 StefanucciCTLG2015 3 JK Stefanucci SH Creem-Regehr WB Thompson DA Lessard MN Geuss 2015-09-00 3 21 215 223 Journal of Experimental Psychology: Applied Accurate perception of the size of objects in computer-generated imagery is important for a growing number of applications that rely on absolute scale, such as medical visualization and architecture. Addressing this problem requires both the development of effective evaluation methods and an understanding of what visual information might contribute to differences between virtual displays and the real world.
In the current study, we use 2 affordance judgments—perceived graspability of an object or reaching through an aperture—to compare size perception in high-fidelity graphical models presented on a large screen display to the real world. Our goals were to establish the use of perceived affordances within spaces near to the observer for evaluating computer graphics and to assess whether the graphical displays were perceived similarly to the real world. We varied the nature of the affordance task and whether or not the display enabled stereo presentation. We found that judgments of grasping and reaching through can be made effectively with screen-based displays. The affordance judgments revealed that sizes were perceived as smaller than in the real world. However, this difference was reduced when stereo viewing was enabled or when the virtual display was viewed before the real world. no notspecified http://www.kyb.tuebingen.mpg.de/ published 8 Evaluating the accuracy of size perception on screen-based displays: Displayed objects appear smaller than real objects 15017 15422 SimHGK2014 3 E-J Sim HB Helbig M Graf M Kiefer 2015-09-00 9 25 2907 2918 Cerebral Cortex Recent evidence suggests an interaction between the ventral visual-perceptual and dorsal visuo-motor brain systems during the course of object recognition. However, the precise function of the dorsal stream for perception remains to be determined. The present study specified the functional contribution of the visuo-motor system to visual object recognition using functional magnetic resonance imaging (fMRI) and event-related potentials (ERPs) during action priming. Primes were movies showing hands performing an action with an object, with the object itself erased from the movie, followed by a manipulable target object, which either afforded a similar or a dissimilar action (congruent vs. incongruent condition). Participants had to recognize the target object within a picture–word matching task. Priming-related reductions of brain activity were found in frontal and parietal visuo-motor areas as well as in ventral regions including inferior and anterior temporal areas. Effective connectivity analyses suggested functional influences of parietal areas on anterior temporal areas. ERPs revealed priming-related source activity in visuo-motor regions at about 120 ms and later activity in the ventral stream at about 380 ms. Hence, rapidly initiated visuo-motor processes within the dorsal stream functionally contribute to visual object recognition in interaction with ventral stream processes dedicated to visual analysis and semantic integration. no notspecified http://www.kyb.tuebingen.mpg.de/ published 11 When Action Observation Facilitates Visual Perception: Activation in Visuo-Motor Areas Contributes to Object Recognition 15017 18824 15017 15422 SoykaBB2015 3 F Soyka HH Bülthoff M Barnett-Cowan 2015-08-00 8 10 1 14 PLoS ONE Humans are capable of moving about the world in complex ways. Every time we move, our self-motion must be detected and interpreted by the central nervous system in order to make appropriate sequential movements and informed decisions. The vestibular labyrinth consists of two unique sensory organs, the semi-circular canals and the otoliths, which are specialized to detect rotation and translation of the head, respectively. While thresholds for pure rotational and translational self-motion are well understood, surprisingly little research has investigated the relative role of each organ in thresholds for more complex motion.
Eccentric (off-center) rotations, during which the participant faces away from the center of rotation, stimulate both organs and are thus well suited for investigating the integration of rotational and translational sensory information. Ten participants completed a psychophysical direction discrimination task for pure head-centered rotations, translations and eccentric rotations with 5 different radii. Discrimination thresholds for eccentric rotations reduced with increasing radii, indicating that additional tangential accelerations (which increase with radius length) increased sensitivity. Two competing models were used to predict the eccentric thresholds based on the pure rotation and translation thresholds: one assuming that information from the two organs is integrated in an optimal fashion and another assuming that motion discrimination is solved solely by relying on the sensor which is most strongly stimulated. Our findings clearly show that information from the two organs is integrated. However, the measured thresholds for 3 of the 5 eccentric rotations are even lower than the predictions from the optimal integration model, suggesting that additional non-vestibular sources of information may be involved. no notspecified http://www.kyb.tuebingen.mpg.de/ published 13 Integration of Semi-Circular Canal and Otolith Cues for Direction Discrimination during Eccentric Rotations 15017 15422 GrabeBSR2015 3 V Grabe HH Bülthoff D Scaramuzza P Robuffo Giordano 2015-07-00 8 34 1114 1135 International Journal of Robotics Research For the control of unmanned aerial vehicles (UAVs) in GPS-denied environments, cameras have been widely exploited as the main sensory modality for addressing the UAV state estimation problem. However, the use of visual information for ego-motion estimation presents several theoretical and practical difficulties, such as data association, occlusions, and lack of direct metric information when exploiting monocular cameras. In this paper, we address these issues by considering a quadrotor UAV equipped with an onboard monocular camera and an inertial measurement unit (IMU). First, we propose a robust ego-motion estimation algorithm for recovering the UAV scaled linear velocity and angular velocity from optical flow by exploiting the so-called continuous homography constraint in the presence of planar scenes. Then, we address the problem of retrieving the (unknown) metric scale by fusing the visual information with measurements from the onboard IMU. To this end, two different estimation strategies are proposed and critically compared: a first one exploiting the classical extended Kalman filter (EKF) formulation, and a second one based on a novel nonlinear estimation framework. The main advantage of the latter scheme lies in the possibility of imposing a desired transient response to the estimation error when the camera moves with a constant acceleration norm with respect to the observed plane. We indeed show that, when compared against the EKF on the same trajectory and sensory data, the nonlinear scheme yields considerably superior performance in terms of convergence rate and predictability of the estimation. The paper is then concluded by an extensive experimental validation, including an onboard closed-loop control of a real quadrotor UAV meant to demonstrate the robustness of our approach in real-world conditions.
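The scale-recovery idea in the abstract above can be illustrated with a deliberately simplified one-dimensional sketch: the camera observes velocity only up to an unknown scale factor, the IMU provides metric acceleration, and an EKF over the state [v, lam] recovers the scale. Everything below (the reduced state, the noise values, the assumption of a constant scene scale) is a toy reduction for illustration, not the paper's 3-D formulation.

```python
import numpy as np

dt = 0.01
x = np.array([0.0, 2.0])      # initial guess: velocity 0 m/s, scale 2 m
P = np.diag([1.0, 4.0])       # initial uncertainty
Q = np.diag([1e-3, 1e-6])     # process noise (scale assumed nearly constant)
R = 1e-2                      # optical-flow measurement noise variance

def ekf_step(x, P, accel, z):
    # Prediction: integrate the metric IMU acceleration; scale stays put.
    x = np.array([x[0] + accel * dt, x[1]])
    P = P + Q                              # state Jacobian is the identity
    # Update with the scaled-velocity measurement z = v / lam.
    v, lam = x
    H = np.array([1.0 / lam, -v / lam**2])  # Jacobian of h(x) = v / lam
    S = H @ P @ H + R
    K = P @ H / S
    x = x + K * (z - v / lam)
    P = (np.eye(2) - np.outer(K, H)) @ P
    return x, P

# Simulate: true scale 1.5 m, sinusoidal acceleration for excitation.
rng = np.random.default_rng(0)
v_true, lam_true = 0.0, 1.5
for k in range(2000):
    a = np.sin(2 * np.pi * 0.5 * k * dt)
    v_true += a * dt
    z = v_true / lam_true + rng.normal(0.0, np.sqrt(R))
    x, P = ekf_step(x, P, a, z)
print("estimated scale:", round(float(x[1]), 2), "true:", lam_true)
```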
no notspecified http://www.kyb.tuebingen.mpg.de/ published 21 Nonlinear ego-motion estimation from optical flow for online control of a quadrotor UAV 15017 15422 DobrickiM2015 3 M Dobricki BJ Mohler 2015-07-00 7 44 814 820 Perception When looking into a mirror, healthy humans usually clearly perceive their own face. Such an unambiguous face self-perception indicates that an individual has a discrete facial self-representation, and it thereby implies the involvement of a self-other face distinction mechanism. We stroked the trunk of healthy individuals while they watched the trunk of a virtual human facing them being synchronously stroked. Subjects sensed self-identification with the virtual body, which was accompanied by a decrease of their self-other face distinction. This suggests that face self-perception involves the self-other face distinction and that this mechanism underlies the formation of a discrete representation of one’s face. Moreover, the self-identification with another’s body that we find suggests that the perception of one’s full body affects the self-other face distinction. Hence, changes in self-other face distinction can indicate alterations of body self-perception, and thereby serve to elucidate the relationship of face and body self-perception. no notspecified http://www.kyb.tuebingen.mpg.de/ published 6 Self-Identification With Another’s Body Alters Self-Other Face Distinction 15017 15422 15017 KimCPCWBK2015 3 J Kim YG Chung J-Y Park S-C Chung C Wallraven HH Bülthoff S-P Kim 2015-06-00 6 10 1 17 PLoS ONE Perceptual sensitivity to tactile roughness varies across individuals for the same degree of roughness. A number of neurophysiological studies have investigated the neural substrates of tactile roughness perception, but the neural processing underlying the strong individual differences in perceptual roughness sensitivity remains unknown. In this study, we explored the human brain activation patterns associated with the behavioral discriminability of surface texture roughness using functional magnetic resonance imaging (fMRI). First, a whole-brain searchlight multi-voxel pattern analysis (MVPA) was used to find brain regions from which we could decode roughness information. The searchlight MVPA revealed four brain regions showing significant decoding results: the supplementary motor area (SMA), contralateral postcentral gyrus (S1), and superior portion of the bilateral temporal pole (STP). Next, we evaluated the behavioral roughness discrimination sensitivity of each individual using the just-noticeable difference (JND) and correlated this with the decoding accuracy in each of the four regions. We found that only the SMA showed a significant correlation between neuronal decoding accuracy and JND across individuals; participants with a smaller JND (i.e., better discrimination ability) exhibited higher decoding accuracy from their voxel response patterns in the SMA. Our findings suggest that multivariate voxel response patterns presented in the SMA represent individual perceptual sensitivity to tactile roughness and people with greater perceptual sensitivity to tactile roughness are likely to have more distinct neural representations of different roughness levels in their SMA.
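The brain-behaviour link reported in the abstract above is, at its core, a correlation between per-participant decoding accuracy and JND. The sketch below shows only that analysis step; the data arrays are hypothetical numbers invented to make the expected direction of the effect (smaller JND, higher accuracy) concrete, and are not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant values, for illustration only:
# decoding accuracy (proportion correct) from the SMA searchlight,
# and behavioural JND for roughness discrimination.
decoding_acc = np.array([0.61, 0.55, 0.70, 0.48, 0.66, 0.58, 0.63, 0.52])
jnd          = np.array([0.21, 0.34, 0.15, 0.42, 0.18, 0.30, 0.22, 0.38])

# The reported relationship implies a negative correlation:
# better discrimination (smaller JND) goes with higher decoding accuracy.
r, p = stats.pearsonr(decoding_acc, jnd)
print(f"r = {r:.2f}, p = {p:.3f}")
```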
no notspecified http://www.kyb.tuebingen.mpg.de/ published 16 Decoding Accuracy in Supplementary Motor Cortex Correlates with Perceptual Sensitivity to Tactile Roughness 15017 15422 ZhaoW2015 3 M Zhao WH Warren 2015-06-00 6 26 915 924 Psychological Science How do people combine their sense of direction with their use of visual landmarks during navigation? Cue-integration theory predicts that such cues will be optimally integrated to reduce variability, whereas cue-competition theory predicts that one cue will dominate the response direction. We tested these theories by measuring both accuracy and variability in a homing task while manipulating information about path integration and visual landmarks. We found that the two cues were near-optimally integrated to reduce variability, even when landmarks were shifted up to 90°. Yet the homing direction was dominated by a single cue, which switched from landmarks to path integration when landmark shifts were greater than 90°. These findings suggest that cue integration and cue competition govern different aspects of the homing response: Cues are integrated to reduce response variability but compete to determine the response direction. The results are remarkably similar to data on animal navigation, which implies that visual landmarks reset the orientation, but not the precision, of the path-integration system. no notspecified http://www.kyb.tuebingen.mpg.de/ published 9 How You Get There From Here: Interaction of Visual Landmarks and Path Integration in Human Navigation 15017 15422 deWinkelKB2015 3 KN de Winkel M Katliar HH Bülthoff 2015-05-00 5 10 1 20 PLoS ONE It has been shown that the Central Nervous System (CNS) integrates visual and inertial information in heading estimation for congruent multisensory stimuli and stimuli with small discrepancies. Multisensory information should, however, only be integrated when the cues are redundant. Here, we investigated how the CNS constructs an estimate of heading for combinations of visual and inertial heading stimuli with a wide range of discrepancies. Participants were presented with 2-s visual-only and inertial-only motion stimuli, and combinations thereof. Discrepancies between visual and inertial heading ranging between 0° and 90° were introduced for the combined stimuli. In the unisensory conditions, it was found that visual heading was generally biased towards the fore-aft axis, while inertial heading was biased away from the fore-aft axis. For multisensory stimuli, it was found that five out of nine participants integrated visual and inertial heading information regardless of the size of the discrepancy; for one participant, the data were best described by a model that explicitly performs causal inference. For the remaining three participants the evidence could not readily distinguish between these models. The finding that multisensory information is integrated is in line with earlier findings, but the finding that even large discrepancies are generally disregarded is surprising. Possibly, people are insensitive to discrepancies in visual-inertial heading angle because such discrepancies are only encountered in artificial environments, making a neural mechanism to account for them otiose. An alternative explanation is that detection of a discrepancy may depend on stimulus duration, where sensitivity to detect discrepancies differs between people.
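The forced-fusion behaviour favoured by most participants in the abstract above corresponds to the standard maximum-likelihood integration rule (the same rule tested for facial form and motion in the Dobs et al. VSS abstract earlier in this list): each cue is weighted by its inverse variance, and the fused estimate is less variable than either cue alone. The sketch below is a generic illustration of that rule with made-up numbers, not the authors' fitted model (which also included a causal-inference alternative).

```python
import numpy as np

def fused_heading(h_vis, sig_vis, h_ine, sig_ine):
    """Forced-fusion (maximum-likelihood) heading estimate:
    cues weighted by their reliability (inverse variance)."""
    w = (1 / sig_vis**2) / (1 / sig_vis**2 + 1 / sig_ine**2)
    return w * h_vis + (1 - w) * h_ine

def fused_sigma(sig_vis, sig_ine):
    """Predicted variability of the fused estimate: never worse
    than the more reliable single cue."""
    return np.sqrt((sig_vis**2 * sig_ine**2) / (sig_vis**2 + sig_ine**2))

# Even with a 60 deg discrepancy, a forced-fusion observer still averages,
# which is what most participants in the study above appeared to do:
print(fused_heading(h_vis=0.0, sig_vis=5.0, h_ine=60.0, sig_ine=10.0))  # 12.0
print(fused_sigma(5.0, 10.0))                                            # ~4.47
```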
no notspecified http://www.kyb.tuebingen.mpg.de/ published 19 Forced Fusion in Multisensory Heading Estimation 15017 15422 SaultonDBd2015 3 A Saulton TJ Dodds HH Bülthoff S de la Rosa 2015-05-00 5 233 1471 1479 Experimental Brain Research Accurate knowledge about size and shape of the body derived from somatosensation is important to locate one’s own body in space. The internal representation of these body metrics (body model) has been assessed by contrasting the distortions of participants’ body estimates across two types of tasks (localization task vs. template matching task). Here, we examined to what extent this contrast is linked to the human body. We compared participants’ shape estimates of their own hand and non-corporeal objects (rake, post-it pad, CD-box) between a localization task and a template matching task. While most items were perceived accurately in the visual template matching task, they appeared to be distorted in the localization task. All items’ distortions were characterized by greater underestimation of length than of width. This pattern of distortion was maintained across orientation for the rake item only, suggesting that the biases measured on the rake were bound to an item-centric reference frame. This was previously assumed to be the case only for the hand. Although similar results can be found between non-corporeal items and the hand, the hand appears significantly more distorted than other items in the localization task. Therefore, we conclude that the magnitude of the distortions measured in the localization task is specific to the hand. Our results are in line with the idea that the localization task for the hand measures contributions of both an implicit body model that is not utilized in landmark localization with objects and other factors that are common to objects and the hand. no notspecified http://www.kyb.tuebingen.mpg.de/ published 8 Objects exhibit body model like shape distortions 15017 15422 LeyrerLBM2015_2 3 M Leyrer SA Linkenauger HH Bülthoff BJ Mohler 2015-05-00 5 10 1 23 PLoS ONE In human perception, the ability to determine eye height is essential, because eye height is used to scale heights of objects, velocities, affordances and distances, all of which allow for successful environmental interaction. It is well understood that eye height is fundamental to determine many of these percepts. Yet, how eye height itself is determined is still largely unknown. While the information potentially specifying eye height in the real world is naturally coincident in an environment with a regular ground surface, these sources of information can be easily divergent in similar and common virtual reality scenarios. Thus, we conducted virtual reality experiments where we manipulated the virtual eye height in a distance perception task to investigate how eye height might be determined in such a scenario. We found that humans rely more on their postural cues for determining their eye height if there is a conflict between visual and postural information and little opportunity for perceptual-motor calibration is provided. This is demonstrated by the predictable variations in their distance estimates. Our results suggest that the eye height in such circumstances is informed by postural cues when estimating egocentric distances in virtual reality and, consequently, does not depend on an internalized value for eye height.
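The ground-plane geometry that makes eye height useful for distance perception can be stated explicitly; this is the standard relation the eye-height work above builds on, not a formula quoted from the paper:

    % A target on the ground plane, seen at angle of declination \alpha below
    % the horizon from eye height h, lies at egocentric distance
    \[
      d = \frac{h}{\tan \alpha}.
    \]
    % Scaling the effective eye height by a factor k while leaving the angle
    % of declination unchanged therefore predicts perceived distances scaled
    % by the same factor: d' = k h / \tan\alpha = k d.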
no notspecified http://www.kyb.tuebingen.mpg.de/ published 22 The Importance of Postural Cues for Determining Eye Height in Immersive Virtual Reality 15017 15422 15017 KimSWLB2015 3 J Kim J Schultz T Rohe C Wallraven S-W Lee HH Bülthoff 2015-04-00 14 35 5655 5663 Journal of Neuroscience Emotions can be aroused by various kinds of stimulus modalities. Recent neuroimaging studies indicate that several brain regions represent emotions at an abstract level, i.e., independently from the sensory cues from which they are perceived (e.g., face, body, or voice stimuli). If emotions are indeed represented at such an abstract level, then these abstract representations should also be activated by the memory of an emotional event. We tested this hypothesis by asking human participants to learn associations between emotional stimuli (videos of faces or bodies) and non-emotional stimuli (fractals). After successful learning, fMRI signals were recorded during the presentations of emotional stimuli and emotion-associated fractals. We tested whether emotions could be decoded from fMRI signals evoked by the fractal stimuli using a classifier trained on the responses to the emotional stimuli (and vice versa). This was implemented as a whole-brain searchlight, multivoxel activation pattern analysis, which revealed successful emotion decoding in four brain regions: posterior cingulate cortex (PCC), precuneus, MPFC, and angular gyrus. The same analysis run only on responses to emotional stimuli revealed clusters in PCC, precuneus, and MPFC. Multidimensional scaling analysis of the activation patterns revealed clear clustering of responses by emotion across stimulus types. Our results suggest that PCC, precuneus, and MPFC contain representations of emotions that can be evoked by stimuli that carry emotional information themselves or by stimuli that evoke memories of emotional stimuli, while angular gyrus is more likely to take part in emotional memory retrieval. no notspecified http://www.kyb.tuebingen.mpg.de/ published 8 Abstract Representations of Associated Emotions in the Human Brain 15017 15422 15017 18826 BulthoffN2015 3 I Bülthoff FN Newell 2015-04-00 137 9–21 Cognition Several studies have provided evidence in favour of a norm-based representation of faces in memory. However, such models have hitherto failed to take account of how other person-relevant information affects face recognition performance. Here we investigated whether distinctive or typical auditory stimuli affect the subsequent recognition of previously unfamiliar faces and whether the type of auditory stimulus matters. In this study participants learned to associate either unfamiliar distinctive and typical voices or unfamiliar distinctive and typical sounds with unfamiliar faces. The results indicated that recognition performance was better for faces previously paired with distinctive than with typical voices, but we failed to find any benefit for face recognition when the faces were previously associated with distinctive sounds. These findings possibly point to an expertise effect, as faces are usually associated with voices. More importantly, they suggest that the memory for faces can be modified by the perceptual quality of related vocal information and, more specifically, that facial distinctiveness can be of a multi-sensory nature. These results have important implications for our understanding of the structure of memory for person identification.
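The cross-decoding logic of the emotion study above, training a classifier on responses to one stimulus type and testing it on the other, can be outlined in a few lines. A minimal sketch with random placeholder data; the array shapes, labels, and choice of a linear SVM are assumptions, not the authors' code, and with random data the accuracy sits at chance, which is the appropriate null here:

    # Minimal sketch of cross-decoding: train on responses to emotional stimuli,
    # test on responses to the associated fractals (placeholder data).
    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(1)
    X_emotional = rng.normal(size=(80, 200))  # trials x voxels, emotional stimuli
    X_fractal = rng.normal(size=(80, 200))    # trials x voxels, associated fractals
    y = rng.integers(0, 2, size=80)           # binary emotion labels

    clf = LinearSVC().fit(X_emotional, y)
    print("cross-decoding accuracy:", clf.score(X_fractal, y))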
no notspecified http://www.kyb.tuebingen.mpg.de/ published -9 Distinctive voices enhance the visual recognition of unfamiliar faces 15017 15422 RoheN2015_2 3 T Rohe U Noppeney 2015-04-00 5 15 1 16 Journal of Vision To obtain a coherent percept of the environment, the brain should integrate sensory signals from common sources and segregate those from independent sources. Recent research has demonstrated that humans integrate audiovisual information during spatial localization consistent with Bayesian Causal Inference (CI). However, the decision strategies that human observers employ for implicit and explicit CI remain unclear. Further, despite the key role of sensory reliability in multisensory integration, Bayesian CI has never been evaluated across a wide range of sensory reliabilities. This psychophysics study presented participants with spatially congruent and discrepant audiovisual signals at four levels of visual reliability. Participants localized the auditory signals (implicit CI) and judged whether auditory and visual signals came from common or independent sources (explicit CI). Our results demonstrate that humans employ model averaging as a decision strategy for implicit CI; they report an auditory spatial estimate that averages the spatial estimates under the two causal structures weighted by their posterior probabilities. Likewise, they explicitly infer a common source during the common-source judgment when the posterior probability for a common source exceeds a fixed threshold of 0.5. Critically, sensory reliability shapes multisensory integration in Bayesian CI via two distinct mechanisms: First, higher sensory reliability sensitizes humans to spatial disparity and thereby sharpens their multisensory integration window. Second, sensory reliability determines the relative signal weights in multisensory integration under the assumption of a common source. In conclusion, our results demonstrate that Bayesian CI is fundamental for integrating signals of variable reliabilities. no notspecified http://www.kyb.tuebingen.mpg.de/ published 15 Sensory reliability shapes perceptual inference via two mechanisms 15017 18826 15017 15422 LinkenaugerBM2014 3 SA Linkenauger HH Bülthoff BJ Mohler 2015-04-00 70 393–401 Neuropsychologia Considerable empirical evidence has shown influences of the action capabilities of the body on the perception of sizes and distances. Generally, as one’s action capabilities increase, the perception of the relevant distance (over which the action is to be performed) decreases and vice versa. As a consequence, it has been proposed that the body’s action capabilities act as a perceptual ruler, which is used to measure perceived sizes and distances. In this set of studies, we investigated this hypothesis by assessing the influence of arm’s reach on the perception of distance. By providing participants with a self-representing avatar seen in a first-person perspective in virtual reality, we were able to introduce novel and completely unfamiliar alterations in the virtual arm’s reach to evaluate their impact on perceived distance. Using both action-based and visual matching measures, we found that virtual arm’s reach influenced perceived distance in virtual environments. Due to the participants’ inexperience with the reach alterations, we were also able to assess the amount of experience with the new arm’s reach required to influence perceived distance. We found that minimal experience reaching with the virtual arm can influence perceived distance.
However, some reaching experience is required. Merely having a long or short virtual arm, even one that is synchronized with one’s movements, is not enough to influence distance perception if one has no experience reaching. no notspecified http://www.kyb.tuebingen.mpg.de/ published -393 Virtual arm's reach influences perceived distances but only after experience reaching 15017 15422 15017 NestiBPB2014_3 3 A Nesti KA Beykirch P Pretto HH Bülthoff 2015-03-00 3 233 861 869 Experimental Brain Research While moving through the environment, humans use vision to discriminate different self-motion intensities and to control their actions (e.g. maintaining balance or controlling a vehicle). How the intensity of visual stimuli affects self-motion perception is an open, yet important, question. In this study, we investigate the human ability to discriminate perceived velocities of visually induced illusory self-motion (vection) around the vertical (yaw) axis. Stimuli, generated using a projection screen (70 × 90 deg field of view), consist of a natural virtual environment (360 deg panoramic colour picture of a forest) rotating at constant velocity. Participants control stimulus duration to allow for a complete vection illusion to occur in every single trial. In a two-interval forced-choice task, participants discriminate a reference motion from a comparison motion, adjusted after every presentation, by indicating which rotation feels stronger. Motion sensitivity is measured as the smallest perceivable change in stimulus intensity (differential threshold) for eight participants at five rotation velocities (5, 15, 30, 45 and 60 deg/s). Differential thresholds for circular vection increase with stimulus velocity, following a trend well described by a power law with an exponent of 0.64. The time necessary for complete vection to arise is slightly but significantly longer for the first stimulus presentation (average 11.56 s) than for the second (9.13 s) and does not depend on stimulus velocity. Results suggest that lower differential thresholds (higher sensitivity) are associated with smaller rotations, because they occur more frequently during everyday experience. Moreover, results also suggest that vection is facilitated by a recent exposure, possibly related to visual motion after-effect. no notspecified http://www.kyb.tuebingen.mpg.de/ published 8 Self-motion sensitivity to visual yaw rotations in humans 15017 15422 LeeWKCLPC2014 3 I-S Lee C Wallraven J Kong D-S Chang H Lee H-J Park Y Chae 2015-03-00 140 148–155 Physiology & Behavior The aim of this study was to compare behavioral and functional brain responses to the act of inserting needles into the body in two different contexts, treatment and stimulation, and to determine whether the behavioral and functional brain responses to a subsequent pain stimulus were also context dependent. Twenty-four participants were randomly divided into two groups: an acupuncture treatment (AT) group and an acupuncture stimulation (AS) group. Each participant received three different types of stimuli, consisting of tactile, acupuncture, and pain stimuli, and was given behavioral assessments during fMRI scanning. Although the applied stimuli were physically identical in both groups, the verbal instructions differed: participants in the AS group were primed to consider the acupuncture as a painful stimulus, whereas the participants in the AT group were told that the acupuncture was part of therapeutic treatment.
Acupuncture yielded greater activation in reward-related brain areas (ventral striatum) in the AT group compared to the AS group. Brain activation in response to pain stimuli was significantly attenuated in the bilateral secondary somatosensory cortex and the right dorsolateral prefrontal cortex after prior acupuncture needle stimulation in the AT group but not in the AS group. Inserting needles into the body in the context of treatment activated reward circuitries in the brain and modulated pain responses in the pain matrix. Our findings suggest that pain induced by therapeutic tools in the context of a treatment is modulated differently in the brain, demonstrating the power of context in medical practice. no notspecified http://www.kyb.tuebingen.mpg.de/ published -148 When pain is not only pain: Inserting needles into the body evokes distinct reward-related brain responses in the context of a treatment 15017 15422 RyllBR2014 3 M Ryll HH Bülthoff P Robuffo Giordano 2015-02-00 2 23 540 556 IEEE Transactions on Control Systems Technology Standard quadrotor unmanned aerial vehicles (UAVs) possess limited mobility because of their inherent underactuation, that is, the availability of only four independent control inputs (the four propeller spinning velocities) versus the 6 degrees of freedom parameterizing the quadrotor position/orientation in space. Thus, the quadrotor pose cannot track arbitrary trajectories in space (e.g., it can hover on the spot only when horizontal). Because UAVs are increasingly employed as service robots for interaction with the environment, this loss of mobility due to their underactuation can constitute a limiting factor. In this paper, we present a novel design for a quadrotor UAV with tilting propellers which is able to overcome these limitations. Indeed, the additional set of four control inputs actuating the propeller tilting angles is shown to yield full actuation of the quadrotor position/orientation in space, thus allowing it to behave as a fully actuated flying vehicle. We then develop a comprehensive modeling and control framework for the proposed quadrotor, and subsequently illustrate the hardware and software specifications of an experimental prototype. Finally, the results of several simulations and real experiments are reported to illustrate the capabilities of the proposed novel UAV design. no notspecified http://www.kyb.tuebingen.mpg.de/ published 16 A Novel Overactuated Quadrotor Unmanned Aerial Vehicle: Modeling, Control, and Experimental Validation 15017 15422 AllerGCWN2015 3 M Aller A Giani V Conrad M Watanabe U Noppeney 2015-02-00 16 9 1 8 Frontiers in Integrative Neuroscience To interact effectively with the environment the brain integrates signals from multiple senses. It is currently unclear to what extent spatial information can be integrated across different senses in the absence of awareness. Combining dynamic continuous flash suppression and spatial audiovisual stimulation, the current study investigated whether a sound facilitates a concurrent visual flash to elude flash suppression and enter perceptual awareness depending on audiovisual spatial congruency. Our results demonstrate that a concurrent sound boosts unaware visual signals into perceptual awareness. Critically, this process depended on the spatial congruency of the auditory and visual signals, pointing towards low-level mechanisms of audiovisual integration.
Moreover, the concurrent sound biased the reported location of the flash as a function of flash visibility. The spatial bias of sounds on reported flash location was strongest for flashes that were judged invisible. Our results suggest that multisensory integration is a critical mechanism that enables signals to enter conscious perception. no notspecified http://www.kyb.tuebingen.mpg.de/ published 7 A spatially collocated sound thrusts a flash into awareness 15017 18826 15017 15421 15017 18824 15017 15422 RoheN2015 3 T Rohe U Noppeney 2015-02-00 2 13 1 18 PLoS Biology To form a veridical percept of the environment, the brain needs to integrate sensory signals from a common source but segregate those from independent sources. Thus, perception inherently relies on solving the “causal inference problem.” Behaviorally, humans solve this problem optimally as predicted by Bayesian Causal Inference; yet, the underlying neural mechanisms are unexplored. Combining psychophysics, Bayesian modeling, functional magnetic resonance imaging (fMRI), and multivariate decoding in an audiovisual spatial localization task, we demonstrate that Bayesian Causal Inference is performed by a hierarchy of multisensory processes in the human brain. At the bottom of the hierarchy, in auditory and visual areas, location is represented on the basis that the two signals are generated by independent sources (= segregation). At the next stage, in posterior intraparietal sulcus, location is estimated under the assumption that the two signals are from a common source (= forced fusion). Only at the top of the hierarchy, in anterior intraparietal sulcus, the uncertainty about the causal structure of the world is taken into account and sensory signals are combined as predicted by Bayesian Causal Inference. Characterizing the computational operations of signal interactions reveals the hierarchical nature of multisensory perception in human neocortex. It unravels how the brain accomplishes Bayesian Causal Inference, a statistical computation fundamental for perception and cognition. Our results demonstrate how the brain combines information in the face of uncertainty about the underlying causal structure of the world. no notspecified http://www.kyb.tuebingen.mpg.de/ published 17 Cortical Hierarchies Perform Bayesian Causal Inference in Multisensory Perception 15017 15422 15017 18826 LeyrerLBM2015 3 M Leyrer SA Linkenauger HH Bülthoff BJ Mohler 2015-02-00 1:1 12 1 23 ACM Transactions on Applied Perception Virtual reality technology can be considered a multipurpose tool for diverse applications in various domains, for example, training, prototyping, design, entertainment, and research investigating human perception. However, for many of these applications, it is necessary that the designed and computer-generated virtual environments are perceived as a replica of the real world. Many research studies have shown that this is not necessarily the case. Specifically, egocentric distances are underestimated compared to real-world estimates regardless of whether the virtual environment is displayed in a head-mounted display or on an immersive large-screen display. While the main reason for this observed distance underestimation is still unknown, we investigate a potential approach to reduce or even eliminate this distance underestimation. Building on the angle of declination below the horizon relationship for perceiving egocentric distances, we describe how eye height manipulations in virtual reality should affect perceived distances.
In addition, we describe how this relationship could be exploited to reduce distance underestimation for individual users. In a first experiment, we investigate the influence of a manipulated eye height on an action-based measure of egocentric distance perception. We found that eye height manipulations have similar predictable effects on an action-based measure of egocentric distance as we previously observed for a cognitive measure. This might make this approach more useful than other proposed solutions across different scenarios in various domains, for example, for collaborative tasks. In three additional experiments, we investigate the influence of an individualized manipulation of eye height to reduce distance underestimation in a sparse-cue and a rich-cue environment. In these experiments, we demonstrate that a simple eye height manipulation can be used to selectively alter perceived distances on an individual basis, which could be helpful to enable every user to have an experience close to what was intended by the content designer. no notspecified http://www.kyb.tuebingen.mpg.de/ published 22 Eye Height Manipulations: A Possible Solution to Reduce Underestimation of Egocentric Distances in Head-Mounted Displays 15017 15422 15017 ButlerCB2014 3 JS Butler JL Campos HH Bülthoff 2015-02-00 2 233 587 597 Experimental Brain Research Passive movement through an environment is typically perceived by integrating information from different sensory signals, including visual and vestibular information. A wealth of previous research in the field of multisensory integration has shown that if different sensory signals are spatially or temporally discrepant, they may not combine in a statistically optimal fashion; however, this has not been well explored for visual–vestibular integration. Self-motion perception involves the integration of various movement parameters including displacement, velocity, acceleration and higher derivatives such as jerk. It is often assumed that the vestibular system is optimized for the processing of acceleration and higher derivatives, while the visual system is specialized to process position and velocity. In order to determine the interactions between different spatiotemporal properties for self-motion perception, in Experiment 1, we first asked whether the velocity profile of a visual trajectory affects discrimination performance in a heading task. Participants performed a two-interval forced choice heading task while stationary. They were asked to make heading discriminations while the visual stimulus moved at a constant velocity (C-Vis) or with a raised cosine velocity (R-Vis) motion profile. Experiment 2 was designed to assess how the visual and vestibular velocity profiles combined during the same heading task. In this case, participants were seated on a Stewart motion platform and motion information was presented via visual information alone, vestibular information alone or both cues combined. The combined condition consisted of congruent blocks (R-Vis/R-Vest) in which both visual and vestibular cues consisted of a raised cosine velocity profile and incongruent blocks (C-Vis/R-Vest) in which the visual motion profile consisted of a constant velocity motion, while the vestibular motion consisted of a raised cosine velocity profile. Results from both Experiments 1 and 2 demonstrated that visual heading estimates are indeed affected by the velocity profile of the movement trajectory, with lower thresholds observed for the R-Vis compared to the C-Vis. In Exp. 
2 when visual–vestibular inputs were both present, they were combined in a statistically optimal fashion irrespective of the type of visual velocity profile, thus demonstrating robust integration of visual and vestibular cues. The study suggests that while the time course of the velocity did affect visual heading judgments, a moderate conflict between visual and vestibular motion profiles does not cause a breakdown in optimal integration for heading. no notspecified http://www.kyb.tuebingen.mpg.de/ published 10 Optimal visual-vestibular integration under conditions of conflicting intersensory motion profiles 15017 15422 LinkenaugerPMCBSGW2014 3 SA Linkenauger Hy Wong M Geuss JK Stefanucci KC McCulloch HH Bülthoff BJ Mohler DR Proffitt 2015-02-00 1 144 103 113 Journal of Experimental Psychology: General Given that observing one’s body is ubiquitous in experience, it is natural to assume that people accurately perceive the relative sizes of their body parts. This assumption is mistaken. In a series of studies, we show that there are dramatic systematic distortions in the perception of bodily proportions, as assessed by visual estimation tasks, where participants were asked to compare the lengths of two body parts. These distortions are not evident when participants estimate the extent of a body part relative to a noncorporeal object or when asked to estimate noncorporeal objects that are the same length as their body parts. Our results reveal a radical asymmetry in the perception of corporeal and noncorporeal relative size estimates. Our findings also suggest that people visually perceive the relative size of their body parts as a function of each part’s relative tactile sensitivity and physical size. no notspecified http://www.kyb.tuebingen.mpg.de/ published 10 The Perceptual Homunculus: The Perception of the Relative Proportions of the Human Body 15017 15422 15017 delaRosaCCUAB2014 3 S de la Rosa RN Choudhery C Curio S Ullman L Assif HH Bülthoff 2015-02-00 9-10 22 1233 1271 Visual Cognition Prominent theories of action recognition suggest that during the recognition of actions the physical pattern of an action is associated with only one action interpretation (e.g., a person waving his arm is recognized as waving). In contrast to this view, studies examining the visual categorization of objects show that objects are recognized in multiple ways (e.g., a VW Beetle can be recognized as a car or a beetle) and that categorization performance is based on the visual and motor movement similarity between objects. Here, we studied whether we find evidence for multiple levels of categorization for social interactions (physical interactions with another person, e.g., handshakes). To do so, we compared visual categorization of objects and social interactions (Experiments 1 and 2) in a grouping task and assessed the usefulness of motor and visual cues (Experiments 3, 4, and 5) for object and social interaction categorization. Additionally, we measured recognition performance associated with recognizing objects and social interactions at different categorization levels (Experiment 6). We found that basic level object categories were associated with a clear recognition advantage compared to subordinate recognition but basic level social interaction categories provided only a small recognition advantage. Moreover, basic level object categories were more strongly associated with similar visual and motor cues than basic level social interaction categories.
The results suggest that cognitive categories underlying the recognition of objects and social interactions are associated with different performances. These results are in line with the idea that the same action can be associated with several action interpretations (e.g., a person waving his arm can be recognized as waving or greeting). no notspecified http://www.kyb.tuebingen.mpg.de/ published 38 Visual categorization of social interactions 15017 15422 CaniardBT2014 3 F Caniard HH Bülthoff IM Thornton 2015-01-00 1058 8 1 14 Frontiers in Human Neuroscience Local motion is known to produce strong illusory displacement in the perceived position of globally static objects. For example, if a dot-cloud or grating drifts to the left within a stationary aperture, the perceived position of the whole aperture will also be shifted to the left. Previously, we used a simple tracking task to demonstrate that active control over the global position of an object did not eliminate this form of illusion. Here, we used a new iPad task to directly compare the magnitude of illusory displacement under active and passive conditions. In the active condition, participants guided a drifting Gabor patch along a virtual slalom course by using the tilt control of an iPad. The task was to position the patch so that it entered each gate at the direct center, and we used the left/right deviations from that point as our dependent measure. In the passive condition, participants watched playback of standardized trajectories along the same course. We systematically varied deviation from midpoint at gate entry, and participants made 2AFC left/right judgments. We fitted cumulative normal functions to individual distributions and extracted the PSE as our dependent measure. To our surprise, the magnitude of displacement was consistently larger under active than under passive conditions. Importantly, control conditions ruled out the possibility that such amplification results from lack of motor control or differences in global trajectories as performance estimates were equivalent in the two conditions in the absence of local motion. Our results suggest that the illusion penetrates multiple levels of the perception-action cycle, indicating that one important direction for the future of perceptual illusions may be to more fully explore their influence during active vision. no notspecified http://www.kyb.tuebingen.mpg.de/ published 13 Action can amplify motion-induced illusory displacement 15017 15422 ZelazoFBR2013 3 D Zelazo A Franchi HH Bülthoff P Robuffo Giordano 2015-01-00 1 34 105 128 International Journal of Robotics Research This work proposes a fully decentralized strategy for maintaining the formation rigidity of a multi-robot system using only range measurements, while still allowing the graph topology to change freely over time. In this direction, a first contribution of this work is an extension of rigidity theory to weighted frameworks and the rigidity eigenvalue, which when positive ensures the infinitesimal rigidity of the framework. We then propose a distributed algorithm for estimating a common relative position reference frame amongst a team of robots with only range measurements in addition to one agent endowed with the capability of measuring the bearing to two other agents. This first estimation step is embedded into a subsequent distributed algorithm for estimating the rigidity eigenvalue associated with the weighted framework. 
The estimate of the rigidity eigenvalue is finally used to generate a local control action for each agent that both maintains the rigidity property and enforces additional constraints such as collision avoidance and sensing/communication range limits and occlusions. As an additional feature of our approach, the communication and sensing links among the robots are also left free to change over time while preserving rigidity of the whole framework. The proposed scheme is then experimentally validated with a robotic testbed consisting of six quadrotor unmanned aerial vehicles operating in a cluttered environment. no notspecified http://www.kyb.tuebingen.mpg.de/ published 23 Decentralized rigidity maintenance control with range measurements for multi-robot systems 15017 15422 KimMCCPBK2014 3 J Kim K-R Müller YG Chung S-C Chung J-Y Park HH Bülthoff S-P Kim 2015-01-00 1070 8 1 10 Frontiers in Human Neuroscience According to the hierarchical view of the human somatosensory network, somatic sensory information is relayed from the thalamus to primary somatosensory cortex (S1), and then distributed to adjacent cortical regions to perform further perceptual and cognitive functions. Although a number of neuroimaging studies have examined neuronal activity correlated with tactile stimuli, comparatively less attention has been devoted to understanding how vibrotactile stimulus information is processed in the hierarchical somatosensory cortical network. To explore the hierarchical perspective of tactile information processing, we studied two cases: (a) discrimination between the locations of finger stimulation, and (b) detection of stimulation against no stimulation on individual fingers, using both standard general linear model (GLM) and searchlight multi-voxel pattern analysis (MVPA) techniques. These two cases were studied on the same data set resulting from a passive vibrotactile stimulation experiment. Our results showed that vibrotactile stimulus locations on fingers could be discriminated from measurements of human functional magnetic resonance imaging (fMRI). In particular, it was in case (a) where we observed activity in contralateral posterior parietal cortex (PPC) and supramarginal gyrus (SMG) but not in S1, while in case (b) we found significant cortical activations in S1 but not in PPC and SMG. These discrepant observations suggest functional specialization with regard to vibrotactile stimulus locations and, especially, hierarchical information processing in the human somatosensory cortical areas. Our findings moreover support the general understanding that S1 is the main sensory receptive area for the sense of touch, and that adjacent cortical regions (i.e., PPC and SMG) are in charge of a higher level of processing and may thus contribute most to the successful classification of stimulated finger locations. no notspecified http://www.kyb.tuebingen.mpg.de/ published 9 Distributed functions of detection and discrimination of vibrotactile stimuli in the hierarchical human somatosensory system 15017 15422 Meilinger2015_2 7 T Meilinger Copenhagen, Denmark2015-11-09 12 13 Conference on Human Mobility, Cognition and GISc People use “route knowledge” to navigate to targets along familiar routes and “survey knowledge” to determine (by pointing, for example) a target’s metric location. We examined within which coordinate systems route and survey knowledge are represented in memory.
Data suggest that navigators memorize survey knowledge of their city of residency within a single, north-oriented reference frame learned from maps (1). However, when they recall this knowledge while located within the city, they spontaneously adjust this knowledge towards their current body orientation and location relative to the recalled area – probably to have the information ready for later action (2). Contrary to survey knowledge, route knowledge of one’s home city was memorized in different representations relying on multiple, local, street-based coordinate systems presumably learned from navigation (3). When recalling this knowledge to plan a route, navigators concentrate on turns and employ a “when-in-doubt-follow-your-nose” default strategy in order not to get lost (4). Taken together, our results suggest that people coordinate multiple representations of their surrounding environment and adjust these to their current situation. no notspecified http://www.kyb.tuebingen.mpg.de/ published 1 How do people memorize and recall spatial knowledge within their city of residency? 15017 15422 StrickrodtM2015 7 M Strickrodt T Meilinger Copenhagen, Denmark2015-11-09 33 34 Conference on Human Mobility, Cognition and GISc A vista space (VS), e.g., a room, is perceived from one vantage point, whereas an environmental space (ES), e.g., a building, is experienced successively during movement. Participants learned the same object layout by walking through multiple corridors (ES) or within a differently oriented room (VS). In four VS conditions they either learned a fully or a successively visible object layout, and either from a static position or by walking through the environment along a path, mirroring the translation in ES. Afterwards, participants pointed between object locations in different body orientations and reproduced the object layout. Pointing latency in ES increased with the number of corridors to the target and pointing performance was best along corridor-based orientations. In VS conditions latency did not increase with distance and pointing performance was best along room-based orientations, which were oblique to corridor and walking orientations. Furthermore, ES learners arranged the layout in the order they experienced the objects, whereas VS learners did so less. The most beneficial pointing orientations, together with the distance and order effects, suggest that spatial memory in ES is qualitatively different from spatial memory in VS and that differences in the visible environment (spatial structure), rather than movement or successive presentation, are responsible for this. Our results are in line with the dissociation of vista and environmental space as postulated by Montello (1993). Furthermore, our study provides a behavioral foundation for the application of isovists when conducting visual integration analysis, which is one module of the space syntax approach (e.g., Hillier, 1999). no notspecified http://www.kyb.tuebingen.mpg.de/ published 1 Movement, successive presentation and environmental structure and their influence on spatial memory in vista and environmental space 15017 15422 OdelgaSBA2015 7 M Odelga P Stegagno HH Bülthoff A Ahmad Cancun, Mexico2015-11-00 204 210 3rd Workshop on Research, Education and Development of Unmanned Aerial Systems (RED-UAS 2015) In this paper, we present a hardware-in-the-loop simulation setup for multi-UAV systems.
With our setup, we are able to command the robots simulated in Gazebo, a popular open-source ROS-enabled physics simulator, using the computational units that are embedded on our quadrotor UAVs. Hence, we can test in simulation not only the correct execution of algorithms, but also their computational feasibility directly on the robot hardware. In addition, since our setup is inherently multi-robot, we can also test the communication flow among the robots. We provide two use cases to show the characteristics of our setup. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/RED-UAS-2015-Odelga.pdf published 6 A Setup for multi-UAV hardware-in-the-loop simulations 15017 15422 SanzAL2015 7 D Sanz A Ahmad P Lima Lisboa, Portugal2015-11-00 547 559 Second Iberian Robotics Conference (ROBOT'2015) Domestic assistance for the elderly and impaired people is one of the biggest upcoming challenges of our society. Consequently, in-home care through domestic service robots is identified as one of the most important application areas of robotics research. Assistive tasks may range from visitor reception at the door to catering for the owner's small daily necessities within a house. Since most of these tasks require the robot to interact directly with humans, a predominant robot functionality is to detect and track humans in real time: either the owner of the robot or visitors at home or both. In this article we present a robust method for such a functionality that combines depth-based segmentation and visual detection. The robustness of our method lies in its capability to not only identify partially occluded humans (e.g., with only the torso visible) but also to do so in varying lighting conditions. We thoroughly validate our method through extensive experiments on real robot datasets and comparisons with the ground truth. The datasets were collected in a home-like environment set up within the context of the RoboCup@Home and RoCKIn@Home competitions. no notspecified http://www.kyb.tuebingen.mpg.de/ published 12 Onboard robust person detection and tracking for domestic service robots 15017 15422 LiuMSAZ2015 7 Y Liu JM Montenbruck P Stegagno F Allgöwer A Zell Hamburg, Germany2015-10-00 5410 5416 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2015) This paper presents a nonlinear control approach for quadrotor Micro Aerial Vehicles (MAVs), which combines a backstepping-like regulator based on the solution of a certain class of global output regulation problems for the rigid body equations on SO(3), a robust controller for the system with bounded disturbances, and a trajectory generator using a model predictive control method. The proposed algorithm is endowed with strong convergence properties, allowing the quadrotor MAVs to reach almost all desired attitudes. The control approach is implemented on a high-payload-capable quadcopter with unstructured dynamics and unknown disturbances. The performance of our algorithm is demonstrated through a series of experimental evaluations and comparisons with another control method on normal and aggressive trajectory tracking tasks.
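The rigid-body attitude regulation on SO(3) mentioned in the Liu et al. abstract above builds on a standard geometric attitude error. A minimal sketch of a textbook regulator of this kind (not the paper's controller; the gains and the zero desired rate are illustrative):

    # Minimal sketch of a standard geometric attitude regulator on SO(3);
    # not the controller from the paper above.
    import numpy as np

    def vee(S):
        """Map a 3x3 skew-symmetric matrix to a 3-vector."""
        return np.array([S[2, 1], S[0, 2], S[1, 0]])

    def attitude_torque(R, omega, R_des, kR=8.0, kW=2.0):
        """PD-like torque from the SO(3) attitude error and the body-rate error."""
        e_R = 0.5 * vee(R_des.T @ R - R.T @ R_des)  # attitude error vector
        e_W = omega                                  # rate error (desired rate = 0)
        return -kR * e_R - kW * e_W

    R = np.eye(3)                                    # current attitude
    omega = np.array([0.1, 0.0, 0.0])                # body angular velocity (rad/s)
    c, s = np.cos(0.3), np.sin(0.3)
    R_des = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])  # desired yaw
    print(attitude_torque(R, omega, R_des))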
no notspecified http://www.kyb.tuebingen.mpg.de/ published 6 A Robust Nonlinear Controller for Nontrivial Quadrotor Maneuvers: Approach and Verification 15017 15422 MassiddaBS2015 7 C Massidda HH Bülthoff P Stegagno Hamburg, Germany2015-10-00 3105 3110 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2015) Identification of landmarks for outdoor navigation is often performed using computationally expensive computer vision methods or via heavy and expensive multi-spectral and range sensors. Both options are impractical on Micro Aerial Vehicles (MAVs) due to limited payload and computational power. However, an appropriate choice of sensor hardware allows mixed multi-spectral analysis and computer vision techniques to be employed to identify natural landmarks. In this work, we propose a low-cost, low-weight camera array with appropriate optical filters to be exploited both as a stereo camera and as a multi-spectral sensor. Through stereo vision and the Normalized Difference Vegetation Index (NDVI), we are able to classify the observed materials in the scene into several different classes, identify vegetation and water bodies, and provide measurements of their relative bearing and distance from the robot. A handheld prototype of this camera array is tested in an outdoor environment. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/IROS-2015-Massidda.pdf published 5 Autonomous Vegetation Identification for Outdoor Aerial Navigation 15017 15422 MasaratiQZYDPVS2013 7 P Masarati G Quaranta L Zaichik Y Yashin P Desyatnik MD Pavel J Venrooij H Smaili Moskva, Russia2015-10-00 497 501 39th European Rotorcraft Forum (ERF 2013) no notspecified http://www.kyb.tuebingen.mpg.de/ published 4 Biodynamic Pilot Modelling for Aeroelastic A/RPC 15017 15422 GeluardiNPB2013 7 S Geluardi F Nieuwenhuizen L Pollini HH Bülthoff Moskva, Russia2015-10-00 419 433 39th European Rotorcraft Forum (ERF 2013) At the Max Planck Institute for Biological Cybernetics, the influence of an augmented system on helicopter pilots with limited flight skills is being investigated. This study would provide important contributions to research on personal air transport systems. In this project, the flight condition under study is hover. The first step is the implementation of a rigid-body dynamic model, which could be used to perform handling qualities evaluations comparing pilot performance with and without the augmented system. This paper aims to provide a lean procedure and a reliable measurement setup for the collection of flight test data, which are necessary to identify the helicopter dynamic model. The mathematical and technical tools used to reach this purpose are described in detail. First, the measurement setup used to collect the piloted control inputs and the helicopter response is presented. Second, the flight maneuvers and the pilot training phase are described. Finally, the flight test data collection is described and the results are shown to assess and validate the setup and the procedure presented.
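The NDVI at the heart of the vegetation-identification entry above is a per-pixel ratio of the near-infrared and red channels. A minimal sketch (the thresholds and class set are illustrative, not the paper's calibrated values):

    # Minimal sketch of NDVI-based classification; thresholds are illustrative.
    import numpy as np

    def ndvi(nir, red, eps=1e-6):
        """Normalized Difference Vegetation Index, computed per pixel."""
        nir = nir.astype(float)
        red = red.astype(float)
        return (nir - red) / (nir + red + eps)

    def classify(nir, red):
        """Coarse material classes from NDVI: vegetation is strongly positive,
        water is typically negative, everything else lands in between."""
        v = ndvi(nir, red)
        labels = np.full(v.shape, "other", dtype=object)
        labels[v > 0.3] = "vegetation"
        labels[v < -0.1] = "water"
        return labels

    nir = np.array([[200, 40], [90, 10]], dtype=np.uint8)
    red = np.array([[60, 50], [80, 60]], dtype=np.uint8)
    print(classify(nir, red))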
no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2013/ERF-2013-Geluardi.pdf published 14 Data Collection for Developing a Dynamic Model of a Light Helicopter 15017 15422 MeilingerSFBB2015 7 T Meilinger J Schulte-Pelkum J Frankenstein D Berger HH Bülthoff Kyoto, Japan2015-10-00 25 28 25th International Conference on Artificial Reality and Telexistence and the 20th Eurographics Symposium on Virtual Environments (ICAT-EGVE 2015) Comparing spatial performance in different virtual reality setups can indicate which cues are relevant for a realistic virtual experience. Bodily self-movement cues and global orientation information were shown to increase spatial performance compared with local visual cues only. We tested the combined impact of bodily and global orientation cues by having participants learn a virtual multi-corridor environment either by only walking through it, with additional distant landmarks providing heading information, or with a surrounding hall relative to which participants could determine their orientation and location. Subsequent measures of spatial memory revealed only small and non-reliable differences between the learning conditions. We conclude that additional global landmark information does not necessarily improve users' orientation within a virtual environment when bodily self-movement cues are available. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/ICAT-EGVE-2015-Meilinger.pdf published 3 Global Landmarks Do Not Necessarily Improve Spatial Performance in Addition to Bodily Self-Movement Cues when Learning a Large-Scale Virtual Environment 15017 15422 OlivariNBP2015_2 7 M Olivari FM Nieuwenhuizen HH Bülthoff L Pollini Hong Kong, China2015-10-00 3079 3085 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2015) Effectiveness of haptic guidance systems depends on how humans adapt their neuromuscular response to the force feedback. A quantitative insight into the adaptation of the neuromuscular response can be obtained by identifying neuromuscular dynamics. Since humans are likely to vary their neuromuscular response during realistic control scenarios, there is a need for methods that can identify time-varying neuromuscular dynamics. In this work an identification method is developed which estimates the impulse response of a time-varying neuromuscular system by using a Recursive Least Squares (RLS) method. The proposed method extends the commonly used RLS-based method by employing the pseudoinverse operator instead of the inverse operator. This results in improved robustness to external noise. The method was validated in a human-in-the-loop experiment. The neuromuscular estimates given by the proposed method were more accurate than those obtained with the commonly used RLS-based method. no notspecified http://www.kyb.tuebingen.mpg.de/ published 6 Identifying Time-Varying Neuromuscular Response: a Recursive Least-Squares Algorithm with Pseudoinverse 15017 15422 ScheerBC2015_3 7 M Scheer HH Bülthoff LL Chuang Berlin, Germany2015-10-00 24 11. Berliner Werkstatt Mensch-Maschine-Systeme The extent to which we experience ‘workload’ whilst steering depends on (i) the availability of the human operator’s (presumably limited) resources and (ii) the demands of the steering task.
Typically, an increased demand of the steering task for a specific resource can be inferred from how steering modifies the components of the event-related potential (ERP) elicited by the stimuli of a competing task. Recent studies have demonstrated that this approach can continue to be applied even when the stimuli do not require an explicit response. Under certain circumstances, workload levels in the primary task can influence the ERPs that are elicited by task-irrelevant target events, in particular complex environmental sounds. Using this approach, the current study assesses the human operator’s resources that are demanded by different aspects of the steering task. To enable future studies to focus their analysis, we identify ERP components and electrodes that are relevant to steering demands, using mass univariate analysis. Additionally, we compare the effectiveness of sound stimuli that are conventionally employed to elicit ERPs for assessing workload, namely pure-tone oddballs and environmental sounds. In the current experiment, participants performed a compensatory tracking task that required them to align a continuously perturbed target line to a stationary reference line. Task difficulty was manipulated either by varying the bandwidth of the disturbance or by varying the complexity of the controller dynamics of the steering system. Both manipulations presented two levels of difficulty (‘Easy’ and ‘Hard’), which could be contrasted to a baseline ‘View only’ condition. During the steering task, task-irrelevant sounds were presented to elicit ERPs: frequent pure-tone standards, rare pure-tone oddballs and rare environmental sounds. Our results show that steering task demands influence ERP components that the previous literature suggests are related to the following cognitive processes: the call for orientation (i.e., early P3a), the orientation of attention (i.e., late P3a), and the semantic processing of the task-irrelevant sound stimuli (i.e., N400). The early P3a was decreased at the frontocentral electrodes, the late P3a centrally, and the N400 centrally and over the left hemisphere. Single-subject analyses of these identified components reveal differences that correspond to our manipulations of steering difficulty: more participants show discrimination in these components in the ‘Hard’ relative to the ‘Easy’ condition. The current study identifies the spatial and temporal distribution of ERPs that ought to be targeted for future investigations of the influence of steering on workload. In addition, the use of task-irrelevant environmental sounds to elicit ERP indices of workload holds several advantages over conventional beep tones, especially in the operational context. Finally, the current findings indicate the involvement of cognitive processes in steering, which is typically viewed as being a predominantly visuo-motor task. no notspecified http://www.kyb.tuebingen.mpg.de/ published -24 On the influence of steering on the orienting response 15017 15422 SchenkBM2015 7 C Schenk HH Bülthoff C Masone Cheile Gradistei, Romania2015-10-00 427 434 19th International Conference on System Theory, Control and Computing (ICSTCC 2015) In this paper we consider the application problem of a redundant cable-driven parallel robot tracking a reference trajectory in the presence of uncertainties and disturbances.
A Super-Twisting controller is implemented using a recently proposed gain adaptation law [1], which removes the need to know the upper bound of the lumped uncertainties. The controller is extended by a feedforward dynamic inversion control that reduces the effort of the sliding mode controller. Compared to a recently developed Adaptive Terminal Sliding Mode Controller for cable-driven parallel robots [2], the proposed controller achieves lower tracking errors and less chattering in the actuation forces even in the presence of perturbations. The system is implemented and tested in simulation using a model of a large redundant cable-driven robot and assuming noisy measurements. Simulations show the effectiveness of the proposed method. no notspecified http://www.kyb.tuebingen.mpg.de/ published 7 Robust adaptive sliding mode control of a redundant cable driven parallel robot 15017 15422 GlatzBC2015_3 7 C Glatz HH Bülthoff LL Chuang Nottingham, UK2015-09-01 1 5 Workshop on Adaptive Ambient In-Vehicle Displays and Interactions In conjunction with AutomotiveUI 2015 (WAADI'15) Modern cars integrate advanced driving assistance systems, up to and including fully automated driving modes. Since fully automated driving has not yet come into everyday practice, operators currently make use of assistance systems. While the driver remains in control of the vehicle, alerts signal possible collision dangers when, for example, parking. Such warnings are necessary because humans have limited resources. A critical event can stay unnoticed simply because attention was focused elsewhere. This raises the question: What is an effective alert in a steering environment? Auditory warning signals have been shown to efficiently direct attention. In the context of traffic, they can prevent collisions by heightening the driver's situational awareness of potential accidents. no notspecified http://www.kyb.tuebingen.mpg.de/ published 4 Attention Enhancement During Steering Through Auditory Warning Signals 15017 15422 ChuangB2015 7 LL Chuang HH Bülthoff Nottingham, UK2015-09-01 1 4 Workshop on Practical Experiences in Measuring and Modeling Drivers and Driver-Vehicle Interactions In conjunction with AutomotiveUI 2015 Gaze-tracking technology is used increasingly to determine how and which information is accessed and processed in a given interface environment, such as in-vehicle information systems in automobiles. Typically, fixations on regions of interest (e.g., windshield, GPS) are treated as an indication that the underlying information has been attended to and is, thus, vital to the task. Therefore, decisions such as optimal instrument placement are often made on the basis of the distribution of recorded fixations. In this paper, we briefly introduce gaze-tracking methods for in-vehicle monitoring, followed by a discussion on the relationship between gaze and user attention. We posit that gaze-tracking data can yield stronger insights into the utility of novel regions of interest if they are considered in terms of their deviation from basic gaze patterns. In addition, we suggest how EEG recordings could complement gaze-tracking data and raise outstanding challenges in its implementation. It is contended that gaze-tracking is a powerful tool for understanding how visual information is processed in a given environment, provided it is understood in the context of a model that first specifies the task that has to be carried out.
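The super-twisting algorithm underlying the cable-robot controller in the Schenk et al. entry above has a compact standard form. A minimal sketch with fixed gains on a scalar toy system (the paper's contribution, the adaptive gain law of [1], and the robot dynamics are not reproduced here):

    # Minimal sketch of the standard super-twisting algorithm on a scalar
    # sliding variable s; gains are fixed and illustrative.
    import numpy as np

    def super_twisting_step(s, v, k1, k2, dt):
        """One Euler step of u = -k1*sqrt(|s|)*sign(s) + v, dv/dt = -k2*sign(s)."""
        u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
        v = v - k2 * np.sign(s) * dt
        return u, v

    # Toy closed loop: ds/dt = u + d(t) with a bounded, slowly varying disturbance.
    s, v, dt = 1.0, 0.0, 1e-3
    for k in range(5000):
        u, v = super_twisting_step(s, v, k1=1.5, k2=1.1, dt=dt)
        s += (u + 0.3 * np.sin(k * dt)) * dt
    print(f"|s| after 5 s: {abs(s):.4f}")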
no notspecified http://www.kyb.tuebingen.mpg.de/ published 3 Towards a Better Understanding of Gaze Behavior in the Automobile 15017 15422 LockenBMCSAM2015 7 A Löcken SS Borojeni H Müller L Chuang R Schroeter I Alvarez V Meijering Nottingham, UK2015-09-01 1 4 Workshop on Adaptive Ambient In-Vehicle Displays and Interactions In conjunction with AutomotiveUI 2015 (WAADI'15) Informing a driver of the vehicle’s changing state and environment is a major challenge that grows with the introduction of automated in-vehicle assistant and infotainment systems. Poorly designed systems could compete for the driver’s attention, drawing it away from the primary driving task. Thus, such systems should communicate information in a way that conveys its relevant urgency. While some information is unimportant and should never distract a driver from important tasks, there are also calls for action which a driver should not be able to ignore. We believe that adaptive ambient displays and peripheral interaction could serve to unobtrusively present information while switching the driver’s attention when needed. This workshop will focus on promoting an exchange of best known methods by discussing challenges and potentials for this kind of interaction in today’s scenarios as well as in future mixed or fully autonomous traffic. The central objective of this workshop is to bring together researchers from different domains and discuss innovative and engaging ideas and a future landscape for research in this area. no notspecified http://www.kyb.tuebingen.mpg.de/ published 3 Workshop on Adaptive Ambient In-Vehicle Displays and Interactions 15017 15422 RienerAJCJPC2015 7 A Riener I Alvarez MP Jeon L Chuang W Ju B Pfleging M Chiesa Nottingham, UK2015-09-01 1 4 Workshop on Practical Experiences in Measuring and Modeling Drivers and Driver-Vehicle Interactions In conjunction with AutomotiveUI 2015 no notspecified http://www.kyb.tuebingen.mpg.de/ published 3 Workshop on Practical Experiences in Measuring and Modeling Drivers and Driver-Vehicle Interactions 15017 15422 GeussRS2015 7 M Geuss IT Ruginski JK Stefanucci Tübingen, Germany2015-09-00 241 242 DSC 2015 Europe: Driving Simulation Conference & Exhibition Previous research suggests that drivers use specific visual information to execute braking behaviors [Faj05] and that drivers calibrate braking behavior to this visual information over time [Faj09]. Specifically, Fajen (2005) argued that when successfully braking, participants adjust braking pressure to maintain a visually specified ideal braking pressure less than one’s maximum ability to brake. In the current paper, we investigated whether factors, specifically one’s emotional state, would alter the relationship between braking behavior and visually specified ideal braking pressure over time. In particular, we investigated whether braking performance changes when the driver is anxious. Previous research demonstrated that anxiety influences static perceptual judgments of space [Gra12] and the performance of open-loop sports actions [Bei10]. Open-loop actions are actions where, once the movement has been initiated, there are no opportunities to alter the outcome (e.g., putting a golf ball). This research shows an influence of anxiety on static perceptual tasks and the performance of open-loop actions, suggesting that anxiety may also influence more complex everyday actions like braking.
It is important to know whether, and how, the influence of anxiety extends to the performance of closed-loop actions like braking, given the potential real-world consequences of poor performance. no notspecified http://www.kyb.tuebingen.mpg.de/ published 1 Anxiety alters visual guidance of braking over time 15017 15422 JunSCGT2015 7 E Jun JK Stefanucci SH Creem-Regehr MN Geuss WB Thompson Tübingen, Germany2015-09-00 1 16 ACM SIGGRAPH Symposium on Applied Perception (SAP '15) Spatial perception research in the real world and in virtual environments suggests that the body (e.g., the hands) plays a role in the perception of the scale of the world. However, little research has closely examined how varying the size of virtual body parts may influence judgments of action capabilities and spatial layout. Here, we asked whether changing the size of virtual feet would affect judgments of stepping over a gap and estimates of its width. Participants viewed their disembodied virtual feet as small or large and judged both their ability to step over a gap and the size of gaps shown in the virtual world. Foot size affected both affordance judgments and size estimates, such that those with enlarged virtual feet estimated that they could step over larger gaps and that the extent of the gap was smaller. Shrunken feet led to the perception of a reduced ability to step over a gap and smaller estimates of width. The results suggest that people use their visually perceived foot size to scale virtual spaces. Regardless of foot size, participants felt that they owned the feet rendered in the virtual world. Seeing disembodied, but motion-tracked, virtual feet affected spatial judgments, suggesting that the presentation of a single tracked body part is sufficient to produce effects on perception similar to those previously observed with fully co-located virtual self-avatars or other body parts. no notspecified http://www.kyb.tuebingen.mpg.de/ published 15 Big Foot: Using the Size of a Virtual Foot to Scale Gap Width 15017 15422 CleijVPPBM2015 7 D Cleij J Venrooij P Pretto DM Pool M Mulder HH Bülthoff Tübingen, Germany2015-09-00 191 198 DSC 2015 Europe: Driving Simulation Conference & Exhibition Motion cueing algorithms (MCAs) are used in motion simulation to map the inertial vehicle motions onto the simulator motion space. To increase the fidelity of the motion simulation, these MCAs are tuned to minimize the perceived incoherence between the visual and inertial motion cues. Despite time-invariant MCA dynamics, the incoherence is not constant but changes over time. Currently used methods for measuring the quality of an MCA focus on the overall differences between MCAs, but lack the ability to detect how quality varies over time and how this influences the overall quality judgement. This paper describes a continuous subjective rating method with which perceived motion incoherence can be measured over time. An experiment was performed to show the suitability of this method for measuring motion incoherence. The experimental results were used to validate the continuous rating method and showed that it provides important additional information on the perceived motion incoherence during a simulation compared to an offline rating method.
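As a toy illustration of what a continuous rating trace adds over a single offline judgment (cf. the Cleij et al. entry above), the sketch below summarizes a simulated rating signal both as one overall score and as the moments of peak incoherence; the sampling rate, signal, and threshold are all invented for the example.

    import numpy as np

    fs = 10.0                                   # rating samples per second (assumed)
    t = np.arange(0.0, 60.0, 1.0 / fs)          # one 60-s simulator drive
    rng = np.random.default_rng(0)
    rating = np.clip(np.sin(0.2 * t) + 0.3 * rng.standard_normal(t.size), 0.0, None)

    overall = rating.mean()                     # single number, like an offline rating
    peak_mask = rating > np.percentile(rating, 95)
    print(f"time-averaged incoherence: {overall:.2f}")
    print("worst moments (s):", np.round(t[peak_mask], 1))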
no notspecified http://www.kyb.tuebingen.mpg.de/ published 7 Continuous rating of perceived visual-inertial motion incoherence during driving simulation 15017 15422 deWinkelKB2015_2 7 KN de Winkel M Katliar HH Bülthoff Tübingen, Germany2015-09-00 67 70 DSC 2015 Europe: Driving Simulation Conference & Exhibition The set of physically incoherent combinations of visual and inertial motions that are nonetheless judged coherent by human observers is referred to as the 'Coherence Zone' (CZ). Here we propose that Causal Inference (CI) models of self-motion perception may offer a more comprehensive alternative to the CZ. CI models include an assessment of the probability of competing causal structures. This probability may be interpreted as a CZ. In an experiment, nine participants were presented with horizontal linear visual-only, inertial-only, and combined visual-inertial motion stimuli with heading discrepancies of up to 90°, and were asked to provide heading estimates. Model predictions were compared to the obtained data to assess model tenability. The CI model accounted well for the data of one participant; for five others the results imply that discrepancies do not affect heading perception. Results for the remaining participants were inconclusive. We conclude that CI models can offer a more comprehensive interpretation of the CZ, but that more research is needed to identify when discrepancies are detected. The methodology proposed here may be adapted to account for characteristics of self-motion other than heading, such as amplitude and phase. no notspecified http://www.kyb.tuebingen.mpg.de/ published 3 Heading Coherence Zone from Causal Inference Modelling 15017 15422 KatliardPVB2015 7 M Katliar KN de Winkel J Venrooij P Pretto HH Bülthoff Tübingen, Germany2015-09-00 219 222 DSC 2015 Europe: Driving Simulation Conference & Exhibition Motion cueing algorithms (MCAs) based on Model Predictive Control (MPC) are becoming increasingly popular. The MPC approach consists of solving an optimization problem to find a feasible simulator motion that minimizes the difference between the sensed motions in the real vehicle and in the simulator over some time interval. The length of this time interval, called the prediction horizon, is an important parameter that needs to be selected. Longer prediction horizons generally lead to better motion cueing but require more computational power because of the larger optimization problem. Consequently, the selection of an appropriate prediction horizon for MPC-based MCAs is a compromise between motion cueing fidelity and computational load. In this work, the effect of the prediction horizon on motion cueing fidelity was studied by computing the simulation cost, i.e., the average error between desired and reproduced sensory stimulation (specific forces and rotational velocities), for a range of typical car and helicopter maneuvers while varying the prediction horizon. We propose a simple parametric model that describes the effect of the prediction horizon on the simulation cost. The proposed model provides an accurate description of the data (coefficient of determination R² > 0.99) for horizons longer than 1 s for 11 out of 13 tested maneuvers. One of the model's parameters can be interpreted as the minimal prediction horizon needed to achieve a reasonable quality of simulation. The simulation cost appears to decrease roughly quadratically with the prediction horizon.
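The Katliar et al. entry above reports a simple parametric model of simulation cost versus prediction horizon, with one parameter interpretable as the minimal useful horizon. The paper's exact functional form is not given here, so the sketch below fits an assumed inverse-quadratic form, consistent with the reported roughly quadratic decrease, to made-up data.

    import numpy as np
    from scipy.optimize import curve_fit

    def cost_model(h, c, h_min, j_inf):
        # j_inf: residual cost at long horizons; h_min: minimal useful horizon
        return j_inf + c / (h - h_min) ** 2

    horizons = np.array([1.5, 2.0, 3.0, 4.0, 6.0, 8.0])      # seconds (example data)
    costs = np.array([4.10, 2.20, 1.10, 0.80, 0.60, 0.55])   # example simulation costs

    (c, h_min, j_inf), _ = curve_fit(
        cost_model, horizons, costs,
        p0=[1.0, 0.5, 0.5],
        bounds=([0.0, 0.0, 0.0], [np.inf, 1.4, np.inf]))     # keep h_min below the data
    print(f"estimated minimal useful horizon: {h_min:.2f} s")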
no notspecified http://www.kyb.tuebingen.mpg.de/ published 3 Impact of MPC Prediction Horizon on Motion Cueing Fidelity 15017 15422 AhmadB2015 7 A Ahmad HH Bülthoff Lincoln, UK2015-09-00 1 8 7th European Conference on Mobile Robots (ECMR 2015) In this article we present an online estimator for multirobot cooperative localization and target tracking based on nonlinear least squares minimization. Our method not only makes the rigorous optimization-based approach applicable online but also allows the estimator to be stable and convergent. We do so by applying a moving-horizon technique to the nonlinear least squares minimization and by a novel design of the arrival cost function that ensures stability and convergence of the estimator. Through an extensive set of real robot experiments, we demonstrate the robustness of our method as well as the optimality of the arrival cost function. The experiments include comparisons of our method with (i) an extended Kalman filter-based online estimator and (ii) an offline estimator based on full-trajectory nonlinear least squares. no notspecified http://www.kyb.tuebingen.mpg.de/ published 7 Moving-horizon Nonlinear Least Squares-based Multirobot Cooperative Perception 15017 15422 LacheleVPZB2015 7 J Lächele J Venrooij P Pretto A Zell HH Bülthoff Lincoln, UK2015-09-00 1 6 7th European Conference on Mobile Robots (ECMR 2015) In this paper we present a method for calculating inertial motion feedback in a teleoperation setup. For this, we make a distinction between vehicle-state feedback, which depends on the physical motion of the remote vehicle, and task-related motion feedback, which provides information about the teleoperation task. By providing motion feedback that is independent of vehicle motion, we exploit the spatial decoupling between the operator and the controlled vehicle. no notspecified http://www.kyb.tuebingen.mpg.de/ published 5 Novel approach for calculating motion feedback in teleoperation 15017 15422 ScheerBC2015_2 7 M Scheer HH Bülthoff LL Chuang Los Angeles, CA, USA2015-09-00 1042 1046 Human Factors and Ergonomics Society Annual Meeting (HFES 2015) The cognitive workload of a steering task could reflect its demand on attentional as well as working memory resources under different conditions. These respective demands can be differentiated by evaluating components of the event-related potential (ERP) response to different types of stimulus probes, which are claimed to reflect the availability of either attentional (i.e., novelty-P3) or working memory (i.e., target-P3) resources. Here, a within-subject analysis is employed to evaluate the robustness of ERP measurements in discriminating the cognitive demands of different steering conditions. We find that the amplitude of novelty-P3 ERPs to task-irrelevant environmental sounds is diminished when participants are required to perform a steering task. This indicates that steering places a demand on attentional resources. In addition, target-P3 ERPs to a secondary auditory detection task vary when the controller dynamics in the steering task are manipulated. This indicates that different controller dynamics vary in their working memory demands.
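The two P3 measures in the Scheer et al. entry above are, in standard ERP practice, mean amplitudes of the averaged waveform in a post-stimulus window. A minimal sketch on synthetic data follows; the channel layout, 300-500 ms window, and sampling rate are assumptions, not the paper's exact pipeline.

    import numpy as np

    fs = 250                                           # sampling rate in Hz (assumed)
    rng = np.random.default_rng(0)
    epochs = rng.standard_normal((40, int(0.9 * fs)))  # 40 probe trials, -100..800 ms
    t = np.arange(epochs.shape[1]) / fs - 0.1          # time axis, probe onset at 0 s

    baseline = epochs[:, t < 0].mean(axis=1, keepdims=True)
    erp = (epochs - baseline).mean(axis=0)             # baseline-corrected average

    window = (t >= 0.3) & (t <= 0.5)                   # classic P3 latency range
    print(f"mean P3 amplitude (a.u.): {erp[window].mean():.3f}")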
no notspecified http://www.kyb.tuebingen.mpg.de/ published 4 On the Cognitive Demands of Different Controller Dynamics: A within-subject P300 Analysis 15017 15422 WellerdiekBGSKBM2015 7 AC Wellerdiek M Breidt MN Geuss S Streuber U Kloos MJ Black BJ Mohler Tübingen, Germany2015-09-00 7 14 ACM SIGGRAPH Symposium on Applied Perception (SAP '15) We investigated the influence of body shape and pose on the perception of physical strength and social power for male virtual characters. In the first experiment, participants judged the physical strength of varying body shapes derived from a statistical 3D body model. Based on these ratings, we determined three body shapes (weak, average, and strong) and animated them with a set of power poses for the second experiment. Participants rated how strong or powerful they perceived virtual characters of varying body shapes displayed in different poses. Our results show that the perception of physical strength was driven mainly by the shape of the body. However, the social attribute of power was influenced by an interaction between pose and shape. Specifically, the effect of pose on power ratings was greater for weak body shapes. These results demonstrate that a character with a weak shape can be perceived as more powerful when shown in a high-power pose. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/SAP-2015-Wellerdiek.pdf published 7 Perception of Strength and Power of Realistic Male Characters 15017 1542215017 VenrooijPKNNLdCB2015 7 J Venrooij P Pretto M Katliar SAE Nooij A Nesti M Lächele KN de Winkel D Cleij HH Bülthoff Tübingen, Germany2015-09-00 153 161 DSC 2015 Europe: Driving Simulation Conference & Exhibition This paper describes a perception-based motion cueing (PBMC) algorithm, which aims to bridge the gap between what is known about human self-motion perception and what is currently used in motion simulation. In PBMC, motion perception knowledge is explicitly incorporated by means of a perception model and a cost function. PBMC has the potential to improve the realism of motion simulation by exploiting the limitations and ambiguities of human self-motion perception and increasing the utilization of the simulator envelope, while reducing the need for parameter tuning. The PBMC algorithm was compared to a classical filter-based approach in an experimental study. To allow for a robust and reliable comparison, an evaluation method for motion cueing algorithms (MCAs) based on psychophysical techniques was developed. Results show that the PBMC approach received significantly higher ratings than the filter-based approach. This demonstrates the potential of the PBMC approach to improve motion cueing in vehicle simulation. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/DSC-2015-Venrooij.pdf published 8 Perception-based motion cueing: validation in driving simulation 15017 15422 NooijPB2015 7 SAE Nooij P Pretto HH Bülthoff Tübingen, Germany2015-09-00 33 38 DSC 2015 Europe: Driving Simulation Conference & Exhibition During curve driving, a lateral force coupled to the off-center yaw rotation acts on the driver. In simulation, however, the lateral force is often not generated using off-centric rotation, thereby uncoupling translational and rotational motion cues. This may cause misalignment of the lateral force w.r.t. the motion direction along the curve. In the present study, we investigated how sensitive humans are to such misalignment.
We performed a psychophysical study in which participants were repeatedly moved along circular trajectories. The participants' physical orientation with respect to the motion path was systematically varied, and the participants' task was to indicate whether they felt they were facing the inside or the outside of the curve, in a two-alternative forced choice. The experiment was performed in darkness and with a congruent visual motion stimulus. The heading JND, i.e. the smallest detectable difference in yaw orientation w.r.t. the direction of motion, was measured. The results show a considerably lower sensitivity to the misalignment of the lateral force than what is commonly found for heading sensitivity along straight paths, with better performance when congruent visual information was presented. This indicates that for simulated curve driving some misalignment of the lateral force is acceptable without affecting perceptual realism. no notspecified http://www.kyb.tuebingen.mpg.de/ published 5 Sensitivity to lateral force is affected by concurrent yaw rotation during curve driving 15017 15422 GlatzBC2015_2 7 C Glatz HH Bülthoff LL Chuang Los Angeles, CA, USA2015-09-00 1011 Human Factors and Ergonomics Society Annual Meeting (HFES 2015) Auditory warnings are often used to direct a user's attention from a primary task to critical peripheral events. In the context of traffic, in-vehicle collision avoidance systems could, for example, employ spatially relevant sounds to alert the driver to the possible presence of a crossing pedestrian. This raises the question: What is an effective auditory alert in a steering environment? Ideally, such warning signals should not only arouse the driver but also result in deeper processing of the event that the driver is being alerted to. Warning signals can be designed to convey the time to contact with an approaching object (Gray, 2011). That is, sounds can rise in intensity in accordance with the physical velocity of an approaching threat. The current experiment was a manual steering task in which participants were occasionally required to recognize peripheral visual targets. These visual targets were sometimes preceded by a spatially congruent auditory warning signal, which was either a sound with constant intensity, linearly rising intensity, or non-linearly rising intensity that conveyed time-to-contact. To study the influence of warning cues on the arousal state, different features of the electroencephalogram (EEG) were measured. Alpha-band activity, which ranges from 7.5 to 12.5 Hz, is believed to reflect different cognitive processes, in particular arousal (Klimesch, 1999). That is, greater desynchronization in the alpha band reflects higher levels of attention as well as alertness. Our results showed a significant decrease in alpha power for sounds with rising intensity profiles, indicating increased alertness and expectancy for an event to occur. To analyze whether the increased arousal for rising sounds resulted in deeper processing of the visual target, we analyzed the event-related potential P3. It is a positive component that occurs approximately 300 ms after an event and is known to be associated with the recognition performance of a stimulus (Parasuraman & Beatty, 1980). In other words, smaller P3 amplitudes indicate worse identification than larger amplitudes. Our results show that sounds with time-to-contact properties induced larger P3 responses to the targets that they cued compared to targets cued by constant or linearly rising sounds.
This suggests that rising sounds with time-to-contact intensity profiles evoke deeper processing of the visual target and therefore result in better identification than events cued by sounds with linearly rising or constant intensity. no notspecified http://www.kyb.tuebingen.mpg.de/ published -1011 Warning Signals With Rising Profiles Increase Arousal 15017 15422 FladBC2015_3 7 N Flad HH Bülthoff LL Chuang Rostock, Germany2015-08-00 115 124 International Summer School on Visual Computing (VCSS 2015) no notspecified http://www.kyb.tuebingen.mpg.de/ published 9 Combined use of eye-tracking and EEG to understand visual information processing 15017 15422 NestmeyerFBR2015 7 T Nestmeyer A Franchi HH Bülthoff P Robuffo Giordano Roma, Italy2015-07-17 1 8 RSS 2015 Workshop: Reviewing the review process This paper presents a novel distributed control strategy that enables multi-target exploration while ensuring a time-varying connected topology in both 2D and 3D cluttered environments. Flexible continuous connectivity is guaranteed by gradient descent on a monotonic potential function applied to the algebraic connectivity (or Fiedler eigenvalue) of a generalized interaction graph. Limited range, line-of-sight visibility, and collision avoidance are taken into account simultaneously by weighting of the graph Laplacian. Completeness of the multi-target visiting algorithm is guaranteed by a decentralized adaptive leader selection strategy and a suitable scaling of the exploration force, based on the alignment between the exploration and connectivity forces and the traveling efficiency of the current leader. Extensive Monte Carlo simulations with a group of several quadrotor UAVs show the practicability, scalability and effectiveness of the proposed method. no notspecified http://www.kyb.tuebingen.mpg.de/ published 7 Decentralized Multi-target Exploration and Connectivity Maintenance with a Multi-robot System 15017 15422 GerboniGONBP2014 7 CA Gerboni S Geluardi M Olivari FM Nieuwenhuizen HH Bülthoff L Pollini Southampton, UK2015-07-00 615 626 40th European Rotorcraft Forum (ERF 2014) This paper describes the different phases of realizing and validating a helicopter model for the MPI CyberMotion Simulator (CMS). The considered helicopter is a UH-60 Black Hawk. The helicopter model was developed based on equations and parameters available in the literature. First, the validity of the model was assessed by performing tests based on ADS-33E-PRF criteria using closed-loop controllers and a non-expert pilot. Results on simulated data were similar to results obtained with the real helicopter. Second, the validity of the model was assessed with a helicopter pilot in the loop in both a fixed-base simulator and the CMS. The pilot performed a vertical remask maneuver as defined in ADS-33E-PRF. Most performance metrics were met adequately with both simulators. The motion cues in the CMS allowed for improvements in some of the metrics. The pilot was also asked to give a subjective evaluation of the model by answering the Israel Aircraft Industries Pilot Rating Scale (IAI PRS). Similar to the ADS-33E-PRF results, the pilot's responses confirmed that the motion cues provided a more realistic flight experience.
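The connectivity maintenance in the Nestmeyer et al. entry above hinges on the algebraic connectivity (Fiedler eigenvalue) of a weighted interaction graph staying positive. Below is a minimal sketch of that quantity, with an assumed distance-based weighting standing in for the paper's visibility- and collision-aware weights; the gradient descent on a potential of this eigenvalue is not shown.

    import numpy as np

    def fiedler_eigenvalue(positions, max_range=5.0):
        """Second-smallest eigenvalue of the weighted graph Laplacian."""
        n = len(positions)
        W = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                d = np.linalg.norm(positions[i] - positions[j])
                if d < max_range:                       # limited sensing range
                    W[i, j] = W[j, i] = max_range - d   # weight decays with distance
        L = np.diag(W.sum(axis=1)) - W                  # weighted graph Laplacian
        return np.linalg.eigvalsh(L)[1]                 # lambda_2 > 0 iff connected

    pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0], [3.0, 3.0]])
    print(f"lambda_2 = {fiedler_eigenvalue(pts):.3f}")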
no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/ERF-2014-Gerboni.pdf published 11 Development of a 6 dof nonlinear helicopter model for the MPI Cybermotion Simulator 15017 15422 Chuang2015 7 LL Chuang Los Angeles, CA, USA2015-07-00 3 11 9th International Conference on Augmented Cognition (AC 2015), held as part of HCI International 2015 A control schema for a human-machine system allows the human operator to be integrated as a mathematical description in a closed-loop control system, e.g., a pilot in an aircraft. Such an approach typically assumes that error feedback is perfectly communicated to the pilot, who is responsible for tracking a single flight variable. However, this is unlikely to be true in a flight simulator or a real flight environment. This paper discusses different aspects that pertain to error visualization and the pilot's ability to seek out relevant information across a range of flight variables. no notspecified http://www.kyb.tuebingen.mpg.de/ published 8 Error Visualization and Information-Seeking Behavior for Air-Vehicle Control 15017 15422 GeluardiNPB2015 7 S Geluardi FM Nieuwenhuizen L Pollini HH Bülthoff Virginia Beach, VA, USA2015-05-06 1428 1436 71st American Helicopter Society International Annual Forum (AHS 2015) This paper presents the implementation of classic augmented control strategies applied to an identified civil light helicopter model in hover. The aim of this study is to enhance the stability and controllability of the helicopter model and to improve its Handling Qualities (HQs) in order to meet those defined for a new category of aircraft, Personal Aerial Vehicles (PAVs). Two control methods were used to develop the augmented systems, H∞ control and μ-synthesis. The resulting augmented systems were compared in terms of achieved robust stability, nominal performance and robust performance. The robustness was evaluated against parametric uncertainties and external disturbances modeled as real atmospheric turbulence that might be experienced in hover and low-speed flight. The main result achieved in this work is that classical control techniques can augment a linear helicopter model to match PAV responses at low frequencies. As a consequence, the achieved HQ performance resembles that defined for PAV pilots. However, both control techniques performed poorly for some specific uncertainty conditions, demonstrating unsatisfactory performance robustness. Differences, advantages and limitations of the implemented control architectures with respect to the considered requirements are described in the paper. no notspecified http://www.kyb.tuebingen.mpg.de/ published 8 Augmented Systems for a Personal Aerial Vehicle Using a Civil Light Helicopter Model 15017 15422 GarsoffkyMHS2015 7 B Garsoffky T Meilinger C Horeis S Schwan Zürich, Switzerland2015-05-04 55 4th Workshop on Intelligent Camera Control, Cinematography and Editing (WICED 2015) Movies, and especially animations, where cameras can move nearly without restriction, often use moving cameras, thereby intensifying continuity [Bor02] and influencing the impression of cinematic space [Jon07]. Further studies effectively use moving cameras to explore the perception and processing of real-world action [HUGG14]. But what is the influence of simultaneous movements of actors and camera on the basic perception and understanding of film sequences?
It seems reasonable to expect that understanding of object movement is easiest from a static viewpoint, but that moving viewpoints can nevertheless be partialed out during perception. no notspecified http://www.kyb.tuebingen.mpg.de/ published -55 The influence of a moving camera on the perception of distances between moving objects 15017 15422 YukselMSBF2015 7 B Yüksel S Mahboubi C Secchi HH Bülthoff A Franchi Seattle, WA, USA2015-05-00 870 876 IEEE International Conference on Robotics and Automation (ICRA 2015) In this paper we introduce the design of a novel lightweight flexible-joint arm for lightweight unmanned aerial vehicles (UAVs), which can be used for safe physical interaction with the environment and also represents a preliminary step toward performing quick motions for tasks such as hammering or throwing. The actuator consists of an active pulley driven by a rotational servo motor, a passive pulley attached to a rigid link, and elastic connections (springs) between these two pulleys. We identify the physical parameters of the system and use an optimal control strategy to maximize its velocity by taking advantage of the elastic components. The prototype can be extended to a lightweight variable-stiffness actuator. The flexible-joint arm is mounted on a quadrotor to be used in aerial physical interaction tasks, where the elastic components can also be used for stable interaction, absorbing interaction disturbances that might damage the flying system and its hardware. The design is validated through several experiments, and future developments are discussed in the paper. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/ICRA-2015-Yueksel.pdf published 6 Design, Identification and Experimental Testing of a Light-Weight Flexible-joint Arm for Aerial Physical Interaction 15017 15422 RajappaRBF2015 7 S Rajappa M Ryll HH Bülthoff A Franchi Seattle, WA, USA2015-05-00 4006 4013 IEEE International Conference on Robotics and Automation (ICRA 2015) The mobility of a hexarotor UAV in its standard configuration is limited, since all the propeller force vectors are parallel and achieve only 4-DoF actuation, similar, e.g., to quadrotors. As a consequence, the hexarotor pose cannot track an arbitrary trajectory while the center of mass is tracking a position trajectory. In this paper, we consider a different hexarotor architecture in which the propellers are tilted, without the need for any additional hardware. In this way, the hexarotor gains 6-DoF actuation, which allows it to reach positions and orientations in free space independently and to exert forces on the environment so as to resist any wrench in aerial manipulation tasks. After deriving the dynamical model of the proposed hexarotor, we discuss the controllability and the tilt-angle optimization that reduces the control effort for a specific task. An exact feedback linearization and decoupling control law is proposed for nonlinear trajectory tracking, based on the input-output mapping and considering the Jacobian and task acceleration. The capabilities of our approach are shown by simulation results.
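The full actuation claimed for the tilted-propeller hexarotor in the Rajappa et al. entry above comes down to a 6x6 allocation matrix of full rank: six non-parallel thrust directions map propeller forces to an arbitrary body wrench. A geometric sketch under simplifying assumptions (equal alternating tilt about each boom axis, propeller drag torque ignored, arbitrary dimensions):

    import numpy as np

    alpha = np.deg2rad(20.0)        # common tilt angle (assumed)
    arm = 0.4                       # boom length in meters (assumed)
    A = np.zeros((6, 6))            # rows: fx fy fz tx ty tz; columns: propellers
    for i in range(6):
        phi = i * np.pi / 3                         # boom direction
        sgn = -1.0 if i % 2 else 1.0                # alternating tilt sign
        tangent = np.array([-np.sin(phi), np.cos(phi), 0.0])
        axis = np.cos(alpha) * np.array([0.0, 0.0, 1.0]) + np.sin(alpha) * sgn * tangent
        r = arm * np.array([np.cos(phi), np.sin(phi), 0.0])
        A[:3, i] = axis                             # force contribution
        A[3:, i] = np.cross(r, axis)                # torque contribution

    wrench = np.array([1.0, 0.0, 15.0, 0.0, 0.1, 0.0])  # desired [N, N, N, Nm, Nm, Nm]
    thrusts = np.linalg.lstsq(A, wrench, rcond=None)[0]
    print("rank:", np.linalg.matrix_rank(A), "thrusts:", np.round(thrusts, 2))

With alpha = 0 the matrix drops to rank 4, which is exactly the standard-configuration limitation the abstract starts from.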
no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/ICRA-2015-Rajappa.pdf published 7 Modeling, Control and Design Optimization for a Fully-actuated Hexarotor Aerial Vehicle with Tilted Propellers 15017 15422 StegagnoMB2015 7 P Stegagno C Massidda HH Bülthoff Salamanca, Spain2015-04-00 307 313 30th ACM/SIGAPP Symposium On Applied Computing (SAC 2015) The ability to identify the target of a common action is fundamental for the development of a multi-robot team able to interact with the environment. In most existing systems, the identification is carried out individually, based on color coding, shape identification or complex vision systems. Those methods usually assume a broad point of view over the objects, which are observed in their entirety. This assumption is sometimes difficult to fulfil in practice, in particular in swarm systems constituted by a multitude of small robots with limited sensing and computational capabilities. In this paper, we propose a method for target identification with a heterogeneous swarm of low-informative, spatially distributed sensors employing a distributed version of the naive Bayes classifier. Despite limited individual sensing capabilities, the recursive application of Bayes' law allows identification if the robots cooperate by sharing the information that they are able to gather from their limited points of view. Simulation results show the effectiveness of this approach, highlighting some properties of the developed algorithm. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/SAC-2015-Stegagno.pdf published 6 Distributed Target Identification in Robotic Swarms 15017 15422 SoykaLKBSM2015 7 F Soyka E Kokkinara M Leyrer HH Bülthoff M Slater BJ Mohler Arles, France2015-03-25 33 40 IEEE Virtual Reality (VR 2015) The International Air Transport Association forecasts at least a 30% increase in passenger demand for flights over the next five years. In these circumstances, the aircraft industry is looking for new ways to keep passengers occupied, entertained and healthy, and one of the methods under consideration is immersive virtual reality. It is therefore becoming important to understand how motion sickness and presence in virtual reality are influenced by physical motion. We were specifically interested in the use of head-mounted displays (HMDs) while experiencing in-flight motions such as turbulence. Fifty people were tested in different virtual environments varying in their context (virtual airplane versus magic carpet ride over tropical islands) and in the way the physical motion was incorporated into the virtual world (matching visual and auditory stimuli versus no incorporation). Participants were subjected to three brief periods of turbulent motion realized with a motion simulator. Physiological signals (postural stability, heart rate and skin conductance) as well as subjective experiences (sickness and presence questionnaires) were measured. None of our participants experienced severe motion sickness during the experiment, and although there were only small differences between conditions, we found indications that it is beneficial for both wellbeing and presence to choose a virtual environment in which turbulent motions are plausible and perceived as part of the scenario. We therefore conclude that brief exposure to turbulent motions does not make participants sick.
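The distributed classifier in the Stegagno et al. entry above has a compact core: under the naive independence assumption, each robot's local likelihood multiplies into a shared posterior over target classes. The classes and likelihood values below are invented for illustration.

    import numpy as np

    classes = ["ball", "box", "cylinder"]
    posterior = np.full(3, 1.0 / 3.0)          # uniform prior over target classes

    # p(local observation | class), one row per robot's limited point of view
    robot_likelihoods = np.array([
        [0.6, 0.2, 0.2],    # robot 1: sees a curved contour
        [0.5, 0.1, 0.4],    # robot 2: also sees a curved contour
        [0.3, 0.1, 0.6],    # robot 3: sees a straight vertical edge
    ])

    for lik in robot_likelihoods:              # recursive Bayes update per message
        posterior *= lik
        posterior /= posterior.sum()           # renormalize after each update

    print(dict(zip(classes, np.round(posterior, 3))))

Because the product is order-independent, the robots can share their likelihoods asynchronously and still converge to the same team-level posterior.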
no notspecified http://www.kyb.tuebingen.mpg.de/ published 7 Turbulent Motions Cannot Shake VR 15017 1542215017 PaulM2015 7 S Paul B Mohler Arles, France2015-03-00 1 2 IEEE VR Doctoral Consortium 2015 So far in my research with virtual reality, I have focused on using body and hand motion tracking systems to animate different 3D self-avatars in immersive virtual reality environments (head-mounted displays or desktop virtual reality). We are using self-avatars to explore the following basic research question: what sensory information is used to perceive one's body dimensions? And the applied question: how can we best create a calibrated self-avatar for efficient use in first-person immersive head-mounted display interaction scenarios? The self-avatar used for such research questions and applications has to be precise, easy to use, and enable the virtual hand and body to interact with physical objects. This is what my research has focused on thus far and what I am developing for the completion of the first year of my graduate studies. We plan to use the LEAP Motion for hand and arm movements, the Moven inertial measurement suit for full-body tracking, and the Oculus DK2 head-mounted display. A several-step process for setting up and calibrating an animated self-avatar with full-body motion and hand tracking is described in this paper. First, the user's dimensions are measured and they are given a self-avatar with these dimensions; then they are asked to perform predetermined actions (i.e. touching objects, walking a specific trajectory); then we estimate in real time how precise the animated body and body parts are relative to real-world reference objects; and finally a scaling of the avatar size or retargeting of the motion is performed in order to meet a specific minimum error requirement. no notspecified http://www.kyb.tuebingen.mpg.de/ published 1 Animated self-avatars in immersive virtual reality for studying body perception and distortions 15017 1542215017 Chang2014 7 D-S Chang Tübingen, Germany2015-02-00 57 64 Tagung 2013: Kognition und Kooperation: Überzeugungen in Gehirn und Gesellschaft no notspecified http://www.kyb.tuebingen.mpg.de/ published 7 Die Wahrnehmung von sozialen Signalen [The perception of social signals] 15017 15422 OlivariNBP2015 7 M Olivari FM Nieuwenhuizen HH Bülthoff L Pollini Kissimmee, FL, USA2015-01-00 284 298 AIAA Modeling and Simulation Technologies Conference 2015: Held at the SciTech Forum 2015 Methods for identifying the neuromuscular response commonly assume time-invariant neuromuscular dynamics. However, neuromuscular dynamics are likely to change during realistic control scenarios. In a previous paper we presented a method for identifying time-varying neuromuscular dynamics based on a Recursive Least Squares (RLS) algorithm. To date, this method has only been validated in a Monte Carlo simulation study. This paper presents an experimental validation of the same method. In the experiment, three different disturbance-rejection tasks were performed: a position task, with the human instructed to minimize the stick deflection against an external force disturbance; a relax task, with the instruction to relax the arm; and a time-varying task, with the instruction to alternate between position and relax tasks. The position and relax tasks induce different time-invariant neuromuscular dynamics, whereas the time-varying task induces time-varying neuromuscular dynamics. The RLS-based method was used to estimate neuromuscular dynamics in the three tasks.
The neuromuscular estimates were reliable in both the time-invariant and the time-varying tasks. These findings indicate that the RLS-based method can be used to estimate time-varying neuromuscular responses in human-in-the-loop experiments. no notspecified http://www.kyb.tuebingen.mpg.de/ published 14 Identifying Time-Varying Neuromuscular Response: Experimental Evaluation of a RLS-based Algorithm 15017 15422 vonLassbergBC2015 2 C von Lassberg KA Beykirch JL Campos Nova Biomedical New York, NY, USA 2015-00-00 61 81 Advances in visual perception research Visual orientation during head and self-motion without retinal image slip requires efficient gaze-stabilizing oculomotor functions that support unblurred retinal function. These mechanisms are driven by different sensor systems, such as vestibular afferents (vestibulo-ocular reflex, VOR) or retinal afferents (optokinetic reflex, OKR), to generate reflexive eye movements that continuously compensate for 3-dimensional head motions in space. High-level athletes trained in sports that involve fast and complex rotational movements (e.g., gymnasts) have a highly developed capability of orienting efficiently while executing such complex movements. This spatial orientation ability can, to a certain extent, be learned. However, one's intrinsic aptitude for easily coping with such multiaxial orientation challenges seems to be specific to each individual. It is not clear what role the individual level of VOR precision plays among the possible factors that determine such individual aptitudes. The aim of the present study is to examine to what extent the individual level of VOR correlates with the individual aptitude to cope with multiaxial spatial orientation challenges as required in gymnastics. For this we used a method to evaluate the individual aptitude for multiaxial spatial orientation during actively performed maneuvers in competitive gymnasts by exploiting the accumulated expertise of coaches. We directly compared these expert-rating measures to individual VOR characteristics. The results indicate relationships between these ratings and the response of the vertical VOR in gymnasts. no notspecified http://www.kyb.tuebingen.mpg.de/ published 20 Comparing vestibulo-ocular eye movement characteristics with coaches' rankings of spatial orientation aptitudes in gymnasts 15017 15422 PrettoVNB2015 2 P Pretto J Venrooij A Nesti HH Bülthoff Springer Dordrecht, The Netherlands 2015-00-00 131 152 Recent Progress in Brain and Cognitive Engineering The goal of vehicle motion simulation is the realistic reproduction of the perception a human observer would have inside the moving vehicle, by providing realistic motion cues inside a motion simulator. Motion cueing algorithms play a central role in this process by converting the desired vehicle motion into simulator input commands with maximal perceptual fidelity, while remaining within the limited workspace of the motion simulator. By understanding how one's own body motion through the environment is transduced into neural information by the visual, vestibular and somatosensory systems, and how this information is processed to create a whole percept of self-motion, we can assess the perceptual fidelity of the simulation. In this chapter, we address how a deep understanding of the functional principles underlying self-motion perception can be exploited to develop new motion cueing algorithms and, in turn, how motion simulation can increase our understanding of the brain's perceptual processes.
We propose a perception-based motion cueing algorithm that relies on knowledge about human self-motion perception and uses it to calculate the vehicle motion percept, i.e. how the motion of a vehicle is perceived by a human observer. The calculation is possible through the use of a self-motion perception model, which simulates the brain's motion perception processes. The goal of the perception-based algorithm is then to reproduce the simulator motion that minimizes the difference between the vehicle's desired percept and the actual simulator percept, i.e. the "perceptual error". Finally, we describe the first experimental validation of the new motion cueing algorithm and show that an improvement over the current standards of motion cueing is possible. no notspecified http://www.kyb.tuebingen.mpg.de/ published 21 Perception-Based Motion Cueing: A Cybernetics Approach to Motion Simulation 15017 15422 BulthoffALB2015 2 I Bülthoff RGM Armann RK Lee HH Bülthoff Springer Dordrecht, The Netherlands 2015-00-00 153 165 Recent Progress in Brain and Cognitive Engineering The other-race effect refers to the observation that we perform better in tasks involving faces of our own race compared to faces of a race we are not familiar with. This is especially interesting as, from a biological perspective, the category "race" does in fact not exist (Cosmides L, Tooby J, Kurzban R, Trends Cogn Sci 7(4):173–179, 2003); visually, however, we do group the people around us into such categories. Usually, the other-race effect is investigated in memory tasks where observers have to learn and subsequently recognize faces of individuals of different races (Meissner CA, Brigham JC, Psychol Public Policy Law 7(1):3–35, 2001), but it has also been demonstrated in perceptual tasks where observers compare one face to another on a screen (Walker PM, Tanaka J, Perception 32(9):1117–1125, 2003). In all tasks (and primarily for technical reasons) the test faces differ in race and identity. To broaden our general understanding of the effect that the race of a face has on the observer, in the present study we investigated whether an other-race effect is also observed when participants are confronted with faces that differ only in ethnicity but not in identity. To that end, using Asian and Caucasian faces and a morph algorithm (Blanz V, Vetter T, A morphable model for the synthesis of 3D faces. In: Proceedings of the 26th annual conference on Computer graphics and interactive techniques – SIGGRAPH'99, pp 187–194, 1999), we manipulated each original Asian or Caucasian face to generate face "race morphs" that shared the same identity but whose race appearance was shifted stepwise toward the other ethnicity. We presented each Asian or Caucasian face pair (original face and a race morph) to Asian (South Korea) and Caucasian (Germany) participants, who had to judge which face in each pair looked "more Asian" or "more Caucasian". In both groups, participants did not perform better for same-race pairs than for other-race pairs. These results point to the importance of identity information for the occurrence of an other-race effect.
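The perceptual-error idea in the Pretto et al. chapter entry above can be caricatured in a few lines: pass both the desired vehicle motion and a candidate simulator motion through the same perception model and minimize the difference, subject to workspace limits. The first-order "vestibular" filter and all constants below are toy assumptions, not the chapter's actual model.

    import numpy as np
    from scipy.optimize import minimize

    def percept(accel, dt=0.1, tau=1.0):
        """Toy self-motion perception model: leaky integration of acceleration."""
        p, out = 0.0, []
        for a in accel:
            p += dt * (a - p / tau)
            out.append(p)
        return np.array(out)

    vehicle_acc = np.sin(np.linspace(0.0, 2.0 * np.pi, 30))  # desired vehicle motion
    target_percept = percept(vehicle_acc)                    # percept to reproduce

    def perceptual_error(sim_acc):
        return np.sum((percept(sim_acc) - target_percept) ** 2)

    bounds = [(-0.5, 0.5)] * 30                  # simulator acceleration limits
    res = minimize(perceptual_error, np.zeros(30), bounds=bounds)
    print(f"residual perceptual error: {res.fun:.4f}")

The point of the approach is visible even in this caricature: the optimizer exploits what the perception model cannot distinguish, so the simulator command need not reproduce the vehicle motion itself, only its percept.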
no notspecified http://www.kyb.tuebingen.mpg.de/ published 12 The Other-Race Effect Revisited: No Effect for Faces Varying in Race Only 15017 15422 HardlessMM2014 2 G Hardless T Meilinger HA Mallot Elsevier Science Amsterdam, The Netherlands 2015-00-00 133 137 International Encyclopedia of the Social & Behavioral Sciences In this article, the significance of virtual reality within the field of spatial cognition is outlined. The role of virtual reality is grouped into three sections addressing (1) the current and latest technology of virtual reality with regard to its two main functions, that is, technology to interact with virtuality (input devices used to record observer actions and output devices used to simulate sensory stimuli) and technology for presenting the virtual environments to the user, (2) the usage of this technology for the purpose of research in the field of spatial cognition regarding behavioral and neuronal processes (discussing advantages and disadvantages of virtual reality), and (3) virtual reality experiments and their results that are relevant to current research in spatial cognition, covering place memory, wayfinding in large-scale spaces, and the neural representations of spatial features. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/hardiess_et_al_2015_virtual_reality_and_spatial_cognition.pdf published 4 Virtual Reality and Spatial Cognition 15017 15422 SchuchardtLNP2015 46 BI Schuchardt P Lehmann F Nieuwenhuizen P Perfect 2015-01-00 2015-01-00 Deliverable D6.5 Final list of desirable features/options for the PAV and supporting systems no notspecified Deliverable D6.5 Final list of desirable features/options for the PAV and supporting systems 15017 15422 VolkovaVolkmar2015 46 E Volkova-Volkmar 2015-01-00 2015-01-00 Report on experiment supported by the Visionair project: Effect of Visual and Auditory Feedback Modulation on Embodiment and Emotional State in VR no notspecified Report on experiment supported by the Visionair project: Effect of Visual and Auditory Feedback Modulation on Embodiment and Emotional State in VR 15017 15422 FosterB2015 7 C Foster A Bartels Schramberg, Germany2015-11-23 42 16th Conference of Junior Neuroscientists of Tübingen (NeNa 2015) Our natural visual world contains a variety of different types of motion. Two of the most prominent are global flow, the movement of the entire visual scene that occurs whenever we make an eye or head movement, and local motion, the real movement of people and objects in our environment. We constantly experience a mixture of these two kinds of motion, but generally have no problem distinguishing between the two, even though they can produce similar movements on the retina. This ability of the visual system was explored in the present study. Subjects watched a feature movie, used as an approximation to the natural visual world, while functional magnetic resonance imaging (fMRI) scans of their brains were acquired. The relative amounts of global flow and local motion in the movie were determined using a motion algorithm and compared to blood-oxygenation-level-dependent (BOLD) activations in specific visual regions of interest, which were determined using standard retinotopic mapping and localizer techniques. A significant preference for local motion was identified in areas MST, V5/MT, V3A, V2 and V3.
Furthermore, whole-brain analyses showed additional areas with a preference for local motion, as well as responses to global flow in areas commonly involved in the perception of our surrounding spatial environment. These findings further support the idea that different brain areas are involved in the processing of global flow and local motion. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/NeNa-2015-Abstract-Book.pdf published -42 Perception of Global Flow and Local Motion under Natural Conditions 15017 1542115017 15422 StanglMPSBW2015 7 M Stangl T Meilinger A-A Pape J Schultz HH Bülthoff T Wolbers Chicago, IL, USA2015-10-19 45th Annual Meeting of the Society for Neuroscience (Neuroscience 2015) Navigating the environment requires the integration of distance, direction, and place information, which critically depends on hippocampal place and entorhinal grid cells. Studies in rodents have shown, however, that substantial changes in the environment's surroundings can trigger a change in the set of active place cells, accompanied by a rotation of the grid cell firing pattern (Fyhn et al., 2007) - a phenomenon commonly referred to as global remapping. In the present study, we investigated whether human grid and place cells show a similar remapping behavior in response to environmental changes, and whether different episodes in the same environment might cause remapping as well. In two experiments, participants underwent 3T fMRI scanning while they navigated a virtual environment comprising two different rooms in which objects were placed at random locations. Participants explored the first room and learned these object-location conjunctions (learning phase), after which the objects disappeared and participants were asked to navigate repeatedly to the different object locations (test phase). This procedure (i.e. a learning and test phase within a room) was repeated several times, separated by different events, such as leaving and re-entering the same room, or moving to the second, different room. Indicators of grid cell firing were derived from the BOLD activation while participants moved within the virtual environment, whereas indicators of place cell firing were derived from the activation patterns while participants were standing at particular object locations. We compared these indicators between the different rooms and events to investigate how these manipulations influence remapping. Overall, our findings demonstrate entorhinal grid cell and hippocampal place cell remapping in humans. Furthermore, our results suggest that besides environmental changes, other events (e.g., re-entering the same environment) might also evoke remapping. We conclude that, in humans, remapping is not only environment-based but also event-based and might serve as a neural mechanism to create distinct memory traces for episodic memory formation. no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Triggers of entorhinal grid cell and hippocampal place cell remapping in humans 15017 15422 FademrechtBBd2015_2 7 L Fademrecht I Bülthoff NE Barraclough S de la Rosa Chicago, IL, USA2015-10-18 45th Annual Meeting of the Society for Neuroscience (Neuroscience 2015) Actions often occur in the visual periphery. Here we measured the spatial extent of action-sensitive perceptual channels across the visual field using a behavioral action adaptation paradigm.
Participants viewed an action (punch or handshake) for a prolonged amount of time (the adaptor) and subsequently categorized an ambiguous test action as either 'punch' or 'handshake'. The adaptation effect refers to the biased perception of the test stimulus due to the prolonged viewing of the adaptor and the resulting loss of sensitivity to that stimulus. Therefore, the more a channel responds to a specific stimulus, the larger the adaptation effect for that channel. We measured the size of the adaptation effect as a function of the spatial distance between adaptor and test stimuli in order to determine whether actions can be processed in spatially distinct channels. Specifically, we adapted participants at 0° (fixation), 20° and 40° eccentricity in three separate conditions to measure the putative spatial extent of action channels at these positions. In each condition, we measured the size of the adaptation effect at -60°, -40°, -20°, 0°, 20°, 40° and 60° of eccentricity. We fitted Gaussian functions to describe the channel response in each condition and used the full width at half maximum (FWHM) of the Gaussians as a measure of the spatial extent of the action channels. In contrast to previous reports of an increase of midget ganglion cell dendritic field size with eccentricity (Dacey, 1993), our results showed that the FWHM decreased with eccentricity (FWHM at 0°: 56°; at 20°: 29°; at 40°: 26°). We then asked whether the response of these action-sensitive perceptual channels can be used to predict the average recognition performance (d') for social actions across the visual field obtained in a previous study (Fademrecht et al. 2014). We used G(x), the summed response of all three channels at eccentricity x, to predict recognition performance at eccentricity x. A simple linear transformation of the summed channel response of the form a + b*G(x) was able to predict 95.5% of the variation in recognition performance. Taken together, these results demonstrate that actions can be processed in separate, spatially distinct perceptual channels, that their FWHM decreases with eccentricity, and that their summed response can be used to predict action recognition performance in the visual periphery. no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 The spatial extent of action sensitive perceptual channels decrease with visual eccentricity 15017 15422 ChangJBd2015 7 D-S Chang U Ju HH Bülthoff S de la Rosa St. Pete Beach, FL, USA2015-09-00 493 15th Annual Meeting of the Vision Sciences Society (VSS 2015) The way we use social actions in everyday life to interact with other people differs across cultures. Can this cultural specificity of social interactions already be observed in the perceptual processes underlying the visual recognition of actions? In the current study, we investigated whether there were any differences in action recognition between Germans and Koreans using a visual adaptation paradigm. German (n=24, male=10, female=14) and Korean (n=24, male=13, female=11) participants first had to recognize and describe four different social actions (handshake, punch, wave, fist-bump) presented as brief movies of point-light stimuli. The actions handshake, punch and wave are commonly known in both cultures, but the fist-bump is largely unknown in Korea. In the subsequent adaptation experiment, participants were repeatedly exposed to each of the four actions as adaptors (for 40 seconds at the beginning, and 3 times before each trial) in separate experimental blocks.
The order of actions was mixed and balanced across all participants. In each experimental block, participants had to categorize ambiguous actions in a two-alternative forced-choice task. The ambiguous test stimuli were created by linearly combining the kinematic patterns of two actions, such as a punch and a handshake. We measured to what degree each of the four adaptors biased the perception of the subsequent test stimulus for German and Korean participants. The actions handshake, punch and wave were correctly recognized by both Germans and Koreans, but most Koreans failed to recognize the correct meaning of a fist-bump. However, Germans and Koreans showed a remarkable similarity in the relative perceptual biases that the adaptors induced in the perception of the test stimuli. This consistency extended even to the action (fist-bump) that was not accurately recognized by Koreans. These results imply a surprising consistency and robustness of action recognition processes across different cultures. no notspecified http://www.kyb.tuebingen.mpg.de/ published -493 How different is Action Recognition across Cultures? Visual Adaptation to Social Actions in Germany vs. Korea 15017 15422 DobsSBG2015 7 K Dobs J Schultz I Bülthoff JL Gardner St. Pete Beach, FL, USA2015-09-00 684 15th Annual Meeting of the Vision Sciences Society (VSS 2015) Humans can easily extract who someone is and what expression they are making from the complex interplay of invariant and changeable visual features of faces. Recent evidence suggests that the cortical mechanisms that selectively extract information about these two socially critical cues are segregated. Here we asked if these systems are independently controlled by task demands. We therefore had subjects attend to either the identity or the expression of the same dynamic face stimuli and examined cortical representations in topographically and functionally localized visual areas using fMRI. Six human subjects performed a task that involved detecting changes in the attended cue (expression or identity) of dynamic face stimuli (8 presentations per trial of 2-s movie clips depicting 1 of 2 facial identities expressing happiness or anger) in 18-20 7-min scans (20 trials/scan in pseudorandom order) in 2 sessions. Dorsal areas such as hMT and STS were dissociated from more ventral areas such as FFA and OFA by their modulation with task demands and their encoding of exemplars of expression and identity. In particular, dorsal areas showed higher activity during the expression task (hMT: p< 0.05, lSTS: p< 0.01; t-test), in which subjects were cued to attend to the changeable aspects of the faces, whereas ventral areas showed higher activity during the identity task (lOFA: p< 0.05; lFFA: p< 0.05). Specific exemplars of identity could be reliably decoded (using linear classifiers) from responses of ventral areas (lFFA: p< 0.05; rFFA: p< 0.01; permutation test). In contradistinction, dorsal area responses could be used to decode specific exemplars of expression (hMT: p< 0.01; rSTS: p< 0.01), but only if expression was attended by subjects. Our data support the notion that identity and expression are processed by segregated cortical areas and that the strength of the representations of particular exemplars is under independent task control. no notspecified http://www.kyb.tuebingen.mpg.de/ published -684 Independent control of cortical representations for expression and identity of dynamic faces 15017 15422 ZhaoB2015 7 M Zhao I Bülthoff St.
Pete Beach, FL, USA2015-09-00 698 15th Annual Meeting of the Vision Sciences Society (VSS 2015) Does a face itself determine how well it will be recognized? Unlike many previous studies that have linked face recognition performance to individuals' face processing ability (e.g., holistic processing), the present study investigated whether recognition of natural faces can be predicted by the faces themselves. Specifically, we examined whether short- and long-term recognition memory of both dynamic and static faces can be predicted from face-based properties. Participants memorized either dynamic (Experiment 1) or static (Experiment 2) natural faces and recognized them after both short and long retention intervals (three minutes vs. seven days). We found that the intrinsic memorability of individual faces (i.e., the rate of correct recognition across a group of participants) consistently predicted an independent group of participants' performance in recognizing the same faces, for both static and dynamic faces and for both short- and long-term face recognition memory. This result indicates that the intrinsic memorability of faces is bound to face identity rather than image properties. Moreover, we also asked participants to judge the subjective memorability of the faces they had just learned, and to judge whether they would be able to recognize the faces in a later test. The results show that participants can extract intrinsic face memorability at encoding. Together, these results provide compelling evidence for the hypothesis that intrinsic face memorability predicts natural face recognition, highlighting that face recognition performance is not only a function of individuals' face processing ability but is also determined by intrinsic properties of faces. no notspecified http://www.kyb.tuebingen.mpg.de/ published -698 Intrinsic Memorability Predicts Short- and Long-Term Memory of Static and Dynamic Faces 15017 15422 delaRosaLSSMBC2015 7 S de la Rosa M Lubkull S Streuber A Saulton T Meilinger HH Bülthoff R Cañal-Bruland St. Pete Beach, FL, USA2015-09-00 52 15th Annual Meeting of the Vision Sciences Society (VSS 2015) How do we control our bodily movements when socially interacting with others? Research on online motor control provides evidence that task-relevant visual information is used for guiding corrective movements of ongoing motor actions. In social interactions, observers have been shown to use their own motor system to predict the outcome of another person's action (direct matching hypothesis), and it has been suggested that this information is used for the online control of social interactions, such as when giving someone a high five. Because only human, but not non-human (e.g. robot), movements can be simulated within the observer's motor system, the human-likeness of the interaction partner should affect both the planning and the online control of movement execution. We examined this hypothesis by investigating the effect of the human-likeness of the interaction partner on motor planning and online motor control during natural social interactions. To this end, we employed a novel virtual reality paradigm in which participants naturally interacted with a life-sized virtual avatar. While 14 participants interacted with a human avatar, another 14 participants interacted with a robot avatar. All participants were instructed to give a high five to the avatar. To test online motor control, we randomly perturbed the avatar's hand trajectories during participants' movement execution.
Importantly, the human- and robot-looking avatars executed identical movements. We used optical tracking to track participants' hand positions. The analysis of hand trajectories showed that participants were faster in carrying out the high-five movements with humans than with robots, suggesting that the human-likeness of the interaction partner indeed affected motor planning. However, there was little evidence for a substantial effect of the human-likeness on online motor control. Taken together, the results indicate that the human-likeness of the interaction partner influences motor planning but not online motor control. no notspecified http://www.kyb.tuebingen.mpg.de/ published -52 Motor planning and control: Humans interact faster with a human than a robot avatar 15017 15422 FademrechtBd2015 7 L Fademrecht I Bülthoff S de la Rosa St. Pete Beach, FL, USA2015-09-00 494 15th Annual Meeting of the Vision Sciences Society (VSS 2015) Although actions often appear in the visual periphery, little is known about action recognition outside of the fovea. Our previous results have shown that action recognition of moving life-size human stick figures is surprisingly accurate even in the far periphery and declines non-linearly with eccentricity. Here, our aim was (1) to investigate the influence of motion information on action recognition in the periphery by comparing recognition of static and dynamic stimuli and (2) to assess whether the observed non-linearity in our previous study was caused by the presence of motion, because a linear decline of recognition performance with increasing eccentricity was reported with static presentations of objects and animals (Jebara et al. 2009; Thorpe et al. 2001). In our study, 16 participants saw life-size stick figure avatars that carried out six different social actions (three different greetings and three different aggressive actions). The avatars were shown dynamically and statically on a large screen at different positions in the visual field. In a 2AFC paradigm, participants performed 3 tasks with all actions: (a) they assessed their emotional valence; (b) they categorized each of them as a greeting or an attack; and (c) they identified each of the six actions. (1) We found better recognition performance for dynamic stimuli at all eccentricities; thus motion information helps recognition in the fovea as well as in the far periphery. (2) We observed a non-linear decrease of recognition performance for both static and dynamic stimuli. Power law functions with exponents of 3.4 and 2.9 described the non-linearity observed for dynamic and static actions, respectively. These non-linear functions describe the data significantly better (p=.002) than linear functions and suggest that human actions are processed differently from objects or animals. no notspecified http://www.kyb.tuebingen.mpg.de/ published -494 Recognition of static and dynamic social actions in the visual periphery 15017 15422 BulthoffZ2015 7 I Bülthoff M Zhao St. Pete Beach, FL, USA2015-09-00 145 15th Annual Meeting of the Vision Sciences Society (VSS 2015) Holistic face processing is often referred to as the inability to selectively attend to part of faces without interference from irrelevant facial parts. While extensive research seeks the origin of holistic face processing in perceiver-based properties (e.g., expertise), the present study aimed to pinpoint face-based visual information that may support this hallmark indicator of face processing.
Specifically, we used the composite face task, a standard task of holistic processing, to investigate whether facial surface information (e.g., texture) or facial shape information underlies holistic face processing, since both sources of information have been shown to support face recognition. In Experiment 1, participants performed two composite face tasks, one for normal faces (i.e., shape + surface information) and one for shape-only faces (i.e., without facial surface information). We found that facial shape information alone is sufficient to elicit holistic processing as strongly as normal faces do, indicating that facial surface information is not necessary for holistic processing. In Experiment 2, we tested whether facial surface information alone is sufficient to observe holistic face processing. We chose to control facial shape information instead of removing it by having all test faces share exactly the same facial shape while exhibiting different facial surface information. Participants performed two composite face tasks, one for normal faces and one for same-shape faces. We found a composite face effect in normal faces but not in same-shape faces, indicating that holistic processing is mediated predominantly by facial shape rather than surface information. Together, these results indicate that facial shape, but not surface information, underlies holistic face processing. no notspecified http://www.kyb.tuebingen.mpg.de/ published -145 What Type of Facial Information Underlies Holistic Face Processing? 15017 15422 BulthoffMT2015 7 I Bülthoff B Mohler IM Thornton Liverpool, UK2015-08-00 51 38th European Conference on Visual Perception (ECVP 2015) In most face recognition studies, learned faces are shown without a visible body to passive participants. Here, faces were attached to a body and participants were either actively or passively viewing them before their recognition performance was tested. 3D-laser scans of real faces were integrated onto sitting or standing full-bodied avatars placed in a virtual room. In the ‘active’ learning condition, participants viewed the virtual environment through a head-mounted display. Their head position was tracked to allow them to walk physically from one avatar to the next and to move their heads to look up or down to the standing or sitting avatars. In the ‘passive dynamic’ condition, participants saw a rendering of the visual explorations of the first group. In the ‘passive static’ condition, participants saw static screenshots of the upper bodies in the room. Face orientation congruency (up versus down) was manipulated at test. Faces were recognized more accurately when viewed in a familiar orientation for all learning conditions. While active viewing in general improved performance as compared to viewing static faces, passive observers and active observers - who received the same visual information - performed similarly, despite the absence of volitional movements for the passive dynamic observers. no notspecified http://www.kyb.tuebingen.mpg.de/ published -51 Active and passive exploration of faces 15017 1542215017 delaRosaB2015 7 S de la Rosa HH Bülthoff Liverpool, UK2015-08-00 210 38th European Conference on Visual Perception (ECVP 2015) Previous results showed that actions can be recognized in multiple ways, suggesting that several recognition levels exist in action recognition (e.g. a waving action can be recognized as a greeting or a wave).
Categorization tasks suggest that the recognition of social interactions is more accurate at the basic level (e.g. greeting) than at the subordinate level (e.g. waving). What is the origin of the supremacy of basic-level recognition? Here we examined whether basic-level recognition relies to a larger degree on configural processing than subordinate social interaction recognition. To do so, we probed basic-level and subordinate recognition performance (RT and discrimination ability (d')) of 20 participants for upright and inverted social interactions. Larger inversion effects are typically associated with stronger configural processing. Participants saw one image at a time and reported whether it matched a predefined action. Our results showed that - contrary to our initial hypothesis - subordinate recognition of social interactions was significantly more affected by stimulus inversion than basic-level recognition. Moreover, recognition performance was better for subordinate than basic-level recognition. We show that these results can be well explained by a top-down activation of snapshot templates. no notspecified http://www.kyb.tuebingen.mpg.de/ published -210 Inversion effects are stronger for subordinate than for basic-level action recognition 15017 15422 FademrechtBBd2015 7 L Fademrecht NE Barraclough I Bülthoff S de la Rosa Liverpool, UK2015-08-00 214 38th European Conference on Visual Perception (ECVP 2015) Although actions often appear in the visual periphery, little is known about action recognition away from fixation. We showed in previous studies that action recognition of moving stick-figures is surprisingly good in peripheral vision even at 75° eccentricity. Furthermore, there was no decline of performance up to 45° eccentricity. This finding could be explained by action-sensitive units in the fovea also sampling action information from the periphery. To investigate this possibility, we assessed the horizontal extent of the spatial sampling area (SSA) of action-sensitive units in the fovea by using an action adaptation paradigm. Fifteen participants adapted to an action (handshake or punch) at the fovea and were tested with an ambiguous action stimulus at 0°, 20°, 40° and 60° eccentricity left and right of fixation. We used a large screen display to cover the whole horizontal visual field of view. An adaptation effect was present in the periphery up to 20° eccentricity (p<0.001), suggesting a large SSA of action-sensitive units representing foveal space. Hence, action recognition in the visual periphery might benefit from a large SSA of foveal units. no notspecified http://www.kyb.tuebingen.mpg.de/ published -214 Seeing actions in the fovea influences subsequent action recognition in the periphery 15017 15422 Chang2015 7 D-S Chang Budapest, Hungary2015-07-03 41 6th Joint Action Meeting (JAM 2015) When two people interact, they adjust their behavior to each other. For this, they utilize verbal or non-verbal communicative signals which are in most cases either visual or auditory. But how do people adjust their behavior with a partner when there are no possibilities to exchange visual or auditory cues? Furthermore, do people make social inferences about each other in such a situation? In a novel experimental setup, we connected two people with a rope and they had to accomplish a joint motor task together while being separated by a blind and not able to see or hear each other.
However, the participant's partner was always a confederate (an experimenter) who behaved either egoistically or cooperatively in a consistent manner. We measured the point-collecting behavior and speed of coordination during the interaction, and person-related judgments about the confederate after the interaction (n=24). Results showed strong partner-dependent changes in behavior depending on whether the partner was egoistic or cooperative (t(23)=24.21, p<0.001). In addition, an egoistic partner was more often judged to be male and bigger in size compared to a cooperative partner. These results demonstrate that partner-dependent changes in behavior and automatic judgments occur naturally even when possibilities for communication are minimal. no notspecified http://www.kyb.tuebingen.mpg.de/ published -41 Blindly judging other people: Social interaction with an egoistic vs. cooperative person while being connected with a rope without seeing or hearing each other 15017 15422 delaRosaWBFSMC2015 7 S de la Rosa Y Wahn HH Bülthoff L Fademrecht A Saulton T Meilinger D-S Chang Budapest, Hungary2015-07-02 53 6th Joint Action Meeting (JAM 2015) Associating sensory action information with the correct action interpretation (semantic action categorization (SAC)) is important for successful joint action, e.g. for the generation of an appropriate complementary response. Vision for perception and vision for action have been suggested to rely on different visual mechanisms (two streams hypothesis). To better understand visual processes supporting joint actions, we compared SAC processes in passive observation and in joint actions. If passive observation and joint action tap into different SAC processes, then adapting SAC processes during passive observation should not affect the generation of complementary action responses. We used an action adaptation paradigm to selectively measure SAC processes in a novel virtual reality setup, which allowed participants to naturally interact with a human-looking avatar. Participants visually adapted to an action of an avatar and gave a SAC judgment about a subsequently presented ambiguous action in three different experimental conditions: (1) by pressing a button (passive condition), or by producing an action response either (2) after (active condition) or (3) simultaneously with (joint action condition) the avatar's action. We found no significant difference between the three conditions, suggesting that SAC mechanisms for passive observation and joint action share similar processes. no notspecified http://www.kyb.tuebingen.mpg.de/ published -53 Does the two streams hypothesis hold for joint actions? 15017 15422 KimCCPBK2015 7 J Kim YG Chung S-C Chung J-Y Park HH Bülthoff S-P Kim Honolulu, HI, USA2015-06-17 21st Annual Meeting of the Organization for Human Brain Mapping (OHBM 2015) Introduction: As the use of mobile devices (particularly, wearable devices with vibrating alert features) is becoming more widespread, investigations on perceptual grouping of vibrotactile stimuli with different features, such as vibrating frequencies, are becoming more important for the design of effective haptic user interfaces. Previous psychophysical studies demonstrated that humans perceive vibration frequencies as three distinctive groups: 'slow motion' ranging from 1 to 3 Hz, 'fluttering' ranging from 10 to 70 Hz, and 'smooth vibration' ranging from 100 to 300 Hz [1, 2].
This perceptual grouping pattern has been mainly explained based on the different characteristics of the tactile sensory innervations [3, 4]. However, characteristics of tactile innervations and sensory afferents do not fully describe perceptual grouping of vibrotactile stimuli. For instance, a boundary frequency should lie between 40 and 50 Hz according to the afferent characteristics, but vibrotactile stimuli are in fact discriminated between 70 and 100 Hz. Furthermore, perceptual grouping is more likely to be affected by the neural encoding of vibration frequencies in the central nervous system, in addition to the characteristics of afferents. Here, we therefore search for the brain regions carrying frequency-discriminative information using searchlight multi-voxel pattern analysis (MVPA) [5] and compare the neural representations of different frequencies with the perceptual grouping patterns using multidimensional scaling (MDS). Methods: Fourteen subjects participated in this study and experimental procedures were approved by the Korea University Institutional Review Board (KU-IRB-11-46-A-1). Vibrotactile stimuli whose frequency varied from 20 to 200 Hz with an increment of 20 Hz were delivered to the tip of the index finger of the right hand by a vibrotactile stimulation device. Subjects performed ten runs of two sessions (one run for each frequency). Each session consisted of two consecutive periods: a 30 s resting period followed by a 30 s stimulation period. Functional images (T2*-weighted gradient EPI, TR = 3 s, voxel size = 2.0 × 2.0 × 2.0 mm3) were obtained using a 3T scanner. An information-based analysis with a cubical searchlight was employed to find spatially localized neuronal patterns varying with tactile frequencies. Decoding accuracies evaluated by a 2-fold cross-validation procedure were allocated to the center voxel of each searchlight. Then, we computed a correlation-based dissimilarity matrix and used MDS to map the neural representations for each of the ten different frequencies onto a 2D space. Results: A random-effects group analysis revealed that a cluster exhibited statistically significant decoding capabilities to differentiate distinct frequencies (p<0.0001 uncorrected, cluster size>50). This cluster covered the contralateral postcentral gyrus (S1) and the supramarginal gyrus (SMG). Mean decoding accuracy was 77.7 ± 13.8 %, and decoding accuracies significantly exceeded the chance level (t(13)=7.5, p<0.01). The MDS analysis showed that the neural representations of 20 and 200 Hz were mapped to the farthest positions (i.e., located on opposite sides). Moreover, hierarchical cluster analyses revealed that the neural representations of each frequency were grouped into two clusters, one for 20–100 Hz and the other for 120–200 Hz. Conclusions: In this study, we statistically assessed each set of multi-voxel patterns and revealed that contralateral S1 and SMG exhibited neural activity patterns specific to vibration frequency discrimination. Results of MDS indicated that the neural representations of 20–100 Hz and 120–200 Hz were divided into two distinct groups. This grouping pattern of neural representations is in line with the perceptual frequency categories suggested by previous studies [1, 2]. Our findings therefore suggest that the neural activity patterns in contralateral S1 and SMG may be closely related to perceptual grouping of vibrotactile frequency.
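The dissimilarity-and-MDS step described in the Kim et al. abstract above can be illustrated with a short sketch. This is not the authors' pipeline; the pattern matrix, voxel count, clustering parameters, and library choices are illustrative assumptions.

    # Illustrative sketch (not the authors' code): build a correlation-based
    # dissimilarity matrix over condition-mean multi-voxel patterns for the ten
    # frequencies, project it to 2D with MDS, and look for a two-cluster split.
    # `patterns` is a hypothetical placeholder for real fMRI data.
    import numpy as np
    from sklearn.manifold import MDS
    from scipy.cluster.hierarchy import linkage, fcluster

    freqs = np.arange(20, 201, 20)               # 20-200 Hz in 20 Hz steps
    patterns = np.random.rand(len(freqs), 500)   # placeholder: 10 x n_voxels

    rdm = 1.0 - np.corrcoef(patterns)            # correlation distance, 10 x 10
    mds = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
    coords = mds.fit_transform(rdm)              # 2D layout of the 10 frequencies

    # hierarchical clustering on the condensed distances, forced to two clusters
    condensed = rdm[np.triu_indices(len(freqs), k=1)]
    labels = fcluster(linkage(condensed, method='average'), t=2, criterion='maxclust')

With real data, the reported result would correspond to `labels` splitting the frequencies into a 20-100 Hz group and a 120-200 Hz group, with 20 and 200 Hz landing farthest apart in `coords`.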
no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Multi-voxel patterns in the human brain associated with perceptual grouping of tactile frequencies 15017 15422 RoheN2015_3 7 T Rohe U Noppeney Honolulu, HI, USA2015-06-17 21st Annual Meeting of the Organization for Human Brain Mapping (OHBM 2015) Introduction: To form a reliable percept of the multisensory environment, the brain integrates signals across the senses. To estimate, for example, an object's location from vision and audition, the optimal strategy is to integrate the object's audiovisual signals proportional to their reliability under the assumption that they were caused by a single source (i.e., maximum likelihood estimation, MLE). Behaviorally, it is well established that humans integrate signals weighted by their reliability in a near-optimal fashion for visual-haptic (Ernst and Banks, 2002) and audiovisual signals (Alais and Burr, 2004). Recently, elegant neurophysiological studies in macaques have shown that single neurons and neuronal populations implement reliability-weighted integration of visual-vestibular signals (Fetsch, et al., 2012; Morgan, et al., 2008). Yet, it is unclear how the human brain accomplishes this feat. Combining psychophysics and multivariate fMRI decoding in a spatial ventriloquist paradigm, we characterized the computational operations underlying audiovisual reliability-weighted integration at several cortical levels along the auditory and visual processing hierarchy. Methods: In a spatial ventriloquist paradigm, participants (N = 5) were presented with auditory and visual signals that were independently sampled from four locations along the azimuth (Fig. 1). The signals were presented alone in unisensory conditions or jointly in bisensory conditions. The spatial reliability of the visual signal was high or low. Participants localized either the auditory or the visual spatial signal. The behavioral signal weights were estimated by fitting psychometric functions to participants' localization responses in bisensory conditions without (0°, i.e. congruent) or with a small spatial discrepancy (± 6°). These empirical weights were compared to weights which were predicted according to the MLE model from the signals' sensory reliabilities estimated in unisensory conditions. Similarly, neural signal weights were estimated by fitting 'neurometric' functions to the spatial locations decoded from regional fMRI activation patterns in bisensory conditions and compared to weight predictions from unisensory conditions. For decoding signal locations, a support vector machine was trained on activation patterns from congruent conditions and then generalized to data from discrepant conditions as well as unisensory conditions. Conclusions: In summary, the results demonstrate that higher-order multisensory regions perform probabilistic computations such as reliability-weighting. However, despite the small signal discrepancy, the signals were not mandatorily integrated as predicted by the MLE model, because task-relevant signals attained larger weights. Thus, probabilistic multisensory computations might involve more complex processes than mandatory reliability-weighted integration, such as inferring whether the signals were caused by a common source or by independent sources (i.e., causal inference).
Only under conditions in which the assumption of a common source is fostered (e.g., by presenting only correlated signals with a small discrepancy) might multisensory signals be fully integrated, weighted by their reliability. no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Task-dependent reliability-weighted integration of audiovisual spatial signals in parietal cortex 15017 1882615017 15422 deWinkelB2015 7 K de Winkel HH Bülthoff Pisa, Italy2015-06-15 74 75 16th International Multisensory Research Forum (IMRF 2015) It has been shown repeatedly that visual and inertial sensory information on the heading of self-motion is fused by the CNS in a manner consistent with Bayesian Integration (BI). However, a few studies report violations of BI predictions. This dichotomy in experimental findings previously led us to develop a Causal Inference model for multisensory heading estimation, which could account for different strategies of processing multisensory heading information, based on discrepancies between the headings of the visual and inertial cues. Surprisingly, the results of an assessment of this model showed that multisensory heading estimates were consistent with BI regardless of any discrepancy. Here, we hypothesized that Causal Inference is a slow, top-down process, and that heading estimates for discrepant cues show less consistency with BI when motion duration increases. Six participants were presented with unisensory visual and inertial horizontal linear motions with headings ranging between ±180°, and combinations thereof with discrepancies up to ±90°. Motion profiles followed a single period of a raised cosine bell with a maximum velocity of 0.3 m/s, and had durations of two, four, and six seconds. For each stimulus, participants provided an estimate of the heading of self-motion. In general, the results showed that the probability that heading estimates are consistent with BI decreases as a function of stimulus duration, consistent with the hypothesis. We conclude that BI is likely to be a default mode of processing multisensory heading information, and that Causal Inference is a slow top-down process that interferes only given enough time. no notspecified http://www.kyb.tuebingen.mpg.de/ published 1 Effects of Motion Duration on Causal Inference in Multisensory Heading Estimation 15017 15422 YukselSSF2015 7 B Yüksel N Staub C Secchi A Franchi Seattle WA, USA.2015-05-26 ICRA 2015 Workshop: Aerial Robotics Manipulation and Load Transportation no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/ICRA-2015-Workshop-Yueksel.pdf published 0 Aerial Physical Interaction: Design, Control, Identification and Estimation 15017 15422 GeussS2015 7 MN Geuss J Stefanucci Amsterdam, The Netherlands2015-03-15 3 International Convention of Psychological Science (ICPS 2015) Fear is characterized by both state- and trait-level changes, both of which can fluctuate over time to alter behavior. We observed an interaction between state and trait fear on perceptual estimates over time. When trait fear was low, estimates increased with state fear. High trait fear led to consistent overestimation.
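The optimal (MLE) fusion rule referenced in the Rohe and Noppeney abstract above can be written out in a few lines: each cue is weighted by its relative reliability, where reliability is the inverse of the cue's variance. The following Python sketch is purely illustrative; the sigma values are arbitrary assumptions, not measured data.

    # Reliability-weighted (MLE) fusion of an auditory and a visual location cue.
    # Reliability r = 1 / sigma^2; the fused estimate is the reliability-weighted
    # mean, and its variance is smaller than either unisensory variance.
    def mle_fuse(x_aud, sigma_aud, x_vis, sigma_vis):
        r_aud, r_vis = 1.0 / sigma_aud**2, 1.0 / sigma_vis**2
        w_vis = r_vis / (r_aud + r_vis)              # visual weight
        x_hat = w_vis * x_vis + (1.0 - w_vis) * x_aud
        sigma_hat = (1.0 / (r_aud + r_vis)) ** 0.5   # fused (smaller) uncertainty
        return x_hat, sigma_hat

    # A reliable visual cue at 6 deg azimuth dominates a noisy auditory cue at 0 deg:
    print(mle_fuse(x_aud=0.0, sigma_aud=8.0, x_vis=6.0, sigma_vis=2.0))

In this example the visual weight works out to about 0.94, which is why a small audiovisual discrepancy is captured by the more reliable cue; the causal-inference extension discussed in the abstracts above additionally asks whether the two signals share a common source before fusing them.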
no notspecified http://www.kyb.tuebingen.mpg.de/ published -3 Height Estimates Are Altered by State- and Trait-Levels of Fear 15017 15422 SaultonDB2015 7 A Saulton T Dodds HH Bülthoff Amsterdam, The Netherlands2015-03-13 18 International Convention of Psychological Science (ICPS 2015) We demonstrate that German and South Korean cultures perceive the size of surrounding indoor spaces differently. While Koreans seem to attend to all aspects/dimensions of rooms when comparing their size, Germans anchor on one single dimension of the space (egocentric depth), resulting in biases in room size perception. no notspecified http://www.kyb.tuebingen.mpg.de/ published -18 Holistic Versus Analytic Perception of Indoor Spaces: Korean and German Cultural Differences in Comparative Judgments of Room Size 15017 15422 ZhaoB2015_2 7 M Zhao I Bülthoff Amsterdam, The Netherlands2015-03-12 32 International Convention of Psychological Science (ICPS 2015) We demonstrate that both encoding and memory processes affect recognition of own- and other-race faces differently. Static own-race faces are better recognized than static other-race faces, but this other-race effect is not found for rigidly moving faces. Further, this effect is larger in short-term memory than in long-term memory. no notspecified http://www.kyb.tuebingen.mpg.de/ published -32 Memory of Own- and Other-Race Faces: Influences of Encoding and Retention Processes 15017 15422 SymeonidouOBC2015 7 E-R Symeonidou M Olivari HH Bülthoff LL Chuang Hildesheim, Germany2015-03-10 249 250 57th Conference of Experimental Psychologists (TeaP 2015) Haptic feedback can be introduced in control devices to improve steering performance, such as in driving and flying scenarios. For example, direct haptic feedback (DHF) can be employed to guide the operator towards an optimal trajectory. It remains unclear how DHF magnitude could interact with user performance. A weak DHF might not be perceptible to the user, while a large DHF could result in overreliance. To assess the influence of DHF, five naive participants performed a compensatory tracking task across different DHF magnitudes. During the task, participants were seated in front of an artificial horizon display and were asked to compensate for externally induced disturbances in the roll dimension by manipulating a control joystick. Our results indicate that haptic feedback benefits steering performance across all tested DHF levels. This benefit increases linearly with increasing DHF magnitude. Interestingly, shared control performance was always inferior to the same DHF system without human input. This could be due to involuntary resistance that results from the arm dynamics. no notspecified http://www.kyb.tuebingen.mpg.de/ published 1 Direct haptic feedback benefits control performance during steering 15017 15422 HuffPMd2015 7 M Huff H Papenmeier T Meilinger S de la Rosa Hildesheim, Germany2015-03-10 122 57th Conference of Experimental Psychologists (TeaP 2015) When processing the semantic relations in a picture, observers are faster in determining the agent (i.e. the acting person) than the patient of an action (i.e. the person receiving an action). This “agent advantage effect” was shown with static pictorial stimulus material (e.g., one fish biting another fish). We investigated whether this effect also holds true for dynamic social interactions (e.g. one person pushing another person).
The most important difference between static and dynamic stimuli is the amount of change per time unit, which is different for agents and patients. Participants viewed dynamic animations depicting two stick figures with one patting the other on the shoulder. The viewing angle on this interaction as well as the start frame of the movement were systematically varied and randomly presented. Participants were instructed to search for the agent (i.e. the person patting) and the patient (i.e. the person being patted; order counterbalanced across participants) in these interactions and to press the button corresponding to the location on the screen. Results indicated a reversed “agent advantage effect”, with participants being more accurate when searching for the patient. This suggests that motion information derived from the dynamic interactions interacts with semantic processing. no notspecified http://www.kyb.tuebingen.mpg.de/ published -122 Semantic Relations in Asymmetric Dynamic Social Interactions 15017 15422 FladBC2015 7 N Flad HH Bülthoff LL Chuang Hildesheim, Germany2015-03-10 81 57th Conference of Experimental Psychologists (TeaP 2015) Eye-movements can result in large artifacts in the EEG signal that could potentially obscure weaker cortically-based signals. Therefore, EEG studies are typically designed to minimize eye-movements [although see Plöchl et al., 2012; Dimigen et al., 2011]. We present methods for simultaneous EEG and eye-tracking recordings in a visual scanning task. Participants were required to serially attend to four areas-of-interest to detect a visual target. We compare EEG results recorded either in the presence or in the absence of natural eye-movements. Furthermore, we demonstrate how natural eye-movement fixations can be reconstructed from the EOG signal, in a way that is comparable to the input from a simultaneous video-based eye-tracker. Based on these fixations, we address how EEG data can be segmented according to eye-movements (as opposed to experimentally timed stimuli). Finally, we explain how eye-movement-induced artifacts can be effectively removed via independent component analysis (ICA), which allows EEG components to be classified as having either a 'cortical' or 'noncortical' origin. These methods offer the potential of measuring robust EEG signals even in the presence of natural eye-movements. no notspecified http://www.kyb.tuebingen.mpg.de/ published -81 Simultaneous EEG and eye-movement recording in a visual scanning task 15017 15422 GlatzBC2015 7 C Glatz HH Bülthoff LL Chuang Hildesheim, Germany2015-03-10 93 57th Conference of Experimental Psychologists (TeaP 2015) Sounds with rising intensities are known to be more salient than their constant-amplitude counterparts [Seifritz et al., 2002]. Incorporating a time-to-contact characteristic into the rising profile can further increase their perceived saliency [Gray, 2011]. We investigated whether looming sounds with this time-to-contact profile might be especially effective as warning signals. Nine volunteers performed a primary steering task whilst occasionally discriminating oriented Gabor patches that were presented in their visual periphery. These visual stimuli could be preceded by an auditory warning cue, 1 second before they appeared. The 2000 Hz tone could have an intensity profile that was either constant (65 dB), linearly rising (60–75 dB, ramped tone), or exponentially increasing (looming tone).
Overall, warning cues resulted in significantly faster and more sensitive detections of the visual targets. More importantly, we found that EEG potentials to the looming tone emerged significantly earlier and were sustained for longer, compared to both the constant and ramped tones. This suggests that looming sounds are processed preferentially because of time-to-contact cues rather than rising intensity alone. no notspecified http://www.kyb.tuebingen.mpg.de/ published -93 Sounds with time-to-contact properties are processed preferentially 15017 15422 ScheerBC2015 7 M Scheer HH Bülthoff LL Chuang Hildesheim, Germany2015-03-09 220 57th Conference of Experimental Psychologists (TeaP 2015) The workload of a given task, such as steering, can be defined as the demand that it places on the limited attentional and cognitive resources of a driver. Given this, an increase in workload should reduce the amount of resources that are available for other tasks. For example, increasing workload in a primary steering task can decrease attention to oddball targets in a secondary auditory detection task. This can diminish the amplitude of its event-related potential (i.e., P3; Wickens et al., 1984). Here, we present a novel approach that does not require the participant to perform a secondary task. During steering, participants experienced a three-stimulus oddball paradigm, in which pure tones were intermixed with infrequently presented, unexpected environmental sounds (e.g., a cat meowing). Such sounds are known to elicit a subcomponent of the P3, namely the novelty-P3. The novelty-P3 reflects a passive shift of attention that occurs even for task-irrelevant events, thus removing the need for a secondary task (Ullsperger et al., 2001). We found that performing a manual steering task attenuated the amplitude of the novelty-P3 elicited by task-irrelevant novel sounds. The presented paradigm could be a viable approach to estimate workload in real-world scenarios.
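The three warning-cue intensity profiles described in the Glatz et al. abstract above (constant 65 dB, linearly ramped 60–75 dB, and exponentially rising "looming") are straightforward to synthesize. The sketch below is a hypothetical reconstruction, not the authors' stimulus code: duration, sample rate, and the exact exponential curvature are assumptions.

    # Generate a 2000 Hz tone with three intensity envelopes: constant, linear
    # ramp, and exponential ("looming") rise. dB values are mapped to a relative
    # linear amplitude scale; absolute SPL calibration is hardware-dependent.
    import numpy as np

    fs, dur, f0 = 44100, 1.0, 2000.0           # assumed sample rate and duration
    t = np.linspace(0.0, dur, int(fs * dur), endpoint=False)

    db_constant = np.full_like(t, 65.0)
    db_ramped = 60.0 + 15.0 * t / dur                                   # linear in dB
    db_looming = 60.0 + 15.0 * np.expm1(3.0 * t / dur) / np.expm1(3.0)  # convex rise

    def synthesize(db_profile):
        amp = 10.0 ** (db_profile / 20.0)      # dB -> linear amplitude
        return (amp / amp.max()) * np.sin(2.0 * np.pi * f0 * t)

    tones = {'constant': synthesize(db_constant),
             'ramped': synthesize(db_ramped),
             'looming': synthesize(db_looming)}

A true time-to-contact profile would tie intensity to the inverse distance of a virtual approaching source; the exponential ramp above is only one common approximation of that accelerating rise.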
no notspecified http://www.kyb.tuebingen.mpg.de/ published -220 Measuring workload during steering: A novelty-P3 study 15017 15422 Ryll2015 15 M Ryll 2015-07-27 no notspecified published A novel overactuated quadrotor UAV 15017 15422 delaRosaM2016 41 S de la Rosa T Meilinger Meilinger2015 10 T Meilinger Chuang2015_3 10 LL Chuang Bulthoff2015_4 10 J Venrooij HH Bülthoff Chang2015_5 10 D-S Chang Bulthoff2015_3 10 HH Bülthoff AhmadL2015 10 A Ahmad P Lima Reichenbach2015 10 A Reichenbach Chuang2015_2 10 L Chuang CurioB2015 10 C Curio M Breidt SaultonMBd2015 10 A Saulton T Meilinger HH Bülthoff S de la Rosa vanderHamWM2015 10 I van der Ham J Wiener T Meilinger MeilingerRHBM2015 10 T Meilinger J Rebane A Henson HH Bülthoff HA Mallot MeilingerTFWBd2015 10 T Meilinger K Takahashi C Foster K Watanabe HH Bülthoff S de la Rosa CleijVPPMB2015 10 D Cleij J Venrooij P Pretto DM Pool M Mulder HH Bülthoff Paul2015 10 S Paul FladBC2015_2 10 N Flad HH Bülthoff LL Chuang CanalBrulandd2015 10 R Cañal-Bruland S de la Rosa ChangBd2015 10 D-S Chang HH Bülthoff S de la Rosa Chang2015_4 10 D-S Chang Chang2015_3 10 D-S Chang Bulthoff2015 10 HH Bülthoff ChangJBd2015_2 10 D-S Chang U Ju HH Bülthoff S de la Rosa Chang2015_2 10 D-S Chang Meilingerd2015 10 T Meilinger S de la Rosa Changd2015 10 D-S Chang S de la Rosa Breidt2015 10 L Trutoiu M Breidt B Mohler A Steed ChangBBd2015 10 D-S Chang F Burger HH Bülthoff S de la Rosa Bulthoff2015_2 10 HH Bülthoff MeilingerHB2015 10 T Meilinger A Henson HH Bülthoff FosterTKHBdWBM2015 10 C Foster K Takahashi S Kurek C Horeis MJ Bäuerle S de la Rosa K Watanabe MV Butz T Meilinger delaRosaLSMBC2015 10 S de la Rosa M Lubkoll A Saulton T Meilinger HH Bülthoff C Cañal-Bruland StickrodtM2015 10 M Stickrodt T Meilinger LeroyZBBM2015 10 C Leroy M Zhao MV Butz HH Bülthoff T Meilinger MeilingerGHS2015 10 T Meilinger B Garsoffky C Horeis S Schwan ChuangNWB2015 10 LL Chuang FM Nieuwenhuizen J Walter HH Bülthoff deWinkel2015_2 10 K de Winkel Nooij2015 10 SAE Nooij deWinkel2015 10 K de Winkel Giani2015 1 A Giani Logos Verlag Berlin, Germany 2014-00-00 Throughout the day, our senses provide us with a rich stream of information about the environment: We see colours and shapes, hear music or smell food. With seemingly no effort, the human brain integrates these signals to create a conscious sensory experience of the external world. Yet, this sensory experience is not a truthful representation of the physical world. Instead, it is crucially shaped by a variety of processes, two of which are the focus of the current work: multisensory integration and awareness. Yet, in contrast to their impact, relatively little is known about the mechanisms that enable perceptual awareness within a multisensory world. For example, does multisensory integration occur automatically or are higher order cognitive processes (such as awareness) necessary to bind the information? And where does awareness emerge within the human brain? The current work describes three experimental studies, which were designed to provide further insights into auditory and visual perception and the human brain. 
Tübingen, Univ., Diss., 2014 no notspecified http://www.kyb.tuebingen.mpg.de/ published 120 From multiple senses to perceptual awareness 15017 1882615017 15422 Alaimo2014 1 S Alaimo Logos Verlag Berlin, Germany 2014-00-00 Both remotely piloted systems for Unmanned Aerial Vehicles and Fly-By-Wire systems for manned aircraft fail to transfer to the pilot important information or cues regarding the state of the aircraft and the loads being imposed by the pilot's control actions. These cues have been shown to contribute substantially to pilot situational awareness; their absence has a negative impact on system performance, especially in the presence of remote and unforeseen environmental constraints and disturbances. Extending visual feedback with force feedback can complement the visual information when it is missing or limited. An artificially recreated sense of touch (haptics) may allow the operator to better perceive information about the remote aircraft state, the environment and its constraints, hopefully preventing dangerous situations. The dissertation first introduces a novel classification of haptic aid systems into two large classes: Direct Haptic Aid (DHA) and Indirect Haptic Aid (IHA). Then, after showing that almost all existing aid concepts belong to the first class, it focuses on IHA and shows that classical applications (that use a DHA approach) can be revised in an IHA fashion. The novel IHA systems produce different sensations, which in most cases may appear as exactly "opposite in sign" from the corresponding DHA; such sensations can provide valuable cues for the pilot, both in terms of performance improvement and "level of appreciation". Furthermore, the present dissertation shows that the novel IHA cueing algorithms, which were designed just to appear "natural" to the operator and not to directly help the pilot in the task (as in the DHA cases), can outperform the corresponding DHA systems. Three case studies are selected: obstacle avoidance, wind gust rejection, and a combination of the two. For all the cases, DHA and IHA systems are designed and compared against baseline performance with no haptic aid. Both professional pilots and naive subjects were asked to test them in an extensive campaign of experiments. Test results show that a net improvement in terms of performance is provided by employing the IHA cues instead of either the DHA cues or the visual cues only. In the end, the aim of this thesis is to show that the IHA philosophy is a valid and promising alternative to the other commonly used approaches published in the scientific literature, which fall into the DHA category. Finally, the haptic cue for the obstacle avoidance task was tested in the presence of time delay in the communication link, as in a classical bilateral teleoperation scheme. The master device was provided with an admittance controller, and an observer of the force exerted by the human on the stick was developed. Experiments have shown that the proposed system is capable of withstanding substantial communication delays. Zugl.: Università di Pisa & Max Planck Institute for Biological Cybernetics, Diss., 2013 no notspecified http://www.kyb.tuebingen.mpg.de/ published 241 Novel Haptic Cues for UAV Tele-Operation 15017 15422 Bieg2014 1 H-J Bieg Logos Verlag Berlin, Germany 2014-00-00 Saccades are rapid eye movements that relocate the fovea, the retinal area with highest acuity, to fixate different points in the visual field in turn. Where and when the eyes shift needs to be tightly coordinated with our behavior.
The current thesis investigates how this coordination is achieved. Part I examines the coordination of eye and hand movements. Previous studies suggest that the neural processes that coordinate saccades and hand movements do so by adjusting the onset time and movement speed of saccades. I argue against this hypothesis by showing that the need to process task-relevant visual information at the saccade endpoint is sufficient to cause such adjustments. Rather than a mechanism to coordinate the eyes with the hands, changes in saccade onset time and speed may reflect the increased importance of vision at a saccade's target location. Part II examines the coordination of smooth pursuit and saccadic eye movements. Smooth pursuit eye movements are slow eye movements that follow a moving object of interest. The eyes frequently alternate between smooth pursuit and saccadic eye movements, which suggests that their control processes are closely coupled. In support of this idea, smooth pursuit eye movements are shown to systematically influence the onset time of saccadic eye movements. This influence may rest on two different mechanisms: first, a bias in visual attention in the direction of pursuit for saccades that occur during smooth pursuit; second, a mechanism that inhibits the saccadic response in the case of saccades to a moving target. Evidence for the latter hypothesis is provided by the observation that both the probability of occurrence and the latency of saccades to a moving target depend on the target's eccentricity and velocity. Tübingen, Univ., Diss., 2014 no notspecified http://www.kyb.tuebingen.mpg.de/ published 130 On the coordination of saccades with hand and smooth pursuit eye movements 15017 15422 Leyrer2014 1 M Leyrer Logos Verlag Berlin, Germany 2014-00-00 Today, virtual reality technology is a multi-purpose tool for diverse applications in various domains. However, research has shown that virtual worlds are often not perceived at the scale the programmer intended, especially regarding egocentric distances. While the main reason for this misperception of distances in virtual environments is still unknown, this dissertation investigates one specific aspect of fundamental importance to distance perception – eye height. In human perception, the ability to determine eye height is essential, because eye height is used to perceive heights of objects, velocity, affordances and distances, all of which allow for successful environmental interaction. It is reasonably well understood how eye height is used to determine many of these percepts. Yet, how eye height itself is determined is still unknown. In multiple studies conducted in virtual reality and the real world, this dissertation investigates how eye height might be determined in common scenarios in virtual reality. Based on manipulations of the virtual eye height in distance perception tasks, the results suggest that humans rely more on body-based information to determine their eye height if they have no possibility for calibration. This has major implications for many existing virtual reality setups. Because humans rely on their body-based eye height, this can be exploited to systematically alter the perceived space in immersive virtual environments, which might be sufficient to give every user an experience close to what the programmer intended.
Tübingen, Univ., Diss., 2014 no notspecified http://www.kyb.tuebingen.mpg.de/ published 164 Understanding and Manipulating Eye Height to Change the User's Experience of Perceived Space in Virtual Reality 15017 1542215017 BaileyKMS2014 28 R Bailey S Kuhl B Mohler K Singh ZhaoHB2014_2 3 M Zhao WG Hayward I Bülthoff 2014-12-00 105 61–69 Vision Research Face recognition, holistic processing, and processing of configural and featural facial information are known to be influenced by face race, with better performance for own- than other-race faces. However, whether these various other-race effects (OREs) arise from the same underlying mechanisms or from different processes remains unclear. The present study addressed this question by measuring the OREs in a set of face recognition tasks, and testing whether these OREs are correlated with each other. Participants performed different tasks probing (1) face recognition, (2) holistic processing, (3) processing of configural information, and (4) processing of featural information for both own- and other-race faces. Their contact with other-race people was also assessed with a questionnaire. The results show significant OREs in tasks testing face memory and processing of configural information, but not in tasks testing either holistic processing or processing of featural information. Importantly, there was no cross-task correlation between any of the measured OREs. Moreover, the level of other-race contact predicted only the OREs obtained in tasks testing face memory and processing of configural information. These results indicate that these various cross-race differences originate from different aspects of face processing, contrary to the view that the ORE in face recognition is due to cross-race differences in terms of holistic processing. no notspecified http://www.kyb.tuebingen.mpg.de/ published -61 Holistic processing, contact, and the other-race effect in face recognition 15017 15422 LeeLLPCWBC2014 3 I-S Lee A-R Lee H Lee H-J Park S-Y Chung C Wallraven I Bülthoff Y Chae 2014-12-00 6 19 680 686 Psychology, Health & Medicine Acne vulgaris is a common inflammatory disease that manifests on the face and affects appearance. In general, facial acne has a wide-ranging negative impact on the psychosocial functioning of acne sufferers and leaves physical and emotional scars. In the present study, we investigated whether patients with acne vulgaris demonstrate enhanced psychological bias when assessing the attractiveness of faces with acne symptoms and whether they devote greater selective attention to acne lesions than acne-free (control) individuals do. Participants viewed images of faces under two different skin (acne vs. acne-free) and emotional facial expression (happy and neutral) conditions. They rated the attractiveness of the faces, and the time spent fixating on the acne lesions was recorded with an eye tracker. We found that the gap in perceived attractiveness between acne and acne-free faces was greater for acne sufferers. Furthermore, patients with acne fixated longer on facial regions exhibiting acne lesions than did control participants, irrespective of the facial expression depicted. In summary, patients with acne have a stronger attentional bias for acne lesions and focus more on the skin lesions than do those without acne. Clinicians treating the skin problems of patients with acne should consider these psychological and emotional scars.
no notspecified http://www.kyb.tuebingen.mpg.de/ published 6 Psychological distress and attentional bias toward acne lesions in patients with acne 15017 15422 VolkovadBM2014 3 E Volkova S de la Rosa HH Bülthoff B Mohler 2014-12-00 12 9 1 28 PLoS ONE Emotion expression in human-human interaction takes place via various types of information, including body motion. Research on the perceptual-cognitive mechanisms underlying the processing of natural emotional body language can benefit greatly from datasets of natural emotional body expressions that facilitate stimulus manipulation and analysis. The existing databases have so far focused on a few emotion categories, which display predominantly prototypical, exaggerated emotion expressions. Moreover, many of these databases consist of video recordings, which limit the ability to manipulate and analyse the physical properties of these stimuli. We present a new database consisting of a large set (over 1400) of natural emotional body expressions typical of monologues. To achieve close-to-natural emotional body expressions, amateur actors narrated coherent stories while their body movements were recorded with motion capture technology. The resulting 3-dimensional motion data, recorded at a high frame rate (120 frames per second), provide fine-grained information about body movements and allow the manipulation of movement on a body-joint basis. For each expression, the database gives the positions and orientations in space of 23 body joints for every frame. We report the results of an analysis of physical motion properties and of an emotion categorisation study. The reactions of observers from the emotion categorisation study are included in the database. Moreover, we recorded the intended emotion expression for each motion sequence from the actor to allow for investigations regarding the link between intended and perceived emotions. The motion sequences along with the accompanying information are made available in a searchable MPI Emotional Body Expression Database. We hope that this database will enable researchers to study expression and perception of naturally occurring emotional body expressions in greater depth. no notspecified http://www.kyb.tuebingen.mpg.de/ published 27 The MPI Emotional Body Expressions Database for Narrative Scenarios 15017 15422 15017 LinkenaugerGSLRPBM2014 3 SA Linkenauger MN Geuss JK Stefanucci M Leyrer BH Richardson DR Proffitt HH Bülthoff BJ Mohler 2014-11-00 11 25 2086 2094 Psychological Science The hand is a reliable and ecologically useful perceptual ruler that can be used to scale the sizes of close, manipulatable objects in the world in a manner similar to the way in which eye height is used to scale the heights of objects on the ground plane. Certain objects are perceived proportionally to the size of the hand, and as a result, changes in the relationship between the sizes of objects in the world and the size of the hand are attributed to changes in object size rather than hand size. To illustrate this notion, we provide evidence from several experiments showing that people perceive their dominant hand as less magnified than other body parts or objects when these items are subjected to the same degree of magnification. These findings suggest that the hand is perceived as having a more constant size and, consequently, can serve as a reliable metric with which to measure objects of commensurate size.
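To make the structure of such motion-capture data concrete, the sketch below lays out one sequence as it is described in the Volkova et al. abstract above: 120 frames per second, with positions and orientations of 23 body joints per frame. The field names and array shapes are illustrative assumptions, not the MPI database's actual schema.

    # Hypothetical layout for one sequence of an emotional body-expression
    # database: per-frame joint positions (x, y, z) and orientation quaternions.
    import numpy as np

    n_frames, n_joints = 360, 23                       # e.g. a 3 s clip at 120 fps
    positions = np.zeros((n_frames, n_joints, 3))      # world coordinates
    orientations = np.zeros((n_frames, n_joints, 4))   # unit quaternions (w, x, y, z)
    orientations[..., 0] = 1.0                         # identity rotations

    sequence = {
        'actor_id': 'A01',                             # placeholder identifier
        'intended_emotion': 'joy',                     # recorded from the actor
        'perceived_emotion_votes': {'joy': 12, 'neutral': 3},  # observer reactions
        'fps': 120,
        'positions': positions,
        'orientations': orientations,
    }

Storing movement on a per-joint basis like this is what enables the manipulation and physical-property analyses the abstract mentions, e.g. computing a joint's speed by differencing its position track across frames.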
no notspecified http://www.kyb.tuebingen.mpg.de/ published 8 Evidence for Hand-Size Constancy: The Dominant Hand as a Natural Perceptual Metric 15017 15422 15017 SchecklmannGTLRPVHHF2014 3 M Schecklmann A Giani S Tupak B Langguth V Raab T Polak C Várallyay W Harnisch MJ Herrmann AJ Fallgatter 2014-11-00 894203 2014 1 8 Neural Plasticity Objective. Several neuroscience tools have shown the involvement of the auditory cortex in chronic tinnitus. In this proof-of-principle study we probed the capability of functional near-infrared spectroscopy (fNIRS) for the measurement of brain oxygenation in the auditory cortex as a function of chronic tinnitus and of intervention with transcranial magnetic stimulation. Methods. Twenty-three patients received continuous theta burst stimulation over the left primary auditory cortex in a randomized sham-controlled neuronavigated trial (verum = 12; placebo = 11). Before and after treatment, sound-evoked brain oxygenation in temporal areas was measured with fNIRS. Brain oxygenation was measured once in healthy controls. Results. Sound-evoked activity in right temporal areas was increased in the patients in contrast to healthy controls. Left-sided temporal activity under the stimulated area changed over the course of the trial; high baseline oxygenation was reduced and vice versa. Conclusions. By demonstrating that rTMS interacts with auditory-evoked brain activity, our results confirm earlier electrophysiological findings and indicate the sensitivity of fNIRS for detecting rTMS-induced changes in brain activity. Moreover, our findings of trait- and state-related oxygenation changes indicate the potential of fNIRS for the investigation of tinnitus pathophysiology and treatment response. no notspecified http://www.kyb.tuebingen.mpg.de/ published 7 Functional Near-Infrared Spectroscopy to Probe State- and Trait-Like Conditions in Chronic Tinnitus: A Proof-of-Principle Study 15017 18826 15017 15422 OlivariNBP2014_2 3 M Olivari F Nieuwenhuizen HH Bülthoff L Pollini 2014-11-00 6 37 1741 1753 Journal of Guidance, Control, and Dynamics Haptic aids have been widely used in manual control tasks to complement the visual information through the sense of touch. To analytically design a haptic aid, adequate knowledge is needed about how pilots adapt their visual response and the biomechanical properties of their arm (i.e., admittance) to a generic haptic aid. In this work, two different haptic aids, a direct haptic aid and an indirect haptic aid, are designed for a target tracking task, with the aim of investigating the pilot response to these aids. The direct haptic aid provides forces on the control device that suggest the right control action to the pilot, whereas the indirect haptic aid provides forces opposite in sign with respect to the direct haptic aid. The direct haptic aid and the indirect haptic aid were tested in an experimental setup with nonpilot participants and compared to a condition without haptic support. It was found that control performance improved with haptic aids. Participants significantly adapted both their admittance and visual response to fully exploit the haptic aids. They were more compliant with the direct haptic aid force, whereas they showed stiffer neuromuscular settings with the indirect haptic aid, as this approach required opposing the haptic forces.
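The sign relationship between the two classes of haptic aid compared in the Olivari et al. study above reduces to a one-line rule: the indirect aid applies the mirror image of the direct aid's force. The gain and error definition in this sketch are illustrative assumptions, not the controllers used in the experiments.

    # Direct haptic aid (DHA): force pushes the stick toward the deflection that
    # reduces the tracking error. Indirect haptic aid (IHA): the opposite force,
    # which the pilot must oppose, so the required action is felt as a
    # disturbance to resist. Gain k is an arbitrary illustrative value.
    def haptic_force(target_deflection, stick_deflection, k=1.5, mode='DHA'):
        f_dha = k * (target_deflection - stick_deflection)
        return f_dha if mode == 'DHA' else -f_dha

    # Stick left of the required deflection: DHA pulls right, IHA pulls left.
    print(haptic_force(0.4, 0.1, mode='DHA'))   #  0.45
    print(haptic_force(0.4, 0.1, mode='IHA'))   # -0.45

This opposite-sign force is why, in the reported results, participants stiffened their neuromuscular settings under IHA while relaxing into compliance under DHA.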
no notspecified http://www.kyb.tuebingen.mpg.de/ published 12 Pilot Adaptation to Different Classes of Haptic Aids in Tracking Tasks 15017 15422 ZhaoCWRCCH2014 3 M Zhao S-H Cheung AC-N Wong G Rhodes EKS Chan WWL Chan WG Hayward 2014-11-00 3-4 5 160 167 Cognitive Neuroscience We investigated how face-selective cortical areas process configural and componential face information and how the race of faces may influence these processes. Participants saw blurred (preserving configural information), scrambled (preserving componential information), and whole faces during fMRI scanning, and performed a post-scan face recognition task using blurred or scrambled faces. The fusiform face area (FFA) showed stronger activation to blurred than to scrambled faces, and equivalent responses to blurred and whole faces. The occipital face area (OFA) showed stronger activation to whole than to blurred faces, which elicited similar responses to scrambled faces. Therefore, the FFA may be more tuned to process configural than componential information, whereas the OFA similarly participates in perception of both. Differences in recognizing own- and other-race blurred faces were correlated with differences in FFA activation to those faces, suggesting that configural processing within the FFA may underlie the other-race effect in face recognition. no notspecified http://www.kyb.tuebingen.mpg.de/ published 7 Processing of configural and componential information in face-selective cortical areas 15017 15422 MeilingerFB2014 3 T Meilinger J Frankenstein HH Bülthoff 2014-11-00 1363 5 1 7 Frontiers in Psychology Route selection is governed by various strategies, which often allow navigators to minimize the required memory capacity. Previous research showed that navigators primarily remember information at route decision points and at route turns, rather than at intersections that require straight walking. However, when actually navigating the route or indicating directional decisions, navigators make fewer errors when they are required to walk straight. This tradeoff between location memory and route decision accuracy was interpreted as a “when in doubt follow your nose” strategy, which allows navigators to memorize only turns and walk straight by default, thus considerably reducing the number of intersections to memorize. These findings were based on newly learned routes. In the present study we show that such an asymmetry in route memory also prevails for planning routes within highly familiar environments. Participants planned route sequences between locations in their city of residency by pressing arrow keys on a keyboard. They tended to ignore straight-walking intersections, but they ignored turns much less often. However, for reported intersections participants were quicker at indicating straight walking than turning. Together with results described in the literature, these findings suggest that a “when in doubt follow your nose” strategy is applied also within highly familiar spaces and might originate from limited working memory capacity during planning a route. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/Frontiers-Psychol-2014-Meilinger.pdf published 6 When in doubt follow your nose: a wayfinding strategy 15017 15422 BrowatzkiTMBW2014 3 B Browatzki V Tikhanoff G Metta HH Bülthoff C Wallraven 2014-10-00 5 30 1260 1269 IEEE Transactions on Robotics For any robot, the ability to recognize and manipulate unknown objects is crucial to successfully work in natural environments.
Object recognition and categorization is a very challenging problem, as 3-D objects often give rise to ambiguous, 2-D views. Here, we present a perception-driven exploration and recognition scheme for in-hand object recognition implemented on the iCub humanoid robot. In this setup, the robot actively seeks out object views to optimize the exploration sequence. This is achieved by regarding the object recognition problem as a localization problem. We search for the most likely viewpoint position on the viewsphere of all objects. This problem can be solved efficiently using a particle filter that fuses visual cues with associated motor actions. Based on the state of the filter, we can predict the next best viewpoint after each recognition step by searching for the action that leads to the highest expected information gain. We conduct extensive evaluations of the proposed system in simulation as well as on the actual robot and show the benefit of perception-driven exploration over passive, vision-only processes at discriminating between highly similar objects. We demonstrate that objects are recognized faster and at the same time with a higher accuracy. no notspecified http://www.kyb.tuebingen.mpg.de/ published 9 Active In-Hand Object Recognition on a Humanoid Robot 15017 15422 CamposBB2014 3 JL Campos JS Butler HH Bülthoff 2014-10-00 10 232 3277 3289 Experimental Brain Research Recent research has provided evidence that visual and body-based cues (vestibular, proprioceptive and efference copy) are integrated using a weighted linear sum during walking and passive transport. However, little is known about the specific weighting of visual information when combined with proprioceptive inputs alone, in the absence of vestibular information about forward self-motion. Therefore, in this study, participants walked in place on a stationary treadmill while dynamic visual information was updated in real time via a head-mounted display. The task required participants to travel a predefined distance and subsequently match this distance by adjusting an egocentric, in-depth target using a game controller. Travelled distance information was provided either through visual cues alone, proprioceptive cues alone or both cues combined. In the combined cue condition, the relationship between the two cues was manipulated by either changing the visual gain across trials (0.7×, 1.0×, 1.4×; Exp. 1) or the proprioceptive gain across trials (0.7×, 1.0×, 1.4×; Exp. 2). Results demonstrated an overall higher weighting of proprioception over vision. These weights were scaled, however, as a function of which sensory input provided more stable information across trials. Specifically, when visual gain was constantly manipulated, proprioceptive weights were higher than when proprioceptive gain was constantly manipulated. These results therefore reveal interesting characteristics of cue-weighting within the context of unfolding spatio-temporal cue dynamics. no notspecified http://www.kyb.tuebingen.mpg.de/ published 12 Contributions of visual and proprioceptive information to travelled distance estimation during changing sensory congruencies 15017 15422 ChaeLJCNLPW2014 3 Y Chae I-S Lee W-M Jung D-S Chang V Napadow H Lee H-J Park C Wallraven 2014-10-00 10 9 1 10 PLoS ONE Acupuncture stimulation increases local blood flow around the site of stimulation and induces signal changes in brain regions related to the body matrix. 
The rubber hand illusion (RHI) is an experimental paradigm that manipulates important aspects of bodily self-awareness. The present study aimed to investigate how modifications of body ownership using the RHI affect local blood flow and cerebral responses during acupuncture needle stimulation. During the RHI, acupuncture needle stimulation was applied to the real left hand while measuring blood microcirculation with a laser Doppler imager (Experiment 1, N = 28) and concurrent brain signal changes using functional magnetic resonance imaging (fMRI; Experiment 2, N = 17). When the body ownership of participants was altered by the RHI, acupuncture stimulation resulted in a significantly lower increase in local blood flow (Experiment 1), and significantly less brain activation was detected in the right insula (Experiment 2). This study found changes in both local blood flow and brain responses during acupuncture needle stimulation following modification of body ownership. These findings suggest that physiological responses during acupuncture stimulation can be influenced by the modification of body ownership. no notspecified http://www.kyb.tuebingen.mpg.de/ published 9 Decreased Peripheral and Central Responses to Acupuncture Stimulation following Modification of Body Ownership 15017 15422 HeinrichdS2014_2 3 A Heinrich S de la Rosa BA Schneider 2014-10-00 4:1797 136 1 11 Journal of the Acoustical Society of America Thresholds for detecting a gap between two complex tones were determined for young listeners with normal hearing and old listeners with mild age-related hearing loss. The leading tonal marker was always a 20-ms, 250-Hz complex tone with energy at 250, 500, 750, and 1000 Hz. The lagging marker, also tonal, could differ from the leading marker with respect to fundamental frequency (f0), the presence versus absence of energy at f0, and the degree to which it overlapped spectrally with the leading marker. All stimuli were presented with steeper (1 ms) and less steep (4 ms) envelope rise and fall times. F0 differences, decreases in the degree of spectral overlap between the markers, and shallower envelope shape all contributed to increases in gap-detection thresholds. Age differences for gap detection of complex sounds were generally small and constant when gap-detection thresholds were measured on a log scale. When comparing the results for complex sounds to thresholds obtained for pure tones in a previous study by Heinrich and Schneider [(2006). J. Acoust. Soc. Am. 119, 2316–2326], thresholds increased in an orderly fashion from markers with identical (within-channel) pure tones to different (between-channel) pure tones to complex sounds. This pattern of results was true for listeners of both ages, although younger listeners had smaller thresholds overall. no notspecified http://www.kyb.tuebingen.mpg.de/ published 10 The role of stimulus complexity, spectral overlap, and pitch for gap-detection thresholds in young and old listeners 15017 15422 VenrooijvMAvB2014 3 J Venrooij MM van Paassen M Mulder DA Abbink FCT van der Helm HH Bülthoff 2014-09-00 9 44 1686 1698 IEEE Transactions on Cybernetics Biodynamic feedthrough (BDFT) is a complex phenomenon, which has been studied for several decades. However, there is little consensus on how to approach the BDFT problem in terms of definitions, nomenclature, and mathematical descriptions. In this paper, a framework for biodynamic feedthrough analysis is presented. The goal of this framework is two-fold.
First, it provides some common ground between the seemingly large range of different approaches existing in the BDFT literature. Second, the framework itself allows for gaining new insights into BDFT phenomena. It will be shown how relevant signals can be obtained from measurement, how different BDFT dynamics can be derived from them, and how these different dynamics are related. Using the framework, BDFT can be dissected into several dynamical relationships, each relevant in understanding BDFT phenomena in more detail. The presentation of the BDFT framework is divided into two parts. This paper, Part I, addresses the theoretical foundations of the framework. Part II, which is also published in this issue, addresses the validation of the framework. The work is presented in two separate papers to allow for a detailed discussion of both the framework's theoretical background and its validation. no notspecified http://www.kyb.tuebingen.mpg.de/ published 12 A Framework for Biodynamic Feedthrough Analysis Part I: Theoretical Foundations 15017 15422 VenrooijvMAMvB2014 3 J Venrooij MM van Paassen M Mulder DA Abbink M Mulder FCT van der Helm HH Bülthoff 2014-09-00 9 44 1699 1710 IEEE Transactions on Cybernetics Biodynamic feedthrough (BDFT) is a complex phenomenon that has been studied for several decades. However, there is little consensus on how to approach the BDFT problem in terms of definitions, nomenclature, and mathematical descriptions. In this paper, the framework for BDFT analysis, as presented in Part I of this dual publication, is validated and applied. The goal of this framework is twofold. First of all, it provides some common ground between the seemingly large range of different approaches existing in the BDFT literature. Secondly, the framework itself allows for gaining new insights into BDFT phenomena. Using recently obtained measurement data, parts of the framework that were not already addressed elsewhere are validated. As an example of a practical application of the framework, it will be demonstrated how the effects of control device dynamics on BDFT can be understood and accurately predicted. Other ways of employing the framework are illustrated by interpreting the results of three selected studies from the literature using the BDFT framework. The presentation of the BDFT framework is divided into two parts. This paper, Part II, addresses the validation and application of the framework. Part I, which is also published in this journal issue, addresses the theoretical foundations of the framework. The work is presented in two separate papers to allow for a detailed discussion of both the framework’s theoretical background and its validation. no notspecified http://www.kyb.tuebingen.mpg.de/ published 11 A Framework for Biodynamic Feedthrough Analysis Part II: Validation and Application 15017 15422 EsinsSWB2014 3 J Esins J Schultz C Wallraven I Bülthoff 2014-09-00 759 8 1 14 Frontiers in Human Neuroscience Congenital prosopagnosia, an innate impairment in recognizing faces, as well as the other-race effect, a disadvantage in recognizing faces of foreign races, both affect face recognition abilities. Are the same face processing mechanisms affected in both situations? To investigate this question, we tested three groups of 21 participants: German congenital prosopagnosics, South Korean participants, and German controls in three different tasks involving faces and objects.
First, we tested all participants on the Cambridge Face Memory Test, in which they had to recognize Caucasian target faces in a 3-alternative-forced-choice task. German controls performed better than Koreans, who performed better than prosopagnosics. In the second experiment, participants rated the similarity of Caucasian faces that differed parametrically in either features or second-order relations (configuration). Prosopagnosics were less sensitive to configuration changes than both other groups. In addition, while all groups were more sensitive to changes in features than in configuration, this difference was smaller in Koreans. In the third experiment, participants had to learn exemplars of artificial objects, natural objects, and faces and recognize them among distractors of the same category. Here, prosopagnosics performed worse than participants in the other two groups only when they were tested on face stimuli. In sum, Koreans and prosopagnosic participants differed from German controls in different ways in all tests. This suggests that German congenital prosopagnosics perceive Caucasian faces differently than do Korean participants. Importantly, our results suggest that different processing impairments underlie the other-race effect and congenital prosopagnosia. no notspecified http://www.kyb.tuebingen.mpg.de/ published 13 Do congenital prosopagnosia and the other-race effect affect the same face recognition mechanisms? 15017 15422 EsinsSBK2013 3 J Esins J Schultz I Bülthoff I Kennerknecht 2014-09-00 5 17 239 240 Nutritional Neuroscience A woman in her early 40s with congenital prosopagnosia and attention deficit hyperactivity disorder observed for the first time sudden and extensive improvement of her face recognition abilities, mental imagery, and sense of navigation after galactose intake. This effect of galactose on prosopagnosia has never been reported before. Even if this effect is restricted to a subform of congenital prosopagnosia, galactose might improve the condition of other prosopagnosics. Congenital prosopagnosia, the inability to recognize other people by their face, has an extensive negative impact on everyday life. It has a high prevalence of about 2.5%. Monosaccharides are known to have a positive impact on cognitive performance. Here, we report the case of a prosopagnosic woman for whom the daily intake of 5 g of galactose resulted in a remarkable improvement of her lifelong face blindness, along with an improved sense of orientation and more vivid mental imagery. All these improvements vanished after discontinuing galactose intake. The self-reported effects of galactose were wide-ranging and remarkably strong but could not be reproduced for 16 other prosopagnosics tested. Indications of heterogeneity within prosopagnosia have been reported; this could explain the difficulty of finding similar effects in other prosopagnosics. Detailed analyses of the effects of galactose in prosopagnosia might give more insight into the effects of galactose on human cognition in general. Galactose is cheap and easy to obtain; therefore, a systematic test of its positive effects on other cases of congenital prosopagnosia may be warranted. no notspecified http://www.kyb.tuebingen.mpg.de/ published 1 Galactose uncovers face recognition and mental images in congenital prosopagnosia: The first case report 15017 15422 ZhaoHB2014 3 M Zhao WG Hayward I Bülthoff 2014-08-00 9:6 14 1 13 Journal of Vision Memory of own-race faces is generally better than memory of other-race faces.
This other-race effect (ORE) in face memory has been attributed to differences in contact, holistic processing, and motivation to individuate faces. Since most studies demonstrate the ORE with participants learning and recognizing static, single-view faces, it remains unclear whether the ORE can be generalized to different face learning conditions. Using an old/new recognition task, we tested whether face format at encoding modulates the ORE. The results showed a significant ORE when participants learned static, single-view faces (Experiment 1). In contrast, the ORE disappeared when participants learned rigidly moving faces (Experiment 2). Moreover, learning faces displayed from four discrete views produced the same results as learning rigidly moving faces (Experiment 3). Contact with other-race faces was correlated with the magnitude of the ORE. Nonetheless, the absence of the ORE in Experiments 2 and 3 cannot be readily explained by either more frequent contact with other-race faces or stronger motivation to individuate them. These results demonstrate that the ORE is sensitive to face format at encoding, supporting the hypothesis that the relative involvement of holistic and featural processing at encoding mediates the ORE observed in face memory. no notspecified http://www.kyb.tuebingen.mpg.de/ published 12 Face format at encoding affects the other-race effect in face memory 15017 15422 LeeN2014_2 3 H Lee U Noppeney 2014-08-00 868 5 1 9 Frontiers in Psychology This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech, or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogs of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians practiced piano in the past 3 years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practice fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practice was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and, to a marginally significant degree, to natural speech. no notspecified http://www.kyb.tuebingen.mpg.de/ published 8 Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech, and music 15017 15422 15017 18826 PiryankovaWLSLBM2014 3 IV Piryankova HY Wong SA Linkenauger C Stinson MR Longo HH Bülthoff BJ Mohler 2014-08-00 8 9 1 13 PLoS ONE Our bodies are the most intimately familiar objects we encounter in our perceptual environment. Virtual reality provides a unique method to allow us to experience having a very different body from our own, thereby providing a valuable method to explore the plasticity of body representation. In this paper, we show that women can experience ownership over a whole virtual body that is considerably smaller or larger than their physical body.
In order to gain a better understanding of the mechanisms underlying body ownership, we use an embodiment questionnaire and introduce two new behavioral response measures: an affordance estimation task (indirect measure of body size) and a body size estimation task (direct measure of body size). Interestingly, after viewing the virtual body from a first-person perspective, both the affordance and the body size estimation tasks indicate a change in the perception of the size of the participant's experienced body. The change is biased by the size of the virtual body (overweight or underweight). Another novel aspect of our study is that we distinguish between the physical, experienced and virtual bodies, by asking participants to provide affordance and body size estimations for each of the three bodies separately. This methodological point is important for virtual reality experiments investigating body ownership of a virtual body, because it offers a better understanding of which cues (e.g. visual, proprioceptive, memory, or a combination thereof) influence body perception, and whether the impact of these cues can vary between different setups. no notspecified http://www.kyb.tuebingen.mpg.de/ published 12 Owning an Overweight or Underweight Body: Distinguishing the Physical, Experienced and Virtual Body 15017 15422 15017 WallravenBWvG2013 3 C Wallraven HH Bülthoff S Waterkamp L van Dam N Gaissert 2014-08-00 4 21 976 985 Psychonomic Bulletin & Review Categorization of seen objects is often determined by the shapes of objects. However, shape is not exclusive to the visual modality: The haptic system is also expert at identifying shapes. Hence, an important question for understanding shape processing is whether humans store separate modality-dependent shape representations, or whether information is integrated into one multisensory representation. To answer this question, we created a metric space of computer-generated novel objects varying in shape. These objects were then printed using a 3-D printer to generate tangible stimuli. In a categorization experiment, participants first explored the objects visually and haptically. We found that both modalities led to highly similar categorization behavior. Next, participants were trained either visually or haptically on shape categories within the metric space. As expected, visual training increased visual performance, and haptic training increased haptic performance. Importantly, however, we found that visual training also improved haptic performance, and vice versa. Two additional experiments showed that the location of the categorical boundary in the metric space also transferred across modalities, as did heightened discriminability of objects adjacent to the boundary. This observed transfer of metric category knowledge across modalities indicates that visual and haptic forms of shape information are integrated into a shared multisensory representation. no notspecified http://www.kyb.tuebingen.mpg.de/ published 9 The eyes grasp, the hands see: Metric category knowledge transfers between vision and touch 15017 15422 15017 18824 VenrooijAMvvB2013 3 J Venrooij DA Abbink M Mulder MM van Paassen FCT van der Helm HH Bülthoff 2014-07-00 7 44 1141 1154 IEEE Transactions on Cybernetics A biodynamic feedthrough (BDFT) model is proposed that describes how vehicle accelerations feed through the human body, causing involuntary limb motions and thus involuntary control inputs.
BDFT dynamics strongly depend on limb dynamics, which can vary between persons (between-subject variability), but also within one person over time, e.g., due to the control task performed (within-subject variability). The proposed BDFT model is based on physical neuromuscular principles and is derived from an established admittance model---describing limb dynamics---which was extended to include control device dynamics and account for acceleration effects. The resulting BDFT model serves primarily the purpose of increasing the understanding of the relationship between neuromuscular admittance and biodynamic feedthrough. An added advantage of the proposed model is that its parameters can be estimated using a two-stage approach, making the parameter estimation more robust, as the procedure is largely based on the well-documented procedure required for the admittance model. To estimate the parameter values of the BDFT model, data are used from an experiment in which both neuromuscular admittance and biodynamic feedthrough were measured. The quality of the BDFT model is evaluated in the frequency and time domains. Results provide strong evidence that the BDFT model and the proposed method of parameter estimation put forward in this paper allow for accurate BDFT modeling across different subjects (accounting for between-subject variability) and across control tasks (accounting for within-subject variability). no notspecified http://www.kyb.tuebingen.mpg.de/ published 13 A Biodynamic Feedthrough Model Based on Neuromuscular Principles 15017 15422 BrielmannBA2014 3 AA Brielmann I Bülthoff R Armann 2014-07-00 100 105–112 Vision Research Race categorization of faces is a fast and automatic process and is known to affect further face processing profoundly and at the earliest stages. Whether processing of own- and other-race faces might rely on different facial cues, as indicated by diverging viewing behavior, is much under debate. We therefore aimed to investigate two open questions in our study: 1) Do observers consider information from distinct facial features informative for race categorization, or do they prefer to gain global face information by fixating the geometrical center of the face? 2) Does the fixation pattern, or, if facial features are considered relevant, do these features differ between own- and other-race faces? We used eye tracking to test where European observers look when viewing Asian and Caucasian faces in a race categorization task. Importantly, in order to disentangle centrally located fixations from those towards individual facial features, we presented faces in frontal, half-profile and profile views. We found that observers showed no general bias towards looking at the geometrical center of faces, but rather directed their first fixations towards distinct facial features, regardless of face race. However, participants looked at the eyes more often in Caucasian faces than in Asian faces, and there were significantly more fixations to the nose for Asian compared to Caucasian faces. Thus, observers rely on information from distinct facial features rather than facial information gained by centrally fixating the face. To what extent specific features are looked at is determined by the face’s race.
no notspecified http://www.kyb.tuebingen.mpg.de/ published -105 Looking at faces from different angles: Europeans fixate different features in Asian and Caucasian faces 15017 15422 VenrooijMAvMvB2013 3 J Venrooij M Mulder DA Abbink MM van Paassen M Mulder FCT van der Helm HH Bülthoff 2014-07-00 7 44 1025 1038 IEEE Transactions on Cybernetics Biodynamic feedthrough (BDFT) occurs when vehicle accelerations feed through the human body and cause involuntary control inputs. This paper proposes a model to quantitatively predict this effect in rotorcraft. This mathematical BDFT model aims to fill the gap between the currently existing black-box BDFT models and physical BDFT models. The model structure was systematically constructed using asymptote modeling, a procedure described in detail in this paper. The resulting model can easily be implemented in many typical rotorcraft BDFT studies, using the provided model parameters. The model's performance was validated in both the frequency and time domain. Furthermore, it was compared with several recent BDFT models. The results show that the proposed mathematical model performs better than typical black-box models and is easier to parameterize and implement than a recent physical model. no notspecified http://www.kyb.tuebingen.mpg.de/ published 13 Mathematical Biodynamic Feedthrough Model Applied to Rotorcraft 15017 15422 DobsBBVCS2014 3 K Dobs I Bülthoff M Breidt QC Vuong C Curio J Schultz 2014-07-00 100 78–87 Vision Research A great deal of perceptual and social information is conveyed by facial motion. Here, we investigated observers’ sensitivity to the complex spatio-temporal information in facial expressions and what cues they use to judge the similarity of these movements. We motion-captured four facial expressions and decomposed them into time courses of semantically meaningful local facial actions (e.g., eyebrow raise). We then generated approximations of the time courses that differed in the amount of information about the natural facial motion they contained, and used these and the original time courses to animate an avatar head. Observers chose which of two approximation-based animations was more similar to the animation based on the original time course. We found that observers preferred animations containing more information about the natural facial motion dynamics. To explain observers’ similarity judgments, we developed and used several measures of objective stimulus similarity. The time course of facial actions (e.g., onset and peak of eyebrow raise) explained observers’ behavioral choices better than image-based measures (e.g., optic flow). Our results thus revealed observers’ sensitivity to changes of natural facial dynamics. Importantly, our method allows a quantitative explanation of the perceived similarity of dynamic facial expressions, which suggests that sparse but meaningful spatio-temporal cues are used to process facial motion. no notspecified http://www.kyb.tuebingen.mpg.de/ published -78 Quantifying human sensitivity to spatio-temporal information in dynamic faces 15017 15422 VolkovaMDTB2014_2 3 EP Volkova BJ Mohler TJ Dodds J Tesch HH Bülthoff 2014-06-00 623 5 1 11 Frontiers in Psychology Humans can recognize emotions expressed through body motion with high accuracy even when the stimuli are impoverished. However, most of the research on body motion has relied on exaggerated displays of emotions.
In this paper we present two experiments in which we investigated whether emotional body expressions could be recognized when they were recorded during natural narration. Our actors were free to use their entire body, face, and voice to express emotions, but our resulting visual stimuli used only the upper body motion trajectories in the form of animated stick figures. Observers were asked to perform an emotion recognition task on short motion sequences using a large and balanced set of emotions (amusement, joy, pride, relief, surprise, anger, disgust, fear, sadness, shame, and neutral). Even with only upper body motion available, our results show recognition accuracy significantly above chance level and high consistency rates among observers. In our first experiment, which used a more classic emotion-induction setup, all emotions were well recognized. In the second study, which employed narrations, four basic emotion categories (joy, anger, fear, and sadness), three non-basic emotion categories (amusement, pride, and shame) and the “neutral” category were recognized above chance. Interestingly, especially in the second experiment, observers showed a bias toward anger when recognizing the motion sequences for emotions. We discovered that similarities between motion sequences across the emotions, along such properties as mean motion speed, number of peaks in the motion trajectory, and mean motion span, can explain a large proportion of the variation in observers' responses. Overall, our results show that upper body motion is informative for emotion recognition in narrative scenarios. no notspecified http://www.kyb.tuebingen.mpg.de/ published 10 Emotion categorization of body expressions in narrative scenarios 15017 15422 15017 DavidSMSSMSVE2013 3 N David J Schultz E Milne O Schunke D Schöttle A Münchau M Siegel K Vogeley AK Engel 2014-06-00 6 44 1433 1446 Journal of Autism and Developmental Disorders Individuals with an autism spectrum disorder (ASD) show hallmark deficits in social perception. These difficulties might also reflect fundamental deficits in integrating visual signals. We contrasted predictions of a social perception and a spatial–temporal integration deficit account. Participants with ASD and matched controls performed two tasks: the first required spatiotemporal integration of global motion signals without social meaning, the second required processing of socially relevant local motion. The ASD group differed from controls only in social motion evaluation. In addition, gray matter volume in the temporal–parietal junction correlated positively with accuracy in social motion perception in the ASD group. Our findings suggest that social–perceptual difficulties in ASD cannot be reduced to deficits in spatial–temporal integration. no notspecified http://www.kyb.tuebingen.mpg.de/ published 13 Right Temporoparietal Gray Matter Predicts Accuracy of Social Perception in the Autism Spectrum 15017 15422 CaitiCGGM2014 3 A Caiti V Calabrò S Geluardi S Grammatico C Munafò 2014-05-00 2 228 136 145 Proceedings of the Institution of Mechanical Engineers Part M: Journal of Engineering for the Maritime Environment The dynamic, control-oriented model of an underwater glider with independently controllable wings is presented. The vehicle's onboard actuators are a ballast tank and two hydrodynamic wings. A control strategy is proposed to improve the vehicle's maneuverability.
In particular, a switching control law, together with a backstepping feedback scheme, is designed to limit the energy-inefficient actions of the ballast tank and hence to enforce efficient maneuvers. The case study considered here is an underwater vehicle with hydrodynamic wings behind its main hull. This unusual structure is motivated by the recently introduced concept of the underwater wave glider, which is a vehicle capable of both surface and underwater navigation. The proposed control strategy is validated via numerical simulations, in which the simulated vehicle has to perform three-dimensional path-following maneuvers. no notspecified http://www.kyb.tuebingen.mpg.de/ published 9 Switching control of an underwater glider with independently controllable wings 15017 15422 BrowatzkiBC2014 3 B Browatzki HH Bülthoff LL Chuang 2014-04-00 200 8 1 12 Frontiers in Human Neuroscience Video-based gaze-tracking systems are typically restricted in terms of their effective tracking space. This constraint limits the use of eyetrackers in studying mobile human behavior. Here, we compare two possible approaches for estimating the gaze of participants who are free to walk in a large space whilst looking at different regions of a large display. Geometrically, we linearly combined eye-in-head rotations and head-in-world coordinates to derive a gaze vector and its intersection with a planar display, relying on a head-mounted eyetracker and a body-motion tracker. Alternatively, we employed Gaussian process regression to estimate the gaze intersection directly from the input data itself. Our evaluation of both methods indicates that a regression approach can deliver results comparable to a geometric approach. The regression approach is favored, given that it has the potential for further optimization, provides confidence bounds for its gaze estimates, and offers greater flexibility in its implementation. Open-source software for the methods reported here is also provided for user implementation. no notspecified http://www.kyb.tuebingen.mpg.de/ published 11 A comparison of geometric- and regression-based mobile gaze-tracking 15017 15422 vonLassbergBMB2014 3 C von Lassberg KA Beykirch BJ Mohler HH Bülthoff 2014-04-00 4 9 1 15 PLoS ONE Using state-of-the-art technology, interactions of eye, head and intersegmental body movements were analyzed for the first time during multiple twisting somersaults of high-level gymnasts. With this aim, we used a unique combination of a 16-channel infrared kinemetric system; a three-dimensional video kinemetric system; wireless electromyography; and a specialized wireless sport-video-oculography system, which was able to capture and calculate precise oculomotor data under conditions of rapid multiaxial acceleration. All data were synchronized and integrated in a multimodal software tool for three-dimensional analysis. During specific phases of the recorded movements, a previously unknown eye-head-body interaction was observed. The phenomenon was marked by a prolonged and complete suppression of gaze-stabilizing eye movements, in favor of a tight coupling with the head, spine and joint movements of the gymnasts. Potential reasons for these observations are discussed with regard to earlier findings and integrated within a functional model.
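The geometric approach in the mobile gaze-tracking abstract above reduces to a ray-plane intersection: rotate the eye-in-head gaze direction into world coordinates using the tracked head pose, then intersect the resulting ray with the display plane. The sketch below is a minimal illustration under assumed conventions (a rotation matrix for head orientation, the display given by a point and a normal); the function name and example numbers are assumptions, not taken from the released software.

```python
import numpy as np

# Intersect a world-space gaze ray with a planar display. Coordinate frames
# and names are illustrative assumptions.

def gaze_on_plane(head_pos, head_rot, eye_dir_head, plane_point, plane_normal):
    """Return the 3-D intersection point of the gaze ray with the plane."""
    gaze_dir = head_rot @ eye_dir_head          # eye-in-head -> world direction
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    denom = gaze_dir @ plane_normal
    if abs(denom) < 1e-9:                       # gaze ray parallel to the display
        return None
    t = ((plane_point - head_pos) @ plane_normal) / denom
    return None if t < 0 else head_pos + t * gaze_dir

# Example: observer 2 m in front of a screen lying in the x-y plane at z = 0,
# looking slightly right and down.
head_pos = np.array([0.0, 1.7, 2.0])            # metres, world frame
head_rot = np.eye(3)                            # no head rotation in this example
eye_dir = np.array([0.1, -0.1, -1.0])           # eye-in-head gaze direction
print(gaze_on_plane(head_pos, head_rot, eye_dir, np.zeros(3), np.array([0.0, 0.0, 1.0])))
```

The regression alternative mentioned in the abstract would instead learn a direct mapping from eye and head measurements to on-screen coordinates, for example with Gaussian process regression, which also yields confidence bounds on the estimates.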
no notspecified http://www.kyb.tuebingen.mpg.de/ published 14 Intersegmental Eye-Head-Body Interactions during Complex Whole Body Movements 15017 15422 15017 delaRosaB2013 3 S de la Rosa HH Bülthoff 2014-04-00 2 37 197 198 Behavioral and Brain Sciences Cook et al. suggest that motor-visual neurons originate from associative learning. This suggestion has interesting implications for the processing of socially relevant visual information in social interactions. Here, we discuss two aspects of the associative learning account that seem to have particular relevance for visual recognition of social information in social interactions – namely, context-specific and contingency-based learning. no notspecified http://www.kyb.tuebingen.mpg.de/ published 1 Motor-visual neurons and action recognition in social interactions 15017 15422 PariseKE2014 3 CV Parise K Knorre MO Ernst 2014-04-00 16 111 6104–6108 Proceedings of the National Academy of Sciences of the United States of America Human perception, cognition, and action are laced with seemingly arbitrary mappings. In particular, sound has a strong spatial connotation: Sounds are high and low, melodies rise and fall, and pitch systematically biases perceived sound elevation. The origins of such mappings are unknown. Are they the result of physiological constraints, do they reflect natural environmental statistics, or are they truly arbitrary? We recorded natural sounds from the environment, analyzed the elevation-dependent filtering of the outer ear, and measured frequency-dependent biases in human sound localization. We find that auditory scene statistics reveal a clear mapping between frequency and elevation. Perhaps more interestingly, this natural statistical mapping is tightly mirrored in both ear-filtering properties and perceived sound location. This suggests that both sound localization behavior and ear anatomy are fine-tuned to the statistics of natural auditory scenes, likely providing the basis for the spatial connotation of human hearing. no notspecified http://www.kyb.tuebingen.mpg.de/ published -6104 Natural auditory scene statistics shapes human spatial hearing 15017 18824 15017 15422 NestiBMBB2013 3 A Nesti KA Beykirch PR MacNeilage M Barnett-Cowan HH Bülthoff 2014-04-00 4 9 1 8 PLoS ONE Motion simulators are widely employed in basic and applied research to study the neural mechanisms of perception and action during inertial stimulation. In these studies, uncontrolled simulator-introduced noise inevitably leads to a disparity between the reproduced motion and the trajectories meticulously designed by the experimenter, possibly resulting in undesired motion cues to the investigated system. Understanding actual simulator responses to different motion commands is therefore a crucial yet often underestimated step towards the interpretation of experimental results. In this work, we developed analysis methods based on signal processing techniques to quantify the noise in the actual motion, and its deterministic and stochastic components. Our methods allow comparisons between commanded and actual motion as well as between different actual motion profiles. A specific practical example from one of our studies is used to illustrate the methodologies and their relevance, but this does not detract from their general applicability. Analyses of the simulator’s inertial recordings show direction-dependent noise and nonlinearity related to the command amplitude.
The Signal-to-Noise Ratio is one order of magnitude higher for the larger motion amplitudes we tested, compared to the smaller motion amplitudes. Simulator-introduced noise is found to be primarily deterministic in nature, particularly for the stronger motion intensities. The effect of simulator noise on the quantification of animal/human motion sensitivity is discussed. We conclude that accurate recording and characterization of executed simulator motion are a crucial prerequisite for the investigation of uncertainty in self-motion perception. no notspecified http://www.kyb.tuebingen.mpg.de/ published 7 The importance of stimulus noise analysis for self-motion studies 15017 15422 MayerDE2013 3 KM Mayer M Di Luca MO Ernst 2014-03-00 147 2 9 Acta Psychologica How humans perform duration judgments with multisensory stimuli is an ongoing debate. Here, we investigated how sub-second duration judgments are achieved by asking participants to compare the duration of a continuous sound to the duration of an empty interval in which onset and offset were marked by signals of different modalities, using all combinations of visual, auditory and tactile stimuli. The pattern of perceived durations across five stimulus durations (ranging from 100 ms to 900 ms) follows the Vierordt Law. Furthermore, intervals with a sound as onset (audio-visual, audio-tactile) are perceived as longer than intervals with a sound as offset. No modality ordering effect is found for visual-tactile intervals. To infer whether a single modality-independent or multiple modality-dependent time-keeping mechanisms exist, we tested whether perceived duration follows a summative or a multiplicative distortion pattern by fitting a model to all modality combinations and durations. The results confirm that perceived duration depends on sensory latency (summative distortion). In contrast, we did not find evidence for multiplicative distortions. The results of the model and the behavioural data support the concept of a single time-keeping mechanism that allows for judgments of durations marked by multisensory stimuli. no notspecified http://www.kyb.tuebingen.mpg.de/ published 7 Duration perception in crossmodally-defined intervals 15017 18824 15017 15422 MeilingerRB2013 3 T Meilinger BE Riecke HH Bülthoff 2014-03-00 3 67 542 569 Quarterly Journal of Experimental Psychology Two experiments examined how locations in environmental spaces, which cannot be overseen from one location, are represented in memory: by global reference frames, multiple local reference frames, or orientation-free representations. After learning an immersive virtual environment by repeatedly walking a closed multisegment route, participants pointed to seven previously learned targets from different locations. Contrary to many conceptions of survey knowledge, local reference frames played an important role: Participants performed better when their body or pointing targets were aligned with the local reference frame (corridor). Moreover, most participants turned their head to align it with local reference frames. However, indications for global reference frames were also found: Participants performed better when their body or current corridor was parallel/orthogonal to a global reference frame instead of oblique. Participants showing this pattern performed comparatively better. We conclude that survey tasks can be solved based on interconnected local reference frames. Participants who pointed more accurately or quickly additionally used global reference frames.
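The noise decomposition described in the simulator-noise abstract above can be sketched in a few lines: with repeated executions of the same commanded profile, averaging across repetitions estimates the deterministic (repeatable) response, and the residuals estimate the stochastic component. The array layout, the dB-scaled SNR definition, and all names below are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

# Split simulator-introduced noise into deterministic and stochastic parts
# from repeated recordings of one commanded trajectory. Assumed layout:
# recordings has shape (n_repetitions, n_samples), commanded (n_samples,).

def decompose_noise(recordings, commanded):
    deterministic = recordings.mean(axis=0)        # repeatable response
    stochastic = recordings - deterministic        # varies trial by trial
    det_noise = deterministic - commanded          # repeatable error component
    total_noise = recordings - commanded           # all simulator-introduced noise
    snr_db = 10 * np.log10((commanded**2).mean() / (total_noise**2).mean())
    return det_noise, stochastic, snr_db

# Synthetic example: a 1 Hz commanded sinusoid, a repeatable distortion, and
# random measurement noise on each of 20 repetitions.
t = np.linspace(0, 2, 400)
commanded = np.sin(2 * np.pi * t)
rng = np.random.default_rng(0)
recordings = commanded + 0.05 * np.sin(4 * np.pi * t) \
    + 0.02 * rng.standard_normal((20, t.size))
det_noise, stochastic, snr_db = decompose_noise(recordings, commanded)
print(f"SNR = {snr_db:.1f} dB")
```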
no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/QJEP-2014.pdf published 27 Local and global reference frames for environmental spaces 15017 15422 SennaMBP2014 3 I Senna A Maravita N Bolognini CV Parise 2014-03-00 3 9 1 6 PLoS ONE Our body is made of flesh and bones. We know it, and in our daily lives all the senses constantly provide converging information about this simple, factual truth. But is this always the case? Here we report a surprising bodily illusion demonstrating that humans rapidly update their assumptions about the material qualities of their body, based on their recent multisensory perceptual experience. To induce a misperception of the material properties of the hand, we repeatedly and gently hit participants' hand with a small hammer, while progressively replacing the natural sound of the hammer against the skin with the sound of a hammer hitting a piece of marble. After five minutes, the hand started feeling stiffer, heavier, harder, less sensitive, unnatural, and showed an enhanced Galvanic skin response (GSR) to threatening stimuli. Notably, such a change in skin conductivity positively correlated with changes in perceived hand stiffness. Conversely, when hammer hits and impact sounds were temporally uncorrelated, participants did not spontaneously report any changes in the perceived properties of the hand, nor did they show any modulation in GSR. In two further experiments, we ruled out mere audio-tactile synchrony as the causal factor triggering the illusion, further demonstrating the key role of material information conveyed by impact sounds in modulating the perceived material properties of the hand. This novel bodily illusion, the ‘Marble-Hand Illusion’, demonstrates that the perceived material of our body, surely the most stable attribute of our bodily self, can be quickly updated through multisensory integration. no notspecified http://www.kyb.tuebingen.mpg.de/ published 5 The Marble-Hand Illusion 15017 15422 15017 18824 ThorntonBHRL2014 3 IM Thornton HH Bülthoff TS Horowitz A Rynning S-W Lee 2014-02-00 2 9 1 19 PLoS ONE We introduce a new task for exploring the relationship between action and attention. In this interactive multiple object tracking (iMOT) task, implemented as an iPad app, participants were presented with a display of multiple, visually identical disks that moved independently. The task was to prevent any collisions during a fixed duration. Participants could perturb object trajectories via the touchscreen. In Experiment 1, we used a staircase procedure to measure the ability to control moving objects. Object speed was set to 1°/s. On average, participants could control 8.4 items without collision. Individual control strategies were quite variable, but did not predict overall performance. In Experiment 2, we compared iMOT with standard MOT performance using identical displays. Object speed was set to 2°/s. Participants could reliably control more objects (M = 6.6) than they could track (M = 4.0), but performance in the two tasks was positively correlated. In Experiment 3, we used a dual-task design. Compared to single-task baseline, iMOT performance decreased and MOT performance increased when the two tasks had to be completed together.
Overall, these findings suggest: 1) There is a clear limit to the number of items that can be simultaneously controlled, for a given speed and display density; 2) participants can control more items than they can track; 3) task-relevant action appears not to disrupt MOT performance in the current experimental context. no notspecified http://www.kyb.tuebingen.mpg.de/ published 18 Interactive Multiple Object Tracking (iMOT) 15017 15422 ReichenbachTPBB2013 3 A Reichenbach A Thielscher A Peer HH Bülthoff JP Bresciani 2014-01-00 84 615–625 NeuroImage Seemingly effortlessly, we adjust our movements to continuously changing environments. After initiation of a goal-directed movement, the motor command is under constant control of sensory feedback loops. The main sensory signals contributing to movement control are vision and proprioception. Recent neuroimaging studies have focused mainly on identifying the parts of the posterior parietal cortex (PPC) that contribute to visually guided movements. We used event-related TMS and force perturbations of the reaching hand to test whether the same sub-regions of the left PPC contribute to the processing of proprioceptive-only and of multi-sensory information about hand position when reaching for a visual target. TMS over two distinct stimulation sites elicited differential effects: TMS applied over the posterior part of the medial intraparietal sulcus (mIPS) compromised reaching accuracy when proprioception was the only sensory information available for correcting the reaching error. When visual feedback of the hand was available, TMS over the anterior intraparietal sulcus (aIPS) prolonged reaching time. Our results show for the first time the causal involvement of the posterior mIPS in processing proprioceptive feedback for online reaching control, and demonstrate that distinct cortical areas process proprioceptive-only and multi-sensory information for fast feedback corrections. no notspecified http://www.kyb.tuebingen.mpg.de/ published -615 A key region in the human parietal cortex for processing proprioceptive hand feedback during reaching movements 15017 15422 15017 18821 SonJLCB2013 3 HI Son H Jung DY Lee JH Cho HH Bülthoff 2014-01-00 1 32 1 17 Robotica In this paper, human viscosity perception in haptic teleoperation systems is thoroughly analyzed. An accurate perception of viscoelastic environmental properties such as viscosity is a critical ability in several contexts, such as telesurgery, telerehabilitation, telemedicine, and soft-tissue interaction. We study and compare the ability to perceive viscosity from the standpoint of detection and discrimination using several relevant control methods for the teleoperator. The perception-based method, which was proposed by the authors to enhance the operator's kinesthetic perception, is compared with the conventional transparency-based control method for the teleoperation system. The fidelity-based method, which is a primary method among perception-centered control schemes in teleoperation, is also studied. We also examine the necessity and impact of the remote-site force information for each of the methods. The comparison is based on a series of psychophysical experiments measuring absolute threshold and just noticeable difference for all conditions. The results clearly show that the perception-based method enhances both detection and discrimination abilities compared with other control methods.
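Absolute thresholds and just noticeable differences (JNDs) of the kind reported in the viscosity-perception abstract above are commonly read off a fitted psychometric function. The sketch below fits a cumulative Gaussian to synthetic response proportions; the data, the 75%-point JND convention, and the names are illustrative assumptions, not the study's procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Fit a cumulative-Gaussian psychometric function to the proportion of
# "detected" (or "stronger") responses at each stimulus level, then read off
# the threshold (50% point) and the JND (distance from the 50% to 75% point).

def psychometric(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

levels = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])        # stimulus intensities
p_resp = np.array([0.05, 0.20, 0.45, 0.70, 0.90, 0.97])  # synthetic proportions

(mu, sigma), _ = curve_fit(psychometric, levels, p_resp, p0=[1.5, 0.5])
jnd = sigma * norm.ppf(0.75)          # ~0.674 * sigma
weber_fraction = jnd / mu             # JND relative to the reference intensity
print(f"threshold = {mu:.2f}, JND = {jnd:.2f}, Weber fraction = {weber_fraction:.2f}")
```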
The results further show that the fidelity-based method confers a better discrimination ability than the transparency-based method, although this is not true with respect to detection ability. In addition, we show that force information improves viscosity detection for all control methods, as predicted from previous theoretical analysis, but improves the discrimination threshold only for the perception-based method. no notspecified http://www.kyb.tuebingen.mpg.de/ published 16 A psychophysical evaluation of haptic controllers: viscosity perception of soft environments 15017 15422 NestiBMB2013 3 A Nesti M Barnett-Cowan PR MacNeilage HH Bülthoff 2014-01-00 1 232 303 314 Experimental Brain Research Perceiving vertical self-motion is crucial for maintaining balance as well as for controlling an aircraft. Whereas heave absolute thresholds have been exhaustively studied, little work has been done in investigating how vertical sensitivity depends on motion intensity (i.e., differential thresholds). Here we measure human sensitivity for 1-Hz sinusoidal accelerations for 10 participants in darkness. Absolute and differential thresholds are measured for upward and downward translations independently at 5 different peak amplitudes ranging from 0 to 2 m/s². Overall, vertical differential thresholds are higher than horizontal differential thresholds found in the literature. Psychometric functions are fit in linear and logarithmic space, with goodness of fit being similar in both cases. Differential thresholds are higher for upward as compared to downward motion and increase with stimulus intensity following a trend best described by two power laws. The power laws’ exponents of 0.60 and 0.42 for upward and downward motion, respectively, deviate from Weber’s Law in that thresholds increase less than expected at high stimulus intensity. We speculate that increased sensitivity at high accelerations and greater sensitivity to downward than upward self-motion may reflect adaptations to avoid falling. no notspecified http://www.kyb.tuebingen.mpg.de/ published 11 Human sensitivity to vertical self-motion 15017 15422 delaRosaSGBC2013_2 3 S de la Rosa S Streuber M Giese HH Bülthoff C Curio 2014-01-00 1 9 1 10 PLoS ONE The social context in which an action is embedded provides important information for the interpretation of an action. Is this social context integrated during the visual recognition of an action? We used a behavioural visual adaptation paradigm to address this question and measured participants’ perceptual bias of a test action after they were adapted to one of two adaptors (adaptation after-effect). The action adaptation after-effect was measured for the same set of adaptors in two different social contexts. Our results indicate that the size of the adaptation effect varied with social context (social context modulation) although the physical appearance of the adaptors remained unchanged. Three additional experiments provided evidence that the observed social context modulation of the adaptation effect is owed to the adaptation of visual action recognition processes. We found that adaptation is critical for the social context modulation (experiment 2). Moreover, the effect is not mediated by the emotional content of the action alone (experiment 3), and visual information about the action seems to be critical for the emergence of action adaptation effects (experiment 4).
Taken together, these results suggest that processes underlying visual action recognition are sensitive to the social context of an action. no notspecified http://www.kyb.tuebingen.mpg.de/ published 9 Putting Actions in Context: Visual Action Adaptation Aftereffects Are Modulated by Social Contexts 15017 15422 GutekunstGSKM2014 7 M Gutekunst M Geuss G Rauhoeft JK Stefanucci U Kloos B Mohler Bremen, Germany2014-12-09 9 12 International Conference on Artificial Reality and Telexistence, 19th Eurographics Symposium on Virtual Environments (ICAT-EGVE 2014) This paper compares the influence that a video self-avatar and the lack of any visual body representation have on height estimation when standing at a virtual visual cliff. A height estimation experiment was conducted using a custom augmented-reality Oculus Rift hardware and software prototype, also described in this paper. The results are consistent with previous research demonstrating that the presence of a visual body influences height estimates, just as it has been shown to influence distance estimates and affordance estimates. no notspecified http://www.kyb.tuebingen.mpg.de/ published 3 A Video Self-avatar Influences the Perception of Heights in an Augmented Reality Oculus Rift 15017 15422 15017 VenrooijMAvMvB2014 7 J Venrooij M Mulder DA Abbink MM van Paassen M Mulder FCT van der Helm HH Bülthoff San Diego, CA, USA2014-10-00 1946 1951 IEEE International Conference on Systems, Man and Cybernetics (SMC 2014) Biodynamic feedthrough (BDFT) is the feedthrough of vehicle accelerations through the human body, leading to involuntary control device inputs. BDFT is a relevant problem as it reduces control performance in a large range of vehicles under various circumstances. This paper proposes an approach to mitigate BDFT. What differentiates this method from other mitigation approaches is that it accounts for adaptations in the neuromuscular dynamics of the human body. It is known that BDFT is strongly dependent on these dynamics. The approach was tested, as proof-of-concept, in an experiment in a motion simulator where participants were asked to fly a simulated vehicle through a virtual tunnel. Control performance was compared with and without the motion disturbance active and with and without the cancellation active. Results showed that the cancellation approach was successful. The detrimental effects of BDFT, such as a decrease in control performance and an increase in control effort, were largely removed. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/IEEE-SMC-2014-Venrooij-Slides.pdf published 5 Admittance-adaptive model-based cancellation of biodynamic feedthrough 15017 15422 OlivariNBP2014_3 7 M Olivari FM Nieuwenhuizen HH Bülthoff L Pollini San Diego, CA, USA2014-10-00 3573 3578 IEEE International Conference on Systems, Man and Cybernetics (SMC 2014) A human-centered design of haptic aids aims at tuning the force feedback based on the effect it has on human behavior. For this goal, a better understanding of the influence of haptic aids on the pilot neuromuscular response becomes crucial. In realistic scenarios, the neuromuscular response can continuously vary depending on many factors, such as environmental factors or pilot fatigue. This paper presents a method that estimates time-varying neuromuscular dynamics online during force-related tasks.
This method is based on a Recursive Least Squares (RLS) algorithm and assumes that the neuromuscular response can be approximated by a Finite Impulse Response filter. The reliability and the robustness of the method were investigated by performing a set of Monte-Carlo simulations with increasing levels of remnant noise. Even with high levels of remnant noise, the RLS algorithm provided accurate estimates when the neuromuscular dynamics were constant or changed slowly. With instantaneous changes, the RLS algorithm needed almost 8 s to converge to a reliable estimate. These results seem to indicate that the RLS algorithm is a valid tool for online estimation of time-varying admittance. no notspecified http://www.kyb.tuebingen.mpg.de/ published 5 Identifying Time-Varying Neuromuscular System with a Recursive Least-Squares Algorithm: a Monte-Carlo Simulation Study 15017 15422 CognettiOPRS2014 7 M Cognetti G Oriolo P Peliti L Rosa P Stegagno Chicago, IL, USA2014-09-00 350 356 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014) We propose a cooperative control scheme for a heterogeneous multi-robot system, consisting of an Unmanned Aerial Vehicle (UAV) equipped with a camera and multiple identical Unmanned Ground Vehicles (UGVs). Our control scheme takes advantage of the different capabilities of the robots. Since the system is highly redundant, the execution of multiple different tasks is possible. The primary task is aimed at keeping the UGVs well inside the camera field of view, so as to allow our localization system to reconstruct the identity and relative pose of each UGV with respect to the UAV. Additional tasks include formation control, navigation and obstacle avoidance. We thoroughly discuss the feasibility of each task, proving convergence when possible. Simulation results are presented to validate the proposed method. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/IROS-2014-Cognetti.pdf published 6 Cooperative Control of a Heterogeneous Multi-Robot System based on Relative Localization 15017 15422 ScheerBC2014 7 M Scheer HH Bülthoff LL Chuang Tübingen, Germany2014-09-00 S135 S137 12th Biannual Conference of the German Cognitive Science Society (KogWis 2014) Difficulties experienced in steering a vehicle can be expected to place a demand on one's mental resources (O'Donnell, Eggemeier 1986). While the extent of this mental workload (MWL) can be estimated by self-reports (e.g., NASA-TLX; Hart, Staveland 1988), it can also be physiologically evaluated in terms of how a primary task taxes a common and limited pool of mental resources, to the extent that it reduces the electroencephalographic (EEG) responses to a secondary task (e.g. an auditory oddball task). For example, the participant could be primarily required to control a cursor to track a target while attending to a series of auditory stimuli, which would infrequently present target tones that should be responded to with a button-press (e.g., Wickens, Kramer, Vanasse and Donchin 1983). Infrequently presented targets, termed oddballs, are known to elicit a large positive potential after approximately 300 ms of their presentation (i.e., P3). Indeed, increasing tracking difficulty either by decreasing the predictability of the tracked target or by changing the complexity of the controller dynamics has been shown to attenuate P3 responses in the secondary auditory monitoring task (Wickens et al. 1983; Wickens, Kramer and Donchin 1984).
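The RLS/FIR identification described in the admittance-estimation abstract above has a compact textbook form: model the unknown response as a finite impulse response filter and re-estimate its taps every sample with exponential forgetting. The sketch below is a generic implementation of that standard algorithm; the filter order, forgetting factor, and initialization are illustrative assumptions rather than the paper's tuned values.

```python
import numpy as np

# Generic recursive least-squares tracking of FIR filter taps with
# exponential forgetting (lam < 1 discounts old samples, which is what
# allows the estimate to follow time-varying dynamics).

def rls_fir(u, y, n_taps=32, lam=0.995, delta=100.0):
    """Track taps h so that y[k] ~ h . [u[k], u[k-1], ..., u[k-n_taps+1]]."""
    h = np.zeros(n_taps)
    P = np.eye(n_taps) * delta                  # inverse correlation estimate
    buf = np.zeros(n_taps)                      # newest input sample first
    estimates = []
    for uk, yk in zip(u, y):
        buf = np.roll(buf, 1)
        buf[0] = uk
        k = P @ buf / (lam + buf @ P @ buf)     # gain vector
        e = yk - h @ buf                        # a-priori prediction error
        h = h + k * e                           # tap update
        P = (P - np.outer(k, buf @ P)) / lam    # covariance update
        estimates.append(h.copy())
    return np.array(estimates)                  # (n_samples, n_taps) history
```

Feeding in the measured input and output signals sample by sample yields a time history of tap estimates, from which a time-varying frequency response can be read at any instant.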
In contrast, increasing tracking difficulty—by introducing more frequent direction changes of the tracked target (i.e. including higher frequencies in the function that describes the motion trajectory of the target)—has been shown to bear little influence on the secondary task's P3 response (Wickens, Israel and Donchin 1977; Isreal, Chesney, Wickens and Donchin 1980). Overall, the added requirement of a steering task consistently results in a lower P3 amplitude, relative to performing auditory monitoring alone (Wickens et al. 1983; Wickens et al. 1977; Isreal et al. 1980). Using a dual-task paradigm for indexing workload is not ideal. First, it requires participants to perform a secondary task. This prevents it from being applied in real-world scenarios; users cannot be expected to perform an unnecessary task that could compromise their critical work performance. Second, it can only be expected to work if the performance of the secondary task relies on the same mental resources as those of the primary task (Wickens, Yeh 1983), requiring a deliberate choice of the secondary task. Thus, it is fortunate that more recent studies have demonstrated that P3 amplitudes can be sensitive to MWL, even if the auditory oddball is ignored (Ullsperger, Freude and Erdmann 2001; Allison, Polich 2008). Such task-irrelevant stimuli are said to induce a momentary and involuntary shift in general attention, especially if recognizable sounds (e.g., a dog bark, as opposed to a pure tone) are used (Miller, Rietschel, McDonald and Hatfield 2011). The current work, containing two experiments, investigates the conditions that would allow the 'novelty-P3', the P3 elicited by the ignored, recognizable oddball, to be an effective index for the MWL of compensatory tracking. Compensatory tracking is a basic steering task that can be generalized to most implementations of vehicular control. In both experiments, participants were required to use a joystick to counteract disturbances of a horizontal plane. To evaluate the generalizability of this paradigm, we depicted this horizontal plane as either a line in a simplified visualization or as the horizon in a real-world environment. In the latter, participants experienced a large field-of-view perspective of the outside world from the cockpit of an aircraft that rotated erratically about its heading axis. The task was the same regardless of the visualization. In both experiments, we employed a full factorial design for the visualization (instrument, world) and 3 oddball paradigms (in experiment 1) or 4 levels of task difficulty (in experiment 2), respectively. Two sessions were conducted on separate days for the different visualizations, which were counter-balanced for order. Three trials were presented per oddball paradigm (experiment 1) or level of task difficulty (experiment 2) in blocks, which were randomized for order. Overall, we found that steering performance was worse when the visualization was provided by a realistic world environment in experiments 1 (F(1, 11) = 42.8, p < 0.01) and 2 (F(1, 13) = 35.0, p < 0.01). Nonetheless, this manipulation of visualization had no effect on our participants' MWL as evaluated by a post-experimental questionnaire (i.e., NASA-TLX) and EEG responses. This suggests that MWL was unaffected by our choice of visualization. The first experiment, with 12 participants, was designed to identify the optimal presentation paradigm of the auditory oddball.
For the EEG analysis, two participants had to be excluded due to noisy electrophysiological recordings (more than 50% of epochs rejected). Whilst performing the tracking task, participants were presented with a sequence of auditory stimuli that they were instructed to ignore. This sequence would, in the 1-stimulus paradigm, only contain the infrequent oddball stimulus (i.e., the familiar sound of a dog’s bark (Fabiani, Kazmerski, Cycowicz and Friedmann 1996)). In the 2-stimulus paradigm this infrequently presented oddball (probability 0.1) is accompanied by a more frequently presented pure tone (0.9), and in the 3-stimulus paradigm the infrequently presented oddball (0.1) is accompanied by a more frequently presented pure tone (0.8) and an infrequently presented pure tone (0.1). These three paradigms are widely used in P3 research (Katayama, Polich 1996). It should be noted, however, that the target-to-target interval is 20 s regardless of the paradigm. To obtain the ERPs, epochs from 100 ms before to 900 ms after the onset of the recognizable oddball stimulus were averaged. Mean amplitude measurements were obtained in a 60 ms window, centered at the group-mean peak latency for the largest positive maximum component between 250 and 400 ms for the oddball P3, for each of the three mid-line electrode channels of interest (i.e., Fz, Cz, Pz). In agreement with previous work, the novelty-P3 response was smaller when participants had to perform the tracking task compared to when they were only presented with the task-irrelevant auditory stimuli, without the tracking task (F(1,9) = 10.9, p < 0.01). However, the amplitude of the novelty-P3 differed significantly across the presentation paradigms (F(2,18) = 5.3, p < 0.05), whereby the largest response to our task-irrelevant stimuli was elicited by the 1-stimulus oddball paradigm. This suggests that the 1-stimulus oddball paradigm is most likely to elicit novelty-P3s that are sensitive to changes in MWL. Finally, the attenuation of novelty-P3 amplitudes by the tracking task varied across the three mid-line electrodes (F(2,18) = 28.0, p < 0.001). Pairwise comparisons, Bonferroni corrected for multiple comparisons, revealed P3 amplitude to be largest at Cz, followed by Fz, and smallest at Pz (all p < 0.05). This stands in contrast with previous work that found control difficulty to attenuate P3 responses in parietal electrodes (cf. Isreal et al. 1980; Wickens et al. 1983). Thus, the current paradigm that uses a recognizable, ignored sound is likely to reflect an underlying process that is different from previous studies, one that could be more sensitive to the MWL demands of a tracking task. Given the results of experiment 1, the second experiment, with 14 participants, investigated whether the 1-stimulus oddball paradigm would be sufficiently sensitive to index tracking difficulty as defined by the bandwidth of frequencies that contributed to the disturbance of the horizontal plane (cf. Isreal et al. 1980). Three different bandwidth profiles (easy, medium, hard) defined the linear increase in the amount of disturbance that had to be compensated for. This manipulation was effective in increasing subjective MWL, according to the results of a post-experimental NASA-TLX questionnaire (F(2,26) = 14.9, p < 0.001), and demonstrated the expected linear trend (F(1,13) = 23.2, p < 0.001). This increase in control effort was also reflected in the amount of joystick activity, which grew linearly across the difficulty conditions (F(1,13) = 42.2, p < 0.001).
For the EEG analysis, two participants had to be excluded due to noisy electrophysiological recordings (more than 50% of epochs rejected). A planned contrast revealed that the novelty-P3 was significantly lower in the most difficult condition compared to the baseline viewing condition, where no tracking was done (F(1,11) = 5.2, p < 0.05; see Fig. 1a). Nonetheless, novelty-P3 did not differ significantly between the difficulty conditions (F(2,22) = 0.13, p = 0.88), nor did it show the expected linear trend (F(1,11) = 0.02, p = 0.91). Like Isreal et al. (1980), we find that EEG responses do not discriminate for MWL that is associated with controlling increased disturbances. It remains to be investigated whether the novelty-P3 is sensitive to the complexity of controller dynamics, as has been shown for the P3. The power spectral density of the EEG data around 10 Hz (i.e., alpha) has been suggested by Smith and Gevins (2005) to index MWL. A post hoc analysis of our current data, at electrode Pz, revealed that alpha power was significantly lower for the medium and hard conditions, relative to the view-only condition (F(1,11) = 6.081, p < 0.05; F(1,11) = 6.282, p < 0.05). Nonetheless, the expected linear trend across tracking difficulty was not significant (Fig. 1b). To conclude, the current results suggest that a 1-stimulus oddball task ought to be preferred when measuring general MWL with the novelty-P3. Although changes in novelty-P3 can identify the control effort required in our compensatory tracking task, it is not sufficiently sensitive to provide a graded response across different levels of disturbances. In this regard, it may not be as effective as self-reports and joystick activity in denoting control effort. Nonetheless, further research can improve upon the sensitivity of EEG metrics to MWL by investigating other aspects that better correlate with the specific demands of a steering task. no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Is the novelty-P3 suitable for indexing mental workload in steering tasks? 15017 15422 PrettoNNLB2014_2 7 P Pretto A Nesti SAE Nooij M Losert HH Bülthoff Paris, France2014-09-00 40.1 40.7 Driving Simulation Conference Europe 2014 In driving simulation, simulator tilt is used to reproduce linear acceleration. In order to feel realistic, this tilt is performed at a rate below the tilt-rate detection threshold, which is usually assumed constant. However, it is known that many factors affect the threshold, such as visual information, simulator motion in additional directions, or active vehicle control. Here we investigated the effect of these factors on the roll-rate detection threshold during simulated curve driving. Ten participants reported whether they detected roll in multiple trials on a driving simulator. Roll-rate detection thresholds were measured under four conditions. In the first three conditions, participants were moved passively through a curve with: (i) roll only in darkness; (ii) combined roll/sway in darkness; (iii) combined roll/sway and visual information. In the fourth condition participants actively drove through the curve. Results showed that roll-rate perception in vehicle simulation is affected by the presence of motion in additional directions. Moreover, an active control task seems to increase the detection threshold, i.e., impair motion sensitivity, but with large individual differences. We hypothesize that this is related to the level of immersion during the task.
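As a side note on threshold measurement of the kind reported above: the abstract does not specify the exact procedure, so the following is only a minimal sketch of a generic 1-up/1-down adaptive staircase for estimating a detection threshold; the starting level, step size, and reversal count are illustrative assumptions.

```python
# Minimal sketch of an adaptive staircase for a detection threshold.
# The 1-up/1-down rule, start level, and step size are assumptions;
# the study above does not specify its exact procedure.

def run_staircase(detects, start=6.0, step=0.5, max_reversals=8):
    """detects(level) -> True if the observer reported detecting the
    stimulus at this intensity. Returns the mean of the reversal
    levels, which a 1-up/1-down rule drives toward ~50% detection."""
    level, prev_dir, reversals = start, 0, []
    while len(reversals) < max_reversals:
        direction = -1 if detects(level) else +1   # detected -> go down
        if prev_dir != 0 and direction != prev_dir:
            reversals.append(level)                # staircase reversed
        prev_dir = direction
        level = max(0.0, level + direction * step)
    return sum(reversals) / len(reversals)
```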
no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/DSC-2014-Pretto.pdf published 0.6 Variable Roll-rate Perception in Driving Simulation 15017 15422 PiryankovaSRdBM2014 7 IV Piryankova JK Stefanucci J Romero S de la Rosa MJ Black BJ Mohler Vancouver, Canada2014-08-09 1 18 ACM Symposium on Applied Perception (SAP '14) The goal of this research was to investigate women’s sensitivity to changes in their perceived weight by altering the body mass index (BMI) of the participants’ personalized avatars displayed on a large-screen immersive display. We created the personalized avatars with a full-body 3D scanner that records both the participants’ body geometry and texture. We altered the weight of the personalized avatars to produce changes in BMI while keeping height, arm length and inseam fixed, and exploited the correlation between body geometry and anthropometric measurements encapsulated in a statistical body shape model created from thousands of body scans. In a 2×2 psychophysical experiment, we investigated the relative importance of visual cues, namely shape (own shape vs. an average female body shape with equivalent height and BMI to the participant) and texture (own photo-realistic texture vs. a checkerboard pattern texture), on the ability to accurately perceive own current body weight (by asking ‘Is the avatar the same weight as you?’). Our results indicate that shape (with height and BMI fixed) had little effect on the perception of body weight. Interestingly, the participants perceived their body weight veridically when they saw their own photo-realistic texture and significantly underestimated their body weight when the avatar had a checkerboard-patterned texture. The range that the participants accepted as their own current weight was approximately a 0.83 to −6.05 BMI% change tolerance range around their perceived weight. Both the shape and the texture had an effect on the reported similarity of the body parts and the whole avatar to the participant’s body. This work has implications for new measures for patients with body image disorders, as well as for researchers interested in creating personalized avatars for games, training applications or virtual reality. no notspecified http://www.kyb.tuebingen.mpg.de/ published 17 Can I recognize my body's weight? The influence of shape and texture on the perception of self 15017 1542215017 YukselSBF2014_2 7 B Yüksel C Secchi HH Bülthoff A Franchi Besançon, France2014-07-00 433 440 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2014) In order to properly control the physical interactive behavior of a flying vehicle, information about the forces acting on the robot is very useful. Force/torque sensors can be exploited for measuring such information, but their use increases the cost of the equipment and the weight to be carried by the robot and, consequently, reduces flight autonomy. Furthermore, a sensor can measure only the force/torque applied at the point where it is mounted. In order to overcome these limitations, in this paper we introduce a Lyapunov-based nonlinear observer for estimating the external forces applied to a quadrotor. Furthermore, we show how to exploit the estimated force for shaping the interactive behavior of the quadrotor using an Interconnection and Damping Assignment Passivity-Based Controller (IDA-PBC). The results of the paper are validated by means of simulations.
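For the external-force estimation described above, a common baseline is a momentum-based observer on the translational dynamics. The sketch below shows that baseline, not the paper's Lyapunov-based design; the mass, gain, and time step are illustrative assumptions.

```python
import numpy as np

# Momentum-based external force observer for a quadrotor's translational
# dynamics: m * dv/dt = f_cmd + m*g + f_ext. In the ideal case the estimate
# obeys d(f_hat)/dt = K (f_ext - f_hat), i.e., it converges to f_ext.
m = 1.0                          # vehicle mass [kg] (assumed)
g = np.array([0.0, 0.0, -9.81])  # gravity [m/s^2]
K = 5.0 * np.eye(3)              # observer gain (assumed)
dt = 0.002                       # update period [s]

f_hat = np.zeros(3)              # external force estimate
integral = np.zeros(3)           # running integral of modeled forces

def observer_step(v_meas, f_cmd_world):
    """v_meas: measured velocity; f_cmd_world: commanded thrust vector
    expressed in the world frame. Returns the updated force estimate."""
    global f_hat, integral
    integral += (f_cmd_world + m * g + f_hat) * dt
    f_hat = K @ (m * v_meas - integral)
    return f_hat
```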
no notspecified http://www.kyb.tuebingen.mpg.de/ published 7 A nonlinear force observer for quadrotors and application to physical interactive tasks 15017 15422 StegagnoMB2014 7 P Stegagno C Massidda HH Bülthoff Hong Kong, China2014-06-01 1 3 Workshop on the Centrality of Decentralization in Multi-Robot Systems: Holy Grail or False Idol? (IEEE ICRA 2014) Object recognition is a fundamental topic for the development of robotic systems able to interact with the environment. Most existing methods are based on vision systems and assume a broad point of view over the objects, which are observed in their entirety. This assumption is sometimes difficult to fulfill in practice, in particular in swarm systems, which are constituted by a multitude of small robots with limited sensing and computational capabilities. We have developed a method for object recognition with a heterogeneous swarm of low-informative, spatially distributed sensors employing a distributed version of the naive Bayes classifier. Simulation results show the effectiveness of this approach, highlighting some desirable properties of the developed algorithm. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/ICRA-2014-Stegagno2.pdf published 2 Object Recognition in Swarm Systems: Preliminary Results 15017 15422 StegagnoBBF2014 7 P Stegagno M Basile HH Bülthoff A Franchi Hong Kong, China2014-06-00 3862 3869 IEEE International Conference on Robotics and Automation (ICRA 2014) We present the development of a semi-autonomous quadrotor UAV platform for indoor teleoperation using RGB-D technology as the exteroceptive sensor. The platform integrates IMU and Dense Visual Odometry pose estimation in order to stabilize the UAV velocity and track the desired velocity commanded by a remote operator through a haptic interface. While being commanded, the quadrotor autonomously performs a persistent pan-scanning of the surrounding area in order to extend the intrinsically limited field of view. The RGB-D sensor is also used for collision-safe navigation using a probabilistically updated local obstacle map. In the operator's visual feedback, the pan-scanning movement is compensated in real time by an IMU-based adaptive filtering algorithm that provides the operator with an oscillation-free driving experience. An additional sensory channel for the operator is provided by the haptic feedback, which is based on the obstacle map and the velocity tracking error in order to convey information about the environment and the quadrotor state. The effectiveness of the platform is validated by means of experiments performed without the aid of any external positioning system. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/2014e-SteBasBueFra-preprint.pdf published 7 A Semi-autonomous UAV Platform for Indoor Remote Operation with Visual and Haptic Feedback 15017 15422 GagliardiOBF2014 7 M Gagliardi G Oriolo HH Bülthoff A Franchi Strasbourg, France2014-06-00 1902 1908 13th European Control Conference (ECC 2014) We address the problem of clearing an arbitrary and unknown network of roads using an organized team of Unmanned Aerial Vehicles (UAVs) equipped with a monocular down-facing camera, an altimeter, plus high-bandwidth short-range and low-bandwidth long-range communication systems. We allow the UAVs to split into several subgroups. In each subgroup a leader guides the motion employing hierarchical coordination.
A feature/image-based algorithm guides the subgroup toward the unexplored region without any use of global localization or environmental mapping. At the same time, all the entry points of the explored region are kept under control, so that any moving object entering or exiting the previously cleared area is detected. Simulation results on real aerial images demonstrate the functionalities and the effectiveness of the proposed algorithm. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/2014g-GagOriBueFra-preprint.pdf published 6 Image-based road network clearing without localization and without maps using a team of UAVs 15017 15422 YukselSBF2014 7 B Yüksel C Secchi HH Bülthoff A Franchi Hong Kong, China2014-06-00 6258 6265 IEEE International Conference on Robotics and Automation (ICRA 2014) In this paper we propose a controller, based on an extension of the Interconnection and Damping Assignment Passivity-Based Control (IDA-PBC) framework, for shaping the whole physical characteristics of a quadrotor and for obtaining a desired interactive behavior between the robot and the environment. In the control design, we shape the total energy (kinetic and potential) of the undamped original system by first excluding external effects. In this way we can assign a new dynamics to the system. We then apply damping injection to the new system to achieve a desired damped behavior. Finally, we show how to connect a high-level control input to such a system by taking advantage of the new desired physics. We support the theory with extensive simulations, changing the overall behavior of the UAV for different desired dynamics, and show the advantage of this method for tasks involving sliding on a surface, such as ceiling painting, cleaning or surface inspection. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/2014d-YueSecBueFra-preprint.pdf published 7 Reshaping the physical properties of a quadrotor through IDA-PBC and its application to aerial physical interaction 15017 15422 MasoneRBF2014 7 C Masone P Robuffo Giordano HH Bülthoff A Franchi Hong Kong, China2014-06-00 6468 6475 IEEE International Conference on Robotics and Automation (ICRA 2014) A new framework for semi-autonomous path planning for mobile robots that extends the classical paradigm of bilateral shared control is presented. The path is represented as a B-spline and the human operator can modify its shape by controlling the motion of a finite number of control points. An autonomous algorithm corrects the human directives in real time in order to facilitate path tracking for the mobile robot and to ensure i) collision avoidance, ii) path regularity, and iii) attraction to nearby points of interest. A haptic feedback algorithm processes both the human's and the autonomous control terms, and their integrals, to provide information about the mismatch between the path specified by the operator and the one corrected by the autonomous algorithm. The framework is validated with extensive experiments using a quadrotor UAV and a human in the loop with two haptic interfaces.
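To make the B-spline path representation above concrete, here is a small sketch of a planar clamped B-spline whose shape is edited by moving one control point; it omits the paper's autonomous correction and haptic terms, and the control points and degree are arbitrary choices.

```python
import numpy as np
from scipy.interpolate import BSpline

# A planar clamped cubic B-spline path defined by operator-movable control
# points. Moving one control point only deforms the path locally, which is
# what makes B-splines convenient for this kind of shared control.
k = 3                                              # cubic spline
ctrl = np.array([[0., 0.], [1., 2.], [3., 3.], [5., 1.], [6., 0.]])
n = len(ctrl)
# clamped knot vector: the path starts/ends at the first/last control point
knots = np.r_[np.zeros(k), np.linspace(0., 1., n - k + 1), np.ones(k)]
path = BSpline(knots, ctrl, k)

u = np.linspace(0., 1., 200)
samples = path(u)                                  # (200, 2) path points

ctrl[2] += [0., -1.]                               # operator drags a point
samples_edited = BSpline(knots, ctrl, k)(u)        # locally deformed path
```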
no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/2014c-MasRobBueFra-preprint.pdf published 7 Semi-autonomous Trajectory Generation for Mobile Robots with Integral Haptic Shared Control 15017 15422 FladNBC2014 7 N Flad FM Nieuwenhuizen HH Bülthoff LL Chuang Heraklion, Greece2014-06-00 3 11 11th International Conference on Engineering Psychology and Cognitive Ergonomics (EPCE 2014) Delays between user input and the system’s reaction in control tasks have been shown to have a detrimental effect on performance. This is often accompanied by increases in self-reported workload. In the current work, we sought to identify physiological measures that correlate with pilot workload in a conceptual aerial vehicle that suffered from varying time delays between control input and vehicle response. For this purpose, we measured the skin conductance and heart rate variability of 8 participants during flight maneuvers in a fixed-base simulator. Participants were instructed to land a vehicle while compensating for roll disturbances under different conditions of system delay. We found that control error and self-reported workload increased with increasing time delay. Skin conductance and input behavior also reflected corresponding changes. Our results show that physiological measures are sufficiently robust for evaluating the adverse influence of system delays in a conceptual vehicle model. no notspecified http://www.kyb.tuebingen.mpg.de/ published 8 System Delay in Flight Simulators Impairs Performance and Increases Physiological Workload 15017 15422 GioiosoFSSP2014 7 G Gioioso A Franchi G Salvietti S Scheggi C Prattichizzo Hong Kong, China2014-06-00 4335 4341 IEEE International Conference on Robotics and Automation (ICRA 2014) The flying hand is a robotic hand consisting of a swarm of UAVs able to grasp an object, where each UAV contributes to the grasping task with a single contact point at the tooltip. The swarm of robots is teleoperated by a human hand whose fingertip motions are tracked, e.g., using an RGB-D camera. We solve the kinematic dissimilarity of this unique master-slave system using a multi-layered approach that includes: a hand interpreter that translates the fingertip motion into a desired motion for the object to be manipulated; a mapping algorithm that transforms the desired object motions into a suitable set of virtual points deviating from the planned contact points; and a compliant force control for the case of quadrotor UAVs that allows them to be used as indirect 3D force effectors. Visual feedback is also used as a sensory substitution technique to provide a hint of the internal forces exerted on the object. We validate the approach with several human-in-the-loop simulations including the full physical model of the object, contact points and UAVs. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/2014a-GioFraSalSchPra-preprint.pdf published 6 The flying hand: A formation of UAVs for cooperative aerial tele-manipulation 15017 15422 ScheerNBC2014 7 M Scheer FM Nieuwenhuizen HH Bülthoff LL Chuang Heraklion, Greece2014-06-00 202 211 11th International Conference on Engineering Psychology and Cognitive Ergonomics (EPCE 2014), held as Part of HCI International 2014 Flight simulators are often assessed in terms of how well they imitate the physical reality that they endeavor to recreate.
Given that vehicle simulators are primarily used for training purposes, it is equally important to consider the implications of visualization in terms of its influence on the user’s control performance. In this paper, we report that a complex and realistic visual world environment can result in larger performance errors compared to a simplified, yet equivalent, visualization of the same control task. This is accompanied by an increase in subjective workload. A detailed analysis of control performance indicates that this is because error perception is more variable in a real-world environment. no notspecified http://www.kyb.tuebingen.mpg.de/ published 9 The Influence of Visualization on Control Performance in a Flight Simulator 15017 15422 GioiosoRPBF2014 7 G Gioioso M Ryll D Prattichizzo HH Bülthoff A Franchi Hong Kong, China2014-06-00 6278 6284 IEEE International Conference on Robotics and Automation (ICRA 2014) In this paper the problem of a quadrotor that physically interacts with the surrounding environment through a rigid tool is considered. We present a theoretical design that allows the quadrotor to exert an arbitrary 3D force by using a standard near-hovering controller that was originally developed for contact-free flight control. This is achieved by analytically solving the nonlinear system that relates the quadrotor state, the force exerted by the rigid tool on the environment, and the near-hovering controller action at the equilibrium points, during any generic contact. Stability of the equilibria for the most relevant actions (pushing, releasing, lifting, dropping, and left-right shifting) is proven by means of numerical analysis using the indirect Lyapunov method. An experimental platform, including a suitable tool design, has been developed and used to validate the theory with preliminary experiments. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/2014b-GioRylPraBueFra-preprint.pdf published 6 Turning a near-hovering controlled quadrotor into a 3D force effector 15017 15422 GeluardiNPB2014 7 S Geluardi FM Nieuwenhuizen L Pollini HH Bülthoff Montréal, QC, Canada2014-05-00 1721 1731 70th American Helicopter Society International Annual Forum (AHS 2014) This paper presents the implementation of a Multi-Input Single-Output fully coupled transfer function model of a civil light helicopter in hover. A frequency-domain identification method is implemented. It is discussed how the chosen frequency range of excitation allows capturing some important rotor dynamic modes. Therefore, studies that require coupled rotor/body models are possible. The pitch-rate response with respect to the longitudinal cyclic is considered in detail throughout the paper. Different transfer functions are evaluated to compare their capability to capture the main helicopter dynamic modes. It is concluded that models of order less than 6 are not able to model the lead-lag dynamics in the pitch axis. Nevertheless, a 4th-order transfer function model can provide acceptable results for handling qualities evaluations. The identified transfer function models are validated in the time domain with input signals different from those used during the identification and show good predictive capabilities. From the results it is possible to conclude that the identified transfer function models are able to capture the main dynamic characteristics of the considered light helicopter in hover.
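As a generic illustration of the frequency-domain identification step described above (not the paper's actual pipeline), the standard H1 estimate computes the frequency response as the ratio of the input-output cross-spectrum to the input auto-spectrum; the sample rate, excitation, and stand-in plant below are synthetic assumptions.

```python
import numpy as np
from scipy import signal

# H1 frequency-response estimate: H(f) = S_uy(f) / S_uu(f).
fs = 100.0                              # sample rate [Hz] (assumed)
rng = np.random.default_rng(0)
u = rng.standard_normal(int(120 * fs))  # stand-in for the excitation input

# stand-in plant: a discretized second-order lag, plus measurement noise
num, den, _ = signal.cont2discrete(([4.0], [1.0, 1.2, 4.0]), 1 / fs)
y = signal.lfilter(num.ravel(), den, u) + 0.05 * rng.standard_normal(u.size)

f, S_uu = signal.welch(u, fs, nperseg=1024)       # input auto-spectrum
_, S_uy = signal.csd(u, y, fs, nperseg=1024)      # cross-spectrum
H = S_uy / S_uu                                   # complex FRF estimate

gain_db = 20 * np.log10(np.abs(H))                # Bode magnitude
phase_deg = np.unwrap(np.angle(H)) * 180 / np.pi  # Bode phase
```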
no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/AHS-2014-Geluardi.pdf published 10 Frequency Domain System Identification of a Light Helicopter in Hover 15017 15422 LacheleVPB2014 7 J Lächele J Venrooij P Pretto HH Bülthoff Montréal, QC, Canada2014-05-00 1777 1785 70th American Helicopter Society International Annual Forum (AHS 2014) no notspecified http://www.kyb.tuebingen.mpg.de/ published 8 Motion Feedback Improves Performance in Teleoperating UAVs 15017 15422 WiskemannDPvMB2014 7 CM Wiskemann FM Drop DM Pool MM van Paassen M Mulder HH Bülthoff Montréal, QC, Canada2014-05-00 1706 1720 70th American Helicopter Society International Annual Forum (AHS 2014) This paper describes an experiment conducted to investigate the effects of roll-lateral motion cueing algorithm settings on motion fidelity for helicopter roll-lateral repositioning tasks. A total of 13 motion conditions, comprising two roll gain settings, two degrees of roll-lateral coordination and three roll washout intensities, were tested by four pilots on the CyberMotion Simulator at the Max Planck Institute for Biological Cybernetics. An emphasis was put on the use of objective measurements for motion fidelity determination, in addition to the collected subjective handling quality ratings (HQR) and motion fidelity ratings (MFS). Higher roll gains were found to have a beneficial effect on both the subjective and the objective metrics, which is in line with previous findings. Reducing the degree of coordination had a negative effect on subjective ratings, but did not show a consistent negative effect on the considered objective metrics. Stronger roll washout had a large and consistent negative effect on the subjective ratings. This is confirmed by the obtained objective measurements, which show high control activity and less realistic vehicle trajectories during the deceleration and stabilization phases of the maneuver for conditions with strong roll washout. We conclude that roll and lateral gain are more effective than roll washout for attenuating the simulated motion. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/AHS-2014-Drop.pdf published 14 Subjective and Objective Metrics for the Evaluation of Motion Cueing Fidelity for a Roll-Lateral Reposition Maneuver 15017 15422 D039CruzPLCBSGHVAFBKKKPFSBKKMLSGTOC2014 7 M D'Cruz H Patel L Lewis S Cobb M Bues O Stefani T Grobler K Helin J Viitaniemi S Aromaa B Frohlich S Beck A Kunert A Kulik I Karaseitanidis P Psonis N Frangakis M Slater I Bergstrom K Kilteni E Kokkinara B Mohler M Leyrer F Soyka E Gaia D Tedone M Olbert M Cappitelli Minneapolis, MN, USA2014-02-00 167 168 IEEE Virtual Reality (VR 2014) Our vision is that, regardless of future variations in the interior of airplane cabins, we can utilize ever-advancing state-of-the-art virtual and mixed reality technologies, together with the latest research in neuroscience and psychology, to achieve high levels of comfort for passengers. Current surveys of passengers' experience during air travel reveal that they are least satisfied with the amount and effectiveness of their personal space, and with their ability to work, sleep or rest. Moreover, considering current trends, the amount of available space is likely to decrease, and passengers' physical comfort during a flight is therefore likely to worsen significantly.
Therefore, the main challenge is to enable passengers to maintain a high level of comfort and satisfaction while confined to a restricted physical space. no notspecified http://www.kyb.tuebingen.mpg.de/ published 1 Demonstration: VR-HYPERSPACE - The innovative use of virtual reality to increase comfort by changing the perception of self and space 15017 15017 15422 OlivariNBP2014 7 M Olivari FM Nieuwenhuizen HH Bülthoff L Pollini National Harbor, MD, USA2014-01-00 163 173 AIAA Modeling and Simulation Technologies Conference 2014: Held at the SciTech Forum 2014 External aids are required to increase safety and performance during the manual control of an aircraft. Automated systems can surpass the performance usually achieved by pilots. However, they suffer from several issues caused by pilot unawareness of the control command from the automation. Haptic aids can overcome these issues by conveying their control command through forces on the control device. To investigate how the transparency of the haptic control action influences performance and pilot behavior, a quantitative comparison between haptic aids and automation is needed. An experiment was conducted in which pilots performed a compensatory tracking task with haptic aids and with automation. The haptic aid and the automation were designed to be equivalent when the pilot was out of the loop, i.e., to provide the same control command. Pilot performance and control effort were then evaluated with pilots in the loop and contrasted with a baseline condition without external aids. The haptic system allowed pilots to improve performance compared with the baseline condition. However, automation outperformed the other two conditions. Pilots' control effort was reduced by the haptic aid and the automation in a similar way. In addition, the pilot open-loop response was estimated with a non-parametric estimation method. Changes in the pilot response were observed in terms of an increased crossover frequency with automation and a decreased neuromuscular peak with haptics. no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/SCITECH-2014-Olivari.pdf published 10 An Experimental Comparison of Haptic and Automated Pilot Support Systems 15017 15422 NieuwenhuizenB2014 7 FM Nieuwenhuizen HH Bülthoff National Harbor, MD, USA2014-01-00 154 162 AIAA Modeling and Simulation Technologies Conference 2014: Held at the SciTech Forum 2014 Highway-in-the-sky displays and haptic shared control could provide an easy-to-use control interface for non-expert pilots. In this paper, various display and haptic approaches are evaluated in a flight control task with a personal aerial vehicle. It is shown that a tunnel or a wall representation of the flight trajectory leads to the best performance and the lowest control activity and effort. Similar results are obtained when haptic guidance cues are based on the error of a predicted position of the vehicle with respect to the flight trajectory. Such haptic cues are also subjectively preferred by the pilots. This study indicates that the combination of a haptic shared control framework and a highway-in-the-sky display can provide non-expert pilots with an easy-to-use control interface for flying a personal aerial vehicle.
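A minimal sketch of the prediction-based guidance idea described above: the guidance force is made proportional to the error of the vehicle's predicted position with respect to the flight trajectory. The constant-velocity prediction, gain, and horizon below are illustrative assumptions, not values from the paper.

```python
# Haptic guidance force from a predicted-position error (sketch).
K_HAPTIC = 0.8     # guidance force per unit of predicted error (assumed)
T_PREDICT = 2.0    # prediction horizon in seconds (assumed)

def guidance_force(lateral_pos, lateral_vel, reference_ahead):
    """lateral_pos/vel: current lateral state of the vehicle;
    reference_ahead: lateral position of the flight trajectory at the
    predicted time. Returns the force applied to the control stick."""
    predicted_pos = lateral_pos + lateral_vel * T_PREDICT  # linear prediction
    error = predicted_pos - reference_ahead
    return -K_HAPTIC * error   # push the stick so as to reduce the error
```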
no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/SCITECH-2014-FMN.pdf published 8 Evaluation of Haptic Shared Control and a Highway-in-the-Sky Display for Personal Aerial Vehicles 15017 15422 ThorntonHB2014 7 IM Thornton TS Horowitz HH Bülthoff Leuven, Belgium2014-10-00 1128 Applied Vision Association Christmas Meeting 2013 Multiple Object Tracking (MOT) has proven to be a very useful laboratory tool for exploring the limits of divided attention. Compared to many other attention tasks, MOT appears to capture much of the complexity of our day-to-day environment. Often though, for example when driving or playing sport, we need to act on the environment as well as simply monitor it. In the current work, we asked whether the need to make focused, task-relevant movements would interfere with the ability to track multiple objects. Sixteen participants completed single-task versions of standard MOT and of a new collision-avoidance task that we call interactive multiple object tracking (iMOT). In the iMOT task, which is based on the popular mobile app games Flight Controller and Harbor Master, the goal is to stop objects colliding by using touch control to perturb their trajectories. Compared to single-task baseline, iMOT performance decreased and MOT performance increased when the two tasks had to be performed together. Although strategic allocation of resources may partly account for this pattern of costs and benefits, it seems clear that actions can be planned and executed at the same time as tracking multiple objects. no notspecified http://www.kyb.tuebingen.mpg.de/ published -1128 Does action disrupt multiple object tracking? 15017 15422 ChangBd2014_3 7 D-S Chang HH Bülthoff S de la Rosa Tübingen, Germany2014-09-00 S33 S34 12th Biannual Conference of the German Cognitive Science Society (KogWis 2014) Introduction Human actions contain an extensive array of socially relevant information. Previous studies have shown that even brief exposure to visually observed human actions can lead to accurate predictions of the goals or intentions accompanying those actions. For example, motion kinematics can enable predicting the success of a basketball shot, or whether a hand movement is carried out with cooperative or competitive intentions. It has also been reported that gestures accompanying a conversation can serve as a rich source of information for judging the trustworthiness of another person. Based on these previous findings we wondered whether humans could actually predict the cooperativeness of another individual by identifying visible social cues. Would it be possible to predict the cooperativeness of a person by just observing everyday actions such as walking or running? We hypothesized that even brief excerpts of human actions depicted and presented as biological motion cues (i.e., point-light figures) would provide sufficient information to predict cooperativeness. Using motion capture and a game-theoretical interaction setup, we explored whether prediction of cooperation was possible merely by observing biological motion cues of everyday actions, and which actions enabled these predictions. Methods We recorded six different human actions—walking, running, greeting, table tennis playing, choreographed dancing (Macarena) and spontaneous dancing—in normal participants using an inertia-based motion capture system (MVN Motion Capture Suit from XSense, Netherlands).
A total of 12 participants (6 male, 6 female) took part in the motion recording. All actions were then post-processed into short movies (ca. 5 s) showing point-light stimuli. These actions were then evaluated by 24 other participants in terms of personality traits such as cooperativeness and trustworthiness, on a Likert scale ranging from 1 to 7. The original participants who provided the recorded actions returned a few months later to be tested for their actual cooperativeness. They were given standard social dilemmas used in game theory, such as the give-some game, the stag hunt game, and the public goods game. In those interaction games, they were asked to exchange or give tokens to another player, and depending on their choices they could win or lose an additional amount of money. The choice of behavior for each participant was then recorded and coded for cooperativeness. This actual cooperativeness was then compared with the perceived cooperativeness based on the other participants' ratings of the performed actions. Results and Discussion Preliminary results showed a significant correlation between cooperativeness ratings and actual cooperativeness. The actions showing a consistent correlation were walking, running and choreographed dancing (Macarena). No significant correlation was observed for greeting, table tennis playing or spontaneous dancing. A similar tendency was consistently observed across all actions, although significant correlations were not found for all social dilemmas. The ratings of different actors and actions were highly consistent across different raters, and high inter-rater reliability was achieved. It seems possible that natural and constrained actions carry more social cues enabling the prediction of cooperation than actions showing more variance across different participants. Further studies with a higher number of actors and raters are planned to confirm whether accurate prediction of cooperation is really possible. no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Actions revealing cooperation: predicting cooperativeness in social dilemmas from the observation of everyday actions 15017 15422 MeilingerFMB2014 7 T Meilinger J Frankenstein BJ Mohler HH Bülthoff Tübingen, Germany2014-09-00 S53 S54 12th Biannual Conference of the German Cognitive Science Society (KogWis 2014) Knowledge underlying everyday navigation is commonly divided into route and survey knowledge (Golledge 1999). Route knowledge allows re-combining and navigating familiar routes. Survey knowledge is used for pointing to distant locations or finding novel shortcuts. We show that, within one’s city of residency, route and survey knowledge root in separate memories of the same environment and are represented within different reference frames. Twenty-six Tübingen residents who had lived there for seven years on average faced a photo-realistic virtual model of Tübingen and completed a survey task in which they pointed to familiar target locations from various locations and orientations. Each participant’s performance was most accurate when facing north, and errors increased as participants’ deviation from a north-facing orientation increased. This suggests that participants’ survey knowledge was organized within a single, north-oriented reference frame. One week later, 23 of the same participants completed route knowledge tasks comprising the very same start and goal locations used in the survey task.
This time, participants did not point to a goal location but used the arrow keys of a keyboard to enter route decisions along an imagined route leading to the goal. Deviations from the correct number of left, straight, etc. decisions, as well as response latencies, were completely uncorrelated with errors and latencies in pointing. This suggests that participants employed different and independent representations for the matched route and survey tasks. Furthermore, participants made fewer route errors when asked to respond from an imagined horizontal walking perspective rather than from an imagined constant aerial perspective, which replaced left/straight/right decisions with up/left/right/down decisions as on a map (the order of tasks was balanced). This performance advantage suggests that participants did not rely on the single, north-up reference frame used for pointing. Route and survey knowledge were organized along different reference frames. We conclude that our participants’ route knowledge employed multiple local reference frames acquired from navigation, whereas their survey knowledge relied on a single north-oriented reference frame learned from maps. Within their everyday environment, people seem to use map- or navigation-based knowledge according to which best suits the task. no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 How to remember Tübingen? Reference frames in route and survey knowledge of one’s city of residency 15017 1542215017 GlatzBC2014 7 C Glatz HH Bülthoff LL Chuang Tübingen, Germany2014-09-00 S38 12th Biannual Conference of the German Cognitive Science Society (KogWis 2014) Automated collision avoidance systems promise to reduce accidents and relieve the driver from the demands of constant vigilance. Such systems direct the operator’s attention to potentially critical regions of the environment without compromising steering performance. This raises the question: what is an effective warning cue? Sounds with rising intensities are claimed to be especially salient. By evoking the percept of an approaching object, they engage a neural network that supports auditory space perception and attention (Bach et al. 2008). Indeed, we are aroused by, and faster to respond to, ‘looming’ auditory tones, which increase heart rate and skin conductance activity (Bach et al. 2009). Looming sounds can differ in terms of their rising intensity profiles. While looming can be approximated by a sound whose amplitude increases linearly with time, an approaching object that emits a constant tone is better described as having an amplitude that increases exponentially with time. In a driving simulator study, warning cues that had a veridical looming profile induced earlier braking responses than ramped profiles with linearly increasing loudness (Gray 2011). In the current work, we investigated how looming sounds might serve, during a primary steering task, to alert participants to the appearance of visual targets. Nine volunteers performed a primary steering task whilst occasionally discriminating visual targets. Their primary task was to minimize the vertical distance between an erratically moving cursor and the horizontal mid-line by steering a joystick towards the latter. Occasionally, diagonally oriented Gabor patches (10° tilt; 1° diameter; 3.1 cycles/deg; 70 ms duration) would appear on either the left or the right of the cursor. Participants were instructed to respond with a button-press whenever a pre-defined target appeared.
Seventy percent of the time, these visual stimuli were preceded by a 1,500 ms warning tone, presented 1,000 ms before they appeared. Overall, warning cues resulted in significantly faster and more sensitive detections of the visual target stimuli (F(1,8) = 7.72, p < 0.05; F(1,8) = 9.63, p < 0.05). Each trial would present one of three possible warning cues. A warning cue (2,000 Hz) could either be a constant tone with an intensity of 65 dB, a ramped tone with linearly increasing intensity from 60 dB to approximately 75 dB, or a comparable looming tone with an exponentially increasing intensity profile. The different warning cues did not differ in their influence on response times to the visual targets or on recognition sensitivity (F(2,16) = 3.32, p = 0.06; F(2,16) = 0.10, p = 0.90). However, this might be due to our small sample size. It is noteworthy that the different warning tones did not adversely affect steering performance (F(2,16) = 1.65, p < 0.22). Nonetheless, electroencephalographic potentials to the offset of the warning cues were significantly earlier for the looming tone compared to both the constant and ramped tones. More specifically, the positive component of the event-related potential was significantly earlier for the looming tone by about 200 ms, relative to the constant and ramped tones, and was sustained for a longer duration (see Fig. 1). The current findings highlight the behavioral benefits of auditory warning cues. More importantly, we find that a veridical looming tone induces earlier event-related potentials than one with a linearly increasing intensity. Future work will investigate how this benefit might diminish with increasing time between the warning tone and the cued event. no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Looming auditory warnings initiate earlier event-related potentials in a manual steering task 15017 15422 MeilingerFWBH2014 7 T Meilinger J Frankenstein K Watanabe HH Bülthoff C Hölscher Bremen, Germany2014-09-00 Spatial Cognition 2014 no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Map-based Reference Frames Are Used to Organize Memory of Subsequent Navigation Experience 15017 15422 HohmanndB2014 7 MR Hohmann S de la Rosa HH Bülthoff Tübingen, Germany2014-09-00 S46 S47 12th Biannual Conference of the German Cognitive Science Society (KogWis 2014) Action recognition research has mainly focused on investigating the perceptual processes in the recognition of isolated actions from biological motion patterns. Surprisingly little is known about the cognitive representation underlying action recognition. A fundamental question concerns whether actions are represented independently or interdependently. Here we examined whether the cognitive representations of static (action image) and dynamic (action movie) actions depend on each other, and whether the representations for static and dynamic actions overlap. Adaptation paradigms are an elegant way to examine relationships between different cognitive representations. In an adaptation experiment, participants view a stimulus, the adaptor, for a prolonged amount of time and afterwards report their perception of a second, ambiguous test stimulus. Typically, the perception of the second stimulus will be biased away from the adaptor stimulus. The presence of an antagonistic perceptual bias (adaptation effect) is often taken as evidence for the interdependency of the cognitive representations of test and adaptor stimulus. We manipulated the dynamic content (dynamic vs.
static) of the test and adaptor stimuli independently. The ambiguous test stimulus was created by a weighted linear morph between the spatial positions of the two adapting actions (handshake, high five). 30 participants categorized the ambiguous dynamic or static action stimuli after being adapted to dynamic or static actions. Afterwards, we calculated the perceptual bias for each participant by fitting a psychometric function to the data. We found an action-adaptation aftereffect in some but not all experimental conditions. Specifically, the effect was only present if the presentation of the adaptor and the test stimulus was congruent, i.e., if both were presented in either a dynamic or a static manner (p < 0.001). This action-adaptation aftereffect indicates a dependency between cognitive representations when adaptor and test stimuli have the same dynamic content (i.e., both static or both dynamic). Future studies are needed to relate these results to other findings in the field of action recognition and to incorporate a neurophysiological perspective. no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 On the perception and processing of social actions 15017 15422 SymeonidouOBC2014 7 E-R Symeonidou M Olivari HH Bülthoff LL Chuang Tübingen, Germany2014-09-00 S71 12th Biannual Conference of the German Cognitive Science Society (KogWis 2014) Haptic feedback systems can be designed to assist vehicular steering by sharing manual control with the human operator. For example, direct haptic feedback (DHF) forces applied over the control device can guide the operator towards an optimized trajectory, which he can either augment, comply with, or resist according to his preferences. DHF has been shown to improve performance (Olivari et al. submitted) and increase safety (Tsoi et al. 2010). Nonetheless, the human operator may not always benefit from the haptic support system. Depending on the amount of haptic feedback, the operator might demonstrate over-reliance on, or opposition to, the haptic assistance (Forsyth and MacLean 2006). Thus, it is worthwhile to investigate how different levels of haptic assistance influence shared control performance. The current study investigates how different gain levels of DHF influence performance in a compensatory tracking task. For this purpose, 6 participants were evenly divided into two groups according to their previous tracking experience. During the task, they had to compensate for externally induced disturbances that were visualized as the difference between a moving line and a horizontal reference standard. Briefly, participants observed how an unstable aircraft symbol, located in the middle of the screen, deviated in the roll axis from a stable artificial horizon. In order to compensate for the roll angle, participants were instructed to use the control joystick. Meanwhile, different DHF forces were presented over the control joystick at gain levels of 0, 12.5, 25, 50 and 100 %. The maximal DHF level was chosen according to the procedure described in Olivari et al. (2014) and represents the best stable performance of skilled human operators. The participants’ performance was defined as the reciprocal of the median of the root mean square error (RMSE) in each condition. Figure 1a shows that performance improved with increasing DHF gain, regardless of experience levels.
To evaluate the operator’s contribution relative to the DHF contribution, we calculated the ratio of overall performance to the estimated DHF performance without human input. Figure 1b shows that the subjects’ contribution in both groups decreased with increasing DHF up to the 50 % condition. The contribution of experienced subjects plateaued between the 50 and 100 % DHF levels. Thus, the increase in performance for the 100 % condition can mainly be attributed to the higher DHF forces alone. In contrast, the inexperienced subjects seemed to rely completely on the DHF in the 50 % condition, since the operator’s contribution approximated 1. However, this changed for the 100 % DHF level. Here, the participants started to actively contribute to the task (operator’s contribution > 1). This change in behavior resulted in performance values similar to those of the experienced group. Our findings suggest that increasing haptic support with our DHF system does not necessarily result in over-reliance and can improve performance for both experienced and inexperienced subjects. no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 The Role of Direct Haptic Feedback in a Compensatory Tracking Task 15017 15422 FademrechtBd2014_2 7 L Fademrecht I Bülthoff S de la Rosa Beograd, Serbia2014-08-00 103 37th European Conference on Visual Perception (ECVP 2014) Recognizing the actions of others in the periphery is required for fast and appropriate reactions to events in our environment (e.g. seeing kids running towards the street when driving). Previous results show that action recognition is surprisingly accurate even in the far periphery (<=60° visual angle (VA)) when actions are directed towards the observer (front view). The front view of a person is considered to be critical for social cognitive processes (Schilbach et al., 2013). To what degree does the orientation of the observed action (front vs. profile view) influence the identification of the action and the recognition of the action's valence across the horizontal visual field? Participants saw life-size stick-figure avatars that carried out one of six motion-captured actions (greeting actions: handshake, hugging, waving; and aggressive actions: slapping, punching and kicking). The avatar was shown on a large screen display at different positions up to 75° VA. Participants either assessed the emotional valence of the action or identified the action as ‘greeting’ or as ‘attack’. Orientation had no significant effect on accuracy. Reaction times were significantly faster for profile than for front views (p=0.003) for both tasks, which is surprising in light of recent suggestions no notspecified http://www.kyb.tuebingen.mpg.de/ published -103 A matter of perspective: action recognition depends on stimulus orientation in the periphery 15017 15422 delaRosaHB2014 7 S de la Rosa M Hohmann HH Bülthoff Beograd, Serbia2014-08-00 71 37th European Conference on Visual Perception (ECVP 2014) Visual action recognition is a prerequisite for humans to physically interact with other humans. Do we use similar perceptual mechanisms when recognizing actions from a photo (static) or a movie (dynamic)? We used an adaptation paradigm to explore whether static and dynamic action information is processed in separate or interdependent action-sensitive channels. In an adaptation paradigm, participants' perception of an ambiguous test stimulus is biased after prolonged exposure to an adapting stimulus (adaptation aftereffect (AA)).
This is often taken as evidence for the existence of interdependent perceptual channels. We used a novel action morphing technique to produce ambiguous test actions that were a weighted linear combination of the two adaptor actions. We varied the dynamics of the content (i.e., static vs. dynamic) of the test and adaptor stimuli independently and were interested in whether the AA was modulated by the congruency of motion information between test and adaptor. The results indicated that the AA only occurred when the dynamics of the content of the test and adaptor were congruent (p<0.05) but not when they were incongruent (p>0.05). These results provide evidence that static and dynamic action information are processed to some degree separately. no notspecified http://www.kyb.tuebingen.mpg.de/ published -71 Actions in motion: Separate perceptual channels for processing dynamic and static action information 15017 15422 PiryankovaSRdBM2014_3 7 IV Piryankova JK Stefanucci J Romero S de la Rosa MJ Black BJ Mohler Vancouver, Canada2014-08-00 41st International Conference and Exhibition on Computer Graphics and Interactive Techniques (SIGGRAPH 2014) no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/SIGGRAPH-2014-Piryankova.pdf published 0 Can I recognize my body's weight? The influence of shape and texture on the perception of self 15017 1542215017 ChangBd2014_2 7 D-S Chang HH Bülthoff S de la Rosa Beograd, Serbia2014-08-00 103 37th European Conference on Visual Perception (ECVP 2014) How does the brain discriminate between different actions? Action recognition has been an active field of research for a long time, yet little is known about how the representations of different actions in the brain are related to each other. We wanted to find out whether different actions are ordered according to their semantic meaning or their kinematic motion by employing a novel visual action adaptation paradigm. A total of 24 participants rated four different social actions in terms of their perceived differences in either semantic meaning or kinematic motion. Then, the specific perceptual bias for each action was determined by measuring the size of the action adaptation aftereffect in each participant. Finally, the meaning and motion ratings were used to predict the measured adaptation aftereffect for each action using linear regression. Semantic meaning and the interaction of meaning and motion significantly predicted the adaptation aftereffects, but kinematic motion alone was not a significant predictor. These results imply that differences between distinct actions are encoded in the brain in terms of their meaning rather than their motion. The current experimental paradigm could be a useful method for further mapping the relationships between different actions in the human brain. no notspecified http://www.kyb.tuebingen.mpg.de/ published -103 Does Action Recognition Depend more on the Meaning or Motion of Different Actions? 15017 15422 ZhaoB2014_2 7 M Zhao I Bülthoff Beograd, Serbia2014-08-00 76 37th European Conference on Visual Perception (ECVP 2014) Many studies have demonstrated better recognition of own-race than other-race faces. However, little is known about whether memories of unfamiliar own- and other-race faces decay similarly with time. We addressed this question by probing participants’ memory of own- and other-race faces both immediately after learning (immediate test) and one week later (delayed test).
In both the learning and test phases, participants saw short movies wherein a person was talking in front of the camera (with the sound turned off). Two main results emerged. First, we observed a cross-race deficit in recognizing other-race faces in both immediate and delayed tests, but the cross-race deficit was reduced in the latter. Second, recognizing faces immediately after learning was not better than recognizing them one week later. Instead, overall performance was even better at the delayed test than at the immediate test. This result was mainly due to improved recognition of other-race female faces, which showed comparatively low performance when tested immediately. These results demonstrate that memories of both own- and other-race faces are sustained for a relatively long time. Although other-race faces are less well recognized than own-race faces, they seem to be maintained in long-term memory as well as, and even better than, own-race faces. no notspecified http://www.kyb.tuebingen.mpg.de/ published -76 Long-term memory for own- and other-race faces 15017 15422 NooijPBN2014 7 SAE Nooij P Pretto HH Bülthoff A Nesti Amsterdam, The Netherlands2014-06-13 235 15th International Multisensory Research Forum (IMRF 2014) Vestibular models can predict many aspects of self-motion perception. However, it is still not completely understood how linear and angular cues combine to form the overall perception of 3D motion in space. Here, we investigated the perception of heading and travelled path during a circular trajectory. According to model predictions (Merfeld et al. 1993), we expected a bias in perceived heading (i.e., facing outward from the curve) and a distorted perception of the travelled path in darkness, but close to veridical perception when visual information was also provided. Participants were moved along a circular trajectory using the MPI CyberMotion Simulator (www.cyberneum.de), either blindfolded or viewing congruent visual motion (a random dot cloud). The orientation of the body midline with respect to the motion path (heading) was varied using an adaptive procedure. Participants indicated whether they were facing inward or outward with respect to the travelled curve. In a separate session aimed at collecting continuous measures (darkness only), participants continuously pointed towards a distant imaginary earth-fixed target, or towards the direction of perceived motion. They also provided drawings of the perceived travelled trajectory. Results show that heading sensitivity in darkness for curved trajectories was significantly lower than generally found for straight paths. Most of the participants showed a heading bias, but its direction was opposite to the predictions of the Merfeld model. Perceived heading based on continuous pointing and on drawings was not always consistent. These results call for changes to the current model and show that the various components of perception are not always consistent when investigated in isolation. no notspecified http://www.kyb.tuebingen.mpg.de/ published -235 Nonveridical perception of heading and travelled path during curved trajectories 15017 15422 KimBKCHCP2014 7 J Kim HH Bülthoff S-P Kim YG Chung SW Han S-C Chung J-Y Park Hamburg, Germany2014-06-11 20th Annual Meeting of the Organization for Human Brain Mapping (OHBM 2014) Introduction: Recently, multi-voxel pattern analysis (MVPA) has been introduced into the analysis of functional magnetic resonance imaging (fMRI) data and allows us to examine distributed spatial patterns of neural activity in response to various tactile stimuli [1].
Taking advantage of its higher sensitivity [2], MVPA has been employed in a wide range of somatosensory research fields as a complement to the traditional univariate analysis. However, current research on tactile MVPA is mostly focused on delineating neuronal activation patterns in response to tactile stimuli. Comparatively little attention has been devoted to understanding how neural activation patterns underlie diverse human behavioral outcomes during tactile manipulation tasks. In this study, we aim to investigate how multi-voxel neural patterns vary with behavioral discriminative performance in a roughness discrimination task. For this purpose, we search for brain regions carrying roughness-discriminative information using searchlight MVPA [3] and test how each region's decoding accuracy correlates with behavioral performance. Methods: Sixteen subjects participated in this study, which was approved by the Korea University Institutional Review Board (KU-IRB-11-46-A-0). Anatomical (T1-weighted 3D MPRAGE) and functional images (T2*-weighted gradient EPI, TR = 3,000 ms, voxel size = 2.0×2.0×2.0 mm) were obtained using a Siemens 3T scanner (Magnetom TrioTim). Before the fMRI scanning, all participants performed the behavioral roughness discrimination task. Five different roughness levels of aluminum-oxide abrasive papers (Sumitomo 3-M), which were validated and employed in a previous study [4], were used. In each trial of the task, the participants explored two randomly presented abrasive papers with the index fingertip of the right hand and reported which of them felt rougher. Behavioral discriminative sensitivity was measured as the difference of roughness values between the 25th and 75th percentiles of a psychometric function; this was referred to as the just noticeable difference (JND) [5] (see the sketch below). The fMRI scanning consisted of five blocks of twenty trials. Each trial was made up of two consecutive periods: a 6-s exploration followed by a 15-s rest. Following instructions, the participants explored a presented abrasive paper with the index fingertip of the right hand. Brain signals were analyzed using a searchlight MVPA approach [3] and the decoding accuracy of each significant cluster was obtained. Finally, we evaluated the correlation between the JND and the decoding accuracy using the Pearson correlation coefficient. Results: A random-effects group analysis revealed that four clusters exhibited statistically significant decoding capabilities to differentiate the five distinct roughness levels (p<0.0001 uncorr., cluster size>30). These four clusters were located in the superior portion of the bilateral temporal pole (STP), supplementary motor area (SMA), and contralateral postcentral gyrus (S1). Decoding accuracies for roughness discrimination significantly exceeded the chance level (20%) for every cluster (SMA: 40.3±4.4%; contralateral S1: 38.0±6.7%; contralateral STP: 33.6±4.7%; ipsilateral STP: 33.1±3.7%). Among these clusters, a significant Pearson correlation coefficient was obtained only for SMA (r=-0.547, p<0.05). Conclusions: In this study, we statistically assessed each set of multi-voxel patterns across the whole brain and found that bilateral STP, SMA, and contralateral S1 exhibited neural activity patterns specific to roughness discrimination. Remarkably, decoding performance based on SMA activity showed a significant correlation with the behavioral performance.
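A minimal sketch of the JND computation described above (fitting a cumulative-Gaussian psychometric function and taking the spread between its 25th and 75th percentile points); the data points and the Gaussian form are illustrative assumptions:

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    # Probability of judging the comparison paper rougher (cumulative Gaussian).
    return norm.cdf(x, loc=mu, scale=sigma)

# Hypothetical data: roughness difference vs. proportion of "rougher" responses.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
p = np.array([0.08, 0.30, 0.55, 0.78, 0.95])
(mu, sigma), _ = curve_fit(psychometric, x, p, p0=[0.0, 1.0])

# JND as the difference between the 25th and 75th percentile points of the fit.
jnd = norm.ppf(0.75, loc=mu, scale=sigma) - norm.ppf(0.25, loc=mu, scale=sigma)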
The negative correlation indicates that individuals with higher roughness decoding accuracy in SMA had smaller JNDs, i.e. better performance in the roughness discrimination task. Our findings suggest that the pattern of activity in SMA may be closely related to the ability to discriminate tactile roughness. no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 A correlation study of behavioral and neural decoding performance for roughness discrimination 15017 15422 RoheN2014 7 T Rohe U Noppeney Hamburg, Germany2014-06-11 20th Annual Meeting of the Organization for Human Brain Mapping (OHBM 2014) Introduction: To form a reliable percept of the multisensory environment, the brain integrates signals across the senses. However, it should integrate signals only when caused by a common source, but segregate those from different sources (Shams and Beierholm, 2010). Bayesian Causal Inference provides a rational strategy to arbitrate between information integration and segregation: In the case of a common source, signals should be integrated weighted by their sensory reliability (Ernst and Banks, 2002; Alais and Burr, 2004; Fetsch et al., 2012). In the case of separate sources, they should be processed independently. Yet, in everyday life, the brain does not know whether signals come from common or different sources, but needs to infer the probabilities of these causal structures from the sensory signals. A final estimate can then be obtained by averaging the estimates under the two causal structures weighted by their posterior probabilities (i.e. model averaging). Indeed, human observers locate audiovisual signal sources by combining the spatial estimates under the assumptions of common and separate sources weighted by their probabilities (Kording et al., 2007). Yet, the neural basis of Bayesian Causal Inference during spatial localization remains unknown. This study combines Bayesian modeling and multivariate fMRI decoding to characterize how Bayesian Causal Inference is performed by the auditory and visual cortical hierarchies (Fig. 1A-C). Methods: Participants (N = 5) were presented with auditory and visual signals that were independently sampled from four locations along the azimuth. The spatial reliability of the visual signal was high or low. In a selective attention paradigm, participants localized either the auditory or the visual spatial signal. After fitting the Bayesian Causal Inference model to participants' localization responses, we obtained condition-specific auditory and visual spatial estimates under the assumption of (i) common (S_AV,C=1) and (ii) separate sources (S_A,C=2, S_V,C=2), and (iii) the final combined spatial estimates after model averaging (S_A, S_V), i.e. five spatial estimates in total (Fig. 1C). Using cross-validation, we trained a support vector regression model to decode these auditory or visual spatial estimates from fMRI voxel response patterns in regions along the visual and auditory cortical hierarchies. We evaluated the decoding accuracy for each spatial estimate in terms of the correlation coefficient between the spatial estimate decoded from fMRI and that predicted by the Bayesian Causal Inference model. To determine the spatial estimate that is primarily encoded in a region, we next computed the exceedance probability that the correlation coefficient of one spatial estimate was greater than those of the other spatial estimates (Fig. 1D).
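A minimal numerical sketch of the model-averaging computation described above, following the closed-form expressions of Kording et al. (2007); all parameter values are illustrative, not the fitted ones:

import numpy as np
from scipy.stats import norm

def model_average(xa, xv, sa, sv, sp=10.0, p_common=0.5):
    # Spatial estimates under each causal structure (zero-mean spatial prior).
    va, vv, vp = sa**2, sv**2, sp**2
    s_c1 = (xa/va + xv/vv) / (1/va + 1/vv + 1/vp)   # fusion: reliability-weighted
    s_c2 = (xa/va) / (1/va + 1/vp)                  # segregated auditory estimate
    # Likelihood of the two measurements under each causal structure.
    d = va*vv + va*vp + vv*vp
    like_c1 = np.exp(-0.5*((xa - xv)**2*vp + xa**2*vv + xv**2*va)/d) / (2*np.pi*np.sqrt(d))
    like_c2 = norm.pdf(xa, 0, np.sqrt(va + vp)) * norm.pdf(xv, 0, np.sqrt(vv + vp))
    post_c1 = like_c1*p_common / (like_c1*p_common + like_c2*(1 - p_common))
    # Final auditory estimate: structure-specific estimates weighted by the
    # posterior probability of each causal structure (model averaging).
    return post_c1*s_c1 + (1 - post_c1)*s_c2

# e.g. model_average(xa=8.0, xv=5.0, sa=4.0, sv=1.0)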
Results: Bayesian Causal Inference emerged along the auditory and visual hierarchies: Lower level visual and auditory areas encoded auditory and visual estimates under the assumption of separate sources (i.e. information segregation). Posterior intraparietal sulcus (IPS1-2) represented the reliability-weighted average of the signals under common source assumptions. Anterior IPS (IPS3-4) represented the task-relevant auditory or visual spatial estimate obtained from model averaging. Conclusions: This is the first demonstration that the computational operations underlying Bayesian Causal Inference are performed by the human brain in a hierarchical fashion. Critically, the brain explicitly encodes not only the spatial estimates under the assumption of full segregation (primary visual and auditory areas), but also under forced fusion (IPS1-2). These spatial estimates under the causal structures of common and separate sources are then averaged into task-relevant auditory or visual estimates according to model averaging (IPS3-4). Our study provides a novel hierarchical perspective on multisensory integration in human neocortex. no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 A cortical hierarchy performs Bayesian Causal Inference for multisensory perception 15017 1542215017 18826 LeitaoTTN2014 7 J Leitao A Thielscher J Tuennerhoff U Noppeney Hamburg, Germany2014-06-11 20th Annual Meeting of the Organization for Human Brain Mapping (OHBM 2014) Introduction: Despite sustained attention, weak sensory events often evade our perceptual awareness. The neural mechanisms that determine whether a stimulus is consciously perceived remain poorly understood. Conscious visual perception is thought to rely on a widespread neural system encompassing primary and higher order visual areas, frontoparietal areas and subcortical regions such as the thalamus. This concurrent TMS-fMRI study applied TMS to the right anterior intraparietal sulcus (IPS), with sham stimulation as a control, to investigate how perturbations to IPS influence the neural systems underlying visual perception of weak sensory events. Methods: Seven subjects took part in the concurrent TMS-fMRI experiment (3T Siemens Magnetom Tim Trio System, GE-EPI, TR = 3290ms, TE = 35ms, 40 axial slices, voxel size = 3mm x 3mm x 3.3mm). The 2x2x2 factorial design manipulated: (i) visual target (present, absent), (ii) visual percept (yes, no) and (iii) TMS condition (IPS, Sham). In a visual target detection task, subjects fixated a cross in the centre of the screen. On 50% of the trials a weak visual target was presented in their left lower visual field. Subjects were instructed to answer 'yes' only when completely sure. Visual stimuli were individually tailored to yield a detection threshold of 70% on target-present trials. Bursts of 4 TMS pulses (10Hz) were applied in image acquisition gaps at 100ms after each trial onset over the right IPS (x=42.3, y=-50.3, z=64.4) and during a sham condition, using a MagPro X100 stimulator (MagVenture, Denmark) and an MR-compatible figure-of-eight TMS coil (MRi-B88). Stimulation intensity was 69% for IPS and was adjusted during Sham stimulation to evoke similar side effects. Trials were presented in blocks of 12 that were interleaved with baseline periods of 13s. Each run consisted of 7 blocks, with 4 runs per TMS condition, giving a total of 168 trials per condition. Each TMS condition was performed in a different session and all conditions were counterbalanced across subjects.
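The abstract continues below with the categorization of responses into hits, misses, false alarms and correct rejections; such yes/no detection data are commonly summarized with signal detection measures. A minimal sketch, with purely hypothetical trial counts:

import numpy as np
from scipy.stats import norm

# Hypothetical counts from a yes/no detection task with 50% target-present trials.
hits, misses = 59, 25                      # target-present trials
false_alarms, correct_rejections = 6, 78   # target-absent trials

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rejections)
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)             # sensitivity
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))  # > 0: conservative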
Behavioral responses were categorized as hits, misses, false alarms and correct rejections (CR). Performance measures for each category were computed separately for IPS- and Sham-TMS and averaged across subjects. While each condition was modelled at the 1st level (using SPM8), 2nd level random effects analyses (one-sample t-tests) were restricted to target-present trials (i.e. hits, misses). We tested for the main effects of TMS, visual percept and their interaction. Results are reported at p<0.05 at cluster level, corrected for the whole brain using an auxiliary uncorrected voxel threshold of p=0.01. Conclusions: Visual detection involves perceptual decisions based on uncertain sensory representations. As participants set a high criterion for determining whether they are aware of targets, missed trials were associated with more uncertainty, as indexed by long response times, and thereby placed more demands on decisional processes. TMS to IPS perturbed this neural system involved in perceptual decisions and awareness. Critically, while the right precentral/middle frontal gyrus associated with the frontal eye field usually discriminates between hits and misses, TMS to IPS abolished this difference in activation, indicating that IPS and FEF closely interact in perceptual awareness and decisions. no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Using TMS-fMRI to investigate the neural correlates of visual perception 15017 1542215017 1882615017 18821 FademrechtBd2014_3 7 L Fademrecht I Bülthoff S de la Rosa Tübingen, Germany2014-06-00 6th International Conference on Brain and Cognitive Engineering (BCE 2014) no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Peripheral Vision and Action Recognition 15017 15422 ZhaoHB2014_3 7 M Zhao WG Hayward I Bülthoff Tübingen, Germany2014-06-00 6th International Conference on Brain and Cognitive Engineering (BCE 2014) no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Race of Face Affects Various Face Processing Tasks Differently 15017 15422 KimSRWWB2014 7 J Kim J Schultz T Rohe C Wallraven S W HH Bülthoff Tübingen, Germany2014-06-00 6th International Conference on Brain and Cognitive Engineering (BCE 2014) no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Supramodal Representations of Associated Emotions 15017 15422 SymeonidouOBC2014_2 7 E-R Symeonidou M Olivari HH Bülthoff LL Chuang Tübingen, Germany2014-06-00 6th International Conference on Brain and Cognitive Engineering (BCE 2014) no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 The Role of Direct Haptic Feedback in a Compensatory Tracking Task 15017 15422 JuW2014 7 L Ju C Wallraven Tübingen, Germany2014-06-00 6th International Conference on Brain and Cognitive Engineering (BCE 2014) no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 User Experience in Stereoscopic Driving Games 15017 15422 Chang2014_2 7 D-S Chang HH Bülthoff S de la Rosa Tübingen, Germany2014-06-00 6th International Conference on Brain and Cognitive Engineering (BCE 2014) no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Visual Adaptation to Social Actions: The Role of Meaning vs.
Motion for Action Recognition 15017 15422 Yuksel2014 7 B Yüksel C Secchi A Franchi Hong Kong, China2014-05-31 ICRA 2014 Workshop: Aerial robots physically interacting with the environment no notspecified http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/ICRA-WS-2014-Yueksel.pdf published 0 Aerial Physical Interaction via Reshaping of the Physical Properties: Passivity-based Control Methods and Nonlinear Force Observers 15017 15422 EsinsBS2014 7 J Esins I Bülthoff J Schultz St. Pete Beach, FL, USA2014-05-21 1436 14th Annual Meeting of the Vision Sciences Society (VSS 2014) Humans rely strongly on the shape of other people’s faces to recognize them. However, faces also change appearance between encounters, for example when people put on glasses or change their hairdo. This can affect face recognition in certain situations, e.g. when recognizing faces that we do not know very well, or for congenital prosopagnosics. However, additional cues can be used to recognize faces: faces move as we speak, smile, or shift gaze, and this dynamic information can help to recognize other faces (Hill & Johnston, 2001). Here we tested if and to what extent such dynamic information can help congenital prosopagnosics to improve their face recognition. We tested 15 congenital prosopagnosics and 15 age- and gender-matched controls with a test created by Raboy et al. (2010). Participants learned 18 target identities and then performed an old/new judgment on the learned faces and 18 distractor faces. During the test phase, half the target faces exhibited everyday changes (e.g. modified hairdo, glasses added, etc.) while the other targets did not change. Crucially, half the faces were presented as short film sequences (dynamic stimuli) while the other half were presented as five random frames (static stimuli) during learning and test. Controls and prosopagnosics recognized identical targets better than changed targets. While controls recognized faces better in the dynamic than in the static condition, prosopagnosics’ performance was not better for dynamic compared to static stimuli. This difference between groups was significant. The absence of a dynamic advantage in prosopagnosics suggests that dysfunctions in congenital prosopagnosia might not be restricted to ventral face-processing regions, but might also involve lateral temporal regions where facial motion is known to be processed (e.g. Haxby et al., 2000). no notspecified http://www.kyb.tuebingen.mpg.de/ published -1436 Facial motion does not help face recognition in congenital prosopagnosics 15017 15422 ZhaoB2014 7 M Zhao I Bülthoff St. Pete Beach, FL, USA2014-05-20 1262 14th Annual Meeting of the Vision Sciences Society (VSS 2014) Previous studies have shown that face race influences various aspects of face processing, including face identification (Meissner & Brigham, 2001), holistic processing (Michel et al., 2006), and processing of featural and configural information (Hayward et al., 2008). However, whether these various aspects of other-race effects (ORE) arise from the same underlying mechanism or from independent ones remains unclear. To address this question, we measured those manifestations of the ORE with different tasks, and tested whether the magnitudes of those OREs are related to each other. Each participant performed three tasks. (1) The original and a Chinese version of the Cambridge Face Memory Test (CFMT, Duchaine & Nakayama, 2006; McKone et al., 2012), which were used to measure the ORE in face memory.
(2) A part/whole sequential matching task (Tanaka et al., 2004), which was used to measure the ORE in face perception and in holistic processing. (3) A scrambled/blurred face recognition task (Hayward et al., 2008), which was used to measure the ORE in featural and configural processing. We found better recognition performance for own-race than other-race faces in all three tasks, confirming the existence of an ORE across various tasks. However, the size of the ORE measured in the three tasks differed; we found no correlation between the OREs in the three tasks. More importantly, the two measures of the ORE in configural and holistic processing tasks could not account for the individual differences in the ORE in face memory. These results indicate that although face race always influences face recognition as well as configural and featural processing, different underlying mechanisms are responsible for the occurrence of the ORE for each aspect of face processing tested here. no notspecified http://www.kyb.tuebingen.mpg.de/ published -1262 Face Race Affects Various Types of Face Processing, but Affects Them Differently 15017 15422 FademrechtBd2014 7 L Fademrecht I Bülthoff S de la Rosa St. Pete Beach, FL, USA2014-05-20 1006 14th Annual Meeting of the Vision Sciences Society (VSS 2014) The recognition of actions is critical for human social functioning and provides insight into both the current activity and the inner states (e.g. valence) of another person. Although actions often appear in the visual periphery, little is known about action recognition beyond foveal vision. Related previous research showed that object recognition and object valence (i.e. positive or negative valence) judgments are relatively unaffected by presentations up to 13° visual angle (VA) (Calvo et al. 2010). This is somewhat surprising given that recognition performance for words and letters declines sharply in the visual periphery. Here participants recognized an action and evaluated its valence as a function of eccentricity. We used a large screen display that allowed presentation of stimuli over a visual field from -60 to +60° VA. A life-size stick figure avatar carried out one of six motion captured actions (3 positive actions: handshake, hugging, waving; 3 negative actions: slapping, punching and kicking). 15 participants assessed the valence of the action (positive or negative action) and another 15 participants identified the action (as fast and as accurately as possible). We found that reaction times increased with eccentricity to a similar degree for the valence and the recognition task. In contrast, accuracy declined significantly with eccentricity for both tasks, but declined more sharply for the action recognition task. These declines were observed for eccentricities larger than 15° VA. Thus, we replicate the findings of Calvo et al. (2010) that recognition is little affected by extra-foveal presentations smaller than 15° VA. Yet, we additionally demonstrate that visual recognition performance for actions declined significantly at larger eccentricities. We conclude that large eccentricities are required to assess the effect of peripheral presentation on visual recognition. no notspecified http://www.kyb.tuebingen.mpg.de/ published -1006 Influence of eccentricity on action recognition 15017 15422 delaRosaFB2014 7 S de la Rosa G Fuller HH Bülthoff St.
Pete Beach, FL, USA2014-05-20 1005 14th Annual Meeting of the Vision Sciences Society (VSS 2014) Physical interactions with other people (social interactions) are an integral part of human social life. Surprisingly, little is known about the visual processes underlying social interaction recognition. Many studies have examined visual processes underlying the recognition of individual actions, but only a few have examined the visual recognition of social interactions (Dittrich, 1993; de la Rosa et al. 2013, Neri et al. 2007; Manera et al. 2011a,b). An important question concerns the degree to which the recognition of individual actions and social interactions share visual processes. We addressed this question in two experiments (15 participants each) using a visual adaptation paradigm in which participants saw an action (handshake or high 5) carried out by one individual (individual action) for a prolonged time during the adaptation period. According to previous adaptation results, we expected that the subsequent perception of an ambiguous test stimulus (an action morph between handshake and high 5) would be biased away from the adapting stimulus (action adaptation aftereffect, AAA). Using these stimuli, participants were adapted to individual actions and tested on individual actions in Experiment 1. In line with previous studies, we expected an adaptation effect in Experiment 1. In Experiment 2, participants were adapted to individual actions and tested on social interactions (two instead of one individual carrying out the actions of Experiment 1). If social interaction recognition requires completely different or additional visual processes compared to the ones employed in the recognition of individual actions, we expected the AAA in Experiment 2 to be absent or smaller than in Experiment 1. In contrast, we found a significant AAA in both experiments (p<0.001) that did not differ across the two experiments (p=0.130). Social interaction and individual action recognition therefore seem to be based on similar visual processes when attention to the interaction is not enforced. no notspecified http://www.kyb.tuebingen.mpg.de/ published -1005 Social interaction recognition: the whole is not greater than the sum of its parts 15017 15422 SaultonDBd2014 7 A Saulton T Dodds HH Bülthoff S de la Rosa St. Pete Beach, FL, USA2014-05-19 845 14th Annual Meeting of the Vision Sciences Society (VSS 2014) Stored representations of body size and shape as derived from somatosensation (body model) are considered to be critical components of perception and action. It is commonly believed that the body model can be measured using a localization task and be distinguished from other visual representations of the body using a visual template matching task. Specifically, localization tasks have shown distorted hand representations consisting of an overestimation of hand width and an underestimation of finger length [Longo and Haggard, 2010, PNAS, 107 (26), 11727-11732]. In contrast, template matching tasks indicate that visual hand representations (body image) do not show such distortions [Longo and Haggard, 2012, Acta Psychologica, 141, 164-168]. We examined the specificity of the localization and visual template matching tasks for measuring body-related representations. Participants conducted a localization task and a template matching task with objects (box, post-it, rake) and their own hand. The localization task revealed that all items' dimensions were significantly distorted (all p < .0018) except for the width of the hand and rake.
In contrast, the template matching task indicated no significant differences between estimated and actual item shape for all items (all p>0.05) except the box (p<0.01), suggesting that the visual representation of the items is almost veridical. Moreover, performance across these tasks was significantly correlated for the hand and rake (p<.001). Overall, these results show that effects considered to be body-specific, i.e. distortions of the body model, are actually more general than previously thought, as they are also observed with objects. Because localizing points on an object is unlikely to be aided by somatosensation, the assessed representations are unlikely to be mainly based on somatosensation but might reflect more general cognitive processes, e.g. visual memory. These findings have important implications for the nature of the body image and the body model. no notspecified http://www.kyb.tuebingen.mpg.de/ published -845 Body and objects representations are associated with similar distortions 15017 15422 NestiBPB2014_2 7 A Nesti K Beykirch P Pretto HH Bülthoff St. Pete Beach, FL, USA2014-05-18 485 14th Annual Meeting of the Vision Sciences Society (VSS 2014) Whilst moving through the environment, humans use vision to discriminate different self-motion intensities and to control their actions, e.g. maintaining balance or controlling a vehicle. Yet, the way different intensities of the visual stimulus affect motion sensitivity is still an open question. In this study we investigate human sensitivity to visually induced circular self-motion perception (vection) around the vertical (yaw) axis. The experiment is conducted on a motion platform equipped with a projection screen (70 x 90 degrees FoV). Stimuli consist of a realistic virtual environment (360 degrees panoramic color picture of a forest) rotating at constant velocity around the participants' head. Participants terminate each visual rotation only after vection arises. Vection is facilitated by the use of mechanical vibrations of the participant’s seat. In a two-interval forced choice task, participants discriminate a reference velocity from a comparison velocity (adjusted in amplitude after every presentation) by indicating which rotation felt stronger. Motion sensitivity is measured as the smallest perceivable change in stimulus velocity (differential threshold) for 8 participants at 5 rotation velocities (5, 15, 30, 45 and 60 deg/s). Differential thresholds for circular vection increase with stimulus intensity, following a trend best described by a power law with an exponent of 0.64 (i.e. Δv ≈ k·v^0.64). The time necessary for vection to arise is significantly longer for the first stimulus presentation (average 11.6 s) than for the second (9.1 s), and does not depend on stimulus velocity. Results suggest that the lower sensitivity (i.e. higher differential thresholds) for increasing velocities reflects prior expectations of small rotations, which are more common than large rotations in everyday experience. A probabilistic model is proposed that combines sensory information with prior knowledge of the expected motion in a statistically optimal fashion. Results also suggest that vection onset is facilitated by recent exposure. no notspecified http://www.kyb.tuebingen.mpg.de/ published -485 Human self-motion sensitivity to visual yaw rotations 15017 15422 PrettoNNLB2014 7 P Pretto A Nesti SAE Nooij M Losert HH Bülthoff St.
Pete Beach, FL, USA2014-05-17 279 14th Annual Meeting of the Vision Sciences Society (VSS 2014) In vehicle simulation (flight, driving), simulator tilt is used to reproduce sustained acceleration. In order to feel realistic, this tilt is performed at a rate below the tilt-rate detection threshold, which is usually measured in darkness and assumed constant. However, it is known that many factors affect the threshold, such as visual information, simulator motion in additional directions, or active vehicle control. Since all these factors come together in vehicle simulation, we aimed to investigate the effect of each of these factors on the roll-rate detection threshold during simulated curve driving. The experiment was conducted on a motion-based driving simulator. Roll-rate detection thresholds were determined under four conditions: (i) roll only in darkness; (ii) combined roll/sway in darkness; (iii) combined roll/sway and visual information whilst passively moved through a curve; (iv) combined roll/sway and visual information whilst actively driving around a curve. For all conditions, motion was repeatedly provided and ten participants reported the detection of roll in a yes-no task. Thresholds were measured by adjusting the roll-rate saturation value at every trial according to a single-interval adjustment matrix (SIAM). The mean detection threshold for roll-rate increased from 0.7 deg/s with roll only (i) to 6.3 deg/s in active driving (iv) (mean thresholds were 3.9 deg/s and 3.3 deg/s in conditions (ii) and (iii), respectively). However, large differences between participants were observed: for some, the threshold did not increase from passive to active driving, while for others a threshold about three times higher was measured, together with a lower level of attention reported on questionnaires. We conclude that tilt-rate perception in vehicle simulation is affected by the combination of different simulator motions. Similarly, an active control task seems to increase the detection threshold for tilt-rate, i.e. to impair motion sensitivity. Results suggest that this is related to the level of attention during the task. no notspecified http://www.kyb.tuebingen.mpg.de/ published -279 Tilt-rate perception in vehicle simulation: the role of motion, vision and attention 15017 15422 NoainIWSVPMLMSSMGSB2014 7 D Noain LL Imbach E Werth SR Schreglmann PO Valko M Penner M Morawska T Li A Maric E Symeonidou J Stover L Mica YV Gavrilov TE Scammell CR Baumann Luzern, Switzerland2014-05-16 Joint Annual Meeting 2014 Swiss Headache Society, Swiss Society for Sleep Research, Sleep Medicine and Chronobiology (SKG 2014) no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Increased sleep need after traumatic brain injury: A comparative behavioural and histological study in rats and humans 15017 15422 ChangBd2014 7 D-S Chang HH Bülthoff S de la Rosa New York, NY, USA2014-05-00 79th Cold Spring Harbor Symposium on Quantitative Biology: Cognition no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Visual Adaptation to Social Actions: The Role of Meaning vs.
Motion for Action Recognition 15017 15422 NestiBPB2014 7 A Nesti KA Beykirch P Pretto HH Bülthoff Amsterdam, The Netherlands2014-04-00 24th Annual Meeting of the Society for the Neural Control of Movements (NCM 2014) no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Human sensitivity to visual-inertial self-motion 15017 15422 NooijPNB2014 7 SAE Nooij P Pretto A Nesti HH Bülthoff Amsterdam, The Netherlands2014-04-00 24th Annual Meeting of the Society for the Neural Control of Movements (NCM 2014) no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Perception of heading and travelled path during curvilinear trajectories 15017 15422 ReichenbachBBT2014 7 A Reichenbach J-P Bresciani HH Bülthoff A Thielscher Amsterdam, The Netherlands2014-04-00 24th Annual Meeting of the Society for the Neural Control of Movements (NCM 2014) no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Reaching with the sixth sense: vestibulomotor control in the human right parietal cortex 15017 1542215017 18821 FladC2014 7 N Flad LL Chuang Günne, Germany2014-03-00 Interdisciplinary College: Cognition 3.0 - the social mind in the connected world (IK 2014) no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Setting up a high-fidelity flight simulator to study closed-loop control and physiological workload 15017 15422 Geluardi2014 7 S Geluardi Pisa, Italy2014-01-09 Presentazione attività anno 2013: Scuola di Ingegneria: Università di Pisa Dottorati di Ricerca in Automatica, Robotica e Bioingegneria, Ingegneria Meccanica, Ingegneria Chimica, Ingegneria Aerospaziale, Ingegneria Nucleare Congestion problems in the transportation system have led regulators to consider drastic changes in methods of transportation for the general public. A possible solution would be to combine the best of ground-based and air-based transportation and produce a personal air transport system. This project aims to investigate the interaction between a pilot with limited flying skills and the augmented vehicles that are part of the personal air transport system, and to verify whether performance similar to that of a highly trained pilot can be reached, even in dangerous or demanding environmental conditions. no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Augmented Systems for Personal Air Vehicles 15017 15422 Olivari2014 7 M Olivari Pisa, Italy2014-01-09 Presentazione attività anno 2013: Scuola di Ingegneria: Università di Pisa Dottorati di Ricerca in Automatica, Robotica e Bioingegneria, Ingegneria Meccanica, Ingegneria Chimica, Ingegneria Aerospaziale, Ingegneria Nucleare Haptic aids have been widely used in manual control tasks to complement visual information through the sense of touch. To analytically design a haptic aid, adequate knowledge is needed about how pilots adapt their visual response and the biomechanical properties of their arm to a generic haptic aid. Two novel identification methods were proposed to estimate these pilot dynamic responses. The two methods were applied to experimental data from closed-loop control tasks with pilots, with the aim of estimating the pilot responses to different external aids. Different haptic aids were designed and tested during the experiments: a Direct Haptic Aid (DHA) and an Indirect Haptic Aid (IHA) (a toy sketch of the DHA concept follows below). Furthermore, an automated system was designed to be equivalent to the haptic aids when the pilot was out of the loop, i.e., to provide the same control command as the haptic aid.
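A toy closed-loop sketch of the Direct Haptic Aid idea referenced above: the aid adds a stick torque acting in the same direction as the required control input, on top of the pilot's visual response. The plant, gains and time step are invented for illustration and are not the thesis's identified models:

import numpy as np

dt, duration = 0.01, 10.0
k_pilot, k_dha = 2.0, 1.0    # pilot visual gain, haptic-aid gain (invented)
y, ys = 0.0, []              # controlled-element output and its history
for t in np.arange(0.0, duration, dt):
    target = np.sin(0.5 * t)             # reference trajectory to track
    error = target - y
    u = k_pilot * error + k_dha * error  # stick input: pilot + DHA torque
    y += dt * u                          # single-integrator controlled element
    ys.append(y)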
All the experimental conditions with the external aids were contrasted to a baseline condition without external aids. no notspecified http://www.kyb.tuebingen.mpg.de/ published 0 Human-Centered Design of Haptic Aids for Aerial Vehicles 15017 15422 Dobs2014 15 K Dobs 2014-12-00 no notspecified published Behavioral and Neural Mechanisms Underlying Dynamic Face Perception 15017 15422 Rohe2014 15 T Rohe 2014-12-00 no notspecified published Causal inference in multisensory perception and the brain 15017 15422 Kaulard2014 15 K Kaulard 2014-12-00 no notspecified published Visual perception of emotional and conversational facial expressions 15017 15422 Piryankova2014 15 I Piryankova 2014-11-28 no notspecified published The influence of a self-avatar on space and body perception in immersive virtual reality 15017 1542215017 Volkova2014 15 E Volkova 2014-11-00 no notspecified published Perception of Emotional Body Expressions in Narrative Scenarios and Across Cultures 15017 1542215017 Browatzki2014 15 B Browatzki 2014-10-00 no notspecified published Multimodal object perception for robotics 15017 15422 Leyrer2014_2 15 M Leyrer 2014-10-00 no notspecified published Understanding and Manipulating Eye Height to Change the User's Experience of Perceived Space in Virtual Reality 15017 1542215017 Masone2014 15 C Masone 2014-07-16 no notspecified published Planning and control for robotic tasks with a human-in-the-loop [Planung und Steuerung von Roboter-Mensch Systemen] 15017 15422 Giani2014 15 A Giani 2014-05-00 no notspecified published From multiple senses to perceptual awareness 15017 15422 Venrooij2014 15 J Venrooij 2014-03-21 no notspecified published Measuring, modeling and mitigating biodynamic feedthrough 15017 15422 Grabe2014 15 V Grabe 2014-03-00 no notspecified published Towards Robust Visual-Controlled Flight of Single and Multiple UAVs in GPS-Denied Indoor Environments 15017 15422 Bieg2014_2 15 H-J Bieg 2014-02-00 no notspecified published On the coordination of saccades with hand and smooth pursuit eye movements 15017 15422 Bulthoff2014_11 41 HH Bülthoff Bulthoff2014_10 10 I Bülthoff Bulthoff2014_8 10 HH Bülthoff Bulthoff2014_9 10 HH Bülthoff NieuwenhuizenB2014_2 10 FM Nieuwenhuizen HH Bülthoff Katliar2014 10 M Katliar Bulthoff2014_5 10 HH Bülthoff Venrooij2014_3 10 J Venrooij Bulthoff2014_6 10 HH Bülthoff Nieuwenhuizen2014 10 FM Nieuwenhuizen delaRosa2014 10 S de la Rosa delaRosa2014_2 10 S de la Rosa Tesch2014 10 J Tesch delaRosa2016_2 10 S de la Rosa Meilinger2014_2 10 T Meilinger ChangBd2014_4 10 D-S Chang HH Bülthoff S de la Rosa Bulthoff2014_7 10 HH Bülthoff Bulthoff2014_4 10 HH Bülthoff MeilingerHBM2014 10 T Meilinger A Henson HH Bülthoff HA Mallot Meilinger2014_4 10 T Meilinger Chuang2014_2 10 L Chuang delaRosa2014_3 10 S de la Rosa Chuang2014 10 LL Chuang BulthoffZ2014 10 I Bülthoff M Zhao Wallraven2014_2 10 C Wallraven Meilinger2014_3 10 T Meilinger Venrooij2014_4 10 J Venrooij Venrooij2014_2 10 J Venrooij Bulthoff2014_3 10 HH Bülthoff Soyka2014 10 F Soyka ChiovettoCEG2014 10 E Chiovetto C Curio D Endres M Giese delaRosaSB2014 10 S de la Rosa S Streuber HH Bülthoff delaRosaB2014 10 S de la Rosa HH Bülthoff Breidt2014 10 M Breidt ChuangFSNB2014 10 LL Chuang N Flad M Scheer FM Nieuwenhuizen HH Bülthoff O039MalleyBM2014 10 M O'Malley HH Bülthoff T Meilinger Meilinger2014 10 T Meilinger K Watanabe Bulthoff2014_2 10 HH Bülthoff Bulthoff2014 10 I Bülthoff Dobricki2014_2 10 M Dobricki delaRosaC2014 10 D-S Chang S de la Rosa Dobricki2014 10 M Dobricki BulthoffR2014 HH Bülthoff P Robuffo 
Giordano 2014-01-21 The invention relates to a teleoperation method and a human robot interface for remote control of a machine by a human operator (5) using a remote control unit, particularly for remote control of a drone, wherein vestibular feedback is provided to the operator (5) to enhance the situational awareness of the operator (5), and wherein the vestibular feedback represents a real motion of the remote-controlled machine. no notspecified http://www.kyb.tuebingen.mpg.de/ published Teleoperation method and human robot interface for remote control of a machine by a human operator 15017 15422 Soyka2013 1 F Soyka Logos Verlag Berlin, Germany 2013-00-00 Self-motion describes the motion of our body through the environment and is an essential part of our everyday life. The aim of this thesis is to improve our understanding of how humans perceive self-motion, mainly focusing on the role of the vestibular system. Following a cybernetic approach, this is achieved by systematically gathering psychophysical data and then describing it with mathematical models of the vestibular sensors. Three studies were performed, investigating perceptual thresholds for translational and rotational motions as well as reaction times to self-motion stimuli. Based on these studies, a model is introduced which is able to describe thresholds for arbitrary motion stimuli varying in duration and acceleration profile shape (a sketch of this idea follows below). This constitutes a significant addition to the existing literature, since previous models only took into account the effect of stimulus duration, neglecting the actual time course of the acceleration profile. In the first and second studies, model parameters were identified based on measurements of direction discrimination thresholds for translational and rotational motions. These models were used in the third study to successfully predict differences in reaction times between varying motion stimuli, confirming the validity of the modeling approach. This work can allow for optimizing motion simulator control algorithms based on self-motion perception models and for developing perception-based diagnostics for patients suffering from vestibular disorders. Tübingen, Univ., Diss. 2013 no notspecified http://www.kyb.tuebingen.mpg.de/ published 96 A Cybernetic Approach to Self-Motion Perception 15017 15422 SteinickeVCL2013 1 F Steinicke Y Visell J Campos A Lécuyer Springer New York, NY, USA 2013-00-00 no notspecified http://www.kyb.tuebingen.mpg.de/ published 402 Human walking in virtual environments: perception, technology, and applications 15017 1882415017 15422 Streuber2013 1 S Streuber Logos Verlag Berlin, Germany 2013-00-00 Humans are social beings and often act jointly with other humans (joint actions) rather than alone. Prominent theories of joint action agree that visual information is critical for successful joint action coordination but are vague about the exact sources of visual information used during a joint action. Knowing which sources of visual information are used, however, is important for a more detailed characterization of the functioning of action coordination in joint actions. The current Ph.D. research examines the importance of different sources of visual information for joint action coordination under realistic settings.
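A minimal sketch of the threshold-modeling idea summarized in the Soyka thesis abstract above: pass an arbitrary acceleration profile through a transfer-function model of the vestibular sensors and treat the stimulus as detectable once the peak internal response exceeds a criterion. The band-pass dynamics, time constants and criterion here are invented placeholders, not the fitted model:

import numpy as np
from scipy.signal import TransferFunction, lsim

# Hypothetical sensor dynamics: high-pass washout combined with low-pass smoothing.
sensor = TransferFunction([1.0, 0.0], np.polymul([0.016, 1.0], [5.0, 1.0]))

def detected(t, accel, criterion=0.05):
    # Detectable if the peak filtered response reaches the criterion.
    _, response, _ = lsim(sensor, U=accel, T=t)
    return np.max(np.abs(response)) >= criterion

t = np.linspace(0.0, 1.0, 1000)
accel = 0.3 * np.sin(np.pi * t)   # half-sine acceleration profile, 1 s duration
print(detected(t, accel))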
In three studies I examined the influence of different sources of visual information (Study 1), the functional role of different sources of visual information (Study 2), and the effect of social context on the use of visual information (Study 3) in a table tennis game. The results of these studies revealed that (1) visual anticipation of the interaction partner and the interaction object is critical in natural joint actions, (2) different sources of visual information are critical at different temporal phases during the joint action, and (3) the social context modulates the importance of different sources of visual information. In sum, this work provides important and new empirical evidence about the importance of different sources of visual information in close-to-natural joint actions. Tübingen, Univ., Diss., 2013 no notspecified http://www.kyb.tuebingen.mpg.de/ published 114 The Influence of Different Sources of Visual Information on Joint Action Performance 15017 15422 MohlerRSS2013 28 B Mohler B Raffin H Saito O Staadt VenrooijPMvB2013 3 J Venrooij MD Pavel M Mulder FCT van der Helm HH Bülthoff 2013-12-00 4 4 421 432 CEAS Aeronautical Journal Biodynamic feedthrough (BDFT) occurs when vehicle accelerations feed through the pilot’s body and cause involuntary motions of limbs, resulting in involuntary control inputs. BDFT can severely reduce ride comfort, control accuracy and, above all, safety during the operation of rotorcraft. Furthermore, BDFT can cause and sustain rotorcraft-pilot couplings. Despite many different studies conducted in past decades, both within and outside of the rotorcraft community, BDFT is still a poorly understood phenomenon. The complexities involved in BDFT have kept researchers and manufacturers in the rotorcraft domain from developing robust ways of dealing with its effects. A practical BDFT pilot model, describing the amount of involuntary control input as a function of accelerations, could pave the way to accounting for adverse BDFT effects. In the current paper, such a model is proposed (a toy sketch of the underlying idea follows below). Its structure is based on the model proposed by Mayo (15th European Rotorcraft Forum, Amsterdam, pp. 81-001–81-012, 1989), and its accuracy and usability are improved by incorporating insights from recently obtained experimental data. An evaluation of the model performance shows that the model describes the measured data well and that it provides a considerable improvement over the original Mayo model. Furthermore, the results indicate that the neuromuscular dynamics have an important influence on the BDFT model parameters. no notspecified http://www.kyb.tuebingen.mpg.de/ published 11 A practical biodynamic feedthrough model for helicopters 15017 15422 WallravenH2013 3 C Herdtweck C Wallraven 2013-12-00 12 8 1 14 PLoS ONE We present three experiments on horizon estimation. In Experiment 1 we verify the human ability to estimate the horizon in static images from only visual input. Estimates are given without time constraints, with emphasis on precision. The resulting estimates are used as a baseline to evaluate horizon estimates from early visual processes. In Experiment 2, stimuli are presented for only a few hundred milliseconds and then masked to purge visual short-term memory, forcing estimates to rely on early processes only. The high agreement between estimates and the lack of a training effect show that enough information about viewpoint is extracted in the first few hundred milliseconds to make accurate horizon estimation possible.
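A toy illustration of the BDFT modeling idea referenced above: involuntary control input modeled as vehicle acceleration filtered through second-order, neuromuscular-like dynamics. Parameter values are invented and do not reproduce the Mayo-based model of the paper:

import numpy as np
from scipy.signal import TransferFunction, lsim

wn, zeta, gain = 15.0, 0.3, 0.02          # natural freq (rad/s), damping, gain
bdft = TransferFunction([gain * wn**2], [1.0, 2*zeta*wn, wn**2])

t = np.linspace(0.0, 5.0, 2000)
accel = np.sin(2*np.pi*1.2*t)             # 1.2 Hz vertical acceleration (m/s^2)
_, stick, _ = lsim(bdft, U=accel, T=t)    # involuntary stick deflection over time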
In Experiment 3 we investigate several computational strategies for estimating the horizon and compare human with machine “behavior” for different image manipulations and image scene types (a naive machine baseline is sketched below). no notspecified http://www.kyb.tuebingen.mpg.de/ published 13 Estimation of the Horizon in Photographed Outdoor Scenes by Human and Machine 15017 15422 DropPDVM2012 3 FM Drop DM Pool HJ Damveld MM van Paassen M Mulder 2013-12-00 6 43 1936 1949 IEEE Transactions on Cybernetics In the manual control of a dynamic system, the human controller (HC) often follows a visible and predictable reference path. Compared with a purely feedback control strategy, performance can be improved by making use of this knowledge of the reference. The operator could effectively introduce feedforward control in conjunction with a feedback path to compensate for errors, as hypothesized in the literature. However, feedforward behavior has never been identified from experimental data, nor have the hypothesized models been validated. This paper investigates human control behavior in pursuit tracking of a predictable reference signal while being perturbed by a quasi-random multisine disturbance signal. An experiment was conducted in which the relative strengths of the target and disturbance signals were systematically varied. The anticipated changes in control behavior were studied by means of an ARX model analysis and by fitting three parametric HC models: two different feedback models and a combined feedforward and feedback model. The ARX analysis shows that the experiment participants employed control action on both the error and the target signal. The control action on the target was similar to the inverse of the system dynamics. Model fits show that this behavior can be modeled best by the combined feedforward and feedback model (a toy sketch of this control structure follows below). no notspecified http://www.kyb.tuebingen.mpg.de/ published 13 Identification of the Feedforward Component of Manual Control in Tasks with Predictable Target Signals 15017 15422 MichelRBHV2013 3 C Michel B Rossion I Bülthoff WG Hayward QC Vuong 2013-12-00 9-10 21 1202 1223 Visual Cognition Faces from another race are generally more difficult to recognize than faces from one's own race. However, faces provide multiple cues for recognition and it remains unknown what the relative contributions of these cues are to this “other-race effect”. In the current study, we used three-dimensional laser-scanned head models, which allowed us to independently manipulate two prominent cues for face recognition: the facial shape morphology and the facial surface properties (texture and colour). In Experiment 1, Asian and Caucasian participants implicitly learned a set of Asian and Caucasian faces that had both shape and surface cues to facial identity. Their recognition of these encoded faces was then tested in an old/new recognition task. For these face stimuli, we found a robust other-race effect: Both groups were more accurate at recognizing own-race than other-race faces. Having established the other-race effect, in Experiment 2 we provided only shape cues for recognition and in Experiment 3 we provided only surface cues for recognition. Caucasian participants continued to show the other-race effect when only shape information was available, whereas Asian participants showed no effect. When only surface information was available, there was a weak pattern for the other-race effect in Asians. Performance was poor in this latter experiment, so this pattern needs to be interpreted with caution.
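A deliberately naive machine baseline for horizon estimation, of the general kind compared in the horizon-estimation abstract above; the paper's actual strategies are not specified in the abstract, so this is only an assumed illustration:

import numpy as np

def naive_horizon_row(image):
    # Return the row with the largest mean absolute vertical luminance
    # gradient, a crude proxy for the sky/ground transition. `image` is a
    # hypothetical 2D grayscale array.
    grad = np.abs(np.diff(image.astype(float), axis=0)).mean(axis=1)
    return int(np.argmax(grad))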
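And a toy simulation of the combined feedforward/feedback structure referenced in the Drop et al. abstract above: the operator responds to the tracking error (feedback) and to the predictable target through an approximate inverse of the controlled dynamics (feedforward). The single-integrator plant and the gain are invented for illustration:

import numpy as np

dt = 0.01
t = np.arange(0.0, 10.0, dt)
target, dtarget = np.sin(t), np.cos(t)  # predictable target and its derivative
k_fb, y, ys = 3.0, 0.0, []
for r, rdot in zip(target, dtarget):
    error = r - y
    # Feedforward = inverse of an integrator plant, i.e. the target's
    # derivative; feedback acts on the remaining error.
    u = rdot + k_fb * error
    y += dt * u                         # single-integrator controlled element
    ys.append(y)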
Overall, these findings suggest that Asian and Caucasian participants rely differently on shape and surface cues to recognize own-race faces, and that they continue to use the same cues for other-race faces, which may be suboptimal for these faces. no notspecified http://www.kyb.tuebingen.mpg.de/ published 21 The contribution of shape and surface information in the other-race face effect 15017 15422 Dobrickid2013 3 M Dobricki S de la Rosa 2013-12-00 12 8 1 9 PLoS ONE Previous research suggests that bodily self-identification, bodily self-localization, agency, and the sense of being present in space are critical aspects of conscious full-body self-perception. However, none of the existing studies have investigated the relationship of these aspects to each other, i.e., whether they can be identified as distinguishable components of the structure of conscious full-body self-perception. Therefore, the objective of the present investigation is to elucidate the structure of conscious full-body self-perception. We performed two studies in which we stroked the back of healthy individuals for three minutes while they watched the back of a distant virtual body being synchronously stroked with a virtual stick. After visuo-tactile stimulation, participants assessed changes in their bodily self-perception with a custom-made self-report questionnaire. In the first study, we investigated the structure of conscious full-body self-perception by analyzing the responses to the questionnaire by means of multidimensional scaling combined with cluster analysis. In the second study, we then extended the questionnaire and validated the stability of the structure of conscious full-body self-perception found in the first study within a larger sample of individuals by performing a principal components analysis of the questionnaire responses (see the sketch below). The results of the two studies converge in suggesting that the structure of conscious full-body self-perception consists of the following three distinct components: bodily self-identification, space-related self-perception (spatial presence), and agency. no notspecified http://www.kyb.tuebingen.mpg.de/ published 8 The structure of conscious bodily self-perception during full-body illusions 15017 15422 HeydrichDAHBMB2013_2 3 L Heydrich TJ Dodds JE Aspell B Herbelin HH Bülthoff BJ Mohler O Blanke 2013-12-00 946 4 1 15 Frontiers in Psychology In neurology and psychiatry the detailed study of illusory own body perceptions has suggested close links between bodily processing and self-consciousness. One such illusory own body perception is heautoscopy, in which patients have the sensation of being reduplicated and of existing at two or even more locations. In previous experiments using a video head-mounted display, self-location and self-identification were manipulated by applying conflicting visuo-tactile information. Yet the experienced singularity of the self was not affected, i.e., participants did not experience having multiple bodies or selves. In two experiments presented in this paper, we investigated self-location and self-identification while participants saw two virtual bodies (video-generated in study 1 and 3D computer-generated in study 2) that were stroked either synchronously or asynchronously with their own body. In both experiments, we report that self-identification with two virtual bodies was stronger during synchronous stroking.
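A minimal sketch of the principal-components step referenced above; the questionnaire data here are random placeholders, and the choice of three components mirrors, but does not reproduce, the reported structure:

import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: 80 participants x 15 Likert-type questionnaire items.
rng = np.random.default_rng(1)
ratings = rng.integers(1, 8, size=(80, 15)).astype(float)

# PCA of the item responses; component loadings would then be inspected to
# interpret components such as self-identification, spatial presence, agency.
pca = PCA(n_components=3)
scores = pca.fit_transform(ratings)
print(pca.explained_variance_ratio_)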
Furthermore, in the video-generated setup with synchronous stroking, participants reported a greater feeling of having multiple bodies than in the control conditions. In study 1, but not in study 2, we report that self-location, measured by anterior-posterior drift, was significantly shifted towards the two bodies in the synchronous condition only. Self-identification with two bodies, the sensation of having multiple bodies, and the changes in self-location show that the experienced singularity of the self can be studied experimentally. We discuss our data with respect to ownership for supernumerary hands and heautoscopy. We finally compare the effects of the video-based and 3D computer-generated head-mounted display technologies and discuss the possible benefits of using either technology to induce changes in illusory self-identification with a virtual body. no notspecified http://www.kyb.tuebingen.mpg.de/ published 14 Visual capture and the experience of having two bodies: evidence from two different virtual reality techniques 15017 15422 deWinkelSBBGW2013 3 KN de Winkel F Soyka M Barnett-Cowan HH Bülthoff EL Groen PJ Werkhoven 2013-11-00 2 231 209 218