Fademrecht, L. (2017). Logos Verlag, Berlin, Germany.
Humans are social beings who interact with others in their surroundings. In a public space, for example on a train platform, one can observe the wide array of social actions humans express in their daily lives: people hugging each other, waving to one another, or shaking hands. A large part of our social behavior consists of carrying out such social actions, and recognizing those actions facilitates our interactions with other people. Action recognition has therefore become an increasingly popular research topic over the years. Actions do not only appear at our point of fixation but also in the peripheral visual field. This Ph.D. thesis aims at understanding action recognition in human central and peripheral vision.
To this end, action recognition processes were investigated under more naturalistic conditions than in previous work. This thesis extends knowledge about action recognition processes to more realistic scenarios and the far visual periphery. In four studies, life-size action stimuli were used (I) to examine the action categorization abilities of central and peripheral vision, (II) to investigate the viewpoint dependency of peripheral action representations, (III) to behaviorally measure the perceptive field sizes of action-sensitive channels, and (IV) to investigate the influence of additional actors in the visual scene on action recognition processes. The main results of the different studies can be summarized as follows.
Study I showed high categorization performance for social actions throughout the visual field, with a nonlinear performance decline toward the visual periphery. Study II revealed a viewpoint dependence of action recognition only in the far visual periphery. Study III measured large perceptive fields for action recognition that decrease in size toward the periphery. Study IV showed no influence of a surrounding crowd of people on the recognition of actions in central vision or the visual periphery. In sum, this thesis provides evidence that the abilities of peripheral vision have been underestimated and that peripheral vision might play a more important role in daily life than merely triggering gaze saccades to events in our environment.
Title: Action Recognition in the Visual Periphery.

Saulton, A. (2017). Logos Verlag, Berlin, Germany.
Accurate information about body structure and posture is fundamental for effective control of our actions. It is often assumed that healthy adults have accurate representations of their body. Although people's ability to visually recognize their own body size and shape is relatively good, the implicit spatial representation of the body is extremely distorted when measured in proprioceptive localization tasks. The aim of this thesis is to understand the nature of the spatial distortions of the body model measured in those localization tasks.
We investigate, in particular, the perceptual-cognitive components contributing to distortions of the implicit representation of the human hand and compare those distortions with the ones found for objects in similar tasks.
Title: Understanding the nature of the body model underlying position sense.

Burch, M., Chuang, L., Fischer, B., Schmidt, A., & Weiskopf, D. (2016).

Fademrecht, L., Bülthoff, I., & de la Rosa, S. (2017). 135, 10–15.
Recognizing the actions of others across the whole visual field is required for social interactions. In a previous study, we showed that recognition is very good even when life-size avatars facing the observer carried out actions (e.g., waving) very far away from the fovea (Fademrecht, Bülthoff, & de la Rosa, 2016). We explored whether this remarkable performance was owed to the life-size avatars facing the observer, which, according to some social cognitive theories (e.g., Schilbach et al., 2013), could potentially activate different social perceptual processes than profile-facing avatars. Participants therefore viewed a life-size stick-figure avatar that carried out motion-captured social actions (greeting actions: handshake, hugging, waving; attacking actions: slapping, punching, and kicking) in frontal and profile view. Participants' task was to identify the actions as 'greeting' or 'attack', or to assess the emotional valence of the actions. While recognition accuracy for frontal and profile views did not differ, reaction times were in general significantly faster for profile views (i.e., the moving avatar was seen in profile) than for frontal views (i.e., the action was directed toward the observer). Our results suggest that the remarkably good action recognition performance in the visual periphery was not owed to a more socially engaging front-facing view.
Although action recognition seems to depend on viewpoint, it remains remarkably accurate even far into the visual periphery.
Title: Action recognition is viewpoint-dependent in the visual periphery.

Spetter, M. S., Malekshahi, R., Birbaumer, N., Lührs, M., van der Veer, A. H., Scheffler, K., Spuckti, S., Preissl, H., Veit, R., & Hallschmid, M. (2017). 112, 188–195.
Obese subjects who achieve weight loss show increased functional connectivity between the dorsolateral prefrontal cortex (dlPFC) and the ventromedial prefrontal cortex (vmPFC), key areas of executive control and reward processing. We investigated the potential of real-time functional magnetic resonance imaging (rt-fMRI) neurofeedback training to achieve healthier food choices by enhancing self-control of the interplay between these brain areas. We trained eight male individuals with overweight or obesity (age: 31.8 ± 4.4 years, BMI: 29.4 ± 1.4 kg/m2) to up-regulate functional connectivity between the dlPFC and the vmPFC by means of a four-day rt-fMRI neurofeedback protocol including, on each day, three training runs comprised of six up-regulation and six passive viewing trials. During the up-regulation runs of the four training days, participants successfully learned to increase functional connectivity between dlPFC and vmPFC. In addition, a trend towards fewer high-calorie food choices emerged from before to after training, which, however, was associated with a trend towards increased covertly assessed snack intake. The findings of this proof-of-concept study indicate that overweight and obese participants can increase functional connectivity between brain areas that orchestrate the top-down control of appetite for high-calorie foods.
Neurofeedback training might therefore be a useful tool for achieving and maintaining weight loss.
Title: Volitional regulation of brain responses to food stimuli in overweight and obese subjects: a real-time fMRI feedback study.

Saulton, A., Bülthoff, H. H., de la Rosa, S., & Dodds, T. J. (2017).
Cultural differences in spatial perception have been little investigated, which gives rise to the impression that spatial cognitive processes might be universal. Contrary to this idea, we demonstrate cultural differences in the spatial volume perception of computer-generated rooms between Germans and South Koreans. We used a psychophysical task in which participants had to judge whether a rectangular room was larger or smaller than a square room of reference. We systematically varied the room's rectangularity (depth-to-width aspect ratio) and the viewpoint (middle of the short wall vs. long wall) from which the room was viewed. South Koreans were significantly less biased by room rectangularity and viewpoint than their German counterparts. These results are in line with previous notions of general cognitive processing strategies being more context dependent in East Asian societies than in Western ones. We point to the necessity of considering culturally specific cognitive processing strategies in visual spatial cognition research.
Title: Cultural differences in room size perception.

Nestmeyer, T., Robuffo Giordano, P., Bülthoff, H. H., & Franchi, A. (2017). 989–1011.
This paper presents a novel decentralized control strategy for a multi-robot system that enables parallel multi-target exploration while ensuring a time-varying connected topology in cluttered 3D environments.
Flexible continuous connectivity is guaranteed by building upon a recent connectivity maintenance method in which limited range, line-of-sight visibility, and collision avoidance are taken into account at the same time. Completeness of the decentralized multi-target exploration algorithm is guaranteed by dynamically assigning the robots different motion behaviors during the exploration task. One major group is subject to a suitable downscaling of the main traveling force based on the traveling efficiency of the current leader and the direction alignment between the traveling and connectivity forces. This ensures that the leader always reaches its current target and, on a larger time horizon, that the whole team realizes the overall task in finite time. Extensive Monte Carlo simulations with a group of several quadrotor UAVs show the scalability and effectiveness of the proposed method, and experiments validate its practicability.
Title: Decentralized simultaneous multi-target exploration using a connected network of multiple robots.

Nooij, S. A. E., Pretto, P., Oberfeld, D., Hecht, H., & Bülthoff, H. H. (2017).
This study investigated the role of vection (i.e., a visually induced sense of self-motion), optokinetic nystagmus (OKN), and inadvertent head movements in visually induced motion sickness (VIMS), evoked by yaw rotation of the visual surround. These three elements have all been proposed as contributing factors in VIMS, as they can be linked to different motion sickness theories. However, a full understanding of the role of each factor is still lacking because independent manipulation has proven difficult in the past. We adopted an integrative approach to the problem by obtaining measures of potentially relevant parameters in four experimental conditions and subsequently combining them in a linear mixed regression model. To that end, participants were exposed to visual yaw rotation in four separate sessions.
Using a full factorial design, OKN was manipulated with a fixation target (present/absent), and vection strength by introducing a conflict in the motion direction of the central and peripheral field of view (present/absent). In all conditions, head movements were minimized as much as possible. Measured parameters included vection strength, vection variability, OKN slow phase velocity, OKN frequency, the number of inadvertent head movements, and inadvertent head tilt. Results show that VIMS increases with vection strength, but that this relation varies among participants (R2 = 0.48). Regression parameters for vection variability and for head and eye movement parameters were not significant. These results may seem to be in line with the Sensory Conflict theory of motion sickness, but we argue that a more detailed definition of the exact nature of the conflict is required to fully appreciate the relationship between vection and VIMS.
Title: Vection is the main contributor to motion sickness induced by visual yaw rotation: Implications for conflict and eye movement theories.

Brooks, J., & Thaler, A. (2017). Epub ahead of print.
A reliable mechanism to predict the heaviness of an object is important for manipulating an object under environmental uncertainty. Recently, Cashaback et al. (Journal of Neurophysiology 117: 260-274, 2017) showed that for object lifting, the sensorimotor system uses a strategy that minimizes prediction error when the object's weight is uncertain. Previous research demonstrates that visually guided reaching is similarly optimised.
Although this suggests a unified strategy of the sensorimotor system for object manipulation, the selected strategy appears to be task dependent and subject to change in response to the degree of environmental uncertainty.
Title: The sensorimotor system minimizes prediction error for object lifting when the object's weight is uncertain.

Hong, A., Lee, D. G., Bülthoff, H. H., & Son, H. I. (2017). 67–80.
Better situational awareness helps in understanding remote environments and achieving better performance in the teleoperation of multiple mobile robots (e.g., a group of unmanned aerial vehicles). Visual and force feedback are the most common ways of perceiving the environment accurately and effectively; however, accurate and adequate sensors for global localization are impractical in outdoor environments. Lack of this information hinders situational awareness and operating performance. In this paper, a visual and force feedback method is proposed for enhancing the situational awareness of human operators in outdoor multi-robot teleoperation. Using only the robots' local information, a global view is fabricated from individual local views, and force feedback is determined by the velocity of individual units. The proposed feedback method is evaluated via two psychophysical experiments: maneuvering and searching tests using a human/hardware-in-the-loop system with simulated environments. In the tests, several quantitative measures are also proposed to assess the human operator's maneuverability and situational awareness.
Results of the two experiments show that the proposed multimodal feedback enhances only the situational awareness of the operator.
Title: Multimodal feedback for teleoperation of multiple mobile robots in an outdoor environment.

Zhao, M., & Bülthoff, I. (2017). Epub ahead of print.
Humans' face ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces, which move most of the time. However, how facial movements affect one core aspect of face ability, holistic face processing, remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based face processing by manipulating the presence of facial motion during study and at test in a composite face task. The results showed that rigidly moving faces were processed as holistically as static faces (Experiment 1). Holistic processing of moving faces persisted whether facial motion was presented during study, at test, or both (Experiment 2). Moreover, when faces were inverted to eliminate the contributions of both an upright face template and observers' expertise with upright faces, rigid facial motion facilitated holistic face processing (Experiment 3). Thus, holistic processing represents a general principle of face perception that applies to both static and dynamic faces, rather than being limited to static faces.
These results support an emerging view that both perceiver-based and face-based factors contribute to holistic face processing, and they offer new insights into what underlies holistic face processing, how the different sources of information supporting holistic face processing interact with each other, and why facial motion may affect face recognition and holistic face processing differently.
Title: Holistic Processing of Static and Moving Faces.

Nesti, A., de Winkel, K., & Bülthoff, H. H. (2017).
While moving through the environment, our central nervous system accumulates sensory information over time to provide an estimate of our self-motion, allowing us to complete crucial tasks such as maintaining balance. However, little is known about how the duration of the motion stimuli influences our performance in a self-motion discrimination task. Here we study the human ability to discriminate intensities of sinusoidal (0.5 Hz) self-rotations around the vertical axis (yaw) for four different stimulus durations (1, 2, 3 and 5 s) in darkness. In a typical trial, participants experienced two consecutive rotations of equal duration and different peak amplitude, and reported the one perceived as stronger. For each stimulus duration, we determined the smallest detectable change in stimulus intensity (differential threshold) for a reference velocity of 15 deg/s. Results indicate that differential thresholds decrease with stimulus duration and asymptotically converge to a constant, positive value. This suggests that the central nervous system accumulates sensory information on self-motion over time, resulting in improved discrimination performance.
The observed trends in differential thresholds are consistent with predictions based on a drift diffusion model with leaky integration of sensory evidence.
Title: Accumulation of Inertial Sensory Information in the Perception of Whole Body Yaw Rotation.

de Winkel, K. N., Katliar, M., & Bülthoff, H. H. (2017).
A large body of research shows that the Central Nervous System (CNS) integrates multisensory information. However, this strategy should only apply to multisensory signals that have a common cause; independent signals should be segregated. Causal Inference (CI) models account for this notion. Surprisingly, previous findings suggested that visual and inertial cues on the heading of self-motion are integrated regardless of discrepancy. We hypothesized that CI does occur, but that characteristics of the motion profiles affect multisensory processing. Participants estimated the heading of visual-inertial motion stimuli with several different motion profiles and a range of intersensory discrepancies. The results support the hypothesis that judgments of signal causality are included in the heading estimation process. Moreover, the data suggest a decreasing tolerance for discrepancies and an increasing reliance on visual cues for longer-duration motions.
Title: Causal Inference in Multisensory Heading Estimation.

Saulton, A., & de la Rosa, S. (2017).
Title: Conceptual biases explain distortion differences between hand and objects in localization tasks.

Katliar, M., Fischer, J., Frison, G., Diehl, M., Teufel, H., & Bülthoff, H. H. (2017). Toulouse, France.
In this paper we present the implementation of a model-predictive controller (MPC) for real-time control of a cable-robot-based motion simulator.
The controller computes control inputs such that desired acceleration and rotational velocity references at a defined point in the simulator's cabin are tracked while satisfying constraints due to the robot's workspace and allowed cable forces. The reference tracking performance and computation time of the algorithm are investigated in computer simulations. Furthermore, we investigate the maximum possible improvement in motion simulation fidelity that could potentially be achieved by employing a reference prediction algorithm.
Title: Nonlinear Model Predictive Control of a Cable-Robot-Based Motion Simulator.

Tognon, M., Yüksel, B., Buondonno, G., & Franchi, A. (2017). Singapore.
We present a control methodology for underactuated aerial manipulators that is both easy to implement on real systems and able to achieve highly dynamic behaviours. The method is composed of two parts: a nominal input/state generator that takes into account the full-body nonlinear and coupled dynamics of the system, and a decentralized feedback controller acting on the actuated degrees of freedom that confers the needed robustness to the closed-loop system. We show how to apply the method to Protocentric Aerial Manipulators (PAM) by first using their differential flatness property on the vertical 2D plane in order to generate dynamical input/state trajectories, then statically extending the 2D structure to 3D, and finally closing the loop with a decentralized controller having the dual task of both ensuring the preservation of the proper static 3D immersion and tracking the dynamic trajectory on the vertical plane. We demonstrate that the proposed controller is able to precisely track dynamic trajectories when implemented on standard hardware composed of a quadrotor and a robotic arm with servo-controlled joints (even if no torque control is available).
Comparative experiments clearly show the benefit of using the nominal input/state generator, and also that static gravity compensation alone might, surprisingly, perform worse in dynamic maneuvers than no compensation at all. We complement the experiments with additional realistic simulations testing the applicability of the proposed method to slightly non-protocentric aerial manipulators.
Title: Dynamic Decentralized Control for Protocentric Aerial Manipulators.

Karolus, J., Wozniak, P. W., Chuang, L. L., & Schmidt, A. (2017). Denver, CO, USA. 2998–3010.
We are often confronted with information interfaces designed in an unfamiliar language, especially in an increasingly globalized world, where the language barrier inhibits interaction with the system. In our work, we explore the design space for building interfaces that can detect the user's language proficiency. Specifically, we look at how a user's gaze properties can be used to detect whether the interface is presented in a language they understand. We report a study (N=21) in which participants were presented with questions in multiple languages whilst their gaze behavior was recorded. We identified fixation and blink durations to be effective indicators of the participants' language proficiencies. Based on these findings, we propose a classification scheme and technical guidelines for enabling language proficiency awareness on information displays using gaze data.
Title: Robust Gaze Features for Enabling Language Proficiency Awareness.

Flad, N., Ditz, J. C., Schmidt, A., Bülthoff, H. H., & Chuang, L. L. (2017). Baltimore, MD, USA.
Unrestricted gaze tracking that allows for head and body movements can enable us to understand interactive gaze behavior with large-scale visualizations.
Approaches that support this, by simultaneously recording eye and user movements, can be based on either geometric or data-driven regression models. A data-driven approach can be implemented more flexibly, but its performance can suffer with poor-quality training data. In this paper, we introduce a pre-processing procedure to remove training data from periods when the gaze is not fixating the presented target stimuli. Our procedure is based on a velocity-based filter for rapid eye movements (i.e., saccades). Our results show that this additional procedure improved the accuracy of our unrestricted gaze-tracking model by as much as 56%. Future improvements to data-driven approaches for unrestricted gaze tracking are proposed, in order to allow for more complex dynamic visualizations.
Title: Data-driven approaches to unrestricted gaze-tracking benefit from saccade filtering.

Allsop, J., Gray, R., Bülthoff, H. H., & Chuang, L. (2017). Baltimore, MD, USA.
Previous research has rarely examined the combined influence of anxiety and cognitive load on gaze behavior and performance whilst undertaking complex perceptual-motor tasks. In the current study, participants performed an aviation instrument landing task in neutral and anxiety conditions, while performing a low or high cognitive load auditory n-back task. Both self-reported anxiety and heart rate increased from neutral conditions, indicating that anxiety was successfully manipulated. Response accuracy and reaction time for the auditory task indicated that cognitive load was also successfully manipulated. Cognitive load negatively impacted flight performance and the frequency of gaze transitions between areas of interest. Performance was maintained in anxious conditions, with a concomitant decrease in n-back reaction time suggesting that this was due to an increase in mental effort.
Analyses of individual responses to the anxiety manipulation revealed that changes in anxiety levels from neutral to anxiety conditions were positively correlated with changes in visual scanning entropy, which is a measure of the randomness of gaze behavior, but only when cognitive load was high. This finding lends support to an interactive effect of cognitive anxiety and cognitive load on attentional control.
Title: Effects of Anxiety and Cognitive Load on Instrument Scanning Behavior in a Flight Simulation.

Gerboni, C. A., Geluardi, S., Venrooij, J., Joos, A., Fichter, W., & Bülthoff, H. H. (2017). Grapevine, TX, USA.
Title: Development of model-following control laws for helicopters to achieve personal aerial vehicle's handling qualities.

D'Intino, G., Olivari, M., Geluardi, S., Venrooij, J., Pollini, L., & Bülthoff, H. H. (2017). Grapevine, TX, USA.
Title: Experimental evaluation of haptic support systems for learning a 2-DoF tracking task.

Flad, N., Fomina, T., Bülthoff, H. H., & Chuang, L. L. (2017). Chicago, IL, USA. 151–167.
Eye movements are typically measured with video cameras and image recognition algorithms. Unfortunately, these systems are susceptible to changes in illumination during measurements. Electrooculography (EOG) is another approach for measuring eye movements that does not suffer from the same weakness. Here, we introduce and compare two methods that allow us to extract the dwells of our participants from EOG signals under presentation conditions that are too difficult for optical eye tracking. The first method is unsupervised and utilizes density-based clustering. The second method combines the optical eye tracker's methods for determining fixations and saccades with unsupervised clustering.
Our results show that EOG can serve as a sufficiently precise and robust substitute for optical eye tracking, especially in studies with changing lighting conditions. Moreover, EOG can be recorded alongside electroencephalography (EEG) without additional effort.
Title: Unsupervised clustering of EOG as a viable substitute for optical eye-tracking.

Löcken, A., Borojeni, S. S., Müller, H., Gable, T. M., Triberti, S., Diels, C., Glatz, C., Alvarez, I., Chuang, L., & Boll, S. (2017). Springer International Publishing, Cham, Switzerland. 325–348.
Informing a driver of a vehicle's changing state and environment is a major challenge that grows with the introduction of in-vehicle assistant and infotainment systems. Even in the age of automation, the human will need to be in the loop for monitoring, taking over control, or making decisions. In these cases, poorly designed systems could impose needless attentional demands on the driver, taking attention away from the primary driving task. Existing systems offer simple and often unspecific alerts, leaving the human with the demanding task of identifying, localizing, and understanding the problem. Ideally, such systems should communicate information in a way that conveys its relevance and urgency. Specifically, information useful for promoting driver safety should be conveyed as effective calls for action, while information not pertaining to safety (and therefore less important) should be conveyed in ways that do not jeopardize driver attention. Adaptive ambient displays and peripheral interactions have the potential to provide superior solutions and could serve to unobtrusively present information, to shift the driver's attention according to changing task demands, or to enable a driver to react without losing focus on the primary task.
In order to build a common understanding across researchers and practitioners from different fields, we held a "Workshop on Adaptive Ambient In-Vehicle Displays and Interactions" at the AutomotiveUI '15 conference. In this chapter, we discuss the outcomes of this workshop, provide examples of possible applications now or in the future, and conclude with challenges in developing or using adaptive ambient interactions.
Title: Towards Adaptive Ambient In-Vehicle Displays and Interactions: Insights and Design Guidelines from the 2015 AutomotiveUI Dedicated Workshop.

Tognon, M., Yüksel, B., Buondonno, G., & Franchi, A. (2017).

Kim, J., Park, Y., Yeon, J., Kim, J., Park, J.-Y., & Kim, S.-P. (2017). Vancouver, BC, Canada.
Introduction:
Tactile sensation is essential for humans to manipulate objects by hand. During object manipulation, many different physical properties of an object are sensed and processed by the human somatosensory system, supporting exquisite perceptual sensitivities. Tactile sensation of different physical properties can be described along several tactile perceptual dimensions, including roughness, hardness, stickiness, and warmth. To date, a number of human neuroimaging studies have unveiled neural mechanisms underlying roughness and warmth perception. Yet relatively little is known about how the human brain subserves the perception of tactile hardness. Previous studies have suggested that slowly adapting type-1 (SA1) afferents are primarily responsible for perceiving hardness from the surface of an object and that Brodmann areas (BA) 3b and 1 may contribute to perceiving hardness. However, it remains elusive how different levels of hardness are represented in the human brain during the dexterous manipulation of an object. Therefore, this study aims to investigate neural responses to tactile stimuli with the same shape and surface texture but different levels of hardness when people grip the object with their hand. Functional magnetic resonance imaging (fMRI) is used to identify brain regions related to tactile hardness.
Methods:
Twelve right-handed subjects (8 female, mean age 23.1 years) participated in the study. Experimental protocols were approved by the ethics committee of the Ulsan National Institute of Science and Technology (UNISTIRB-15-16-A). Tactile stimuli with the same shape (oval) were prepared and grouped into four sets according to their hardness levels (levels 1 to 4). Participants first performed a behavioral task in which they were given a pair of stimuli with eyes closed and asked to report the degree of difference in hardness between them. Afterward, participants performed the fMRI experimental task, in which they repetitively gripped and released a given object (used in the behavioral task) for fifteen seconds followed by a nine-second rest. There were also trials in which participants executed the same grip-and-release motion without objects as a control task. Functional images (T2*-weighted gradient EPI, covering the whole depth of the somatosensory area, TR = 3,000 ms, voxel size = 2.0 × 2.0 × 2.0 mm3) were obtained during the fMRI task using a Siemens 3T scanner (Magnetom TrioTim). The functional image analysis was performed using the general linear model (GLM) in SPM8 with a canonical hemodynamic response function to estimate blood-oxygen-level-dependent (BOLD) responses to each stimulus.
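The GLM analysis described above can be sketched in a few lines. The following is a minimal illustration, not the authors' SPM8 pipeline: it builds a boxcar regressor for 15 s grip blocks followed by 9 s rests (TR = 3 s, as in the protocol), convolves it with a canonical double-gamma hemodynamic response function (standard textbook parameters, not taken from the study), and estimates a synthetic voxel's BOLD amplitude by ordinary least squares. The run length and signal values are hypothetical.

```python
import math
import numpy as np

TR = 3.0           # repetition time in seconds, as in the protocol
n_scans = 80       # hypothetical run length

# Boxcar: 15 s task (5 TRs) followed by 9 s rest (3 TRs), repeated
block = np.tile(np.concatenate([np.ones(5), np.zeros(3)]), 10)
boxcar = block[:n_scans]

def gamma_pdf(x, a):
    """Gamma density with shape a and unit scale."""
    return x ** (a - 1) * np.exp(-x) / math.gamma(a)

# Canonical double-gamma HRF (peak ~6 s, undershoot ~16 s), sampled at TR
hrf_t = np.arange(0.0, 30.0, TR)
hrf = gamma_pdf(hrf_t, 6) - gamma_pdf(hrf_t, 16) / 6.0
hrf /= hrf.sum()

# Task regressor: boxcar convolved with the HRF, truncated to run length
regressor = np.convolve(boxcar, hrf)[:n_scans]

# Design matrix with intercept; fit one voxel's time series by OLS
X = np.column_stack([regressor, np.ones(n_scans)])
rng = np.random.default_rng(0)
voxel = 2.5 * regressor + 100.0 + rng.normal(0, 0.5, n_scans)  # synthetic BOLD
beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
print(beta)  # beta[0] estimates the task-related amplitude (true value 2.5 here)
```

In a whole-brain analysis the same fit is repeated per voxel, and the beta estimates (or contrasts of them) are then tested statistically, as in the individual and group analyses reported below.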
Results:
The analysis of the behavioral data showed that participants could correctly detect differences in hardness levels among the stimuli. The GLM analysis for individuals revealed activations in the contralateral postcentral gyrus, modulated by the different levels of hardness, in most participants (p<0.001, uncorrected). A random-effects group analysis of the fMRI data also revealed a cluster in the Rolandic operculum activated by the perception of tactile hardness (p<0.001, uncorrected). In addition, the cluster size and maximum activation peak increased as the hardness level increased.
Our study demonstrated that brain regions over the postcentral gyrus (S1) and Rolandic operculum might be related to the perception of tactile hardness. We also observed that the degree of activation in these regions, reflected by the size of the activated area (cluster size) and the level of activation (maximum peak), was proportional to the level of tactile hardness. Our results suggest that neural assemblies in the contralateral S1 and Rolandic operculum may play a role in sensing tactile hardness during dexterous object manipulation.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Investigation of cortical activity related to perception of tactile hardness1501715422DelongGARCWN20177PDelongAGianiMAllerTRoheVConradMWatanabeUNoppeneyBirmingham, UK2017-04-1027Information integration across the senses is fundamental for effective interactions with our environment. A controversial question is whether signals from different senses can interact in the absence of awareness. Models of global workspace would predict that unaware signals are confined to processing in low level sensory areas and thereby prevented from interacting with signals from
other senses in higher order association areas. Yet, accumulating evidence suggests that multisensory interactions can emerge – at least to some extent – already at the primary cortical level. These low level interactions may thus potentially mediate interactions
between sensory signals in the absence of awareness.
Combining the spatial ventriloquist illusion and dynamic continuous flash suppression (dCFS), we investigated whether visual signals that observers did not consciously perceive can influence spatial perception of sounds. Importantly, dCFS obliterated visual awareness only on a fraction of trials, allowing us to compare spatial ventriloquism for physically identical flashes that were judged visible or invisible.
Our results show a stronger ventriloquist effect for visible than invisible flashes. Yet, a robust ventriloquist effect also emerged for flashes judged invisible. This ventriloquist effect for invisible flashes was even preserved in participants who were not better than chance when locating flashes they judged ‘invisible’.
Collectively, our findings demonstrate that physically identical visual signals influence the perceived location of concurrent sounds depending on their subjective visibility. Even visual signals that participants are not aware of can alter sound perception. These results suggest that audiovisual signals are integrated into spatial representations to some extent in the absence of perceptual
awareness.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-27The invisible ventriloquist: can unaware flashes alter sound perception?15017154221501715421delaRosa2017_27Sde la RosaWien, Austria2017-03-24We report a novel high level adaptation aftereffect: the prolonged viewing of a fist-bump action causes participants to perceive an ambiguous morphed action as a punch. We show evidence that this visual adaptation effect is the result of a change in perception rather than a mere response bias.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Action adaptation: A new visual illusion that transforms a hug into a push1501715422delaRosa20177Sde la RosaWien, Austria2017-03-24Psychological experiments often require the recruitment of participants and the coordination of experimental equipment that is shared between experimenters (e.g. fMRI scanner). Here we present a free online tool that allows the rapid recruitment of participants and manages the booking of required equipment.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Banto: An online participant recruitment and equipment management tool1501715422KeilmanndCM201710FKeilmannSde la RosaUCressTMeilingerFademrechtdB201710LFademrechtSde la RosaHHBülthoffSchultzKPDBFBGB201710JSchultzKKaulardPPilzKDobsIBülthoffAFernandez-CruzBBrockhausJGardnerHHBülthoffBulthoffN201710IBülthoffFNNewelldelaRosa2017_310Sde la RosaMeilinger201710TMeilingerNooij201710SAENooijdeWinkel201710Kde WinkelHinterecker201710THintereckerChang2016_21D-SChangRowohlt PolarisReinbek bei Hamburg, Germany2016-09-00What can our brain reveal?
Why do we meet strangers with prejudice? Why does religion play an important role in how we perceive the world? Why do most Asians look alike to Europeans? Why do we sometimes elect incompetent politicians? Our brain is always searching for explanations: explanations of how the world works, how we ourselves work, and how other people work. Yet every brain finds its own answers – why this is so, and whether we can always trust those answers, is what you will discover in this book.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published252Mein Hirn hat seinen eigenen Kopf: Wie wir andere und uns selbst wahrnehmen1501715422Drop20161FMDropLogos VerlagBerlin, Germany2016-00-00Understanding how humans control a vehicle (cars, aircraft, bicycles, etc.) enables engineers to design faster, safer, more comfortable, more energy efficient, more versatile, and thus better vehicles. In a typical control task, the Human Controller (HC) gives control inputs to a vehicle such that it follows a particular reference path (e.g., the road) accurately. The HC is simultaneously required to attenuate the effect of disturbances (e.g., turbulence) perturbing the intended path of the vehicle. To do so, the HC can use a control organization that resembles a closed-loop feedback controller, a feedforward controller, or a combination of both.
Previous research has shown that a purely closed-loop feedback control organization is observed only in specific control tasks that do not resemble realistic control tasks, in which the information presented to the human is very limited. In realistic tasks, a feedforward control strategy is to be expected; yet, almost all previously available HC models describe the human as a pure feedback controller lacking the important feedforward response. Therefore, the goal of the research described in this thesis was to obtain a fundamental understanding of feedforward in human manual control. First, a novel system identification method was developed, which was necessary to identify human control dynamics in control tasks involving realistic reference signals. Second, the novel identification method was used to investigate three important aspects of feedforward through human-in-the-loop experiments, which resulted in a control-theoretical model of feedforward in manual control.
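The combined feedback/feedforward organization discussed above can be caricatured in a few lines of simulation. The plant, gains, and signals below are illustrative assumptions, not the models or parameters identified in the thesis:

```python
import math

# Toy manual-control loop: a single-integrator "vehicle" x' = u must
# follow a sinusoidal target r(t). The feedback path acts on the
# tracking error; the optional feedforward path inverts the vehicle
# dynamics (for a pure integrator, the inverse is differentiation of
# the target). Gains and signals are invented for illustration and are
# not fitted human-controller parameters.

def track(use_feedforward, dt=0.01, duration=10.0, gain=2.0):
    steps = int(duration / dt)
    x, err_sq = 0.0, 0.0
    for k in range(steps):
        t = k * dt
        r = math.sin(t)                 # target (reference) signal
        e = r - x                       # tracking error
        err_sq += e * e
        u = gain * e                    # feedback response
        if use_feedforward:
            u += math.cos(t)            # feedforward: u_ff = dr/dt
        x += dt * u                     # forward-Euler vehicle update
    return math.sqrt(err_sq / steps)    # RMS tracking error

rms_fb = track(use_feedforward=False)
rms_both = track(use_feedforward=True)
print(rms_both < rms_fb)  # True: feedforward sharply reduces the error
```

For a pure integrator the ideal feedforward is simply the derivative of the target, which is why the combined strategy tracks almost perfectly here, while pure feedback always lags the reference.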
The central element of the feedforward model is the inverse of the vehicle dynamics, equal to the theoretically ideal feedforward dynamics. However, it was also found that the HC is not able to apply a feedforward response with these ideal dynamics, and that limitations in the perception, cognition, and action loop need to be modeled by additional model elements: a gain, a time delay, and a low-pass filter. Overall, the thesis demonstrated that feedforward is indeed an essential part of human manual control behavior and should be accounted for in many human-machine applications.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published300Control-Theoretic Models of Feedforward in Manual Control1501715422Geluardi20161SGeluardiLogos VerlagBerlin, Germany2016-00-00The research described in this thesis was inspired by the results of the myCopter project, a European project funded by the European Commission in 2011. The myCopter project's aim was to identify new concepts for air transport that could be used to achieve a Personal Aerial Transport (PAT) system in the second half of the 21st century. Although designing a new vehicle was not among the project's goals, it was considered important to assess vehicle response types and handling qualities that Personal Aerial Vehicles (PAVs) should have to be part of a PAT. In this thesis it is proposed to consider civil light helicopters as possible PAV candidates. The goal of the thesis is to investigate whether it is possible to transform civil light helicopters into PAVs through the use of system identification methods and control techniques. The transformation here is envisaged in terms of vehicle dynamics and handling qualities.
To achieve this goal, three main steps are considered. The first step focuses on the identification of a Robinson R44 Raven II helicopter model in hover. The second step consists of augmenting the identified helicopter model to achieve response types and handling qualities defined for PAVs. The third step consists of assessing the magnitude of discrepancy between the two implemented augmented systems and the PAV reference model. An experiment is conducted for this purpose, consisting of piloted closed-loop control tasks performed in the MPI CyberMotion Simulator by participants without any prior flight experience.
Results, evaluated in terms of objective and subjective workload and performance, show that both augmented control systems are able to resemble PAV handling qualities and response types in piloted closed-loop control tasks. This result demonstrates that it is possible to transform helicopter dynamics into PAV dynamics.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published198Identification and augmentation of a civil light helicopter: transforming helicopters into Personal Aerial Vehicles1501715422KonigSKGKWLEBWKNMBWBK20163SUKönigFSchumannJKeyserCGoekeCKrauseSWacheALytochkinMEbertVBrunschBWahnKKasparSKNagelTMeilingerHHBülthoffTWolbersCBüchelPKönig2016-12-001211135Theories of embodied cognition propose that perception is shaped by sensory stimuli and by the actions of the organism. Following sensorimotor contingency theory, the mastery of lawful relations between own behavior and resulting changes in sensory signals, called sensorimotor contingencies, is constitutive of conscious perception. Sensorimotor contingency theory predicts that, after training, knowledge relating to new sensorimotor contingencies develops, leading to changes in the activation of sensorimotor systems, and concomitant changes in perception. In the present study, we spell out this hypothesis in detail and investigate whether it is possible to learn new sensorimotor contingencies by sensory augmentation. Specifically, we designed an fMRI compatible sensory augmentation device, the feelSpace belt, which gives orientation information about the direction of magnetic north via vibrotactile stimulation on the waist of participants. In a longitudinal study, participants trained with this belt for seven weeks in a natural environment. Our EEG results indicate that training with the belt leads to changes in sleep architecture early in the training phase, compatible with the consolidation of procedural learning as well as increased sensorimotor processing and motor programming.
The fMRI results suggest that training entails activity in sensory as well as higher motor centers and brain areas known to be involved in navigation. These neural changes are accompanied by changes in how space and the belt signal are perceived, as well as by increased trust in navigational ability. Thus, our data on physiological processes and subjective experiences are compatible with the hypothesis that new sensorimotor contingencies can be acquired using sensory augmentation.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published34Learning New Sensorimotor Contingencies: Effects of Long-Term Use of Sensory Augmentation on the Brain and Conscious Perception1501715422KimCCBK20163JKimYGChungS-CChungHHBülthoffS-PKim2016-12-0049455464As the use of wearable haptic devices with vibrating alert features is commonplace, an understanding of the perceptual categorization of vibrotactile frequencies has become important. This understanding can be substantially enhanced by unveiling how neural activity represents vibrotactile frequency information. Using functional magnetic resonance imaging (fMRI), this study investigated categorical clustering patterns of the frequency-dependent neural activity evoked by vibrotactile stimuli with gradually changing frequencies from 20 to 200 Hz. First, a searchlight multi-voxel pattern analysis (MVPA) was used to find brain regions exhibiting neural activities associated with frequency information. We found that the contralateral postcentral gyrus (S1) and the supramarginal gyrus (SMG) carried frequency-dependent information. Next, we applied multidimensional scaling (MDS) to find low-dimensional neural representations of different frequencies obtained from the multi-voxel activity patterns within these regions. The clustering analysis on the MDS results showed that neural activity patterns of 20-100 Hz and 120-200 Hz were divided into two distinct groups.
Interestingly, this neural grouping conformed to the perceptual frequency categories found in previous behavioral studies. Our findings therefore suggest that neural activity patterns in the somatosensory cortical regions may provide a neural basis for the perceptual categorization of vibrotactile frequency.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published9Neural Categorization of Vibrotactile Frequency in Flutter and Vibration Stimulations: an fMRI Study1501715422KimCCBK2016_23JKimYGChungS-CChungHHBülthoffS-PKim2016-11-0016271232–1236In this functional MRI study, we investigated how human brain activity represents tactile location information evoked by pressure stimulation on fingers. Using the searchlight multivoxel pattern analysis, we looked for local activity patterns that could be decoded into one of four stimulated finger locations. The supramarginal gyrus (SMG) and the thalamus were found to contain distinct multivoxel patterns corresponding to individual stimulated locations. In contrast, the univariate general linear model analysis contrasting stimulation against resting phases for each finger identified activations mainly in the primary somatosensory cortex (S1), but not in the SMG or in the thalamus. Our results indicate that S1 might be involved in the detection of the presence of pressure stimuli, whereas the SMG and the thalamus might play a role in identifying which finger is stimulated.
This finding may provide additional evidence for hierarchical information processing in the human somatosensory areas.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-1232Decoding pressure stimulation locations on the fingers from human neural activation patterns1501715422YukselF20163BYükselAFranchi2016-11-00In this paper we present the dynamic Lagrangian modeling, system analysis, and nonlinear control of a robot constituted by a planar-vtol (PVTOL) underactuated aerial vehicle equipped with a rigid- or an elastic-joint arm, which constitutes an aerial manipulator. For the design of the aerial manipulator, we first consider generic offsets between the center of mass (CoM) of the PVTOL and the attachment point of the joint arm. Later we consider a model in which these two points coincide. It turns out that the choice of this attachment point significantly affects the capabilities of the platform. Furthermore, in both cases we consider the rigid- and elastic-joint arm configurations. For each of the resulting four cases we formally assess the presence of exact linearizing and differentially flat outputs and the possibility of using the dynamic feedback linearization (DFL) controller. Later we formalize an optimal control problem exploiting the differential flatness property of the systems, which is applied, as an illustrative example, to the aerial throwing task. Finally we provide extensive and realistic simulation results for comparisons between different robot models in different robotic tasks, such as aerial grasping and aerial throwing, and a discussion on the applicability of computationally simpler controllers for the coinciding-point models to generic-point ones. Further exhaustive simulations of the trajectory tracking and high-speed arm swinging capabilities are provided in a technical attachment.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/submitted0PVTOL Aerial Manipulators with a Rigid or an Elastic
Joint: Analysis, Control, and Comparison1501715422StegagnoCOBF20163PStegagnoMCognettiGOrioloHHBülthoffAFranchi2016-10-0053211331151We present a decentralized algorithm for estimating mutual poses (relative positions and orientations) in a group of mobile robots. The algorithm uses relative-bearing measurements, which, for example, can be obtained from onboard cameras, and information about the motion of the robots, such as inertial measurements. It is assumed that all relative-bearing measurements are anonymous; i.e., each specifies a direction along which another robot is located but not its identity. This situation, which is often ignored in the literature, frequently arises in practice and remarkably increases the complexity of the problem. The proposed solution is based on a two-step approach: in the first step, the most likely unscaled relative configurations with identities are computed from anonymous measurements by using geometric arguments, while in the second step, the scale is determined by numeric Bayesian filtering based on the motion model. The solution is first developed for ground robots in SE (2) and then for aerial robots in SE (3). Experiments using Khepera III ground mobile robots and quadrotor aerial robots confirm that the proposed method is effective and robust w.r.t. false positives and negatives of the relative-bearing measuring process.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published18Ground and Aerial Mutual Localization Using Anonymous Relative-Bearing Measurements1501715422HintereckerKJ20163THintereckerMKnauffPNJohnson-Laird2016-10-00104216061620We report 3 experiments investigating novel sorts of inference, such as: A or B or both. Therefore, possibly (A and B). Where the contents were sensible assertions, for example, Space tourism will achieve widespread popularity in the next 50 years or advances in material science will lead to the development of antigravity materials in the next 50 years, or both. 
Most participants accepted the inferences as valid, though they are invalid in modal logic and in probabilistic logic too. But the theory of mental models predicts that individuals should accept them. In contrast, inferences of this sort—A or B but not both. Therefore, A or B or both—are both logically valid and probabilistically valid. Yet, as the model theory also predicts, most reasoners rejected them. The participants’ estimates of probabilities showed that their inferences tended not to be based on probabilistic validity, but that they did rate acceptable conclusions as more probable than unacceptable conclusions. We discuss the implications of the results for current theories of reasoning.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published14Modality, probability, and mental models1501715422MeilingerSB20163TMeilingerMStrickrodtHHBülthoff2016-10-0015577–95Two classes of space define our everyday experience within our surrounding environment: vista spaces, such as rooms or streets which can be perceived from one vantage point, and environmental spaces, for example, buildings and towns which are grasped from multiple views acquired during locomotion. However, theories of spatial representations often treat both spaces as equal. The present experiments show that this assumption cannot be upheld. Participants learned exactly the same layout of objects either within a single room or spread across multiple corridors. By utilizing a pointing and a placement task we tested the acquired configurational memory. In Experiment 1 retrieving memory of the object layout acquired in environmental space was affected by the distance of the traveled path and the order in which the objects were learned. In contrast, memory retrieval of objects learned in vista space was not bound to distance and relied on different ordering schemes (e.g., along the layout structure). Furthermore, spatial memory of both spaces differed with respect to the employed reference frame orientation.
Environmental space memory was organized along the learning experience rather than the layout's intrinsic structure. In Experiment 2 participants memorized the object layout presented within the vista space room of Experiment 1 while the learning procedure emulated environmental space learning (movement, successive object presentation). Neither factor produced results similar to those found in environmental space learning. This shows that memory differences between vista and environmental space originated mainly from the spatial compartmentalization which was unique to environmental space learning. Our results suggest that transferring conclusions from findings obtained in vista space to environmental spaces and vice versa should be made with caution.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-77Qualitative differences in memory for vista and environmental spaces are caused by opaque borders, not movement or successive presentation1501715422VenrooijMMAvvB20163JVenrooijMMulderMMulderDAAbbinkMMvan PaassenFCTvan der HelmHHBülthoff2016-09-00Epub aheadBiodynamic feedthrough (BDFT) refers to the feedthrough of vehicle accelerations through the human body, leading to involuntary control device inputs. BDFT impairs control performance in a large range of vehicles under various circumstances. Research shows that BDFT strongly depends on adaptations in the neuromuscular admittance dynamics of the human body. This paper proposes a model-based approach to BDFT mitigation that accounts for these neuromuscular adaptations. The method was tested, as proof-of-concept, in an experiment where participants inside a motion simulator controlled a simulated vehicle through a virtual tunnel. The effectiveness of the cancellation was evaluated by comparing tracking performance and control effort with and without the motion disturbance active, and with and without the cancellation active.
Results show that the cancellation approach is successful: the detrimental effects of BDFT were largely removed.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/Venrooij_2016_IEEETCyb_AdmittanceAdaptiveModelBasedApproachToMitigateBDFT.pdfpublished0Admittance-Adaptive Model-Based Approach to Mitigate Biodynamic Feedthrough1501715422DobsBS20163KDobsIBülthoffJSchultz2016-09-0034301619Facial movements convey information about many social cues, including identity. However, how much information about a person’s identity is conveyed by different kinds of facial movements is unknown. We addressed this question using a recent motion capture and animation system, with which we animated one avatar head with facial movements of three types: (1) emotional, (2) emotional in social interaction and (3) conversational, all recorded from several actors. In a delayed match-to-sample task, observers were best at matching actor identity across conversational movements, worse with emotional movements in social interactions, and at chance level with emotional facial expressions. Model observers performing this task showed similar performance profiles, indicating that performance variation was due to differences in information content, rather than processing. Our results suggest that conversational facial movements transmit more dynamic identity information than emotional facial expressions, thus suggesting different functional roles and processing mechanisms for different types of facial motion.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published8Identity information content depends on the type of facial movement1501715422Ahmad20163AAhmadHHBülthoff2016-09-0083275–286In this article we present an online estimator for multirobot cooperative localization and target tracking based on nonlinear least squares minimization. 
Our method not only makes the rigorous optimization-based approach applicable online but also allows the estimator to be stable and convergent. We do so by employing a moving horizon technique to nonlinear least squares minimization and a novel design of the arrival cost function that ensures stability and convergence of the estimator. Through an extensive set of real robot experiments, we demonstrate the robustness of our method as well as the optimality of the arrival cost function. The experiments include comparisons of our method with i) an extended Kalman filter-based online-estimator and ii) an offline-estimator based on full-trajectory nonlinear least squares.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-275Moving-horizon Nonlinear Least Squares-based Multirobot Cooperative Perception1501715422DropPVMB20163FMDropDMPoolMMvan PaassenMMulderHHBülthoff2016-09-00Epub aheadRealistic manual control tasks typically involve predictable target signals and random disturbances. The human controller (HC) is hypothesized to use a feedforward control strategy for target-following, in addition to feedback control for disturbance-rejection. Little is known about human feedforward control, partly because common system identification methods have difficulty in identifying whether, and (if so) how, the HC applies a feedforward strategy. In this paper, an identification procedure is presented that aims at an objective model selection for identifying the human feedforward response, using linear time-invariant autoregressive with exogenous input models. A new model selection criterion is proposed to decide on the model order (number of parameters) and the presence of feedforward in addition to feedback. 
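The flavor of such penalized model selection can be sketched as follows. The formula and numbers are assumed stand-ins (a standard BIC with a multiplicative weight on the parameter-count term), not the exact criterion or weighting developed in the paper:

```python
import math

# Sketch of BIC-style model selection with an extra penalty weight
# "a" on model complexity (a = 1.0 recovers the classical BIC).
# Residual sums of squares and parameter counts are invented.

def bic(rss, n, k, a=1.0):
    """n samples, k parameters, residual sum of squares rss."""
    return n * math.log(rss / n) + a * k * math.log(n)

n = 1000
# Hypothetical candidates: (name, rss, number of parameters).
# The feedback+feedforward model fits marginally better but is
# more complex.
candidates = [("feedback-only", 52.0, 4),
              ("feedback+feedforward", 50.5, 8)]

def select(a):
    return min(candidates, key=lambda m: bic(m[1], n, m[2], a))[0]

print(select(a=1.0))  # classical BIC still picks the complex model
print(select(a=3.0))  # heavier penalty flips to the simpler model
```

With the invented residuals above, the marginal fit improvement is enough to sway the classical criterion toward the feedforward model, while the heavier complexity penalty rejects it; this is the mechanism by which false-positive feedforward detections can be suppressed.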
For a range of typical control tasks, it is shown by means of Monte Carlo computer simulations that the classical Bayesian information criterion (BIC) leads to selecting models that contain a feedforward path from data generated by a pure feedback model: “false-positive” feedforward detection. To eliminate these false-positives, the modified BIC includes an additional penalty on model complexity. The appropriate weighting is found through computer simulations with a hypothesized HC model prior to performing a tracking experiment. Experimental human-in-the-loop data will be considered in future work. With appropriate weighting, the method correctly identifies the HC dynamics in a wide range of control tasks, without false-positive results.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Objective Model Selection for Identifying the Human Feedforward Response in Manual Control1501715422NooijNBP20163SAENooijANestiHHBülthoffPPretto2016-08-0082342323–2337When in darkness, humans can perceive the direction and magnitude of rotations and of linear translations in the horizontal plane. The current paper addresses the integrated perception of combined translational and rotational motion, as it occurs when moving along a curved trajectory. We questioned whether the perceived motion through the environment follows the predictions of a self-motion perception model (e.g., Merfeld et al. in J Vestib Res 3:141–161, 1993; Newman in A multisensory observer model for human spatial orientation perception, 2009), which assume linear addition of rotational and translational components. For curved motion in darkness, such models predict a non-veridical motion percept, consisting of an underestimation of the perceived rotation, a distortion of the perceived travelled path, and a bias in the perceived heading (i.e., the perceived instantaneous direction of motion with respect to the body). These model predictions were evaluated in two experiments. 
In Experiment 1, seven participants were moved along a circular trajectory in darkness while facing the motion direction. They indicated perceived yaw rotation using an online tracking task, and perceived travelled path by drawings. In Experiment 2, the heading was systematically varied, and six participants indicated, in a 2-alternative forced-choice task, whether they perceived facing inward or outward of the circular path. Overall, we found no evidence for the heading bias predicted by the model. This suggests that the sum of the perceived rotational and translational components alone cannot adequately explain the overall perceived motion through the environment. Possibly, knowledge about motion dynamics and familiar stimuli combinations may play an important additional role in shaping the percept.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-2323Perception of rotation, path, and heading in circular trajectories1501715422GeussMS20163MNGeussMJMcCardellJKStefanucci2016-07-00711119Previous research has demonstrated an influence of one’s emotional state on estimates of spatial layout. For example, estimates of heights are larger when the viewer is someone typically afraid of heights (trait fear) or someone who, in the moment, is experiencing elevated levels of fear (state fear). Embodied perception theories have suggested that such a change in perception occurs in order to alter future actions in a manner that reduces the likelihood of injury. However, other work has argued that when acting, it is important to have access to an accurate perception of space and that a change in conscious perception does not necessitate a change in action. No one has yet investigated emotional state, perceptual estimates, and action performance in a single paradigm. The goal of the current paper was to investigate whether fear influences perceptual estimates and action measures similarly or in a dissociable manner. 
In the current work, participants either estimated gap widths (Experiment 1) or were asked to step over gaps (Experiment 2) in a virtual environment. To induce fear, the gaps were placed at various heights up to 15 meters. Results showed an increase in gap width estimates as participants indicated experiencing more fear. The increase in gap estimates was mirrored in participants’ stepping behavior in Experiment 2; participants stepped over fewer gaps when experiencing higher state and trait fear and, when participants actually stepped, they stepped farther over gap widths when experiencing more fear. The magnitude of the influence of fear on both perception and action was also remarkably similar (5.3 and 3.9 cm, respectively). These results lend support to embodied perception claims by demonstrating an influence on action of a similar magnitude as seen on estimates of gap widths.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published18Fear Similarly Alters Perceptual Estimates of and Actions over Gaps150171542215017MachullaDE20163T-KMachullaMDi LucaMOErnst2016-07-0074210261038Crossmodal judgments of relative timing commonly yield a nonzero point of subjective simultaneity (PSS). Here, we test whether subjective simultaneity is coherent across all pairwise combinations of the visual, auditory, and tactile modalities. To this end, we examine PSS estimates for transitivity: If Stimulus A has to be presented x ms before Stimulus B to result in subjective simultaneity, and B y ms before C, then A and C should appear simultaneous when A precedes C by z ms, where z = x + y. We obtained PSS estimates via 2 different timing judgment tasks—temporal order judgments (TOJs) and synchrony judgments (SJs)—thus allowing us to examine the relationship between TOJ and SJ. We find that (a) SJ estimates do not violate transitivity, and that (b) TOJ and SJ data are linearly related.
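The transitivity prediction reduces to an additive consistency check on the three pairwise PSS values. The PSS numbers, modality labels, and tolerance below are invented purely to illustrate the arithmetic:

```python
# Transitivity check on pairwise points of subjective simultaneity:
# if A must lead B by x ms and B must lead C by y ms, then A should
# lead C by z = x + y ms. All values here are made-up illustrations.

pss = {
    ("audio", "visual"): 40.0,     # x: audio leads visual by 40 ms
    ("visual", "tactile"): -15.0,  # y: visual lags tactile by 15 ms
    ("audio", "tactile"): 25.0,    # z: measured directly
}

def transitivity_violation(pss, tol=10.0):
    """Return True if z deviates from x + y by more than tol ms."""
    x = pss[("audio", "visual")]
    y = pss[("visual", "tactile")]
    z = pss[("audio", "tactile")]
    return abs(z - (x + y)) > tol

print(transitivity_violation(pss))  # False: 40 + (-15) == 25
```

In practice each PSS carries measurement error, so the tolerance would be replaced by a statistical test on the deviation z − (x + y).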
Together, these findings suggest that both TOJ and SJ access the same perceptual representation of simultaneity and that this representation is globally coherent across the tested modalities. Furthermore, we find that (c) TOJ estimates are intransitive. This is consistent with the proposal that while the perceptual representation of simultaneity is coherent, relative timing judgments that access this representation can at times be incoherent with each other because of postperceptual response biases.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published12The consistency of crossmodal synchrony perception across the visual, auditory, and tactile senses15017154221501718824NestiNLBP20163ANestiSAENooijMLosertHHBülthoffPPretto2016-05-00592417426In driving simulation, simulator tilt is used to reproduce sustained linear acceleration. In order to feel realistic, this tilt is performed at a rate below the human tilt rate detection threshold, which is usually assumed constant. However, it is known that many factors affect the threshold, such as visual information, simulator motion in additional directions, or the driver’s active effort required for controlling the vehicle. Here we investigated the effect of these factors on the roll rate detection threshold during simulated curve driving. Ten participants reported whether they detected roll motion in multiple trials during simulated curve driving, while roll rate was varied over trials. Roll rate detection thresholds were measured under four conditions. In the first three conditions, participants were moved passively through a curve with the following: (i) roll only in darkness; (ii) combined roll/sway in darkness; (iii) combined roll/sway and visual information. In the fourth (iv) condition participants actively drove through the curve. The results showed that roll rate thresholds in simulated curve driving increase, that is, sensitivity decreases, when the roll tilt is combined with sway motion.
Moreover, an active control task seemed to further increase the detection threshold, that is, impair motion sensitivity, but with large individual differences. We hypothesize that this is related to the level of immersion during the task.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published9Roll rate perceptual thresholds in active and passive curve driving simulation1501715422KrupchankaK20163DKrupchankaMKatliar2016-05-00342600607Background: There is evidence of a positive association between insight and depression among patients with schizophrenia. Self-stigma was shown to play a mediating role in this association. We attempted to broaden this concept by investigating insight as a potential moderator of the association between depressive symptoms amongst people with schizophrenia and stigmatizing views towards people with mental disorders in their close social environment. Method: In the initial sample of 120 pairs, data were gathered from 96 patients with a diagnosis of “paranoid schizophrenia” and 96 of their nearest relatives (80% response rate). In this cross-sectional study, data were collected by clinical interview using the following questionnaires: “The Scale to Assess Unawareness of Mental Disorder,” “Calgary Depression Scale for Schizophrenia,” and “Brief Psychiatric Rating Scale.” The stigmatizing views of patients’ nearest relatives towards people with mental disorders were assessed with the “Mental Health in Public Conscience” scale. Results: Among patients with schizophrenia, depressive symptom severity was positively associated with the intensity of nearest relatives’ stigmatizing beliefs (“Nonbiological vision of mental illness,” τ = 0.24; P < .001). The association was moderated by the level of patients’ awareness of the presence of a mental disorder, while controlling for age, sex, duration of illness and psychopathological symptoms. Conclusions: The results support the hypothesis that the positive association between patients’ depression and their nearest relatives’ stigmatizing views is moderated by patients’ insight. 
Directions for further research and practical implications are discussed.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published7The Role of Insight in Moderating the Association Between Depressive Symptoms in People With Schizophrenia and Stigma Among Their Nearest Relatives: A Pilot Study1501715422ZhaoBB20153MZhaoHHBülthoffIBülthoff2016-04-00442584597Faces are processed holistically, so selective attention to 1 face part without any influence of the others often fails. In this study, 3 experiments investigated what type of facial information (shape or surface) underlies holistic face processing and whether generalization of holistic processing to nonexperienced faces requires extensive discrimination experience. Results show that facial shape information alone is sufficient to elicit the composite face effect (CFE), 1 of the most convincing demonstrations of holistic processing, whereas facial surface information is unnecessary (Experiment 1). The CFE is eliminated when faces differ only in surface but not shape information, suggesting that variation of facial shape information is necessary to observe holistic face processing (Experiment 2). Removing 3-dimensional (3D) facial shape information also eliminates the CFE, indicating the necessity of 3D shape information for holistic face processing (Experiment 3). Moreover, participants show similar holistic processing for faces with and without extensive discrimination experience (i.e., own- and other-race faces), suggesting that generalization of holistic processing to nonexperienced faces requires facial shape information, but does not necessarily require further individuation experience. These results provide compelling evidence that facial shape information underlies holistic face processing. 
This shape-based account not only offers a consistent explanation for previous studies of holistic face processing, but also suggests a new ground, in addition to expertise, for the generalization of holistic processing to different types of faces and to nonface objects.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published13A shape-based account for holistic face processing1501715422SaultonMBD20163ASaultonBMohlerHHBülthoffTJDodds2016-04-006:216116The elongation of a figure or object can induce a perceptual bias regarding its area or volume estimation. This bias is notable in Piagetian experiments in which participants tend to consider elongated cylinders to contain more liquid than shorter cylinders of equal volume. We investigated whether similar perceptual biases could be found in volume judgments of surrounding indoor spaces and whether those judgments were viewpoint dependent. Participants compared a variety of computer-generated rectangular rooms with a square room in a psychophysical task. We found that the elongation bias in figures or objects was also present in volume comparison judgments of indoor spaces. Further, the direction of the bias (larger or smaller) depended on the observer's viewpoint. Similar results were obtained from a monoscopic computer display (Experiment 1) and a stereoscopic head-mounted display with head tracking (Experiment 2). We used generalized linear mixed-effect models to model participants' volume judgments as a function of room depth and width. A good fit to the data was found when applying greater weight to the depth than to the width, suggesting that participants' judgments were biased by egocentric properties of the space. 
We discuss how biases in comparative volume judgments of rooms might reflect the use of simplified strategies, such as anchoring on one salient dimension of the space.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published15Egocentric biases in comparative volume judgments of rooms150171542215017MeilingerW20163TMeilingerKWatanabe2016-04-00411122Prior results on the spatial integration of layouts within a room differed regarding the reference frame that participants used for integration. We asked whether these differences also occur when integrating 2D screen views and, if so, what the reasons for this might be. In four experiments we showed that integrating reference frames varied as a function of task familiarity combined with processing time, cues for spatial transformation, and information about action requirements, paralleling results in the 3D case. Participants saw part of an object layout in screen 1, another part in screen 2, and reacted to the integrated layout in screen 3. Layout presentations between two screens coincided or differed in orientation. Aligning misaligned screens for integration is known to increase errors/latencies. The error/latency pattern was thus indicative of the reference frame used for integration. We showed that task familiarity combined with self-paced learning, visual updating, and knowing from where to act prioritized integration within the reference frame of the initial presentation, which was updated later, and from where participants acted, respectively. Participants also heavily relied on layout-intrinsic frames. The results show how humans flexibly adjust their integration strategy to a wide variety of conditions.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published21Multiple Strategies for Spatial Integration of 2D Layouts within Working Memory1501715422ScheerBC20163MScheerHHBülthoffLLChuang2016-03-007310115The current study investigates the demands that steering places on mental resources. Instead of a conventional dual-task paradigm, participants of this study were only required to perform a steering task while task-irrelevant auditory distractor probes (environmental sounds and beep tones) were intermittently presented. The event-related potentials (ERPs), which were generated by these probes, were analyzed for their sensitivity to the steering task’s demands. The steering task required participants to counteract unpredictable roll disturbances and difficulty was manipulated either by adjusting the bandwidth of the roll disturbance or by varying the complexity of the control dynamics. A mass univariate analysis revealed that steering selectively diminishes the amplitudes of early P3, late P3, and the re-orientation negativity (RON) to task-irrelevant environmental sounds but not to beep tones. Our findings are in line with a three-stage distraction model, which interprets these ERPs to reflect the post-sensory detection of the task-irrelevant stimulus, engagement, and re-orientation back to the steering task. This interpretation is consistent with our manipulations of steering difficulty. More participants showed diminished amplitudes for these ERPs in the ‘hard’ steering condition relative to the ‘easy’ condition. To sum up, the current work identifies the spatiotemporal ERP components of task-irrelevant auditory probes that are sensitive to steering demands on mental resources. 
This provides a non-intrusive method for evaluating mental workload in novel steering environments.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published14Steering demands diminish the early-P3, late-P3 and RON components of the event-related potential of task-irrelevant environmental sounds1501715422JungTWdBBM20163EJungKTakahashiKWatanabeSde la RosaMVButzHHBülthoffTMeilinger2016-03-00217719People maintain larger distances to other people’s front than to their back. We investigated whether humans also judge another person as closer when viewing their front than their back. Participants watched animated virtual characters (avatars) and moved a virtual plane towards their location after the avatar was removed. In Experiment 1, participants judged avatars that were facing them as closer, and made quicker estimates, than avatars looking away. In Experiment 2, avatars were rotated in 30 degree steps around the vertical axis. Observers judged avatars roughly facing them (i.e., looking max. 60 degrees away) as closer than avatars roughly looking away. No particular effect was observed for avatars directly facing and also gazing at the observer. We conclude that body orientation was sufficient to generate the asymmetry. Sensitivity of the orientation effect to gaze and to interpersonal distance would have suggested involvement of social processing, but this was not observed. We discuss social and lower-level processing as potential reasons for the effect.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/Frontiers-Psychol-2016-Jung.pdfpublished8The Influence of Human Body Orientation on Distance Judgments1501715422delaRosa20163Sde la RosaYFerstlHHBülthoff2016-03-0023829618A long-standing debate revolves around the question of whether visual action recognition primarily relies on visual or motor action information. Previous studies mainly examined the contribution of either visual or motor information to action recognition. 
Yet, the interaction of visual and motor action information is particularly important for understanding action recognition in social interactions, where humans often observe and execute actions at the same time. Here, we behaviourally examined the interaction of visual and motor action recognition processes when participants simultaneously observe and execute actions. We took advantage of behavioural action adaptation effects to investigate behavioural correlates of neural action recognition mechanisms. In line with previous results, we find that prolonged visual exposure (visual adaptation) and prolonged execution of the same action with closed eyes (non-visual motor adaptation) influence action recognition. However, when participants simultaneously adapted visually and motorically (akin to simultaneous execution and observation of actions in social interactions), adaptation effects were modulated only by visual but not motor adaptation. Action recognition, therefore, relies primarily on vision-based action recognition mechanisms in situations that require simultaneous action observation and execution, such as social interactions. The results suggest caution when associating social behaviour in social interactions with motor-based information.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published7Visual adaptation dominates bimodal visual-motor action adaptation1501715422delaRosaEB20163Sde la RosaMEkramniaHHBülthoff2016-02-00561016The ability to discriminate between different actions is essential for action recognition and social interaction. Surprisingly, previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g. left-right direction discrimination tasks. It is not known to what degree visual processes in direction discrimination tasks are also involved in the discrimination of actions, e.g. when telling apart a handshake from a high-five. 
Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target action and direction discrimination specific visual processes. In separate conditions participants visually adapted to forward and backward moving handshake and high-five actions. Participants subsequently either categorized the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published5Action Recognition and Movement Direction Discrimination Tasks Are Associated with Different Adaptation Patterns1501715422FademrechtBd20163LFademrechtIBülthoffSde la Rosa2016-02-003:3316114Recognizing whether the gestures of somebody mean a greeting or a threat is crucial for social interactions. In real life, action recognition occurs over the entire visual field. In contrast, much of the previous research on action recognition has primarily focused on central vision. Here our goal is to examine what can be perceived about an action outside of foveal vision. Specifically, we probed the valence as well as first level and second level recognition of social actions (handshake, hugging, waving, punching, slapping, and kicking) at 0° (fovea/fixation), 15°, 30°, 45°, and 60° of eccentricity with dynamic (Experiment 1) and dynamic and static (Experiment 2) actions. 
To assess peripheral vision under conditions of good ecological validity, these actions were carried out by a life-size human stick figure on a large screen. In both experiments, recognition performance was surprisingly high (more than 66% correct) up to 30° of eccentricity for all recognition tasks and followed a nonlinear decline with increasing eccentricities.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published13Action recognition in the visual periphery1501715422ZhaoBB2015_23MZhaoHHBülthoffIBülthoff2016-02-00227213222Holistic processing—the tendency to perceive objects as indecomposable wholes—has long been viewed as a process specific to faces or objects of expertise. Although current theories differ in what causes holistic processing, they share a fundamental constraint for its generalization: Nonface objects cannot elicit facelike holistic processing in the absence of expertise. Contrary to this prevailing view, here we show that line patterns with salient Gestalt information (i.e., connectedness, closure, and continuity between parts) can be processed as holistically as faces without any training. Moreover, weakening the saliency of Gestalt information in these patterns reduced holistic processing of them, which indicates that Gestalt information plays a crucial role in holistic processing. Therefore, holistic processing can be achieved not only via a top-down route based on expertise, but also via a bottom-up route relying merely on object-based information. The finding that facelike holistic processing can extend beyond the domains of faces and objects of expertise poses a challenge to current dominant theories.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published9Beyond Faces and Expertise: Facelike Holistic Processing of Nonface Objects in the Absence of Expertise1501715422FranchiSO20153AFranchiPStegagnoGOriolo2016-02-00240245265We present a control framework for achieving encirclement of a target moving in 3D using a multi-robot system. 
Three variations of a basic control strategy are proposed for different versions of the encirclement problem, and their effectiveness is formally established. An extension ensuring maintenance of a safe inter-robot distance is also discussed. The proposed framework is fully decentralized and only requires local communication among robots; in particular, each robot locally estimates all the relevant global quantities. We validate the proposed strategy through simulations on kinematic point robots and quadrotor UAVs, as well as experiments on differential-drive wheeled mobile robots.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/encirclement.pdfpublished20Decentralized Multi-Robot Encirclement of a 3D Target with Guaranteed Collision Avoidance1501715422delaRosaSBSU20163Sde la RosaFLSchillingerHHBülthoffJSchultzKUludag2016-02-007810110Mirror Neurons (MNs) are considered to be the supporting neural mechanism for action understanding. MNs have been identified in monkey’s area F5. The identification of MNs in the human homologue of monkeys’ area F5 (BA 44/45) has been proven methodologically difficult. Cross-modal fMRI adaptation studies supporting the existence of MNs restricted their analysis to a-priori candidate regions, whereas studies that failed to find evidence used non-object-directed actions. We tackled these limitations by using object-directed actions differing only in terms of their object directedness in combination with a cross-modal adaptation paradigm and a whole-brain analysis. Additionally, we tested voxels’ BOLD response patterns for several properties previously reported as typical mirror neuron response properties. Our results revealed 52 voxels in left inferior frontal gyrus (particularly BA 44/45), which respond to both motor and visual stimulation and exhibit cross-modal adaptation between the execution and observation of the same action. 
These results demonstrate that part of human inferior frontal gyrus (IFG), specifically BA 44/45, has BOLD response characteristics very similar to monkey’s area F5.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published9fMRI adaptation between action observation and action execution reveals cortical areas with mirror neuron properties in human BA 44/4515017154221501718821DahlRBC20163CDDahlMJRaschIBülthoffC-CCheng2016-02-0020247619A face recognition system ought to read out information about the identity, facial expression and invariant properties of faces, such as sex and race. A current debate is whether separate neural units in the brain deal with these face properties individually or whether a single neural unit processes in parallel all aspects of faces. While the focus of studies has been directed toward the processing of identity and facial expression, little research exists on the processing of invariant aspects of faces. In a theoretical framework we tested whether a system can deal with identity in combination with sex, race or facial expression using the same underlying mechanism. We used dimension reduction to describe how the representational face space organizes face properties when trained on different aspects of faces. When trained to learn identities, the system not only successfully recognized identities, but also was immediately able to classify sex and race, suggesting that no additional system for the processing of invariant properties is needed. However, training on identity was insufficient for the recognition of facial expressions and vice versa. 
We provide a theoretical approach to the interconnection of invariant facial properties and the separation of variant and invariant facial properties.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published8Integration or separation in the processing of facial properties: a computational view1501715422MeilingerFSBB20153TMeilingerJFrankensteinNSimonHHBülthoffJ-PBresciani2016-02-00123246252Reference frames in spatial memory encoding have been examined intensively in recent years. However, their importance for recall has received considerably less attention. In the present study, passersby used tags to arrange a configuration map of prominent city center landmarks. It has been shown that such configurational knowledge is memorized within a north-up reference frame. However, participants adjusted their maps according to their body orientations. For example, when participants faced south, the maps were likely to face south-up. Participants also constructed maps along their location perspective, that is, the self–target direction. If, for instance, they were east of the represented area, their maps were oriented west-up. If the location perspective and body orientation were in opposite directions (i.e., if participants faced away from the city center), participants relied on location perspective. The results indicate that reference frames in spatial recall depend on the current situation rather than on the organization in long-term memory. These results cannot be explained by activation spread within a view graph, which had been used to explain similar results in the recall of city plazas. However, the results are consistent with forming and transforming a spatial image of nonvisible city locations from the current location. Furthermore, prior research has almost exclusively focused on body- and environment-based reference frames. 
The strong influence of location perspective in an everyday navigational context indicates that such a reference frame should be considered more often when examining human spatial cognition.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/meilinger_et_al_2016_situated_maps_pre_final_version.pdfpublished6Not all memories are the same: Situational context influences spatial recall within one’s city of residency1501715422SaultonLWBd20163ASaultonMRLongoHYWongHHBülthoffSde la Rosa2016-02-00164103–111Several studies have shown that the perception of one's own hand size is distorted in proprioceptive localization tasks. It has been suggested that those distortions mirror somatosensory anisotropies. Recent research suggests that non-corporeal items also show some spatial distortions. In order to investigate the psychological processes underlying the localization task, we investigated the influences of visual similarity and memory on distortions observed on corporeal and non-corporeal items. In experiment 1, participants indicated the location of landmarks on: their own hand, a rubber hand (rated as most similar to the real hand), and a rake (rated as least similar to the real hand). Results show no significant differences between rake and rubber hand distortions but both items were significantly less distorted than the hand. Experiments 2 and 3 explored the role of memory in spatial distance judgments of the hand, the rake and the rubber hand. Spatial representations of items measured in experiments 2 and 3 were also distorted but showed the tendency to be smaller than in localization tasks. 
While memory and visual similarity seem to help explain qualitative similarities in distortions between the hand and non-corporeal items, those factors cannot explain the larger magnitude observed in hand distortions.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-103The role of visual similarity and memory in body model distortions1501715422EsinsSSKB20163JEsinsJSchultzCStemperIKennerknechtIBülthoff2016-01-0017137Congenital prosopagnosia, the innate impairment in recognizing faces, is a very heterogeneous disorder with different phenotypical manifestations. To investigate the nature of prosopagnosia in more detail, we tested 16 prosopagnosics and 21 controls with an extended test battery addressing various aspects of face recognition. Our results show that prosopagnosics exhibited significant impairments in several face recognition tasks: impaired holistic processing (tested, among others, with the Cambridge Face Memory Test (CFMT)) as well as reduced processing of configural information of faces. This test battery also revealed some new findings. While controls recognized moving faces better than static faces, prosopagnosics did not exhibit this effect. Furthermore, prosopagnosics had significantly impaired gender recognition, which is shown on a groupwise level for the first time in our study. There was no difference between groups in the automatic extraction of face identity information or in object recognition as tested with the Cambridge Car Memory Test. In addition, a methodological analysis of the tests revealed reduced reliability for holistic face processing tests in prosopagnosics. To our knowledge, this is the first study to show that prosopagnosics have a significantly reduced reliability coefficient (Cronbach’s alpha) in the CFMT compared to controls. 
We suggest that compensatory strategies employed by the prosopagnosics might be the cause of the vast variety of response patterns revealed by the reduced test reliability. This finding raises the question of whether classical face tests measure the same perceptual processes in controls and prosopagnosics.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published36Face Perception and Test Reliabilities in Congenital Prosopagnosia in Seven Tests1501715422MeilingerSFHLMB20163TMeilingerJSchulte-PelkumJFrankensteinGHardiessNLaharnarHAMallotHHBülthoff2016-01-0076717Establishing verbal memory traces for non-verbal stimuli was reported to facilitate or inhibit memory for the non-verbal stimuli. We show that these effects are also observed in a domain not indicated before: wayfinding. Fifty-three participants followed a guided route in a virtual environment. They were asked to remember half of the intersections by relying on the visual impression only. At the other 50% of the intersections, participants additionally heard a place name, which they were asked to memorize. For testing, participants were teleported to the intersections and were asked to indicate the subsequent direction of the learned route. In Experiment 1, intersections’ names were arbitrary (i.e., not related to the visual impression). Here, participants performed more accurately at unnamed intersections. In Experiment 2, intersections’ names were descriptive and participants’ route memory was more accurate at named intersections. Results have implications for naming places in a city and for wayfinding aids.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/Frontiers-Psychol-2016-Meilinger.pdfpublished6How to best name a place? Facilitation and inhibition of route learning due to descriptive and arbitrary location labels1501715422ReichenbachBBT20153AReichenbachJ-PBrescianiHHBülthoffAThielscher2016-01-00Part A124869–875The vestibular system constitutes the silent sixth sense: It automatically triggers a variety of vital reflexes to maintain postural and visual stability. Beyond their role in reflexive behavior, vestibular afferents contribute to several perceptual and cognitive functions and also support voluntary control of movements by complementing the other senses to accomplish the movement goal. Investigations into the neural correlates of vestibular contribution to voluntary action in humans are challenging and have progressed far less than research on corresponding visual and proprioceptive involvement. Here, we demonstrate for the first time with event-related TMS that the posterior part of the right medial intraparietal sulcus processes vestibular signals during a goal-directed reaching task with the dominant right hand. This finding suggests a qualitative difference between the processing of vestibular vs. visual and proprioceptive signals for controlling voluntary movements, which are predominantly processed in the left posterior parietal cortex. Furthermore, this study reveals a neural pathway for vestibular input that might be distinct from the processing for reflexive or cognitive functions, and opens a window into their investigation in humans.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-869Reaching with the sixth sense: Vestibular contributions to voluntary motor control in the human right parietal cortex15017154221501718821BreidtBC20167MBreidtHHBülthoffCCurioRio de Janeiro, Brazil2016-11-0012611268Reliable and accurate car driver head pose estimation is an important function for the next generation of Advanced Driver Assistance Systems that need to consider the driver state in their analysis. 
For optimal performance, head pose estimation needs to be non-invasive, calibration-free and accurate for varying driving and illumination conditions. In this pilot study we investigate a 3D head pose estimation system that automatically fits a statistical 3D face model to measurements of a driver's face, acquired with a low-cost depth sensor on challenging real-world data. We evaluate the results of our sensor-independent, driver-adaptive approach against those of a state-of-the-art camera-based 2D face tracking system as well as a non-adaptive 3D model, relative to our own ground-truth data, and compare them to other 3D benchmarks. We find large accuracy benefits of the adaptive 3D approach. Our system shows a median error of 5.99 mm for position and 2.12° for rotation while delivering a full 6-DOF pose with very little degradation from strong illumination changes or out-of-plane rotations of more than 50°. In terms of accuracy, 95% of all our results have a position error of less than 9.50 mm, and a rotation error of less than 4.41°. Compared to the 2D method, this represents a 59.7% reduction of the 95% rotation accuracy threshold, and a 56.1% reduction of the median rotation error.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published7Accurate 3D Head Pose Estimation under Real-World Driving Conditions: A Pilot Study1501715422RienerJAPMCT20167ARienerMPJeonIAlvarezBPflegingAMirnigMTschelgliLChuangAnn Arbor, MI, USA2016-10-00217220On July 1st, 2016, the first automated vehicle fatality became headline news and caused a nationwide wave of concern. Now we have at least one situation in which a controlled automated vehicle system failed to detect a life-threatening situation. The question still remains: How can an autonomous system make ethical decisions that involve human lives? 
Control negotiation strategies require prior encoding of ethical conventions into decision-making algorithms, which is not at all an easy task, especially considering that coming up with ethically sound decision strategies in the first place is often very difficult, even for human agents. This workshop seeks to provide a forum for experts across different backgrounds to voice and formalize the ethical aspects of automotive user interfaces in the context of automated driving. The goal is to derive working principles that will guide shared decision-making between human drivers and their automated vehicles.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published31st Workshop on Ethically Inspired User Interfaces for Automated Driving1501715422McCallBPBAMMTCT20167RMcCallMBaumannIPolitisSSBorojeniIAlvarezAMirnigAMeschtcherjakovMTscheligiLChuangJTerkenAnn Arbor, MI, USA2016-10-00233236This workshop will focus on the problem of occupant and vehicle situational awareness with respect to automated vehicles when the driver must take over control. It will explore the future of fully automated and mixed traffic situations where vehicles are assumed to be operating at level 3 or above. In this case, all critical driving functions will be handled by the vehicle with the possibility of transitions between manual and automated driving modes at any time. This creates a driver environment where, unlike manual driving, there is no direct intrinsic motivation for the driver to be aware of the traffic situation at all times. Therefore, it is highly likely that when such a transition occurs, the driver will not be able to take over either safely or within an appropriate period of time. This workshop will address this challenge by inviting experts and practitioners from the automotive and related domains to explore concepts and solutions to increase, maintain and transfer situational awareness in semi-automated vehicles.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published31st Workshop on Situational Awareness in Semi-Automated Vehicles1501715422YukselSF20167BYükselNStaubAFranchiDaejeon, Korea2016-10-0016671672We present the dynamic modeling, analysis, and control design of a Planar-Vertical Take-Off and Landing (PVTOL) underactuated aerial vehicle equipped either with a rigid- or an elastic-joint arm. We prove that in both cases the system is exactly linearizable with a dynamic feedback and differentially flat for the same set of outputs (but different controllers). 
We compare the two cases with extensive and realistic simulations, which show that the rigid-joint case outperforms the elastic-joint case for aerial grasping tasks while the converse holds for link-velocity amplification tasks. We present preliminary experimental results using an actuated joint with variable stiffness (VSA) on a quadrotor platform.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/IROS-2016-Yueksel.pdfpublished5Aerial Robots with Rigid/Elastic-joint Arms: Single-joint Controllability Study and Preliminary Experiments1501715422BorojeniCHB20167SSBorojeniLChuangWHeutenSBollAnn Arbor, MI, USA2016-10-00237244Take-over situations in highly automated driving occur when
drivers have to take over vehicle control due to automation
shortcomings. Due to the high visual processing demand of the
driving task and the time limitation of a take-over maneuver, appropriate user interface designs for take-over requests (TOR) are needed. In this paper, we propose applying ambient TORs, which address the peripheral vision of a driver. Conducting an experiment in a driving simulator, we tested a) ambient displays as TORs, b) whether contextual information could be conveyed through ambient TORs, and c) if the presentation pattern (static, moving) of the contextual TORs has an effect on take-over behavior. Results showed that conveying contextual information through ambient displays led to shorter reaction times and longer times to collision without increasing the workload. The presentation pattern, however, did not have an effect on take-over performance.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published7Assisting Drivers with Ambient Take-Over Requests in Highly Automated Driving1501715422MasoneBS20167CMasoneHHBülthoffPStegagnoDaejeon, Korea2016-10-0016231630This paper addresses the problem of cooperative aerial transportation of an object using a team of quadrotors. The approach presented to solve this problem accounts for the full dynamics of the system and is inspired by the literature on reconfigurable cable-driven parallel robots (RCDPR). Using the modelling convention of RCDPR, a direct relation is derived between the motion of the quadrotors and the motion of the payload. This relation makes explicit the available internal motion of the system, which can be used to automatically achieve additional tasks. The proposed method does not require specifying the cable forces a priori and uses a tension distribution algorithm to optimally distribute them among the robots. The presented framework is also suitable for online teleoperation. 
Physical simulations with a human-in-the-loop validate the proposed approach.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published7Cooperative transportation of a payload using quadrotors: A reconfigurable cable-driven parallel robot1501715422YukselBF20167BYükselGBuondonnoAFranchiDaejeon, Korea2016-10-00561566In this paper we introduce a particularly relevant class of aerial manipulators that we name protocentric. These robots are formed by an underactuated aerial vehicle, a planar-Vertical Take-Off and Landing (PVTOL), equipped with any number of different parallel manipulator arms with the only property that all the first joints are attached at the Center of Mass (CoM) of the PVTOL, while the center of actuation of the PVTOL can be anywhere. We prove that protocentric aerial manipulators (PAMs) are differentially flat systems regardless of the number of joints of each arm and their kinematic and dynamic parameters. The set of flat outputs is constituted by the CoM of the PVTOL and the absolute orientation angles of all the links. The relative degree of each output is equal to four. More remarkably, we prove that PAMs are differentially flat even in the case that any number of the joints are elastic, no matter the internal distribution between elastic and rigid joints. The set of flat outputs is the same, but in this case the total relative degree grows quadratically with the number of elastic joints. 
We validate the theory by simulating object grasping and transportation tasks with unknown mass and parameters and using a controller based on dynamic feedback linearization.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/IROS-2016-Yueksel-2.pdfpublished5Differential flatness and control of protocentric aerial manipulators with any number of arms and mixed rigid-/elastic-joints1501715422D039IntinoOGVIBP20167GD'IntinoMOlivariSGeluardiJVenrooijMInnocentiHHBülthoffLPolliniBudapest, Hungary2016-10-00002169002174Haptic guidance has previously been employed to improve human performance in control tasks. This paper presents an experiment to evaluate whether haptic feedback can be used to help humans learn a compensatory tracking task. In the experiment, participants were divided into two groups: the haptic group and the no-aid group. The haptic group performed a first training phase with haptic feedback
and a second evaluation phase without haptic feedback. The no-aid group performed the whole experiment without haptic feedback. Results indicated that the haptic group achieved better performance than the no-aid group during the training phase. Furthermore, the performance of the haptic group did not worsen in the evaluation phase when the haptic feedback was turned off. On the other hand, the no-aid group needed more experimental trials to achieve similar performance to the haptic group. These findings indicate that haptic feedback helped participants learn the task more quickly.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published5Evaluation of Haptic Support System for Training Purposes in a Tracking Task1501715422MiermeisterLBMSTKTPB20167PMiermeisterMLächeleRBossCMasoneCSchenkJTeschMKergerHTeufelAPottHHBülthoffDaejeon, Korea2016-10-0030243029This paper introduces the CableRobot simulator, which was developed at the Max Planck Institute for Biological Cybernetics in cooperation with the Fraunhofer Institute for Manufacturing Engineering and Automation IPA. The simulator is a completely novel approach to the design of motion simulation platforms in so far as it uses cables and winches for actuation instead of rigid links known from hexapod simulators. This approach makes it possible to reduce the actuated mass, to scale up the workspace significantly, and to switch flexibly between the system configurations in which the robot can be operated. The simulator will be used for studies in the field of human perception research and virtual reality applications. 
The paper discusses some of the issues arising from the use of cables and provides a system overview of the kinematics and system dynamics, as well as a brief introduction to possible application use cases.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published5The CableRobot Simulator: Large Scale Motion Platform Based on Cable Robot Technology1501715422KarolusWC20167JKarolusPWWoźniakLLChuangGöteborg, Sweden2016-10-00118Humans are inherently skilled at using subtle physiological cues from other persons, for example gaze direction in a conversation. Personal computers have yet to explore this implicit input modality. In a study with 14 participants, we investigate how a user's gaze can be leveraged in adaptive computer systems. In particular, we examine the impact of different languages on eye movements by presenting simple questions in multiple languages to our participants. We found that fixation duration is sufficient to ascertain if a user is highly proficient in a given language. We propose how these findings could be used to implement adaptive visualizations that react implicitly to the user's gaze.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-118Towards Using Gaze Properties to Detect Language Proficiency1501715422BorojeniCLGB20167SSBorojeniLChuangALöckenCGlatzSBollAnn Arbor, MI, USA2016-10-00213215Managing drivers’ distraction and directing their attention has been a challenge for automotive UI researchers both in industry and academia. The objective of this half-day tutorial is to provide an overview of methodologies for design, development, and evaluation of in-vehicle attention-directing user interfaces. The tutorial will introduce specifics and challenges of shifting drivers’ attention and managing distractions in semi- and highly automated driving contexts. 
The participants will be familiarized with methods for requirement elicitation, participatory design, setting up experiments, and evaluation of interaction concepts using tools such as eye trackers and EEG/ERP.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published2Tutorial on Design and Evaluation Methods for Attention
Directing Cues1501715422VenrooijCKPBS20167JVenrooijDCleijMKatliarPPrettoHHBülthoffDSteffenFWHoffmeyerH-PSchönerParis, France2016-09-083138This paper describes a driving simulation experiment, executed on the Daimler Driving Simulator (DDS), in which a filter-based and an optimization-based motion cueing algorithm (MCA) were compared using a newly developed motion cueing quality rating method. The goal of the comparison was to investigate whether optimization-based MCAs have, compared to filter-based approaches, the potential to improve the quality of motion simulations. The paper describes the two algorithms, discusses their strengths and weaknesses and describes the experimental methods and results. The MCAs were compared in an experiment where 18 participants rated the perceived motion mismatch, i.e., the perceived mismatch between the motion felt in the simulator and the motion one would expect from a drive in a real car. The results show that the quality of the motion cueing was rated better for the optimization-based MCA than for the filter-based MCA, indicating that there exists a potential to improve the quality of the motion simulation with optimization-based methods. Furthermore, it was shown that the rating method provides reliable and repeatable results within and between participants, which further establishes the utility of the method.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/DSC-2016-Venrooij.pdfpublished7Comparison between filter- and optimization-based motion cueing in the Daimler Driving Simulator1501715422VenrooijOB20167JVenrooijMOlivariHHBülthoffKyoto, Japan2016-08-00120–125Biodynamic feedthrough (BDFT) occurs when vehicle accelerations feed through the body of a human operator, causing involuntary limb motions, which in turn result in involuntary control inputs. 
The manual control of many different vehicles, such as helicopters, aircraft, electric wheelchairs and hydraulic excavators, is known to be vulnerable to BDFT effects. This paper provides a brief review of BDFT literature, which serves as a basis for identifying the fundamental challenges that remain to be addressed in future BDFT research. One of these challenges, time-variant BDFT identification, is discussed in more detail. Currently, it is often assumed that BDFT dynamics are (quasi)linear and time-invariant. This assumption can only be justified when measuring BDFT under carefully crafted experimental conditions, which are very different from real-world situations. As BDFT dynamics depend on neuromuscular dynamics, they are typically time-varying. This paper investigates the suitability of a recently developed time-variant identification approach, based on a recursive least-squares algorithm, which has been successfully used to identify time-varying neuromuscular dynamics.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/IFAC-2016-Venrooij.pdfpublished-120Biodynamic Feedthrough: Current Status and Open Issues1501715422DropPMB20167FMDropDMPoolMMulderHHBülthoffKyoto, Japan2016-08-00712The human controller (HC) can greatly improve target-tracking performance by utilizing a feedforward operation on the target signal, in addition to a feedback response. System identification methods are used to determine the correct HC model structure: purely feedback or a combined feedforward/feedback model. In this paper, we investigate three central issues that complicate this objective. First, the identification method should not require prior assumptions regarding the dynamics of the feedforward and feedback components. Second, severe biases might be introduced by high levels of noise in the data measured under closed-loop conditions. 
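The recursive least-squares identification referenced above can be illustrated with a minimal sketch. This is a generic RLS estimator with an exponential forgetting factor, not the authors' implementation; the function name `rls_identify` and all parameter values are hypothetical:

```python
import numpy as np

def rls_identify(phi, y, n_params, forgetting=0.95, delta=1e3):
    """Recursive least-squares estimation of a time-varying linear model
    y[k] = phi[k] @ theta[k] + noise, with an exponential forgetting factor
    so that old samples are discounted and slow parameter drift is tracked."""
    theta = np.zeros(n_params)       # current parameter estimate
    P = delta * np.eye(n_params)     # estimate covariance, large initial value
    history = []
    for phi_k, y_k in zip(phi, y):
        Pphi = P @ phi_k
        k = Pphi / (forgetting + phi_k @ Pphi)      # gain vector
        theta = theta + k * (y_k - phi_k @ theta)   # correct with prediction error
        P = (P - np.outer(k, Pphi)) / forgetting    # discount old information
        history.append(theta.copy())
    return np.array(history)
```

On noise-free data with a slowly drifting gain, the trailing estimate tracks the drift with a lag set by the forgetting factor.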
To address the first two issues, we will consider two identification methods that make use of linear ARX models: the classic direct method and the two-stage indirect method of van den Hof and Schrama (1993). Third, model complexity should be considered in the selection of the ‘best’ ARX model to prevent ‘false-positive’ feedforward identification. Various model selection criteria that make an explicit trade-off between model quality and model complexity are considered. Based on computer simulations with an HC model, we conclude that 1) the direct method provides more accurate estimates in the frequency range of interest, and 2) existing model selection criteria do not prevent false-positive feedforward identification.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published5Constraints in Identification of Multi-Loop Feedforward Human Control Models1501715422PolliniROBMPPNIB20167LPolliniMRazzanelliMOlivariABrandimartiMMaimeriPPazzagliaGPittiglioRNutiMInnocentiHHBülthoffKyoto, Japan2016-08-0078–83Shared control is becoming widely used in many manual control tasks as a means of improving performance and safety. Designing an effective shared control system requires extensive testing and knowledge of how operators react to the haptic sensations provided by the control device shared with the support system. Commercial general-purpose haptic devices may be unfit to reproduce the operational situation typical of the control task under study, like car driving or airplane flying. Thus specific devices are needed for research on specific tasks; this market niche exists but is characterized by expensive products. This paper presents the development of a complete low-cost haptic stick, its initial characterization, the design of its inner-loop and impedance control systems, and finally proposes an evaluation with two test cases: pilot admittance identification with the classical tasks, and an entire haptic experiment. 
In particular, this latter experiment studies what happens when a failure occurs in a pilot support system built using a classical embedded controller, compared to a system built following the haptic shared control paradigm.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-78Design, Realization and Experimental Evaluation of a Haptic Stick for Shared Control Studies1501715422SchenkMMB20167CSchenkCMasonePMiermeisterHHBülthoffNingbo, China2016-08-00454461In this paper we study whether approximated linear models are accurate enough to predict the vibrations of a cable of a Cable-Driven Parallel Robot (CDPR) for different pretension levels. In two experiments we investigated the damping of a thick steel cable from the CableRobot simulator and measured the motion of the cable when a sinusoidal force is applied at one end of the cable. Using this setup and power spectral density analysis we measured the natural frequencies of the cable and compared these results to the frequencies predicted by two linear models: i) the linearization of partial differential equations of motion for a distributed cable, and ii) the discretization of the cable using a finite elements model. This comparison provides remarkable insights into the limits of approximated linear models as well as important properties of vibrating cables used in CDPR.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published7Modeling and analysis of cable vibrations for a cable-driven parallel robot1501715422DropDMB20167FMDropRDe VriesMMulderHHBülthoffKyoto, Japan2016-08-00177–182In the manual control of a dynamic system, the human controller (HC) is often required to follow a visible and predictable reference path. Using the predictable aspect of a reference signal, through applying feedforward control, the HC can significantly improve performance as compared to a purely feedback control strategy. A proper definition of a signal’s predictability, however, is never given in the literature. 
This paper investigates the predictability of a sum-of-sinusoids target signal, as a function of the number of sinusoid components and of whether the sinusoid frequencies are harmonic. A human-in-the-loop experiment was done, with target signals varying for these two signal characteristics. A combined feedback-feedforward HC model was identified and parameters were estimated. It was found that for all experimental conditions, subjects used a feedforward strategy. Results further showed that subjects were able to perform better for harmonic signals as compared to non-harmonic signals, for signals with roughly the same frequency content.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-177The Predictability of a Target Signal Affects Manual Feedforward Control1501715422OdelgaSB20167MOdelgaPStegagnoHHBülthoffBanff, Alberta, Canada2016-07-13306311Equipped with four actuators, quadrotor Unmanned Aerial Vehicles belong to the family of underactuated systems. The lateral motion of such platforms is strongly coupled with their orientation and consequently it is not possible to track an arbitrary 6D trajectory in space. In this paper, we propose a novel quadrotor design in which the tilt angles of the propellers with respect to the quadrotor body are being simultaneously controlled with two additional actuators by employing the parallelogram principle. Since the velocity of the controlled tilt angles of the propellers does not appear directly in the derived dynamic model, the system cannot be linearized via static feedback. Nevertheless, the system is linearizable at a higher differential order, leading to a dynamic feedback linearization controller. 
Simulations confirm the theoretical findings, highlighting the improved motion capabilities with respect to standard quadrotors.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published5A fully actuated quadrotor UAV with a propeller tilting mechanism: Modeling and control1501715422AhmadRB20167AAhmadERuffHHBülthoffHeidelberg, Germany2016-07-0017281734In this article we present a new method for multi-robot cooperative target tracking based on dynamic baseline stereo vision. The core novelty of our approach includes a computationally light-weight scheme to compute the 3D stereo measurements that exactly satisfy the epipolar constraints and a covariance intersection (CI)-based method to fuse the 3D measurements obtained by each individual robot. Using CI we are able to systematically integrate the robot localization uncertainties as well as the uncertainties in the measurements generated by the monocular camera images from each individual robot into the resulting stereo measurements. Through an extensive set of simulation and real robot results we show the robustness and accuracy of our approach with respect to ground truth. The source code related to this article is publicly accessible on our website and the datasets are available on request.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published6Dynamic baseline stereo vision-based cooperative target tracking1501715422SoykaLSFRM20167FSoykaMLeyrerJSmallwoodCFergusonBERieckeBJMohlerAnaheim, CA, USA2016-07-008588Chronic stress is one of the major problems in our current fast paced society. The body reacts to environmental stress with physiological changes (e.g. accelerated heart rate), increasing the activity of the sympathetic nervous system. Normally the parasympathetic nervous system should bring us back to a more balanced state after the stressful event is over. However, nowadays we are often under constant pressure, with a multitude of stressful events per day, which can result in us constantly being out of balance. 
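The covariance intersection (CI) fusion used in the cooperative target-tracking work above combines two estimates whose cross-correlation is unknown. A minimal sketch, assuming the standard CI form with the weight chosen by trace minimization over a grid; the function name and grid resolution are illustrative, not from the paper:

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, steps=100):
    """Fuse two estimates (x1, P1) and (x2, P2) with unknown cross-correlation
    via covariance intersection:
        P^-1 = w * P1^-1 + (1 - w) * P2^-1,
    with the weight w chosen on a grid to minimize the trace of the fused
    covariance. Returns the fused mean and covariance."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.0, 1.0, steps + 1):
        info = w * I1 + (1.0 - w) * I2          # fused information matrix
        P = np.linalg.inv(info)
        x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
        if best is None or np.trace(P) < best[0]:
            best = (np.trace(P), x, P)
    return best[1], best[2]
```

Unlike a naive Kalman update, the CI result remains consistent even when the two inputs share unmodeled common information.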
This highlights the importance of effective stress management techniques that are readily accessible to a wide audience. In this paper we present an exploratory study investigating the potential use of immersive virtual reality for relaxation with the purpose of guiding further design decisions, especially about the visual content as well as the interactivity of virtual content. Specifically, we developed an underwater world for head-mounted display virtual reality. We performed an experiment to evaluate the effectiveness of the underwater world environment for relaxation, as well as to evaluate whether the underwater world in combination with breathing techniques for relaxation was preferred to standard breathing techniques for stress management. The underwater world was rated as more fun and more likely to be used at home than a traditional breathing technique, while providing a similar degree of relaxation.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published3Enhancing stress management techniques using virtual reality150171501715422RoggenkamperPDvM20167NRoggenkämperDMPoolFMDropMMvan PaassenMMulderWashington, DC, USA2016-06-16787803Fundamental research carried out by McRuer et al. [1, 2] in the 1960s still forms the basis for the mathematical representation of pilot-vehicle systems today. Expressing human skills in the same control engineering terms as the vehicle to be controlled enables scientists to
quantitatively evaluate human operators' manual control behavior. Decades of research have not only proven the validity of functional models as accurate descriptions of human tracking behavior during compensatory tracking tasks [2–5], but also the suitability of ...nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published16Objective ARX Model Order Selection for Multi-Channel Human Operator Identification1501715422RajappaMBS20167SRajappaCMasoneHHBülthoffPStegagnoStockholm, Sweden2016-05-0029712977In this paper we present a robust quadrotor controller for tracking a reference trajectory in the presence of
uncertainties and disturbances. A Super Twisting controller
is implemented using a recently proposed gain adaptation
law, which has the advantage of not requiring the
knowledge of the upper bound of the lumped uncertainties. The controller design is based on the regular form of the quadrotor dynamics, without separation into two nested control loops for position and attitude. The controller is further extended by a feedforward dynamic inversion control that reduces the effort of the sliding mode controller. The higher-order quadrotor dynamic model and proposed controller are validated using a SimMechanics physical simulation with initial error, parameter uncertainties, noisy measurements and external perturbations.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/ICRA-2016-Rajappa.pdfpublished6Adaptive Super Twisting Controller for a Quadrotor UAV1501715422LacheleVPB20167JLächeleJVenrooijPPrettoHHBülthoffWest Palm Beach, FL, USA2016-05-0033103316In this paper we present the results of two experiments performed using a teleoperation setup where operators control a simulated quadrotor in a virtual environment while perceiving visual and inertial motion feedback. Participants of this study performed a series of precision hover tasks. The experiments focused on how different motion feedback definitions affect operator performance and control effort. In the first experiment the effect of including different components of the quadrotor motion in the motion feedback was studied (referred to as "vehicle-related" motion feedback). In the second experiment, the effect of including task-related information in the motion feedback, in the form of a roll motion representing the offset between the desired and actual quadrotor position, was investigated (referred to as "task-related" motion feedback). In both experiments the effects of degraded visual quality were investigated. For both vehicle-related lateral motion feedback and task-related roll motion feedback, we found a significant increase in operator performance. Vehicle-related roll motion feedback showed no effect on operator performance. 
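The super-twisting control law named in the adaptive controller paper above can be illustrated on a scalar sliding variable. This is a textbook sketch with fixed (non-adaptive) gains chosen for a bounded sinusoidal disturbance, not the paper's adaptive gain law; all parameter values are illustrative:

```python
import numpy as np

def super_twisting(s0, k1=2.5, k2=1.1, dt=1e-3, T=5.0,
                   disturbance=lambda t: 0.5 * np.sin(t)):
    """Simulate the super-twisting algorithm on a perturbed integrator
    s_dot = u + d(t), with
        u = -k1 * |s|^(1/2) * sign(s) + v,    v_dot = -k2 * sign(s).
    With k2 larger than the bound on |d_dot|, s converges to a small
    neighborhood of zero in finite time despite the disturbance."""
    s, v = float(s0), 0.0
    for i in range(int(T / dt)):
        t = i * dt
        u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
        v += -k2 * np.sign(s) * dt              # discontinuous integral term
        s += (u + disturbance(t)) * dt          # explicit Euler step
    return s
```

Starting from s0 = 1, the sliding variable settles near zero well within the simulated 5 s, because the integral term v learns to cancel the disturbance.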
Control effort, defined as the overall stick deflection during the trials, decreased in vehicle-state roll motion conditions and increased in task-related motion feedback conditions. The results show the applicability and benefits of providing task-related motion feedback in teleoperation.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published6Effects of vehicle- and task-related motion feedback on operator performance in teleoperation1501715422PicardiGOB20167GPicardiSGeluardiMOlivariHHBülthoffWest Palm Beach, FL, USA2016-05-0017701777The aim of this study is to augment the uncertain dynamics of the helicopter in order to resemble the dynamics of a new kind of vehicle, the so-called Personal Aerial Vehicle. To achieve this goal a two-step procedure is proposed. First, the helicopter model dynamics are augmented with a PID-based dynamic controller. Such a controller implements model following for the nominal helicopter model without uncertainties. Then, an L1 adaptive controller is designed to restore the nominal responses of the augmented helicopter when variations in the identified parameters are considered. The performance of the adaptive controller is evaluated via Monte Carlo simulations. The results show that the application of the adaptive controller to the augmented helicopter dynamics can significantly reduce the effects of uncertainty due to the identification of the helicopter model. For implementation reasons the adaptive controller was applied to a subset of the outputs of the system. However, the underactuation typical of helicopters means that the nominal responses are also tracked well on the outputs that are not directly adapted.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published7L1-based Model Following Control of an Identified Helicopter Model in Hover1501715422OdelgaBS20167MOdelgaHHBülthoffPStegagnoStockholm, Sweden2016-05-0029842990In this paper, we present a collision-free indoor
navigation algorithm for teleoperated multirotor Unmanned
Aerial Vehicles (UAVs). Assuming an obstacle-rich environment, the algorithm keeps track of detected obstacles in the local surroundings of the robot. The detection part of the algorithm is based on measurements from an RGB-D camera and a Bin-Occupancy filter capable of tracking an unspecified number of targets. We use the estimate of the robot’s velocity to update the obstacles’ states when they leave the direct field of view of the sensor. The avoidance part of the algorithm is based on the Model Predictive Control approach. By predicting the
possible future obstacle states, it filters the operator commands to prevent collisions. The method is validated on a platform equipped with its own computational unit, which makes it self-sufficient in terms of external CPUs.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/ICRA-2016-Odelga.pdfpublished6Obstacle Detection, Tracking and Avoidance for a Teleoperated UAV1501715422FlemingMRBB20167RFlemingBJMohlerJRomeroMJBlackMBreidtRoma, Italy2016-02-00333343Advances in 3D scanning technology allow us to create realistic virtual avatars from full-body 3D scan data.
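The command-filtering idea of the teleoperated-UAV avoidance paper above (predict the commanded motion forward and reject commands whose prediction violates clearance) can be sketched without the full MPC machinery. The constant-velocity prediction, the scaling search, and all parameter values here are simplifying assumptions, not the paper's method:

```python
import numpy as np

def filter_command(pos, vel_cmd, obstacles, radius=0.5, horizon=2.0, dt=0.1):
    """Predictive safety filter: forward-simulate the commanded velocity over
    a short horizon under a constant-velocity assumption and return the
    largest scaled-down command whose predicted path stays at least `radius`
    away from every tracked obstacle (given as an (N, 2) array)."""
    obstacles = np.atleast_2d(np.asarray(obstacles, dtype=float))
    vel_cmd = np.asarray(vel_cmd, dtype=float)
    for scale in np.linspace(1.0, 0.0, 11):     # candidate command scalings
        v = scale * vel_cmd
        p = np.asarray(pos, dtype=float)
        safe = True
        for _ in range(int(round(horizon / dt))):
            p = p + v * dt                      # constant-velocity prediction
            if np.min(np.linalg.norm(obstacles - p, axis=1)) < radius:
                safe = False
                break
        if safe:
            return v
    return np.zeros_like(vel_cmd)
```

A command heading straight at an obstacle is scaled down until its predicted path keeps clearance, while a command moving away or parallel passes through unchanged.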
However, negative reactions to some realistic computer-generated humans suggest that this approach might
not always provide the most appealing results. Using styles derived from existing popular character designs,
we present a novel automatic stylization technique for body shape and colour information based on a statistical
3D model of human bodies. We investigate whether such stylized body shapes result in increased perceived
appeal with two different experiments: One focuses on body shape alone, the other investigates the additional
role of surface colour and lighting. Our results consistently show that the most appealing avatar is a partially stylized one. Importantly, avatars with high stylization or no stylization at all were rated to have the least appeal. The inclusion of colour information and improvements to render quality had no significant effect on
the overall perceived appeal of the avatars, and we observe that the body shape primarily drives the change in appeal ratings. For body scans with colour information, we found that a partially stylized avatar was most effective, increasing average appeal ratings by approximately 34%.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/GRAPP-2016-Fleming.pdfpublished10Appealing female avatars from 3D body scans: Perceptual effects of stylization150171542215017GerboniVNJF20167CAGerboniJVenrooijFMNieuwenhuizenAJoosWFichterHHBülthoffSan Diego, CA, USA2016-01-0010021012In this paper an augmentation strategy is implemented with the goal of making the behavior of an actual helicopter similar to that of a new class of aerial systems called Personal Aerial Vehicles (PAVs). PAVs are meant to be flown by flight-naïve pilots, i.e., pilots with minimal flight experience. One feature required for achieving this goal is to have a Translation Rate Command (TRC) response type in the hover and low-speed regime. In this paper, a TRC response type is obtained for a UH-60 helicopter simulation model in hover and low-speed regime through the implementation of nonlinear backstepping control. The responses of the rotorcraft with the TRC response type are evaluated with the metrics defined in the Aeronautical Design Standard ADS-33E-PRF. Simulations show the efficiency of the control scheme in tracking the reference velocities and the achievement of the requirements to have level 1 Handling Qualities (HQ) for the TRC response type.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published10Control Augmentation Strategies for Helicopters used as Personal Aerial Vehicles in Low-speed Regime1501715422OlivariVNPB20167MOlivariJVenrooijFMNieuwenhuizenLPolliniHHBülthoffSan Diego, CA, USA2016-01-00385399Methods for identifying pilot responses commonly assume time-invariant dynamics. 
However, humans are likely to vary their responses during realistic control scenarios. In this work an identification method is developed for estimating time-varying responses to visual and force feedback during a compensatory tracking task. The method describes pilot responses with finite impulse response filters and uses a Regularized Recursive Least Squares (RegRLS) algorithm to estimate the filter coefficients simultaneously. The method was validated in a Monte Carlo simulation study with different levels of remnant noise. With low levels of remnant noise, estimates were accurate and tracked the time-varying behaviour of the simulated responses. On the other hand, estimates showed high variability in the case of large remnant noise. However, parameters of the RegRLS could be further optimized to improve robustness to large remnant noise. Taken together, these findings suggest that the novel RegRLS algorithm could be used to estimate time-varying pilot responses in real human-in-the-loop experiments.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published14Identifying Time-Varying Pilot Responses: A Regularized Recursive Least-Squares Algorithm1501715422GerboniNB20167CAGerboniFMNieuwenhuizenHHBülthoffSan Diego, CA, USA2016-01-0010271040The paper describes the implementation and validation of a nonlinear model of the UH-60 helicopter. The implemented model is based on a physical vehicle and includes various important subsystems in order to increase the model fidelity. The validation is carried out through a Handling Qualities (HQ) evaluation and a comparison with flight data. Various standardized tests have been performed in the time and frequency domain for hover and the forward flight condition. Results obtained have been analyzed according to the criteria defined by the Aeronautical Design Standard ADS-33E-PRF. 
The behavior of our helicopter model is very similar to flight test data of the UH-60 in hover and in forward flight, although some coupling effects are not well described. Overall, the model provides a reliable basis for use in motion-base simulators and as a framework for conducting studies on control augmentation systems.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published13Implementation and Validation of a 6 Degrees-of-Freedom Nonlinear Helicopter Model1501715422MaimeriOBP20167MMaimeriMOlivariHHBülthoffLPolliniSan Diego, CA, USA2016-01-00373384External aids are required to increase safety and performance during the manual control of an aircraft. Automated systems make it possible to surpass the performance usually achieved by pilots. However, they suffer from several issues caused by pilot unawareness of the control command from the automation. Haptic aids can overcome these issues by conveying their control command through forces on the control device. It is possible to design haptic aids that allow pilots to improve performance compared with the baseline condition, even if these are usually outperformed by automation. It is not yet well understood, however, what happens to performance in the event of a failure of the pilot support system. To investigate whether and how a pilot can recover performance after a failure of the haptic or automated support system, a quantitative comparison is needed. An experiment was conducted in which pilots performed a compensatory tracking task with haptic aids and with automation. Half of the runs were affected by a failure of the support system, resulting in complete removal of the support action. The haptic aid and the automation were designed to be equivalent when the pilot was out-of-the-loop, i.e., to provide the same control command. Pilot performance and control effort were then evaluated with pilots in-the-loop and compared to a baseline condition without external aids.
As expected, pilot performance is better with the automated support system than with the haptic aid when no failure happens. When a failure happens, pilots experience a sudden decrease of performance in both cases, but the loss of performance is much higher in the automation case. In addition, and somewhat surprisingly, after the initial loss of performance, pilots flying with the haptic aid return approximately to the performance level they had just before the failure, while pilots flying with automation cannot regain pre-failure levels of performance, at least in the time span of the experiment.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published11On Effects of Failures in Haptic and Automated Pilot Support Systems1501715422SpedicatoNBF20137SSpedicatoGNotarstefanoHHBülthoffAFranchiSingapore2016-00-0095112In this paper we design a nonlinear controller for aggressive maneuvering of a quadrotor. We take a maneuver regulation perspective. Differently from the classical trajectory tracking approach, maneuver regulation does not require following a timed reference state, but a geometric “path” with a velocity (and possibly orientation) profile assigned on it. The proposed controller relies on three main ideas. Given a desired maneuver, i.e., a set of state trajectories equivalent under time translations, the system dynamics is decomposed into dynamics longitudinal and transverse to the maneuver. A space-dependent version of the transverse dynamics is derived by using the longitudinal state, i.e., the arc-length of the path, as an independent variable. Then the controller is obtained as a function of the arc-length, consisting of two terms: a feed-forward term, being the nominal input to apply when on the path at the current arc-length, and a feedback term exponentially stabilizing the state-dependent transverse dynamics. Numerical computations are presented to prove the effectiveness of the proposed strategy.
The controller's performance is tested in the presence of model parameter uncertainty, input noise and saturations. The controller is also tested in a realistic simulation environment validated against an experimental test-bed.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2013/2013n-SpeNotBueFra-preprint.pdfpublished17Aggressive maneuver regulation of a quadrotor UAV1501715422BulthoffWG20162HHBülthoffCWallravenMAGieseSpringerBerlin, Germany2016-00-0020952114Robots that share their environment with humans need to be able to recognize and manipulate objects and users, perform complex navigation tasks, and interpret and react to human emotional and communicative gestures. In all of these perceptual capabilities, the human brain, however, is still far ahead of robotic systems. Hence, taking clues from the way the human brain solves such complex perceptual tasks will help to design better robots. Similarly, once a robot interacts with humans, its behaviors and reactions will be judged by humans – movements of the robot, for example, should be fluid and graceful, and it should not evoke an eerie feeling when interacting with a user. In this chapter, we present Perceptual Robotics as the field of robotics that takes inspiration from perception research and neuroscience to, first, build better perceptual capabilities into robotic systems and, second, to validate the perceptual impact of robotic systems on the user.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published19Perceptual Robotics1501715422YukselSF2016_246BYükselNStaubAFranchi2016-10-0017YukselBF2016_246BYükselGBuondonnoAFranchi2016-10-00SymeonidouOVBC20167ERSymeonidouMOlivariJVenrooijHHBülthoffLLChuangSan Diego, CA, USA2016-11-13The oscillatory suppression of sensorimotor mu-power (i.e., 10-12 Hz) is a robust EEG correlate of motor control.
Simply imagining voluntary limb movement can result in consistent suppression of mu-power, especially in contralateral electrode sites. This is typically exploited by neuroprostheses (e.g., BCI-controlled wheelchairs; Huang et al., 2012) that seek to restore movement to spinal-cord injury patients. In some examples, levels of mu-suppression have also been treated as an index of motor control effort (e.g., Mann et al., 1996). However, mu-suppression in contralateral sites can also be observed during passive limb movements, namely in the absence of voluntary control effort (Formaggio et al., 2013). In this study, we investigate whether patterns of oscillatory EEG activity across contralateral (C3) and ipsilateral (C4) sites discriminate for voluntary control and limb movement. In our study, EEG measurements were taken of ten participants who were required to either actively follow or resist the deflections of a control-loaded side-stick; this respectively required voluntary control in the presence and absence of limb movement. In contrast, they were also tested in conditions with passive or no limb movements, which respectively required them to simply hold on to a moving or stationary side-stick. A repeated-measures 2 x 2 x 2 ANOVA for the factors of electrode site (contralateral vs. ipsilateral), control (active vs. passive), and movement (movement vs. stationary) revealed the following. To begin, there was a significant main effect of lateralized mu-suppression. Suppression of mu-power is larger in the contralateral site compared to the ipsilateral site (F(1,9)=5.10, p=0.05). More importantly, three significant interactions were found: movement x control (F(1,9)=13.1, p<0.01), electrode x movement (F(1,9)=5.78, p=0.04) and electrode x control (F(1,9)=5.81, p=0.039). Limb movement resulted in selective mu-suppression of only the contralateral electrode.
Voluntary control resulted in mu-suppression in both contralateral and ipsilateral electrodes, albeit to a lesser extent in the ipsilateral site. Overall, active resistance against side-stick deflections resulted in the largest levels of mu-suppression. The current results suggest that active voluntary resistance can result in high levels of mu-suppression that do not exhibit strong lateralization. This might go unnoticed in brain-computer-interface and experimental paradigms that estimate control effort by contrasting contralateral to ipsilateral mu-suppression.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0EEG oscillatory modulations (10-12 Hz) discriminate for voluntary motor control and limb movement1501715422SaultonBd20167ASaultonHHBülthoffSde la RosaSan Diego, CA, USA2016-11-12Stored representations of body size and shape as derived from somatosensation are considered to be critical components of perception and action. Recent research has shown the presence of large hand distortions in proprioceptive localization tasks consisting of an overestimation of hand width and an underestimation of finger length. Those results were interpreted as reflecting specific somatosensory perceptual distortion bound to a body model underlying position sense. One important prerequisite to this interpretation is that measured localization task distortions actually stem from body representation. In this study, we re-examine hand distortions underlying position sense and investigate whether these distortions are body-specific or due to non-perceptual factors, e.g. conceptual knowledge. Participants made localization judgments regarding the spatial position of various landmarks on occluded items including their own hand. Our results show that larger hand distortions in localization tasks are likely to be induced by participants' incorrect conceptual knowledge about hand landmarks rather than proprioceptive or somatosensory influences.
Moreover, we show that once we account for such incorrect conceptual knowledge, hand distortions in localization tasks are statistically similar to those of other objects. These results suggest that localization task distortions are not specific to the hand and call for caution when interpreting localization task distortions in terms of body specific effects.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Objects vs. hand: the effect of knuckle misconceptions on localization task distortions1501715422Flad20167NFladParis, France2016-10-07nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0When does the Brain Respond to Information during Visual Scanning?1501715422MeilingerFBMSB20167TMeilingerJFrankensteinJ-PBrescianiBMohlerNSimonHHBülthoffLeipzig, Germany2016-09-19nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Wie erinnern wir räumliches Wissen unseres Wohnortes?150171542215017SrismithZB20167DSrismithMZhaoIBülthoffBarcelona, Spain2016-09-01313People are good at recognising faces, particularly familiar faces. However, little is known about how precisely familiar faces are represented and how increasing familiarity improves the precision of face representation. Here we investigated the precision of face representation for two types of familiar faces: personally familiar faces (i.e. faces of colleagues) and visually familiar faces (i.e. faces learned from viewing photographs). For each familiar face, participants were asked to select the
original face among an array of faces, which varied from highly caricatured (+50%) to highly anti-caricatured (−50%) along the facial shape dimension. We found that for personally familiar faces, participants selected the original faces more often than any other faces. In contrast, for visually familiar faces, the highly anti-caricatured (−50%) faces were selected more often than others, including the original faces. Participants also favoured anti-caricatured faces more than caricatured
faces for both types of familiar faces. These results indicate that people form very precise representations for personally familiar faces, but not for visually familiar faces. Moreover, the more familiar a face is, the more its corresponding representation shifts from a region close to
the average face (i.e. anti-caricatured) to its veridical location in the face space.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-313Precise Representation of Personally, but not Visually, Familiar Faces1501715422delaRosaFB2016_27Sde la RosaYFerstlHHBülthoffBarcelona, Spain2016-09-01294Accurately associating an action with its actor’s identity is fundamental for many – if not all – social cognitive functions. What are the visual processes supporting this ability? Previous research suggests that separate neural substrates support the recognition of facial identity and actions. Here we revisited this widely held assumption and examined the sensitivity of neural action recognition processes to facial identity using behavioral adaptation. We reasoned that if action recognition and
facial identity were mediated by independent visual processes then action adaptation effects should not be modulated by the actor’s facial identity. We used action morphing and an augmented reality setup to examine the neural correlates of action recognition processes within an action adaptation paradigm under close-to-natural conditions. Contrary to the hypothesis that action recognition and facial identity are processed independently, we showed in three experiments that action
adaptation effects in an action categorization task are modulated by facial identity and not by clothing. These findings strongly suggest that action recognition processes are sensitive to facial identity and thereby indicate a close link between actions and facial identity. Such identity-sensitive action recognition mechanisms might support the fundamental social cognitive skill of associating an action with the actor’s identity.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-294The face of actions: Evidence for neural action recognition processes being sensitive for facial identity1501715422ZhaoB2016_27MZhaoIBülthoffBarcelona, Spain2016-08-2927Unlike most everyday objects, faces are processed holistically—they tend to be perceived as indecomposable wholes instead of a collection of independent facial parts. While holistic face processing has been demonstrated with a variety of behavioral tasks, it is predominantly observed
with static faces. Here we investigated three questions about holistic processing of moving faces:
(1) are rigidly moving faces processed holistically? (2) does rigid motion reduce the magnitude of holistic processing? and (3) does holistic processing persist when study and test faces differ in terms of facial motion? Participants completed two composite face tasks (using a complete design), one with static faces and the other with rigidly moving faces. We found that rigidly moving faces
are processed holistically. Moreover, the magnitude of the holistic processing effect observed for moving faces is similar to that observed for static faces. Finally, holistic processing still holds even when the study face is static and the test face is moving or vice versa. These results provide convincing evidence that holistic processing is a general face processing mechanism that applies to both static and moving faces. These findings indicate that rigid facial motion neither promotes part-based face processing nor eliminates holistic face processing.
Funding: The study was supported by the Max Planck Society.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-27Holistic Processing of Static and Rigidly Moving Faces1501715422FademrechtBd2016_27LFademrechtHHBülthoffSde la RosaBarcelona, Spain2016-08-2965A central question in visual neuroscience concerns the degree to which visual representations of actions are used for action execution. Previously, we have shown that during simultaneous action observation and action execution, visual action recognition relies on visual but not motor processes. This research suggests a primacy of visual processes in social interaction scenarios. Here, we provide further evidence for visual processes dominating perception and action in social interactions. We examined the influence of visual processes on motor control. 16 participants were tested in a 3D virtual environment setup. Participants were visually adapted to an action (fist bump or punch) and subsequently categorized an ambiguous morphed action as either fist bump or punch in three experimental conditions. In the first condition, participants responded via key press after having seen the entire test stimulus. In the second, participants responded by carrying out the complementary action after having seen the entire test stimulus. In the third (social interaction) condition, participants carried out the complementary action while observing the test stimulus. We found an antagonistic bias of movement trajectories towards the non-adapted action (adaptation aftereffect) only in the social interaction condition. Our results highlight the importance of visual processes in social interactions.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-65Visual processes dominate perception and action during social interactions1501715422O'MalleyBM20167MO'MalleyHHBülthoffTMeilingerPhiladelphia, PA, USA2016-08-04nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Spatial integration within environmental spaces: Testing predictions from mental walk and mental model1501715422StrickrodtBM20167MStrickrodtHHBülthoffTMeilingerPhiladelphia, PA, USA2016-08-02nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Beyond the border: Separation of space influences memory structure of an object layout1501715422HintereckerLZBBM20167THintereckerCLeroyMZhaoMButzHHBülthoffTMeilingerPhiladelphia, PA, USA2016-08-02nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Gravity as a universal reference direction? Influences on spatial memory for vertical object locations1501715422MolbertTMSBKZG2016_27SMölbertAThalerBMohlerSStreuberMJBlackH-OKarnathSZipfelKEGielTübingen, Germany2016-07-27Anorexia nervosa (AN) is a serious eating disorder that goes along with underweight and high rates of psychological and physical comorbidity. Body image disturbance is a core
symptom of AN, but the distinctive features of this disturbance are as yet unknown. This study uses individual 3D-avatars in virtual reality to investigate the following questions: (1) Do women with AN differ from controls in how accurately they perceive their body weight? (2) Do women with AN generally perceive bodies of their own shape differently than controls, or only when viewing their own body? We investigate 25 women with AN and 25 healthy controls. Based on a 3D body scan, we create individual avatars for each participant. The avatar is manipulated to represent
+/- 5%, 10%, 15% and 20% of the participant’s weight. Additionally, for the control task, we manipulate the identity of the avatar using a standard texture. Avatars are presented on a stereoscopic life-size screen. In the two-alternative forced choice (2AFC) task, participants
see each avatar 20 times for two seconds. After each presentation, they have to decide whether that was the correct or a manipulated avatar. In the Method of Adjustment (MoA) task, participants are asked to adjust each avatar to match both the correct size and their
ideal size. In the control task, participants memorize the body with standard texture and afterwards perform the same 2AFC and MoA tasks with respect to the memorized body.
Additionally, eating pathology, body dissatisfaction and self-esteem are assessed. First results from 19 women with AN and 16 controls show a tendency of patients to be
accurate or to underestimate their current body size as compared to controls. In the control task, both groups accurately memorized and estimated the avatar’s weight. Our preliminary results indicate that body image disturbance in AN is not due to a general deficit in body size perception, but limited to the own person and influenced by evaluation.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Investigating Body Image Disturbance in Anorexia Nervosa Using Biometric Self-Avatars in Virtual Reality150171501715422MeilingerSHCSFd20167TMeilingerMStrickrodtTHintereckerD-SChangASaultonLFademrechtSde la RosaTübingen, Germany2016-07-27The goal of social and spatial cognition is the understanding of human behavior when humans interact with their natural social and spatial environment. In contrast to this, many studies in the field examine social and spatial cognition under controlled but artificial
conditions in which participants are passive observers rather than active agents. Here we present several projects in which we use virtual reality to increase the naturalness of the experimental testing conditions, while keeping the experimental setup under high experimental control. Due to the use of virtual reality and related techniques, participants are able to naturally interact with their environment (e.g. walk through spaces, high five with an avatar) while we alter the visual stimuli in real-time in response to their behavior by means of motion tracking. Using this approach we combine experimental rigor with
increased ecological validity to learn about the cognitive processes actually taking place in life.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Using Virtual Reality to Examine Social and Spatial Cognition1501715422FedorovG2016_27LAFedorovMAGieseJeju, South Korea2016-07-0089The visual perception of body motion can show interesting multi-stability. For example, a walking body silhouette (bottom inset Fig. 83A) is seen alternately as walking in two different directions. For stimuli with minimal texture information, such as shading, this multi-stability disappears. Existing neural models for body motion perception [2–4] do not reproduce perceptual switching. Extending the model, we developed a neurodynamic model that accounts for this multi-stability (Fig. 83A). The core of the model is a two-dimensional neural field that consists of recurrently coupled neurons with selectivity for instantaneous body postures (‘snapshots’). The dimensions of the field encode the keyframe number θ and the view of the walker ϕ. The lateral connectivity of the field stabilizes two competing traveling pulse solutions that encode the perceived temporally changing action patterns (walking in the directions ±45°). The input activity of the field is generated by two visual pathways that recognize body postures from gray-level input movies. One pathway (‘silhouette pathway’) was adapted from previous work and recognizes shapes, mainly based on the contrast edges between the moving figure and the background. The second pathway is specialized for the analysis of luminance gradients inside the moving figure. Both pathways are hierarchical (deep) architectures, built from detectors that reproduce known properties of cortical neurons. Higher levels of the hierarchies extract more complex features with higher degree of position/scale invariance. The field activity is read out by two Motion Pattern (MP) neurons, which encode the two possible perceived walking directions.
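The switching dynamics described in this abstract can be illustrated with a far simpler toy model than the published two-pathway neural field: two units encoding the two perceived walking directions, coupled by mutual inhibition and subject to slow adaptation and noise. This is a generic perceptual-rivalry sketch, not the authors' architecture; all parameter values are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    """Steep sigmoid nonlinearity with threshold 0.2 (illustrative choice)."""
    return 1.0 / (1.0 + np.exp(-10.0 * (x - 0.2)))

def simulate(steps=8000, dt=0.1, drive=0.5, inhibition=1.0,
             adapt_gain=0.5, tau_u=1.0, tau_a=50.0, seed=0):
    """Two mutually inhibiting 'percept' units with slow adaptation and noise."""
    rng = np.random.default_rng(seed)
    u = np.array([0.6, 0.1])   # activities of the two direction-selective units
    a = np.zeros(2)            # slow adaptation variables
    trace = np.empty((steps, 2))
    for t in range(steps):
        # each unit receives constant drive, cross-inhibition, and adaptation
        inp = drive - inhibition * u[::-1] - adapt_gain * a
        u = u + dt / tau_u * (-u + sigmoid(inp)) + 0.01 * rng.standard_normal(2)
        u = np.clip(u, 0.0, 1.0)
        a = a + dt / tau_a * (-a + u)  # adaptation slowly tracks activity
        trace[t] = u
    return trace

trace = simulate()
dominant = trace[:, 0] > trace[:, 1]          # which percept currently wins
switches = int(np.count_nonzero(np.diff(dominant.astype(int))))
print("perceptual switches:", switches)
```

Because adaptation slowly erodes the winning unit's advantage, dominance alternates over time, mimicking the bistable percept; removing the noise and adaptation (the analogue of adding a disambiguating shading cue) freezes the winner.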
When tested with an unshaded silhouette stimulus, the model produces randomly switching percepts that alternate between the walking directions (±45°) (Fig. 83B, C). Addition of shading cues disambiguates the percept and removes the bistability (Fig. 83D). The developed architecture accounts for the disambiguation by shape-from-shading.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-89A model for multi-stable dynamics in action recognition modulated by integration of silhouette and shading cues1501715422HardcastleSBK20167BHardcastleDASchwynKBierigHGKrappLondon, UK2016-06-00700701The stabilization of gaze may involve multiple sensory systems. In blowflies, two visual pathways provide input to the gaze stabilization system: the high-resolution compound
eyes and the simple dorsal ocelli. Individually, the two pathways cover different dynamic input ranges, incur different processing delays, and suffer from different levels of sensor and processing noise. Information from multiple sensory pathways must be integrated in order to effect appropriate movements of the head to stabilize gaze; however, it is not entirely clear how this happens. Using high-speed videography, we investigated the combination of information from the
two visual pathways at the behavioral output. We measured compensatory rotations of the head in response to a simulated roll rotation of a false horizon around the fly, oscillating at up to 10 Hz. We found that the ocellar input reduces the response delay by an average of 5 ms but does not significantly affect the response gain or bandwidth. Our result suggests a nonlinear integration of compound eye and ocellar information. We are now performing intracellular recordings from elements along the visuomotor
pathway likely to be involved in the integration of motion vision and ocellar signals, in response to the same visual stimulus used to evoke head movements in our behavioral
experiments. This will allow us to study how signals affected by different processing delays along the two visual pathways are combined to ultimately reduce the delay of the behavioral output.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published1Integration of Multiple Visual Inputs in the Blowfly1501715422YukselSBF20167BYükselNStaubGBuondonnoAFranchiStockholm, Sweden2016-05-20An aerial manipulator is a flying robot, which can manipulate its environment through physical interaction. We have studied in our previous works the physical interaction of aerial vehicles using IDA-PBC and nonlinear external wrench observers [1, 2]. We also presented the design of a novel light-weight elastic joint-arm for quadrotors .
In this poster, we consider the type of aerial manipulators in which the aerial robot is
- equipped with a rigid or elastic-joint arm
- equipped with multiple manipulator arms with elastic or rigid actuatorsnonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2016/ICRA-Workshop-Poster-2016-Yueksel.pdfpublished0Differential Flatness and Control of the Aerial Manipulators with Mixed Rigid/Elastic Joints: Controllability from Single Joint Arm to the Multiple Arms1501715422ThalerGMGSBM20167AThalerMNGeussSCMölbertKEGielSStreuberMJBlackBJMohlerSt. Pete Beach, FL, USA2016-05-181400Previous research has suggested that inaccuracies in own body size estimation can largely be explained by a known error in perceived magnitude, called contraction bias (Cornelissen, Bester, Cairns, Tovée & Cornelissen, 2015). According to this, own body size estimation is biased towards an average reference body, such that individuals with a low body mass index (BMI) should overestimate their body size and high BMI individuals should underestimate their body size. However, previous studies have mainly focused on self-body size evaluation of patients suffering from anorexia nervosa. In this study, we tested healthy females varying in BMI to investigate whether personal body size influences accuracy of body size estimation and sensitivity to weight changes, reproducing a scenario of standing in front of a full-length mirror. We created personalized avatars with a 4D full-body scanning system that records participants’ body geometry and texture, and altered the weight of the avatars based on a statistical body model. In two psychophysical experiments, we presented the stimuli on a stereoscopic, large-screen immersive display, and asked participants to respond to whether the body they saw was their own. Additionally, we used several questionnaires to assess participants’ self-esteem, eating behavior, and their attitudes towards their body shape and weight.
Our results show that participants, across the range of BMI, veridically perceived their own body size, contrary to what is suggested by the contraction bias hypothesis. Interestingly, we found that BMI influenced sensitivity to weight changes in the positive direction, such that people with higher BMIs were more willing to accept bigger bodies as their own. BMI did not influence sensitivity to weight changes in the negative direction.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-1400Investigating the influence of personal BMI on own body size perception in females using self-avatars150171542215017GeussMTM20167MGeussSCMölbertAThalerBJMohlerSt. Pete Beach, FL, USA2016-05-17986Our perception of our body, and its size, is important for many aspects of everyday life. Using a variety of measures, previous research demonstrated that people typically overestimate the size of their bodies (Longo & Haggard, 2010). Given that self-body size perception is informed from many different experiences, it is surprising that people do not perceive their bodies veridically. Here, we asked, whether different visual experiences of our bodies influence how large we estimate our body’s size. Specifically, participants estimated the width of four different body parts (feet, hips, shoulders, and head) as well as a noncorporeal object with No Visual Access, Self-Observation (1st person visual access), or looking through a Mirror (2nd person visual access) using a visual matching task. If estimates when given visual access (through mirror or 1st person perspective) differ from estimates made with no visual access, it would suggest that this method of viewing one’s body has less influence on how we represent the size of our bodies. Consistent with previous research, results demonstrated that in all conditions, each body part was overestimated. 
Interestingly, in the No Visual Access and Mirror conditions, the degree of overestimation was larger for upper body parts compared to lower body parts, and there were no significant differences between the No Visual Access and Mirror conditions. There was, however, a significant difference between the Self-Observation condition and the other two conditions when estimating one’s shoulder width. In the Self-Observation condition, participants were more accurate in estimating shoulder width. The similarity of results in the No Visual Access and Mirror conditions suggests that our representation of our body size may be partly based on experiences viewing one’s body in reflective surfaces.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-986Body size estimations: the role of visual information from a first-person and mirror perspective150171542215017DobsBR20167KDobsIBülthoffLReddySt. Pete Beach, FL, USA2016-05-16925Integration of multiple sensory cues pertaining to the same object is essential for precise and accurate perception. The optimal strategy to estimate an object’s property is to weight sensory cues proportional to their relative reliability (i.e., the inverse of the variance). Recent studies showed that human observers apply this strategy when integrating low-level unisensory and multisensory signals, but evidence for high-level perception remains scarce. Here we asked if human observers optimally integrate high-level visual cues in a socially critical task, namely the recognition of a face. We therefore had subjects identify one of two previously learned synthetic facial identities (“Laura” and “Susan”) using facial form and motion.
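The reliability-weighting rule stated in this abstract (cue weights proportional to the inverse of the cue variances, with a correspondingly reduced combined variance) can be made concrete with a small numeric sketch; the variance values below are illustrative, not the study's data.

```python
# Sketch of optimal (reliability-weighted) integration of two cues,
# e.g. facial form and facial motion. Variances are made-up examples.

def optimal_weights(var_form, var_motion):
    """Weights proportional to reliability (inverse variance), normalized."""
    r_form, r_motion = 1.0 / var_form, 1.0 / var_motion
    total = r_form + r_motion
    return r_form / total, r_motion / total

def combined_variance(var_form, var_motion):
    """Variance of the optimally combined estimate (always <= either cue)."""
    return 1.0 / (1.0 / var_form + 1.0 / var_motion)

# Example: the motion cue is twice as variable as the form cue.
w_form, w_motion = optimal_weights(0.04, 0.08)
sigma2_comb = combined_variance(0.04, 0.08)

print(w_form, w_motion)   # the more reliable form cue gets the larger weight
print(sigma2_comb)        # smaller than either single-cue variance
```

This is exactly the prediction tested in the abstract: the combined-cue variance should fall below both single-cue variances, and the empirical weights should match these reliability-based ones.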
Five subjects performed a 2AFC identification task (i.e., “Laura or Susan?”) based on dynamic face stimuli that systematically varied in the amount of form and motion information they contained about each identity (10% morph steps from Laura to Susan). In single-cue conditions one cue (e.g., form) was varied while the other (e.g., motion) was kept uninformative (50% morph). In the combined-cue condition both cues varied by the same amount. To assess whether subjects weight facial form and motion proportional to their reliability, we also introduced cue-conflict conditions in which both cues were varied but separated by a small conflict (±10%).
We fitted psychometric functions to the proportion of “Susan” choices pooled across subjects (fixed-effects analysis) for each condition. As predicted by optimal cue integration, the empirical combined variance was lower than the single-cue variances (p < 0.001, bootstrap test), and did not differ from the optimal combined variance (p > 0.5). Moreover, no difference was found between empirical and optimal form and motion weights (p > 0.5). Our data thus suggest that humans integrate high-level visual cues, such as facial form and motion, proportional to their reliability to yield a coherent percept of a facial identity.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-925Optimal integration of facial form and motion during face recognition1501715422ZhaoB20167MZhaoIBülthoffSt. Pete Beach, FL, USA2016-05-15731Holistic processing—the tendency to perceive objects as indecomposable wholes—has long been viewed as a process specific for faces or objects-of-expertise. While some researchers argue that holistic processing is unique for processing of faces (domain-specific hypothesis), others propose that it results from an automatized attention strategy developed with expertise (i.e., expertise hypothesis). While these theories differ in what causes holistic processing, they share a fundamental constraint for its generalization: Non-face objects cannot elicit face-like holistic processing in the absence of expertise. Contrary to this prevailing view, here we show that line-patterns with salient Gestalt information (i.e., connectedness, closure, and continuity between parts) can be processed as holistically as faces without any training. This face-like holistic processing of non-face objects also occurred when we tested faces and line patterns in different sessions on different days, suggesting that it was not due to the context effect incurred by testing both types of stimuli within a single session.
Moreover, weakening the saliency of Gestalt information in line patterns reduced holistic processing for these stimuli, indicating the crucial role of Gestalt information in eliciting holistic processing. Taken together, these results indicate that, besides a top-down route based on expertise, holistic processing can be achieved via a bottom-up route relying merely on object-based information. Therefore, face-like holistic processing can extend beyond the domains of faces and objects-of-expertise, contrary to current dominant theories.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-731Holistic Processing of Unfamiliar Line Patterns1501715422FademrechtNBBd20167LFademrechtJNieuwenhuisIBülthoffNBarracloughSde la RosaSt. Pete Beach, FL, USA2016-05-14280In real life, humans need to recognize actions even when the actor is surrounded by a crowd of people, but little is known about action recognition in cluttered environments. In the current study, we investigated whether a crowd influences action recognition with an adaptation paradigm. Using life-sized moving stimuli presented on a panoramic display, 16 participants adapted to either a hug or a clap action and subsequently viewed an ambiguous test stimulus (a morph between both adaptors). The task was to categorize the test stimulus as either ‘clap’ or ‘hug’. The change in perception of the ambiguous action due to adaptation is referred to as an ‘adaptation aftereffect’. We tested the influence of a cluttered background (a crowd of people) on the adaptation aftereffect under three experimental conditions: ‘no crowd’, ‘static crowd’ and ‘moving crowd’. Additionally, we tested the adaptation effect at 0° and 40° eccentricity. Participants showed a significant adaptation aftereffect at both eccentricities (p < .001). The results reveal that the presence of a crowd (static or moving) has no influence on the action adaptation effect (p = .07), in either central or peripheral vision.
Our results suggest that action recognition mechanisms and action adaptation aftereffects are robust even in complex and distracting environments.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-280Does action recognition suffer in a crowded environment?1501715422delaRosaFB20167Sde la RosaYFerstlHHBülthoffSt. Pete Beach, FL, USA2016-05-14268It has been suggested that the motor system is essential for various social cognitive functions including the perception of actions in social interactions. Typically, the influence of the motor system on action recognition has been addressed in studies in which participants are merely action observers. This is in stark contrast to real social interactions in which humans often execute and observe actions at the same time. To overcome this discrepancy, we investigated the contribution of the motor system to action recognition when participants concurrently observed and executed actions. As a control, participants also observed and executed actions separately (i.e. not concurrently). Specifically, we probed the sensitivity of action recognition mechanisms to motor action information in both unimodal and bimodal motor-visual adaptation conditions. We found that unimodal visual adaptation to an action changed the percept of a subsequently presented ambiguous action away from the adapted action (adaptation aftereffect). We found a similar adaptation aftereffect in the unimodal non-visual motor adaptation condition confirming that also motor action information contributes to action recognition. However, in bimodal adaptation conditions, in which participants executed and observed actions at the same time, adaptation aftereffects were governed by the visual but not motor action information. Our results demonstrate that the contribution of the motor system to action recognition is small in conditions of simultaneous action observation and execution. 
Because humans often concurrently execute and observe actions in social interactions, our results suggest that action recognition in social interaction is mainly based on visual action information.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-268Does the motor system contribute to action recognition in social interactions?1501715422FichtnerHGZABK20167NFichtnerAHenningIGiapitzakisNZoelchNAvdievichCBoeschRKreisBern, Switzerland2016-03-3136Introduction: Magnetic resonance spectroscopy benefits from using ultrahigh field scanners, as both the signal to noise ratio (SNR) and the separation of peaks improve. Inclusion of the downfield part of the spectrum (left of water peak) in addition to the generally used upfield part of the 1H MR spectrum is expected to allow for better monitoring of pathologies and metabolism in humans. The downfield part
at 5–10 ppm is less well characterized than the upfield spectrum, although some data are available for animal brain at high fields, as well as human brain at 3T. Experiments have been performed to elucidate the downfield spectrum in human brain and to quantify metabolite relaxation times T1 and T2 in grey matter at 7T using a series of spectra with variable inversion recovery (IR) and echo time (TE) delays. Initial downfield experiments have also been performed in humans at 9.4T. Materials and Methods: Acquisition methods at 7T used a Philips 7T whole body scanner (UZH/ETH Zürich), with a voxel of interest placed in the visual cortex. A series of TEs and IRs was acquired in a total of 22 healthy volunteers. At 9.4T, spectra were acquired in three healthy volunteers on a Siemens whole-body MRI scanner (MPI Tuebingen). Results and Discussion: The spectra acquired at 7T and 9.4T demonstrate significant improvements in SNR and peak separation compared to those at lower field strengths. The averaged data sets from the 7T series were combined to develop a spectral model of partially overlapping signals. This heuristic model describes the experimental data well, and the results for many of the peaks are very consistent across subjects. T1 values found at 7T are mostly higher than those found at 3T, in particular for the NAA peak. Several peaks show a particularly short T1 in comparison to the others, indicating that they predominantly originate from macromolecules. The T2 values are in general much shorter than those found for upfield peaks.
stimuli; however, how the brain estimates the reliability of each source is unclear, with most studies assuming that the reliability is directly available. In practice, however, the reliability of an information source requires inference too, and may depend on both current and previous information, a problem that can neatly be placed in a Bayesian framework. We performed three audio-visual spatial localization experiments in which we manipulated the uncertainty of the visual stimulus over time. Subjects were presented with simultaneous auditory and visual cues in the horizontal plane and were tasked with locating the auditory cue. Due to the well-known ventriloquist illusion, responses were biased towards the visual cue, depending on its reliability. We found that subjects changed their estimate of the visual reliability not only based on the presented visual stimulus, but were also influenced by the history of visual stimuli. This finding implies that the estimated reliability is governed by a learning process, here
operating on a timescale on the order of 10 seconds. Using model comparison we found for all three experiments
that a hierarchical Bayesian model that assumes a slowly varying reliability is best able to explain the data. Together these results indicate that the subjects’ estimated reliability of stimuli changes dynamically and thus that the brain utilizes the temporal dynamics of the environment by combining current and past estimates of reliability.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Using the past to estimate sensory uncertainty1501715422Nooij20167SAENooijUlm, Germany2016-02-06nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0The role of eye movements, head movements and vection in visually induced motion sickness1501715422NestidB20167ANestiKde WinkelHHBülthoffUlm, Germany2016-02-05nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Accumulation of sensory evidence in self-motion perception: long stimulus exposure facilitates discrimination of sinusoidal yaw rotations1501715422D039Intino201615GD'Intino2016-09-29nonotspecifiedpublishedDesign and Experimental Evaluation of Haptic Support Systems for Pilot Training1501715422Olivari201615MOlivari2016-06-00nonotspecifiedpublishedMeasuring Pilot Control Behavior in Control Tasks with Haptic Feedback1501715422Bulthoff2016_710HHBülthoffNesti201610ANestiBulthoff2016_410HHBülthoffScheer201610MScheerChuang201610LLChuangGlatz201610CGlatzNesti2016_210ANestiBulthoff2016_510HHBülthoffGlatzBC201610CGlatzHHBülthoffLChuangZhaoB2016_310MZhaoIBülthoffChuangS201610LChuangMScheerChuang2016_310LChuangChuang2016_210LLChuangPretto201610PPrettoFedorovG201610LAFedorovMAGieseMeilinger201610CHoreisCFosterKWatanabeHHBülthoffTMeilingerChangFGBd201610D-SChangLFedorovMGieseHHBülthoffSde la RosaDobsR201610KDobsLReddyChangBd201610D-SChangHHBülthoffSde la RosaThaler201610AThalerMeilinger2016_210TMeilingerBulthoff2016_310HHBülthoffJVenrooijdeWinkelB201610KNde 
WinkelHHBülthoffBulthoffV201610HHBülthoffJVenrooijLohmannKMB201610JLohmannJKurzTMeilingerMVButzHintereckerLBBM201610THintereckerCLeroyMVButzHHBülthoffTMeilingerVenrooijB201610JVenrooijHHBülthoffBulthoff2016_210HHBülthoffMeilingerRHBM201610TMeilingerJRebaneAHensonHHBülthoffHAMallotBulthoff201610HHBülthoffDobs20151KDobsLogos VerlagBerlin, Germany2015-00-00Dynamic faces are highly complex, ecologically and socially relevant stimuli which we encounter almost every day. When and what we extract from this rich source of information needs to be well coordinated by the face perception system. The current thesis investigates how this coordination is achieved.
Part I comprises two psychophysical experiments examining the mechanisms underlying facial motion processing. Facial motion is represented as high-dimensional spatio-temporal data defining which part of the face is moving in which direction over time. Previous studies suggest that facial motion can be adequately represented using simple approximations. I argue against the use of synthetic facial motion by showing that the face perception system is highly sensitive to manipulations of the natural spatio-temporal characteristics of facial motion. The neural processes coordinating facial motion processing may rely on two mechanisms: first, a sparse but meaningful spatio-temporal code representing facial motion; second, a mechanism that extracts distinctive motion characteristics. Evidence for the latter hypothesis is provided by the observation that facial motion, when performed in unconstrained contexts, aids identity judgments.
Part II presents a functional magnetic resonance imaging (fMRI) study investigating the neural processing of expression and identity information in dynamic faces. Previous studies proposed a distributed neural system for face perception which distinguishes between invariant (e.g., identity) and changeable (e.g., expression) aspects of faces. Attention is a potential candidate mechanism to coordinate the processing of these two facial aspects. Two findings support this hypothesis: first, attention to expression versus identity of dynamic faces dissociates cortical areas assumed to process changeable aspects from those involved in discriminating invariant aspects of faces; second, attention leads to a more precise neural representation of the attended facial feature. Interactions between these two representations may be mediated by a part of the inferior occipital gyrus and the superior temporal sulcus, which is supported by the observation that the latter area represented both expression and identity, while the former represented identity information irrespective of the attended feature.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published108Behavioral and Neural Mechanisms Underlying Dynamic Face Perception1501715422Esins20151JEsinsLogos VerlagBerlin, Germany2015-00-00Face recognition is one of the most important abilities for everyday social interactions. Congenital prosopagnosia, also referred to as "face blindness", describes the innate, lifelong impairment in recognizing other people by their face. About 2% of the population is affected.
This thesis aimed to investigate different aspects of face processing in prosopagnosia in order to gain a clearer picture and a better understanding of this heterogeneous impairment. In a first study, various aspects of face recognition and perception were investigated to allow for a better understanding of the nature of prosopagnosia. The results replicated previous findings and helped to resolve discrepancies between former studies. In addition, it was found that prosopagnosics show an irregular response behavior in tests for holistic face recognition. We propose that prosopagnosics either switch between strategies or respond randomly when performing these tests. In a second study, the general face recognition deficit observed in prosopagnosia was compared to face recognition deficits occurring when dealing with other-race faces. Most humans find it hard to recognize faces of an unfamiliar race, a phenomenon called the "other-race effect". The study investigated whether there is a possible common mechanism underlying prosopagnosia and the other-race effect, as both are characterized by problems in recognizing faces. The results allowed us to reject this hypothesis and yielded new insights into similarities and dissimilarities between prosopagnosia and the other-race effect. In the last study, a possible treatment of prosopagnosia was investigated. This was based on a single case in which a prosopagnosic reported a sudden improvement of her face recognition abilities after she started a special diet.
The different studies cover diverse aspects of prosopagnosia: the nature of prosopagnosia and measurement of its characteristics, comparison to other face recognition impairments, and treatment options. The results serve to broaden the knowledge about prosopagnosia and to gain a more detailed picture of this impairment.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published137Face processing in congenital prosopagnosia1501715422Venrooij20151JVenrooijLogos VerlagBerlin, Germany2015-00-00Vehicle accelerations affect the human body in various ways. In some cases, accelerations cause involuntary motions of limbs like arms and hands. If someone is engaged in a manual control task at the same time, these involuntary limb motions can lead to involuntary control forces and control inputs. This phenomenon is called biodynamic feedthrough (BDFT). The control of many different vehicles is known to be vulnerable to BDFT effects, such as that of helicopters, aircraft, electric wheelchairs and hydraulic excavators.
The fact that BDFT reduces comfort, control performance and safety in a wide variety of vehicles and under many different circumstances has motivated numerous efforts to measure, model and mitigate these effects. It is known that BDFT dynamics depend not only on vehicle dynamics and control device dynamics, but also on factors such as seating dynamics, disturbance direction, disturbance frequency and the presence of seat belts and arm rests. The most complex and influential factor in BDFT is the human body. It is through the dynamics of the human body that vehicle accelerations are transferred into involuntary limb motions and, consequently, into involuntary control inputs. Human body dynamics vary between persons with different body sizes and weights, but also within one person over time.
The goal of the research was to increase the understanding of BDFT to allow for effective and efficient mitigation of the BDFT problem. The work dealt with several aspects of biodynamic feedthrough, but focused on the influence of the variable neuromuscular dynamics on BDFT dynamics. The approach of the research consisted of three parts: first, a method was developed to accurately measure BDFT. Then, several BDFT models were developed that describe the BDFT phenomenon based on various different principles. Finally, using the insights from the previous steps, a novel approach to BDFT mitigation was proposed and experimentally validated.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published440Measuring, modeling and mitigating biodynamic feedthrough1501715422Nesti20151ANestiLogos VerlagBerlin, Germany2015-00-00Everyday life requires humans to move through the environment, while completing crucial tasks such as retrieving nourishment, avoiding perils or controlling motor vehicles. Success in these tasks largely relies in a correct perception of self-motion, i.e. the continuous estimation of one's body position and its derivatives with respect to the world. The processes underlying self-motion perception have fascinated neuroscientists for more than a century and large bodies of neural, behavioural and physiological studies have been conducted to discover how the central nervous system integrates available sensory information to create an internal representation of the physical motion.
The goal of this PhD thesis is to extend current knowledge on self-motion perception by focusing on conditions that closely resemble typical aspects of everyday life. In the works conducted within this thesis, I isolate different components typical of everyday life motion and employ psychophysical methodologies to systematically investigate their effect on human self-motion sensitivity. Particular attention is dedicated to the human ability to discriminate between motions of different intensity. How this is achieved has been a fundamental question in the study of perception since the seminal works of Weber and Fechner. When tested over wide ranges of rotations and translations, participants' sensitivity (i.e. their ability to detect motion changes) is found to decrease with increasing motion intensities, revealing a nonlinearity in the perception of self-motion that is not present at the level of ocular reflexes or in neural responses of sensory afferents.
The relationship between the stimulus intensity and the smallest intensity change perceivable by the participants can be mathematically described by a power law, regardless of the sensory modality investigated (visual or inertial) and of whether visual and inertial cues were presented alone or congruently combined, such as during natural movements. Individual perceptual law parameters were fit to experimental data for upward and downward translations and yaw rotations with visual-only, inertial-only and combined visual-inertial motion cues. Besides wide ranges of motion intensities, everyday life scenarios also provide complex motion patterns involving combinations of rotational and translational motion, visual and inertial sensory cues, and physical and mental workload.
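The perceptual law just described relates the differential threshold ΔI to stimulus intensity I as ΔI = k·I^a, which becomes a straight line in log-log space. A minimal sketch of fitting such a law, using made-up threshold data rather than the thesis data (function name and values are hypothetical):

```python
import math

# Hypothetical (intensity, differential threshold) pairs, e.g. peak
# velocities in deg/s; these numbers are for illustration only.
data = [(15.0, 2.0), (30.0, 3.1), (45.0, 4.0), (60.0, 4.8)]

def fit_power_law(points):
    """Least-squares line in log-log space: log dI = log k + a * log I."""
    xs = [math.log(i) for i, _ in points]
    ys = [math.log(d) for _, d in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    k = math.exp(my - a * mx)
    return k, a

k, a = fit_power_law(data)
# An exponent 0 < a < 1 means thresholds grow with intensity, but more
# slowly than Weber's law (a = 1) would predict.
```

With the toy data above the fitted exponent comes out near the 0.36-0.62 range reported in the thesis, but only because the data were chosen that way.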
The question of how different combinations of these factors affect motion sensitivity was experimentally addressed within the framework of driving simulation and revealed that sensitivity might strongly decrease in more realistic conditions, where participants not only focus on perceiving a 'simple' motion stimulus (e.g. a sinusoidal profile at a specific frequency) but are, instead, actively engaged in a dynamic driving simulation. Applied benefits of the present thesis include advances in the field of vehicle motion simulation, where knowledge of human self-motion perception supports the development of state-of-the-art algorithms to control simulator motion. This allows for reproducing, within a safe and controlled environment, driving or flying experiences that are perceptually realistic to the user. Furthermore, the present work will guide future research into the neural basis of perception and action.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published211On the Perception of Self-motion: from Perceptual Laws to Car Driving Simulation1501715422LeeBM20151S-WLeeHHBülthoffK-RMüllerSpringerDordrecht, The Netherlands2015-00-00nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published213Recent Progress in Brain and Cognitive Engineering1501715422Piryankov20151IPiryankovaLogos VerlagBerlin, Germany2015-00-00Technological advances in computer graphics, three-dimensional scanning and motion-tracking technologies have contributed to an increased use of self-avatars in immersive virtual reality (VR).
Self-avatars are used, for example, in visualization and simulation, but also in clinical applications and for entertainment purposes. It is therefore important to gain new insights into users' perception of their own body and of the self-avatar, as well as their spatial perception, and to investigate the influence of the self-avatar on spatial perception in the virtual world.
Using modern VR technology, I investigated how changes to the self-avatar alter the perception of one's own body and of space. The results show that self-avatars need not have exactly the same dimensions as the user's body for users to identify with their self-avatar.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published143The influence of a self-avatar on space and body perception in immersive virtual reality1501715422Kaulard20151KKaulardLogos VerlagBerlin, Germany2015-00-00One of the defining attributes of the human species is sophisticated communication, for which facial expressions are crucial. Traditional research has so far mainly investigated a minority of 6 basic emotional expressions displayed as pictures. Despite the important insights of this approach, its ecological validity is limited: facial movements express more than emotions, and facial expressions are more than just pictures. The objective of the present thesis is therefore to improve the understanding of facial expression recognition by investigating the internal representations of a large range of facial expressions, displayed both as static pictures and as dynamic videos.
To this end, it was necessary to develop and validate a new facial expression database which includes 20,000 stimuli of 55 expressions (study 1). Perceptual representations of the six basic emotional expressions were previously found to rely on evaluation of valence and arousal; study 2 showed that this evaluation generalises to many more expressions, particularly when displayed as videos. While it is widely accepted that knowledge influences perception, how the two are linked is largely unknown; study 3 investigated this question by asking how knowledge about facial expressions, instantiated as conceptual representations, relates to perceptual representations of these expressions. A strong link was found which changed with the kind of expressions and the type of display.
In probably the most extensive behavioural studies (with regards to the number of facial expressions used) to date, this thesis suggests that there are commonalities but also differences in processing of emotional and of other types of facial expressions. Thus, to understand facial expression processing, one needs to consider more than the 6 basic emotional expressions. These findings outline first steps towards a new domain in facial expression research, which has implications for a number of research and application fields where facial expressions play a role, ranging from social, developmental, and clinical psychology to computer vision and affective computing research.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published224Visual Perception of Emotional and Conversational Facial Expressions1501715422TrutoiuGKSM201528LTrutoiuMGeussSKuhlBSandersRMantiukBulthoffKP201528HHBülthoffAKemenyPPrettoNestiBPB20153ANestiKABeykirchPPrettoHHBülthoff2015-12-001223335533564To successfully perform daily activities such as maintaining posture or running, humans need to be sensitive to self-motion over a large range of motion intensities. Recent studies have shown that the human ability to discriminate self-motion in the presence of either inertial-only motion cues or visual-only motion cues is not constant but rather decreases with motion intensity. However, these results do not yet allow for a quantitative description of how self-motion is discriminated in the presence of combined visual and inertial cues, since little is known about visual–inertial perceptual integration and the resulting self-motion perception over a wide range of motion intensity. Here we investigate these two questions for head-centred yaw rotations (0.5 Hz) presented either in darkness or combined with visual cues (optical flow with limited lifetime dots). 
Participants discriminated a reference motion, repeated unchanged for every trial, from a comparison motion, iteratively adjusted in peak velocity so as to measure the participants’ differential threshold, i.e. the smallest perceivable change in stimulus intensity. A total of six participants were tested at four reference velocities (15, 30, 45 and 60 °/s). Results are combined for further analysis with previously published differential thresholds measured for visual-only yaw rotation cues using the same participants and procedure. Overall, differential thresholds increase with stimulus intensity following a trend described well by three power functions with exponents of 0.36, 0.62 and 0.49 for inertial, visual and visual–inertial stimuli, respectively. Despite the different exponents, differential thresholds do not depend on the type of sensory input significantly, suggesting that combining visual and inertial stimuli does not lead to improved discrimination performance over the investigated range of yaw rotations.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published11Human discrimination of head-centred visual–inertial yaw rotations1501715422OlivariNVBP20133MOlivariFNieuwenhuizenJVenrooijHHBülthoffLPollini2015-12-00124527802791In this paper, identification methods are proposed to estimate the neuromuscular and visual responses of a multiloop pilot model. A conventional and widely used technique for simultaneous identification of the neuromuscular and visual systems makes use of cross-spectral density estimates. This paper shows that this technique requires a specific noninterference hypothesis, often implicitly assumed, that may be difficult to meet during actual experimental designs. A mathematical justification of the necessity of the noninterference hypothesis is given. Furthermore, two methods are proposed that do not have the same limitations. 
The first method is based on autoregressive models with exogenous inputs, whereas the second one combines cross-spectral estimators with interpolation in the frequency domain. The two identification methods are validated by offline simulations and contrasted with the classic method. The results reveal that the classic method fails when the noninterference hypothesis is not fulfilled; by contrast, the two proposed techniques give reliable estimates. Finally, the three identification methods are applied to experimental data from a closed-loop control task with pilots. The two proposed techniques give comparable estimates, different from those obtained by the classic method. The differences match those found with the simulations. Thus, the two identification methods provide a good alternative to the classic method and make it possible to simultaneously estimate a human's neuromuscular and visual responses in cases where the classic method fails.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published11Methods for Multiloop Identification of Visual and Neuromuscular Pilot Responses1501715422ChangBBd2015_23D-SChangFBurgerHHBülthoffSde la Rosa2015-12-006616Perceiving social information such as the cooperativeness of another person is an important part of human interaction. But can people perceive the cooperativeness of others even without any visual or auditory information? In a novel experimental setup, we connected two people with a rope and had them accomplish a point-collecting task together while they could not see or hear each other. We observed a consistently emerging turn-taking behavior in the interactions and installed a confederate in a subsequent experiment who either minimized or maximized this behavior. Participants experienced this only through the haptic force-feedback of the rope and made evaluations about the confederate after each interaction.
We found that perception of cooperativeness was significantly affected only by the manipulation of this turn-taking behavior. Gender- and size-related judgments also significantly differed. Our results suggest that people can perceive social information such as the cooperativeness of other people even in situations where possibilities for communication are minimal.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published5The Perception of Cooperativeness Without Any Visual or Auditory Communication1501715422StrickrodtOW20153MStrickrodtMO'MalleyJMWiener2015-12-0019366112We present two experiments investigating how navigators deal with ambiguous landmark information when learning unfamiliar routes. In the experiments we presented landmark objects repeatedly along a route, which allowed us to manipulate how informative single landmarks were (1) about the navigators' location along the route and (2) about the action navigators had to take at that location. Experiment 1 demonstrated that reducing location informativeness alone did not affect route learning performance. While reducing both location and action informativeness led to decreased route learning performance, participants still performed well above chance level. This demonstrates that they used other information than just the identity of landmark objects at their current position to disambiguate their location along the route. To investigate how navigators distinguish between visually identical intersections, we systematically manipulated the identity of landmark objects and the actions required at preceding intersections in Experiment 2. Results suggest that the direction of turn at the preceding intersections was sufficient to tell two otherwise identical intersections apart. Together, results from Experiments 1 and 2 suggest that route knowledge is more complex than simple stimulus-response associations and that neighboring places are tightly linked. 
These links encompass not only sequence information but also directional information, which is used to identify the correct direction of travel at subsequent locations and can also be used for self-localization.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published11This Place Looks Familiar: How Navigators Distinguish Places with Ambiguous Landmark Objects When Learning Novel Routes1501715422GianiBOKN20153ASGianiPBelardinelliEOrtizMKleinerUNoppeney2015-11-00122203–213In everyday life, our auditory system is bombarded with many signals in complex auditory scenes. Limited processing capacities allow only a fraction of these signals to enter perceptual awareness. This magnetoencephalography (MEG) study used informational masking to identify the neural mechanisms that enable auditory awareness. On each trial, participants indicated whether they detected a pair of sequentially presented tones (i.e., the target) that were embedded within a multi-tone background.
We analysed MEG activity for ‘hits’ and ‘misses’, separately for the first and second tones within a target pair. Comparing physically identical stimuli that were detected or missed provided insights into the neural processes underlying auditory awareness. While the first tone within a target elicited a stronger early P50m on hit trials, only the second tone evoked a negativity at 150 ms, which may index segregation of the tone pair from the multi-tone background. Notably, a later sustained deflection peaking between 300 and 500 ms (P300m) was the only component that was significantly amplified for both tones when they were detected, pointing towards its key role in perceptual awareness.
Additional Dynamic Causal Modelling analyses indicated that the negativity at 150 ms underlying auditory stream segregation is mediated predominantly via changes in intrinsic connectivity within auditory cortices. By contrast, the later P300m response as a signature of perceptual awareness relies on interactions between parietal and auditory cortices.
In conclusion, our results suggest that successful detection and hence auditory awareness of a two-tone pair within complex auditory scenes rely on recurrent processing between auditory and higher-order parietal cortices.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-203Detecting tones in complex auditory scenes15017154221501718826GeussSCTM20153MNGeussJKStefanucciSHCreem-RegehrWBThompsonBJMohler2015-11-0075712351247Objective: Our goal was to evaluate the degree to which display technologies influence the perception of size in an image.
Background: Research suggests that factors such as whether an image is displayed stereoscopically, whether a user’s viewpoint is tracked, and the field of view of a given display can affect users’ perception of scale in the displayed image.
Method: Participants directly estimated the size of a gap by matching the distance between their hands to the gap width and judged their ability to pass unimpeded through the gap in one of five common implementations of three display technologies (two head-mounted displays [HMD] and a back-projection screen).
Results: Both measures of gap width were similar for the two HMD conditions and the back-projection with stereo and tracking. For the displays without tracking, stereo and monocular conditions differed from each other, with monocular viewing showing underestimation of size.
Conclusions: Display technologies that are capable of stereoscopic display and tracking of the user’s viewpoint are beneficial as perceived size does not differ from real-world estimates. Evaluations of different display technologies are necessary as display conditions vary and the availability of different display technologies continues to grow.
Applications: The findings are important to those using display technologies for research, commercial, and training purposes when it is important for the displayed image to be perceived at an intended scale.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published12Effect of Display Technology on Perceived Scale of Space150171542215017MeilingerFWBH2014_23TMeilingerJFrankensteinKWatanabeHHBülthoffCHölscher2015-11-0067910001008In everyday life, navigators often consult a map before they navigate to a destination (e.g., a hotel, a room, etc.). However, not much is known about how humans gain spatial knowledge from seeing a map and direct navigation together. In the present experiments, participants learned a simple multiple corridor space either from a map only, only from walking through the virtual environment, first from the map and then from navigation, or first from navigation and then from the map. Afterwards, they conducted a pointing task from multiple body orientations to infer the underlying reference frames. We constructed the learning experiences in a way such that map-only learning and navigation-only learning triggered spatial memory organized along different reference frame orientations. When learning from maps before and during navigation, participants employed a map- rather than a navigation-based reference frame in the subsequent pointing task. Consequently, maps caused the employment of a map-oriented reference frame found in memory for highly familiar urban environments ruling out explanations from environmental structure or north preference. When learning from navigation first and then from the map, the pattern of results reversed and participants employed a navigation-based reference frame. 
This priority of learning order suggests that, despite considerable differences between map and navigation learning, participants did not use the more salient or generally more useful information, but relied on the reference frame established first.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/Psychol-Res-2015-Meilinger.pdfpublished8Reference frames in learning from maps and navigation1501715422KrimmelBBMDBRK20153MKrimmelMBreidtMBacherSMüller-HagedornKDietzHHBülthoffSReinertSKluba2015-10-004136490e501eBACKGROUND:
With the advent of computer-assisted three-dimensional surface imaging and rapid data processing, oral and maxillofacial surgeons and orthodontists can now analyze facial growth three-dimensionally. Normative data, however, are still rare and inconsistent. The aim of the present study was to establish a valid reference system and to give normative data for facial growth.
Three-dimensional facial surface images were obtained from 344 healthy Caucasian children (aged 0 to 7 years). The images were put in correspondence by means of six landmarks close to the skull base (bilateral exocanthion, endocanthion, and otobasion inferius). Growth curves for 21 landmarks were estimated in all three dimensions.
Facial regions close to the skull base (orbit and ear) showed a biphasic growth pattern, with accelerated growth during the first year of life that subsided to a decreased and linear velocity thereafter. Landmarks on the nose, lips, and chin demonstrated either a curvilinear or a linear growth pattern.
The rapid increase of the orbit and ear region in infancy is a secondary phenomenon to the rapid growth of the neurocranium during the first year of life. Thereafter, maxillary and mandibular growth prevails. The present study gives three-dimensional normative data for an expanded growth span between birth and childhood.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published11Three-Dimensional Normal Facial Growth from Birth to the Age of 7 Years1501715422BiegCBB20153H-JBiegLLChuangHHBülthoffJ-PBresciani2015-09-00923325272538Before initiating a saccade to a moving target, the brain must take into account the target’s eccentricity as well as its movement direction and speed. We tested how the kinematic characteristics of the target influence the time course of this oculomotor response. Participants performed a step-ramp task in which the target object stepped from a central to an eccentric position and moved at constant velocity either to the fixation position (foveopetal) or further to the periphery (foveofugal). The step size and target speed were varied. Of particular interest were trials that exhibited an initial saccade prior to a smooth pursuit eye movement. Measured saccade reaction times were longer in the foveopetal than in the foveofugal condition. In the foveopetal (but not the foveofugal) condition, the occurrence of an initial saccade, its reaction time as well as the strength of the pre-saccadic pursuit response depended on both the target’s speed and the step size. A common explanation for these results may be found in the neural mechanisms that select between oculomotor response alternatives, i.e., a saccadic or smooth response.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published11Asymmetric saccade reaction times to smooth pursuit1501715422ZhaoW2015_23MZhaoWHWarren2015-09-0014296–109Path integration has long been thought of as an obligatory process that automatically updates one’s position and orientation during navigation. 
This has led to the hypotheses that path integration serves as a back-up system in case landmark navigation fails, and a reference system that detects discrepant landmarks. Three experiments tested these hypotheses in humans, using a homing task with a catch-trial paradigm. Contrary to the back-up system hypothesis, when stable landmarks unexpectedly disappeared on catch trials, participants were completely disoriented, and only then began to rely on path integration in subsequent trials (Experiment 1). Contrary to the reference system hypothesis, when stable landmarks unexpectedly shifted by 115° on catch trials, participants failed to detect the shift and were completely captured by the landmarks (Experiment 2). Conversely, when chronically unstable landmarks unexpectedly remained in place on catch trials, participants failed to notice and continued to navigate by path integration (Experiment 3). In the latter two cases, they gradually sensed the instability (or stability) of landmarks on later catch trials. These results demonstrate that path integration does not automatically serve as a back-up system, and does not function as a reference system on individual sorties, although it may contribute to monitoring environmental stability over time. Rather than being automatic, the roles of path integration and landmark navigation are thus dynamically modulated by the environmental context.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-96Environmental stability modulates the role of path integration in human navigation1501715422StefanucciCTLG20153JKStefanucciSHCreem-RegehrWBThompsonDALessardMNGeuss2015-09-00321215223Accurate perception of the size of objects in computer-generated imagery is important for a growing number of applications that rely on absolute scale, such as medical visualization and architecture. 
Addressing this problem requires both the development of effective evaluation methods and an understanding of what visual information might contribute to differences between virtual displays and the real world. In the current study, we use 2 affordance judgments—perceived graspability of an object or reaching through an aperture—to compare size perception in high-fidelity graphical models presented on a large screen display to the real world. Our goals were to establish the use of perceived affordances within spaces near to the observer for evaluating computer graphics and to assess whether the graphical displays were perceived similarly to the real world. We varied the nature of the affordance task and whether or not the display enabled stereo presentation. We found that judgments of grasping and reaching through can be made effectively with screen-based displays. The affordance judgments revealed that sizes were perceived as smaller than in the real world. However, this difference was reduced when stereo viewing was enabled or when the virtual display was viewed before the real world.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published8Evaluating the accuracy of size perception on screen-based displays: Displayed objects appear smaller than real objects1501715422SimHGK20143E-JSimHBHelbigMGrafMKiefer2015-09-0092529072918Recent evidence suggests an interaction between the ventral visual-perceptual and dorsal visuo-motor brain systems during the course of object recognition. However, the precise function of the dorsal stream for perception remains to be determined. The present study specified the functional contribution of the visuo-motor system to visual object recognition using functional magnetic resonance imaging and event-related potential (ERP) during action priming. 
Primes were movies showing hands performing an action with an object, with the object itself erased from the movie, followed by a manipulable target object, which either afforded a similar or a dissimilar action (congruent vs. incongruent condition). Participants had to recognize the target object within a picture–word matching task. Priming-related reductions of brain activity were found in frontal and parietal visuo-motor areas as well as in ventral regions including inferior and anterior temporal areas. Effective connectivity analyses suggested functional influences of parietal areas on anterior temporal areas. ERPs revealed priming-related source activity in visuo-motor regions at about 120 ms and later activity in the ventral stream at about 380 ms. Hence, rapidly initiated visuo-motor processes within the dorsal stream functionally contribute to visual object recognition in interaction with ventral stream processes dedicated to visual analysis and semantic integration.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published11When Action Observation Facilitates Visual Perception: Activation in Visuo-Motor Areas Contributes to Object Recognition15017188241501715422SoykaBB20153FSoykaHHBülthoffMBarnett-Cowan2015-08-00810114Humans are capable of moving about the world in complex ways. Every time we move, our self-motion must be detected and interpreted by the central nervous system in order to make appropriate sequential movements and informed decisions. The vestibular labyrinth consists of two unique sensory organs, the semi-circular canals and the otoliths, that are specialized to detect rotation and translation of the head, respectively. While thresholds for pure rotational and translational self-motion are well understood, surprisingly little research has investigated the relative role of each organ on thresholds for more complex motion.
Eccentric (off-center) rotations, during which the participant faces away from the center of rotation, stimulate both organs and are thus well suited for investigating integration of rotational and translational sensory information. Ten participants completed a psychophysical direction discrimination task for pure head-centered rotations, translations and eccentric rotations with 5 different radii. Discrimination thresholds for eccentric rotations decreased with increasing radii, indicating that additional tangential accelerations (which increase with radius length) increased sensitivity. Two competing models were used to predict the eccentric thresholds based on the pure rotation and translation thresholds: one assuming that information from the two organs is integrated in an optimal fashion and another assuming that motion discrimination is solved solely by relying on the sensor which is most strongly stimulated. Our findings clearly show that information from the two organs is integrated. However, the measured thresholds for 3 of the 5 eccentric rotations are even more sensitive than predictions from the optimal integration model, suggesting that additional non-vestibular sources of information may be involved.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published13Integration of Semi-Circular Canal and Otolith Cues for Direction Discrimination during Eccentric Rotations1501715422GrabeBSR20153VGrabeHHBülthoffDScaramuzzaPRobuffo Giordano2015-07-0083411141135For the control of unmanned aerial vehicles (UAVs) in GPS-denied environments, cameras have been widely exploited as the main sensory modality for addressing the UAV state estimation problem. However, the use of visual information for ego-motion estimation presents several theoretical and practical difficulties, such as data association, occlusions, and lack of direct metric information when exploiting monocular cameras.
In this paper, we address these issues by considering a quadrotor UAV equipped with an onboard monocular camera and an inertial measurement unit (IMU). First, we propose a robust ego-motion estimation algorithm for recovering the UAV scaled linear velocity and angular velocity from optical flow by exploiting the so-called continuous homography constraint in the presence of planar scenes. Then, we address the problem of retrieving the (unknown) metric scale by fusing the visual information with measurements from the onboard IMU. To this end, two different estimation strategies are proposed and critically compared: a first exploiting the classical extended Kalman filter (EKF) formulation, and a second one based on a novel nonlinear estimation framework. The main advantage of the latter scheme lies in the possibility of imposing a desired transient response to the estimation error when the camera moves with a constant acceleration norm with respect to the observed plane. We indeed show that, when compared against the EKF on the same trajectory and sensory data, the nonlinear scheme yields considerably superior performance in terms of convergence rate and predictability of the estimation. The paper is then concluded by an extensive experimental validation, including an onboard closed-loop control of a real quadrotor UAV meant to demonstrate the robustness of our approach in real-world conditions.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published21Nonlinear ego-motion estimation from optical flow for online control of a quadrotor UAV1501715422DobrickiM20153MDobrickiBJMohler2015-07-00744814820When looking into a mirror healthy humans usually clearly perceive their own face. Such an unambiguous face self-perception indicates that an individual has a discrete facial self-representation and thereby the involvement of a self-other face distinction mechanism. 
We have stroked the trunk of healthy individuals while they watched the trunk of a virtual human that was facing them being synchronously stroked. Subjects sensed self-identification with the virtual body, which was accompanied by a decrease of their self-other face distinction. This suggests that face self-perception involves the self-other face distinction and that this mechanism is underlying the formation of a discrete representation of one’s face. Moreover, the self-identification with another’s body that we find suggests that the perception of one’s full body affects the self-other face distinction. Hence, changes in self-other face distinction can indicate alterations of body self-perception, and thereby serve to elucidate the relationship of face and body self-perception.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published6Self-Identification With Another’s Body Alters Self-Other Face Distinction150171542215017KimCPCWBK20153JKimYGChungJ-YParkS-CChungCWallravenHHBülthoffS-PKim2015-06-00610117Perceptual sensitivity to tactile roughness varies across individuals for the same degree of roughness. A number of neurophysiological studies have investigated the neural substrates of tactile roughness perception, but the neural processing underlying the strong individual differences in perceptual roughness sensitivity remains unknown. In this study, we explored the human brain activation patterns associated with the behavioral discriminability of surface texture roughness using functional magnetic resonance imaging (fMRI). First, a whole-brain searchlight multi-voxel pattern analysis (MVPA) was used to find brain regions from which we could decode roughness information. The searchlight MVPA revealed four brain regions showing significant decoding results: the supplementary motor area (SMA), contralateral postcentral gyrus (S1), and superior portion of the bilateral temporal pole (STP). 
Next, we evaluated the behavioral roughness discrimination sensitivity of each individual using the just-noticeable difference (JND) and correlated this with the decoding accuracy in each of the four regions. We found that only the SMA showed a significant correlation between neuronal decoding accuracy and JND across individuals; participants with a smaller JND (i.e., better discrimination ability) exhibited higher decoding accuracy from their voxel response patterns in the SMA. Our findings suggest that multivariate voxel response patterns presented in the SMA represent individual perceptual sensitivity to tactile roughness, and that people with greater perceptual sensitivity to tactile roughness are likely to have more distinct neural representations of different roughness levels in their SMA.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published16Decoding Accuracy in Supplementary Motor Cortex Correlates with Perceptual Sensitivity to Tactile Roughness1501715422ZhaoW20153MZhaoWHWarren2015-06-00626915924How do people combine their sense of direction with their use of visual landmarks during navigation? Cue-integration theory predicts that such cues will be optimally integrated to reduce variability, whereas cue-competition theory predicts that one cue will dominate the response direction. We tested these theories by measuring both accuracy and variability in a homing task while manipulating information about path integration and visual landmarks. We found that the two cues were near-optimally integrated to reduce variability, even when landmarks were shifted up to 90°. Yet the homing direction was dominated by a single cue, which switched from landmarks to path integration when landmark shifts were greater than 90°. These findings suggest that cue integration and cue competition govern different aspects of the homing response: Cues are integrated to reduce response variability but compete to determine the response direction.
The results are remarkably similar to data on animal navigation, which implies that visual landmarks reset the orientation, but not the precision, of the path-integration system.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published9How You Get There From Here: Interaction of Visual Landmarks and Path Integration in Human Navigation1501715422deWinkelKB20153KNde WinkelMKatliarHHBülthoff2015-05-00510120It has been shown that the Central Nervous System (CNS) integrates visual and inertial information in heading estimation for congruent multisensory stimuli and stimuli with small discrepancies. Multisensory information should, however, only be integrated when the cues are redundant. Here, we investigated how the CNS constructs an estimate of heading for combinations of visual and inertial heading stimuli with a wide range of discrepancies. Participants were presented with 2s visual-only and inertial-only motion stimuli, and combinations thereof. Discrepancies between visual and inertial heading ranging between 0-90° were introduced for the combined stimuli. In the unisensory conditions, it was found that visual heading was generally biased towards the fore-aft axis, while inertial heading was biased away from the fore-aft axis. For multisensory stimuli, it was found that five out of nine participants integrated visual and inertial heading information regardless of the size of the discrepancy; for one participant, the data were best described by a model that explicitly performs causal inference. For the remaining three participants the evidence could not readily distinguish between these models. The finding that multisensory information is integrated is in line with earlier findings, but the finding that even large discrepancies are generally disregarded is surprising. 
Possibly, people are insensitive to discrepancies in visual-inertial heading angle because such discrepancies are only encountered in artificial environments, making a neural mechanism to account for them otiose. An alternative explanation is that detection of a discrepancy may depend on stimulus duration, where sensitivity to detect discrepancies differs between people.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published19Forced Fusion in Multisensory Heading Estimation1501715422SaultonDBd20153ASaultonTJDoddsHHBülthoffSde la Rosa2015-05-00523314711479Accurate knowledge about size and shape of the body derived from somatosensation is important to locate one’s own body in space. The internal representation of these body metrics (body model) has been assessed by contrasting the distortions of participants’ body estimates across two types of tasks (localization task vs. template matching task). Here, we examined to which extent this contrast is linked to the human body. We compared participants’ shape estimates of their own hand and non-corporeal objects (rake, post-it pad, CD-box) between a localization task and a template matching task. While most items were perceived accurately in the visual template matching task, they appeared to be distorted in the localization task. All items’ distortions were characterized by larger length underestimation compared to width. This pattern of distortion was maintained across orientation for the rake item only, suggesting that the biases measured on the rake were bound to an item-centric reference frame. This was previously assumed to be the case only for the hand. Although similar results can be found between non-corporeal items and the hand, the hand appears significantly more distorted than other items in the localization task. Therefore, we conclude that the magnitude of the distortions measured in the localization task is specific to the hand. 
Our results are in line with the idea that the localization task for the hand measures contributions of both an implicit body model that is not utilized in landmark localization with objects and other factors that are common to objects and the hand.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published8Objects exhibit body model like shape distortions1501715422LeyrerLBM2015_23MLeyrerSALinkenaugerHHBülthoffBJMohler2015-05-00510123In human perception, the ability to determine eye height is essential, because eye height is used to scale heights of objects, velocities, affordances and distances, all of which allow for successful environmental interaction. It is well understood that eye height is fundamental to determine many of these percepts. Yet, how eye height itself is provided is still largely unknown. While the information potentially specifying eye height in the real world is naturally coincident in an environment with a regular ground surface, these sources of information can be easily divergent in similar and common virtual reality scenarios. Thus, we conducted virtual reality experiments where we manipulated the virtual eye height in a distance perception task to investigate how eye height might be determined in such a scenario. We found that humans rely more on their postural cues for determining their eye height if there is a conflict between visual and postural information and little opportunity for perceptual-motor calibration is provided. This is demonstrated by the predictable variations in their distance estimates. 
Our results suggest that the eye height in such circumstances is informed by postural cues when estimating egocentric distances in virtual reality and consequently, does not depend on an internalized value for eye height.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published22The Importance of Postural Cues for Determining Eye Height in Immersive Virtual Reality150171542215017KimSWLB20153JKimJSchultzTRoheCWallravenS-WLeeHHBülthoff2015-04-00143556555663Emotions can be aroused by various kinds of stimulus modalities. Recent neuroimaging studies indicate that several brain regions represent emotions at an abstract level, i.e., independently from the sensory cues from which they are perceived (e.g., face, body, or voice stimuli). If emotions are indeed represented at such an abstract level, then these abstract representations should also be activated by the memory of an emotional event. We tested this hypothesis by asking human participants to learn associations between emotional stimuli (videos of faces or bodies) and non-emotional stimuli (fractals). After successful learning, fMRI signals were recorded during the presentations of emotional stimuli and emotion-associated fractals. We tested whether emotions could be decoded from fMRI signals evoked by the fractal stimuli using a classifier trained on the responses to the emotional stimuli (and vice versa). This was implemented as a whole-brain searchlight, multivoxel activation pattern analysis, which revealed successful emotion decoding in four brain regions: posterior cingulate cortex (PCC), precuneus, MPFC, and angular gyrus. The same analysis run only on responses to emotional stimuli revealed clusters in PCC, precuneus, and MPFC. Multidimensional scaling analysis of the activation patterns revealed clear clustering of responses by emotion across stimulus types. 
Our results suggest that PCC, precuneus, and MPFC contain representations of emotions that can be evoked by stimuli that carry emotional information themselves or by stimuli that evoke memories of emotional stimuli, while angular gyrus is more likely to take part in emotional memory retrieval.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published8Abstract Representations of Associated Emotions in the Human Brain15017154221501718826BulthoffN20153IBülthoffFNNewell2015-04-001379–21Several studies have provided evidence in favour of a norm-based representation of faces in memory. However, such models have hitherto failed to take account of how other person-relevant information affects face recognition performance. Here we investigated whether distinctive or typical auditory stimuli affect the subsequent recognition of previously unfamiliar faces and whether the type of auditory stimulus matters. In this study participants learned to associate either unfamiliar distinctive and typical voices or unfamiliar distinctive and typical sounds to unfamiliar faces. The results indicated that recognition performance was better to faces previously paired with distinctive than with typical voices but we failed to find any benefit on face recognition when the faces were previously associated with distinctive sounds. These findings possibly point to an expertise effect, as faces are usually associated to voices. More importantly, it suggests that the memory for visual faces can be modified by the perceptual quality of related vocal information and more specifically that facial distinctiveness can be of a multi-sensory nature. 
These results have important implications for our understanding of the structure of memory for person identification.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-9Distinctive voices enhance the visual recognition of unfamiliar faces1501715422RoheN2015_23TRoheUNoppeney2015-04-00515116To obtain a coherent percept of the environment, the brain should integrate sensory signals from common sources and segregate those from independent sources. Recent research has demonstrated that humans integrate audiovisual information during spatial localization consistent with Bayesian Causal Inference (CI). However, the decision strategies that human observers employ for implicit and explicit CI remain unclear. Further, despite the key role of sensory reliability in multisensory integration, Bayesian CI has never been evaluated across a wide range of sensory reliabilities. This psychophysics study presented participants with spatially congruent and discrepant audiovisual signals at four levels of visual reliability. Participants localized the auditory signals (implicit CI) and judged whether auditory and visual signals came from common or independent sources (explicit CI). Our results demonstrate that humans employ model averaging as a decision strategy for implicit CI; they report an auditory spatial estimate that averages the spatial estimates under the two causal structures weighted by their posterior probabilities. Likewise, they explicitly infer a common source during the common-source judgment when the posterior probability for a common source exceeds a fixed threshold of 0.5. Critically, sensory reliability shapes multisensory integration in Bayesian CI via two distinct mechanisms: First, higher sensory reliability sensitizes humans to spatial disparity and thereby sharpens their multisensory integration window. Second, sensory reliability determines the relative signal weights in multisensory integration under the assumption of a common source. 
In conclusion, our results demonstrate that Bayesian CI is fundamental for integrating signals of variable reliabilities.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published15Sensory reliability shapes perceptual inference via two mechanisms15017188261501715422LinkenaugerBM20143SALinkenaugerHHBülthoffBJMohler2015-04-0070393–401Considerable empirical evidence has shown influences of the action capabilities of the body on the perception of sizes and distances. Generally, as one’s action capabilities increase, the perception of the relevant distance (over which the action is to be performed) decreases and vice versa. As a consequence, it has been proposed that the body’s action capabilities act as a perceptual ruler, which is used to measure perceived sizes and distances. In this set of studies, we investigated this hypothesis by assessing the influence of arm’s reach on the perception of distance. By providing participants with a self-representing avatar seen in a first-person perspective in virtual reality, we were able to introduce novel and completely unfamiliar alterations in the virtual arm’s reach to evaluate their impact on perceived distance. Using both action-based and visual matching measures, we found that virtual arm’s reach influenced perceived distance in virtual environments. Due to the participants’ inexperience with the reach alterations, we were also able to assess the amount of experience with the new arm’s reach required to influence perceived distance. We found that minimal experience reaching with the virtual arm can influence perceived distance. However, some reaching experience is required.
Merely having a long or short virtual arm, even one that is synchronized to one's movements, is not enough to influence distance perception if one has no experience reaching.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-393Virtual arm's reach influences perceived distances but only after experience reaching150171542215017NestiBPB2014_33ANestiKABeykirchPPrettoHHBülthoff2015-03-003233861869While moving through the environment, humans use vision to discriminate different self-motion intensities and to control their actions (e.g. maintaining balance or controlling a vehicle). How the intensity of visual stimuli affects self-motion perception is an open, yet important, question. In this study, we investigate the human ability to discriminate perceived velocities of visually induced illusory self-motion (vection) around the vertical (yaw) axis. Stimuli, generated using a projection screen (70 × 90 deg field of view), consist of a natural virtual environment (360 deg panoramic colour picture of a forest) rotating at constant velocity. Participants control stimulus duration to allow for a complete vection illusion to occur in every single trial. In a two-interval forced-choice task, participants discriminate a reference motion from a comparison motion, adjusted after every presentation, by indicating which rotation feels stronger. Motion sensitivity is measured as the smallest perceivable change in stimulus intensity (differential threshold) for eight participants at five rotation velocities (5, 15, 30, 45 and 60 deg/s). Differential thresholds for circular vection increase with stimulus velocity, following a trend well described by a power law with an exponent of 0.64. The time necessary for complete vection to arise is slightly but significantly longer for the first stimulus presentation (average 11.56 s) than for the second (9.13 s) and does not depend on stimulus velocity.
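The power law reported above can be made concrete with a short sketch; only the exponent (0.64) is given in the abstract, so the scale factor k below is an arbitrary illustrative constant.

```python
def differential_threshold(velocity, k=1.0, exponent=0.64):
    """Smallest perceivable change in yaw velocity, modeled as the power
    law dv = k * v**exponent (k is an assumed, illustrative constant)."""
    return k * velocity ** exponent

# Thresholds grow sublinearly with stimulus velocity: raising velocity
# 12-fold (5 -> 60 deg/s) raises the threshold only by 12**0.64 (~4.9x).
for v in (5, 15, 30, 45, 60):
    print(v, differential_threshold(v))
```

The sublinear exponent is what the abstract's interpretation rests on: sensitivity (the inverse of the threshold) is relatively higher for the slow rotations that dominate everyday experience.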
Results suggest that lower differential thresholds (higher sensitivity) are associated with smaller rotations, because they occur more frequently during everyday experience. Moreover, results also suggest that vection is facilitated by recent exposure, possibly related to a visual motion after-effect.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published8Self-motion sensitivity to visual yaw rotations in humans1501715422LeeWKCLPC20143I-SLeeCWallravenJKongD-SChangHLeeH-JParkYChae2015-03-00140148–155The aim of this study was to compare behavioral and functional brain responses to the act of inserting needles into the body in two different contexts, treatment and stimulation, and to determine whether the behavioral and functional brain responses to a subsequent pain stimulus were also context dependent. Twenty-four participants were randomly divided into two groups: an acupuncture treatment (AT) group and an acupuncture stimulation (AS) group. Each participant received three different types of stimuli, consisting of tactile, acupuncture, and pain stimuli, and was given behavioral assessments during fMRI scanning. Although the applied stimuli were physically identical in both groups, the verbal instructions differed: participants in the AS group were primed to consider the acupuncture as a painful stimulus, whereas the participants in the AT group were told that the acupuncture was part of therapeutic treatment. Acupuncture yielded greater activation in reward-related areas (ventral striatum) of the brain in the AT group when compared to the AS group. Brain activation in response to pain stimuli was significantly attenuated in the bilateral secondary somatosensory cortex and the right dorsolateral prefrontal cortex after prior acupuncture needle stimulation in the AT group but not in the AS group. Inserting needles into the body in the context of treatment activated reward circuitries in the brain and modulated pain responses in the pain matrix.
Our findings suggest that pain induced by therapeutic tools in the context of a treatment is modulated differently in the brain, demonstrating the power of context in medical practice.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-148When pain is not only pain: Inserting needles into the body evokes distinct reward-related brain responses in the context of a treatment1501715422RyllBR20143MRyllHHBülthoffPRobuffo Giordano2015-02-00223540556Standard quadrotor unmanned aerial vehicles (UAVs) possess a limited mobility because of their inherent underactuation, that is, availability of four independent control inputs (the four propeller spinning velocities) versus the 6 degrees of freedom parameterizing the quadrotor position/orientation in space. Thus, the quadrotor pose cannot track arbitrary trajectories in space (e.g., it can hover on the spot only when horizontal). Because UAVs are more and more employed as service robots for interaction with the environment, this loss of mobility due to their underactuation can constitute a limiting factor. In this paper, we present a novel design for a quadrotor UAV with tilting propellers which is able to overcome these limitations. Indeed, the additional set of four control inputs actuating the propeller tilting angles is shown to yield full actuation to the quadrotor position/orientation in space, thus allowing it to behave as a fully actuated flying vehicle. We then develop a comprehensive modeling and control framework for the proposed quadrotor, and subsequently illustrate the hardware and software specifications of an experimental prototype. 
Finally, the results of several simulations and real experiments are reported to illustrate the capabilities of the proposed novel UAV design.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published16A Novel Overactuated Quadrotor Unmanned Aerial Vehicle: Modeling, Control, and Experimental Validation1501715422AllerGCWN20153MAllerAGianiVConradMWatanabeUNoppeney2015-02-0016918To interact effectively with the environment the brain integrates signals from multiple senses. It is currently unclear to what extent spatial information can be integrated across different senses in the absence of awareness. Combining dynamic continuous flash suppression and spatial audiovisual stimulation, the current study investigated whether a sound facilitates a concurrent visual flash to elude flash suppression and enter perceptual awareness depending on audiovisual spatial congruency. Our results demonstrate that a concurrent sound boosts unaware visual signals into perceptual awareness. Critically, this process depended on the spatial congruency of the auditory and visual signals pointing towards low level mechanisms of audiovisual integration. Moreover, the concurrent sound biased the reported location of the flash as a function of flash visibility. The spatial bias of sounds on reported flash location was strongest for flashes that were judged invisible. Our results suggest that multisensory integration is a critical mechanism that enables signals to enter conscious perception.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published7A spatially collocated sound thrusts a flash into awareness1501718826150171542115017188241501715422RoheN20153TRoheUNoppeney2015-02-00213118To form a veridical percept of the environment, the brain needs to integrate sensory signals from a common source but segregate those from independent sources. 
Thus, perception inherently relies on solving the “causal inference problem.” Behaviorally, humans solve this problem optimally as predicted by Bayesian Causal Inference; yet, the underlying neural mechanisms are unexplored. Combining psychophysics, Bayesian modeling, functional magnetic resonance imaging (fMRI), and multivariate decoding in an audiovisual spatial localization task, we demonstrate that Bayesian Causal Inference is performed by a hierarchy of multisensory processes in the human brain. At the bottom of the hierarchy, in auditory and visual areas, location is represented on the basis that the two signals are generated by independent sources (= segregation). At the next stage, in posterior intraparietal sulcus, location is estimated under the assumption that the two signals are from a common source (= forced fusion). Only at the top of the hierarchy, in anterior intraparietal sulcus, the uncertainty about the causal structure of the world is taken into account and sensory signals are combined as predicted by Bayesian Causal Inference. Characterizing the computational operations of signal interactions reveals the hierarchical nature of multisensory perception in human neocortex. It unravels how the brain accomplishes Bayesian Causal Inference, a statistical computation fundamental for perception and cognition. Our results demonstrate how the brain combines information in the face of uncertainty about the underlying causal structure of the world.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published17Cortical Hierarchies Perform Bayesian Causal Inference in Multisensory Perception15017154221501718826LeyrerLBM20153MLeyrerSALinkenaugerHHBülthoffBJMohler2015-02-001:112123Virtual reality technology can be considered a multipurpose tool for diverse applications in various domains, for example, training, prototyping, design, entertainment, and research investigating human perception. 
However, for many of these applications, it is necessary that the designed and computer-generated virtual environments are perceived as a replica of the real world. Many research studies have shown that this is not necessarily the case. Specifically, egocentric distances are underestimated compared to real-world estimates regardless of whether the virtual environment is displayed in a head-mounted display or on an immersive large-screen display. While the main reason for this observed distance underestimation is still unknown, we investigate a potential approach to reduce or even eliminate this distance underestimation. Building on the relationship between the angle of declination below the horizon and perceived egocentric distance, we describe how eye height manipulations in virtual reality should affect perceived distances. In addition, we describe how this relationship could be exploited to reduce distance underestimation for individual users. In a first experiment, we investigate the influence of a manipulated eye height on an action-based measure of egocentric distance perception. We found that eye height manipulations have similar predictable effects on an action-based measure of egocentric distance as we previously observed for a cognitive measure. This might make this approach more useful than other proposed solutions across different scenarios in various domains, for example, for collaborative tasks. In three additional experiments, we investigate the influence of an individualized manipulation of eye height to reduce distance underestimation in a sparse-cue and a rich-cue environment.
In these experiments, we demonstrate that a simple eye height manipulation can be used to selectively alter perceived distances on an individual basis, which could be helpful to enable every user to have an experience close to what was intended by the content designer.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published22Eye Height Manipulations: A Possible Solution to Reduce Underestimation of Egocentric Distances in Head-Mounted Displays150171542215017ButlerCB20143JSButlerJLCamposHHBülthoff2015-02-002233587597Passive movement through an environment is typically perceived by integrating information from different sensory signals, including visual and vestibular information. A wealth of previous research in the field of multisensory integration has shown that if different sensory signals are spatially or temporally discrepant, they may not combine in a statistically optimal fashion; however, this has not been well explored for visual–vestibular integration. Self-motion perception involves the integration of various movement parameters including displacement, velocity, acceleration and higher derivatives such as jerk. It is often assumed that the vestibular system is optimized for the processing of acceleration and higher derivatives, while the visual system is specialized to process position and velocity. In order to determine the interactions between different spatiotemporal properties for self-motion perception, in Experiment 1, we first asked whether the velocity profile of a visual trajectory affects discrimination performance in a heading task. Participants performed a two-interval forced choice heading task while stationary. They were asked to make heading discriminations while the visual stimulus moved at a constant velocity (C-Vis) or with a raised cosine velocity (R-Vis) motion profile. Experiment 2 was designed to assess how the visual and vestibular velocity profiles combined during the same heading task. 
In this case, participants were seated on a Stewart motion platform and motion information was presented via visual information alone, vestibular information alone or both cues combined. The combined condition consisted of congruent blocks (R-Vis/R-Vest) in which both visual and vestibular cues consisted of a raised cosine velocity profile and incongruent blocks (C-Vis/R-Vest) in which the visual motion profile consisted of a constant velocity motion, while the vestibular motion consisted of a raised cosine velocity profile. Results from both Experiments 1 and 2 demonstrated that visual heading estimates are indeed affected by the velocity profile of the movement trajectory, with lower thresholds observed for the R-Vis compared to the C-Vis. In Exp. 2 when visual–vestibular inputs were both present, they were combined in a statistically optimal fashion irrespective of the type of visual velocity profile, thus demonstrating robust integration of visual and vestibular cues. The study suggests that while the time course of the velocity did affect visual heading judgments, a moderate conflict between visual and vestibular motion profiles does not cause a breakdown in optimal integration for heading.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published10Optimal visual-vestibular integration under conditions of conflicting intersensory motion profiles1501715422LinkenaugerPMCBSGW20143SALinkenaugerHyWongMGeussJKStefanucciKCMcCullochHHBülthoffBJMohlerDRProffitt2015-02-001144103113Given that observing one’s body is ubiquitous in experience, it is natural to assume that people accurately
perceive the relative sizes of their body parts. This assumption is mistaken. In a series of studies, we show
that there are dramatic systematic distortions in the perception of bodily proportions, as assessed by visual estimation tasks, where participants were asked to compare the lengths of two body parts. These distortions are not evident when participants estimate the extent of a body part relative to a noncorporeal object or when asked to estimate noncorporeal objects that are the same length as their body parts. Our results reveal a radical asymmetry in the perception of corporeal and noncorporeal relative size estimates. Our findings also suggest that people visually perceive the relative size of their body parts as a function
of each part’s relative tactile sensitivity and physical size.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published10The Perceptual Homunculus: The Perception of the Relative Proportions of the Human Body150171542215017delaRosaCCUAB20143Sde la RosaRNChoudheryCCurioSUllmanLAssifHHBülthoff2015-02-009-102212331271Prominent theories of action recognition suggest that during the recognition of actions the physical pattern of an action is associated with only one action interpretation (e.g., a person waving his arm is recognized as waving). In contrast to this view, studies examining the visual categorization of objects show that objects are recognized in multiple ways (e.g., a VW Beetle can be recognized as a car or a beetle) and that categorization performance is based on the visual and motor movement similarity between objects. Here, we studied whether there is evidence for multiple levels of categorization for social interactions (physical interactions with another person, e.g., handshakes). To do so, we compared visual categorization of objects and social interactions (Experiments 1 and 2) in a grouping task and assessed the usefulness of motor and visual cues (Experiments 3, 4, and 5) for object and social interaction categorization. Additionally, we measured recognition performance associated with recognizing objects and social interactions at different categorization levels (Experiment 6). We found that basic level object categories were associated with a clear recognition advantage compared to subordinate recognition but basic level social interaction categories provided only a small recognition advantage. Moreover, basic level object categories were more strongly associated with similar visual and motor cues than basic level social interaction categories. The results suggest that cognitive categories underlying the recognition of objects and social interactions are associated with different performances.
These results are in line with the idea that the same action can be associated with several action interpretations (e.g., a person waving his arm can be recognized as waving or greeting).nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published38Visual categorization of social interactions1501715422CaniardBT20143FCaniardHHBülthoffIMThornton2015-01-0010588114Local motion is known to produce strong illusory displacement in the perceived position of globally static objects. For example, if a dot-cloud or grating drifts to the left within a stationary aperture, the perceived position of the whole aperture will also be shifted to the left. Previously, we used a simple tracking task to demonstrate that active control over the global position of an object did not eliminate this form of illusion. Here, we used a new iPad task to directly compare the magnitude of illusory displacement under active and passive conditions. In the active condition, participants guided a drifting Gabor patch along a virtual slalom course by using the tilt control of an iPad. The task was to position the patch so that it entered each gate at the direct center, and we used the left/right deviations from that point as our dependent measure. In the passive condition, participants watched playback of standardized trajectories along the same course. We systematically varied deviation from midpoint at gate entry, and participants made 2AFC left/right judgments. We fitted cumulative normal functions to individual distributions and extracted the PSE as our dependent measure. To our surprise, the magnitude of displacement was consistently larger under active than under passive conditions. Importantly, control conditions ruled out the possibility that such amplification results from lack of motor control or differences in global trajectories as performance estimates were equivalent in the two conditions in the absence of local motion. 
Our results suggest that the illusion penetrates multiple levels of the perception-action cycle, indicating that one important direction for the future of perceptual illusions may be to more fully explore their influence during active vision.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published13Action can amplify motion-induced illusory displacement1501715422ZelazoFBR20133DZelazoAFranchiHHBülthoffPRobuffo Giordano2015-01-00134105128This work proposes a fully decentralized strategy for maintaining the formation rigidity of a multi-robot system using only range measurements, while still allowing the graph topology to change freely over time. In this direction, a first contribution of this work is an extension of rigidity theory to weighted frameworks and the rigidity eigenvalue, which when positive ensures the infinitesimal rigidity of the framework. We then propose a distributed algorithm for estimating a common relative position reference frame amongst a team of robots with only range measurements in addition to one agent endowed with the capability of measuring the bearing to two other agents. This first estimation step is embedded into a subsequent distributed algorithm for estimating the rigidity eigenvalue associated with the weighted framework. The estimate of the rigidity eigenvalue is finally used to generate a local control action for each agent that both maintains the rigidity property and enforces additional constraints such as collision avoidance and sensing/communication range limits and occlusions. As an additional feature of our approach, the communication and sensing links among the robots are also left free to change over time while preserving rigidity of the whole framework. 
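The rigidity eigenvalue at the core of this maintenance scheme can be illustrated numerically. The sketch below is a simplified, unweighted 2-D version (the paper works with weighted 3-D frameworks): it assembles the standard rigidity matrix of a framework and reads off the first eigenvalue above the trivial motions (two translations plus one rotation in the plane), which is positive exactly when the framework is infinitesimally rigid.

```python
import numpy as np

def rigidity_eigenvalue(P, edges):
    """P: (n, 2) array of node positions; edges: list of (i, j) pairs.
    Returns the 4th-smallest eigenvalue of the symmetric rigidity matrix
    R^T R; it is positive iff the planar framework is infinitesimally rigid."""
    n, d = P.shape
    R = np.zeros((len(edges), n * d))
    for k, (i, j) in enumerate(edges):
        diff = P[i] - P[j]              # edge direction scaled by length
        R[k, d*i:d*i + d] = diff
        R[k, d*j:d*j + d] = -diff
    eigvals = np.linalg.eigvalsh(R.T @ R)   # ascending order
    return eigvals[3]                   # skip the 3 trivial planar motions

# A triangle is rigid; a four-bar linkage (a square of four edges) can flex.
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
print(rigidity_eigenvalue(tri, [(0, 1), (1, 2), (2, 0)]))       # positive: rigid
print(rigidity_eigenvalue(square, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # ~0: flexible
```

A gradient-based controller such as the one described above would drive the agents so that this eigenvalue stays bounded away from zero.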
The proposed scheme is then experimentally validated with a robotic testbed consisting of six quadrotor unmanned aerial vehicles operating in a cluttered environment.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published23Decentralized rigidity maintenance control with range measurements for multi-robot systems1501715422KimMCCPBK20143JKimK-RMüllerYGChungS-CChungJ-YParkHHBülthoffS-PKim2015-01-0010708110According to the hierarchical view of the human somatosensory network, somatic sensory information is relayed from the thalamus to primary somatosensory cortex (S1), and then distributed to adjacent cortical regions to perform further perceptual and cognitive functions. Although a number of neuroimaging studies have examined neuronal activity correlated with tactile stimuli, comparatively less attention has been devoted to understanding how vibrotactile stimulus information is processed in the hierarchical somatosensory cortical network. To explore the hierarchical perspective of tactile information processing, we studied two cases: (a) discrimination between the locations of finger stimulation, and (b) detection of stimulation against no stimulation on individual fingers, using both standard general linear model (GLM) and searchlight multi-voxel pattern analysis (MVPA) techniques. These two cases were studied on the same data set resulting from a passive vibrotactile stimulation experiment. Our results showed that vibrotactile stimulus locations on fingers could be discriminated from measurements of human functional magnetic resonance imaging (fMRI). In particular, in case (a) we observed activity in contralateral posterior parietal cortex (PPC) and supramarginal gyrus (SMG) but not in S1, while in case (b) we found significant cortical activations in S1 but not in PPC and SMG.
These discrepant observations suggest functional specialization with regard to vibrotactile stimulus locations and, in particular, hierarchical information processing in the human somatosensory cortical areas. Our findings moreover support the general understanding that S1 is the main sensory receptive area for the sense of touch, and that adjacent cortical regions (i.e., PPC and SMG) are in charge of higher levels of processing and may thus contribute most to the successful classification of stimulated finger locations.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published9Distributed functions of detection and discrimination of vibrotactile stimuli in the hierarchical human somatosensory system1501715422Meilinger2015_27TMeilingerCopenhagen, Denmark2015-11-091213People use “route knowledge” to navigate to targets along familiar routes and “survey knowledge” to determine (by pointing, for example) a target’s metric location.
We examined within which coordinate systems route and survey knowledge are represented in memory. Data suggest that navigators memorize survey knowledge of their city of residency (Fig. 1) within a single, north-oriented reference frame learned from maps (1). However, when they recall this knowledge while located within the city, they spontaneously adjust this knowledge towards their current body orientation and location relative to the recalled area – probably to have the information ready for later action (2). Contrary to survey knowledge, route knowledge of one's home city was memorized in different representations relying on multiple, local, street-based coordinate systems presumably learned from navigation (3). When recalling this knowledge to plan a route, navigators concentrate on turns and employ a “when-in-doubt-follow-your-nose” default strategy in order not to get lost (4). Taken together, our results suggest that people coordinate multiple representations of their surrounding environment and adjust these to their current situation.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published1How do people memorize and recall spatial knowledge within their city of residency?1501715422StrickrodtM20157MStrickrodtTMeilingerCopenhagen, Denmark2015-11-093334A vista space (VS), e.g., a room, is perceived from one vantage point, whereas an environmental space (ES), e.g., a building, is experienced successively during movement.
Participants learned the same object layout by walking through multiple corridors (ES) or within a differently oriented room (VS). In four VS conditions they either
learned a fully or a successively visible object layout, and either from a static position or by walking through the environment along a path, mirroring the translation in ES. Afterwards, participants pointed between object locations in different body orientations and reproduced the object layout. Pointing latency in ES increased with the number of corridors to the target and pointing performance was best along corridor-based orientations. In VS conditions latency did not increase with distance and pointing performance was best along room-based orientations, which were oblique
to corridor and walking orientations. Furthermore, ES learners arranged the layout in the order they experienced the objects, whereas VS learners did so less. The most beneficial pointing orientations, the distance effect and the order effect suggest that spatial memory in ES is qualitatively different from spatial memory in VS and that differences in the visible environment (spatial structure), rather than movement or successive presentation, are responsible for this. Our results are in line with the dissociation of vista and environmental space postulated by Montello (1993). Furthermore, our study provides a behavioral foundation for the application of isovists when conducting visual integration analysis, which is one module of the space syntax approach (e.g., Hillier, 1999).nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published1Movement, successive presentation and environmental structure and their influence on spatial memory in vista and
environmental space1501715422OdelgaSBA20157MOdelgaPStegagnoHHBülthoffAAhmadCancun, Mexico2015-11-00204210In this paper, we present a hardware in the loop simulation setup for multi-UAV systems. With our setup, we are able to command the robots simulated in Gazebo, a popular open source ROS-enabled physical simulator, using the computational units that are embedded on our quadrotor UAVs. Hence, we can test in simulation not only the correct execution of algorithms, but also the computational feasibility directly on the robot hardware. In addition, since our setup is inherently multi-robot, we can also test the communication flow among the robots. We provide two use cases to show the characteristics of our setup.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/RED-UAS-2015-Odelga.pdfpublished6A Setup for multi-UAV hardware-in-the-loop simulations1501715422SanzAL20157DSanzAAhmadPLimaLisboa, Portugal2015-11-00547559Domestic assistance for the elderly and impaired people is
one of the biggest upcoming challenges of our society. Consequently, in-home care through domestic service robots is identified as one of the most important application areas of robotics research. Assistive tasks may range from visitor reception at the door to catering for the owner's small daily necessities within a house. Since most of these tasks require the robot to interact directly with humans, a predominant robot functionality is to detect and track humans in real time: either the owner of the robot or visitors at home or both. In this article we present a robust method for such a functionality that combines depth-based segmentation and visual detection. The robustness of our method lies in its capability to not only identify partially occluded humans (e.g., with only torso visible) but also to do so in varying lighting conditions. We thoroughly validate our method through extensive experiments on real robot datasets and comparisons with the ground truth. The datasets were collected in a home-like environment set up within the context of RoboCup@Home and RoCKIn@Home competitions.
The performance of our algorithm is demonstrated through a series of experimental evaluations and comparisons with another control method on normal and aggressive trajectory tracking tasks.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published6A Robust Nonlinear Controller for Nontrivial Quadrotor Maneuvers: Approach and Verification1501715422MassiddaBS20157CMassiddaHHBülthoffPStegagnoHamburg, Germany2015-10-0031053110Identification of landmarks for outdoor navigation is often performed using computationally expensive computer vision methods or via heavy and expensive multi-spectral and range sensors. Both choices are forbidden on Micro Aerial Vehicles (MAV) due to limited payload and computational power. However, an appropriate choice of the hardware sensor equipment allows the employment of mixed multi-spectral analysis and computer vision techniques to identify natural landmarks. In this work, we propose a low-cost low-weight camera array with appropriate optical filters to be exploited both as stereo camera and multi-spectral sensor. Through stereo vision and the Normalized Difference Vegetation Index (NDVI), we are able to classify the observed materials in the scene among several different classes, identify vegetation and water bodies and provide measurements of their relative bearing and distance from the robot. 
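The NDVI used above for material classification is the standard normalized difference of near-infrared and red reflectance; a minimal per-pixel sketch follows. The classification thresholds are illustrative assumptions, not the classifier used in the paper.

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Dense vegetation pushes the index toward +1; water is typically negative."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero

def classify(nir, red, veg_thresh=0.3, water_thresh=0.0):
    """Coarse per-pixel labeling by NDVI (thresholds are assumed values)."""
    index = ndvi(nir, red)
    return np.where(index > veg_thresh, "vegetation",
                    np.where(index < water_thresh, "water", "other"))

# Vegetation reflects strongly in NIR, water absorbs it, bare ground is flat:
labels = classify([0.50, 0.02, 0.30], [0.08, 0.10, 0.25])
print(labels)  # vegetation, water, other
```

Combined with the stereo depth described above, each labeled pixel can then be associated with a bearing and distance from the camera.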
A handheld prototype of this camera array is tested in an outdoor environment.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/IROS-2015-Massidda.pdfpublished5Autonomous Vegetation Identification for Outdoor Aerial Navigation1501715422MasaratiQZYDPVS20137PMasaratiGQuarantaLZaichikYYashinPDesyatnikMDPavelJVenrooijHSmailiMoskva, Russia2015-10-00497501nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published4Biodynamic Pilot Modelling for Aeroelastic A/RPC1501715422GeluardiNPB20137SGeluardiFNieuwenhuizenLPolliniHHBülthoffMoskva, Russia2015-10-00419433At the Max Planck Institute for Biological Cybernetics the influence of an augmented system on helicopter pilots with limited flight skills is being investigated. This study would provide important contributions to the research field of personal air transport systems. In this project, the flight condition under study is the hover. The first step is the implementation of a rigid-body dynamic model. This could be used to perform handling qualities evaluations for comparing the pilot performances with and without the augmented system. This paper aims to provide a lean procedure and a reliable measurement setup for the collection of the flight test data. The latter are necessary to identify the helicopter dynamic model. The mathematical and technical tools used to reach this purpose are described in detail. First, the measurement setup is presented, used to collect the piloted control inputs and the helicopter response. Second, a description of the flight maneuvers and the pilot
training phase is given. Finally, the flight test data collection is described and the results are
shown to assess and validate the setup and the procedure presented.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2013/ERF-2013-Geluardi.pdfpublished14Data Collection for Developing a Dynamic Model of a Light Helicopter1501715422MeilingerSFBB20157TMeilingerJSchulte-PelkumJFrankensteinDBergerHHBülthoffKyoto, Japan2015-10-002528Comparing spatial performance in different virtual reality setups can indicate which cues are relevant for a realistic virtual experience. Bodily self-movement cues and global orientation information were shown to increase spatial performance compared with local visual cues only. We tested the combined impact of bodily and global orientation cues by having participants learn a virtual multi-corridor environment either by only walking through it, with additional distant landmarks providing heading information, or with a surrounding hall relative to which participants could determine their orientation and location. Subsequent measures of spatial memory revealed only small and unreliable differences between the learning conditions. We conclude that additional global landmark information does not necessarily improve users' orientation within a virtual environment when bodily self-movement cues are available.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/ICAT-EGVE-2015-Meilinger.pdfpublished3Global Landmarks Do Not Necessarily Improve Spatial Performance in Addition to Bodily Self-Movement Cues when Learning a Large-Scale Virtual Environment1501715422OlivariNBP2015_27MOlivariFMNieuwenhuizenHHBülthoffLPolliniHong Kong, China2015-10-0030793085The effectiveness of haptic guidance systems depends on how humans adapt their neuromuscular response to the force feedback. A quantitative insight into the adaptation of the neuromuscular response can be obtained by identifying neuromuscular dynamics.
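A minimal, hypothetical sketch of identifying an impulse response from input/output data with a pseudoinverse-based least-squares solution, in the spirit of the identification method proposed in this record; the window length, filter order, and signals are assumptions:

```python
# Hedged sketch: estimate a (time-varying) FIR impulse response over a
# trailing data window. np.linalg.pinv replaces an explicit matrix inverse
# for robustness to noise. Parameters are illustrative assumptions.
import numpy as np

def estimate_impulse_response(u, y, order, window):
    """Estimate FIR coefficients h over the last `window` samples."""
    rows = []
    for k in range(len(u) - window, len(u)):
        # Regression row of lagged inputs [u(k), u(k-1), ..., u(k-order+1)]
        rows.append([u[k - i] if k - i >= 0 else 0.0 for i in range(order)])
    Phi = np.array(rows)
    Y = np.array(y[len(u) - window:len(u)])
    # Pseudoinverse-based least-squares solution
    return np.linalg.pinv(Phi) @ Y

# Simulate a known 3-tap system and recover its impulse response
rng = np.random.default_rng(0)
h_true = np.array([0.5, 0.3, -0.2])
u = rng.standard_normal(200)
y = np.convolve(u, h_true)[:200]
h_est = estimate_impulse_response(u, y, order=3, window=100)
print(h_est)  # close to h_true
```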
Since humans are likely to vary their neuromuscular response during realistic control scenarios, there is a need for methods that can identify time-varying neuromuscular dynamics. In this work, an identification method is developed that estimates the impulse response of the time-varying neuromuscular system using a Recursive Least Squares (RLS) method. The proposed method extends the commonly used RLS-based method by employing the pseudoinverse operator instead of the inverse operator. This results in improved robustness to external noise. The method was validated in a human-in-the-loop experiment. The neuromuscular estimates given by the proposed method were more accurate than those obtained with the commonly used RLS-based method.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published6Identifying Time-Varying Neuromuscular Response: a Recursive Least-Squares Algorithm with Pseudoinverse1501715422ScheerBC2015_37MScheerHHBülthoffLLChuangBerlin, Germany2015-10-0024The extent to which we experience 'workload' whilst steering depends on (i) the availability of the human operator's (presumably limited) resources, and (ii) the demands of the steering task. Typically, an increased demand of the steering task for a specific resource can be inferred from how steering modifies the components of the event-related potential (ERP), which is elicited by the
stimuli of a competing task. Recent studies have demonstrated that this approach can continue to be applied even when the stimuli do not require an explicit response. Under certain circumstances, workload levels in the primary task can influence the ERPs that are elicited by task-irrelevant target events, in particular complex environmental sounds. Using this approach, the current study
assesses the human operator's resources that are demanded by different aspects of the steering task. To enable future studies to focus their analysis, we identify ERP components and electrodes that are relevant to steering demands, using mass univariate analysis. Additionally, we compare the effectiveness of sound stimuli that are conventionally employed to elicit ERPs for assessing workload, namely pure-tone oddballs and environmental sounds. In the current experiment, participants performed a compensatory tracking task that required them to align a continuously perturbed target line to a stationary reference line. Task difficulty
was manipulated either by varying the bandwidth of the disturbance or by varying the complexity of the controller dynamics of the steering system. Both manipulations presented two levels of difficulty ('Easy' and 'Hard'), which could be contrasted to a baseline 'View only' condition. During the steering task, task-irrelevant sounds were presented to elicit ERPs: frequent pure-tone standards,
rare pure-tone oddballs and rare environmental sounds.
Our results show that steering task demands influence ERP components that previous literature relates to the following cognitive processes: the call for orientation (i.e., early P3a), the orientation of attention (i.e., late P3a), and the semantic processing of
the task-irrelevant sound stimuli (i.e., N400). The early P3a was decreased at the frontocentral electrodes, the late P3a centrally, and the N400 centrally and over the left hemisphere. Single-subject analyses of these identified components reveal differences that correspond to our manipulations of steering difficulty. More participants show discrimination in these components in the 'Hard' relative to
the 'Easy' condition. The current study identifies the spatial and temporal distribution of ERPs that ought to be targeted for future investigations of the influence of steering on workload. In addition, the use of
task-irrelevant environmental sounds to elicit ERP indices for workload holds several advantages over conventional beep tones, especially in the operational context. Finally, the current findings indicate the involvement of cognitive processes in steering, which is typically viewed as being a
predominantly visuo-motor task.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-24On the influence of steering on the orienting response1501715422SchenkBM20157CSchenkHHBülthoffCMasoneCheile Gradistei, Romania2015-10-00427434In this paper, we consider the application problem of a redundant cable-driven parallel robot tracking a reference trajectory in the presence of uncertainties and disturbances. A Super Twisting controller is implemented using a recently proposed gains adaptation law, thus not requiring the knowledge of the upper bound of the lumped uncertainties. The controller is extended by a feedforward dynamic inversion control that reduces the effort of the sliding mode controller. Compared to a recently developed Adaptive Terminal Sliding Mode Controller for cable-driven parallel robots, the proposed controller manages to achieve lower tracking errors and less chattering in the actuation forces even in the presence of perturbations. The system is implemented and tested in simulation using a model of a large redundant cable-driven robot and assuming noisy measurements. Simulations show the effectiveness of the proposed method.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published7Robust adaptive sliding mode control of a redundant cable driven parallel robot1501715422GlatzBC2015_37CGlatzHHBülthoffLLChuangNottingham, UK2015-09-0115Nowadays, modern cars integrate advanced driving
assistance systems which range up to fully automated
driving modes. Since fully automated driving modes have not yet come into everyday practice, operators currently make use of assistance systems. While the driver is still in control of the vehicle, alerts signal possible collision dangers when, for example, parking. Such warnings are necessary because humans have limited resources. A critical event can stay unnoticed simply because attention was focused elsewhere. This raises the question: What is an effective alert in a steering environment? Auditory warning signals have been shown to efficiently direct attention. In the context of
traffic, they can prevent collisions by heightening the driver's situational awareness of potential accidents.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published4Attention Enhancement During Steering Through Auditory Warning Signals1501715422ChuangB20157LLChuangHHBülthoffNottingham, UK2015-09-0114Gaze-tracking technology is used increasingly to determine
how and which information is accessed and processed in a given interface environment, such as in-vehicle information systems in automobiles. Typically, fixations on regions
of interest (e.g., windshield, GPS) are treated as an indication that the underlying information has been attended to and is, thus, vital to the task. Therefore, decisions such as optimal instrument placement are often made on the basis of the distribution of recorded fixations. In this paper, we briefly introduce gaze-tracking methods for in-vehicle monitoring, followed by a discussion on the relationship between gaze and user attention. We posit that gaze-tracking data can yield stronger insights into the utility of novel regions-of-interest if they are considered in terms of their deviation from basic gaze patterns. In addition, we suggest how EEG recordings could complement gaze-tracking data and
raise outstanding challenges in their implementation. It is contended that gaze-tracking is a powerful tool for understanding how visual information is processed in a given environment, provided it is understood in the context of a model that first specifies the task that has to be carried out.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published3Towards a Better Understanding of Gaze Behavior in the Automobile1501715422LockenBMCSAM20157ALöckenSSBorojeniHMüllerLChuangRSchroeterIAlvarezVMeijeringNottingham, UK2015-09-0114Informing a driver of the vehicle’s changing state and environment is a major challenge that grows with the introduction of automated in-vehicle assistant and infotainment systems. Poorly designed systems could compete for the driver’s attention, away from the primary driving task. Thus, such systems should communicate information in a way that conveys its relevant urgency. While some information is unimportant and should never distract a driver from important tasks, there are also calls for action, which a driver should not be able to ignore. We believe that adaptive ambient displays and peripheral interaction could serve to unobtrusively present information while switching the driver’s attention when needed. This workshop will focus on promoting an exchange of best known methods by discussing challenges and potentials for this kind of interaction in today’s scenarios as well as in future mixed or fully autonomous traffic. The central objective of this workshop is to bring together researchers from different domains and discuss innovative and engaging ideas and a future landscape for research in this area.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published3Workshop on Adaptive Ambient In-Vehicle Displays and Interactions1501715422RienerAJCJPC20157ARienerIAlvarezMPJeonLChuangWJuBPflegingMChiesaNottingham, UK2015-09-0114nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published3Workshop on Practical Experiences in Measuring and
Modeling Drivers and Driver-Vehicle Interactions1501715422GeussRS20157MGeussITRuginskiJKStefanucciTübingen, Germany2015-09-00241242Previous research suggests that drivers use specific visual information to execute braking behaviors [Faj05] and that drivers calibrate braking behavior to this visual information over time [Faj09]. Specifically, Fajen (2005) argued that when successfully braking, participants adjust braking pressure to maintain a visually-specified ideal
braking pressure less than one’s maximum ability to brake. In the current paper, we investigated whether factors, specifically one’s emotional state, would alter the relationship between braking behavior and visually-specified ideal braking pressure over time. Specifically, we investigated whether the performance of braking changed when drivers were anxious. Previous research demonstrated that anxiety influences static perceptual judgments of space [Gra12] and the performance of open-loop sports actions [Bei10]. Open-loop actions are actions in which, once the movement has been initiated, there is no opportunity to alter the
outcome (e.g., putting a golf ball). This research shows an influence of anxiety on static perceptual tasks and the performance of open-loop actions, suggesting that anxiety may also influence more complex everyday actions like braking. It is important to know whether, and how, the influence of anxiety extends to the performance of closed-loop actions like braking given the potential real-world consequences of poor performance.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published1Anxiety alters visual guidance of braking over time1501715422JunSCGT20157EJunJKStefanucciSHCreem-RegehrMNGeussWBThompsonTübingen, Germany2015-09-00116Spatial perception research in the real world and in virtual environments suggests that the body (e.g., hands) plays a role in the perception of the scale of the world. However, little research has closely examined how varying the size of virtual body parts may influence judgments of action capabilities and spatial layout. Here, we questioned whether changing the size of virtual feet would affect judgments of stepping over and estimates of the width of a gap. Participants viewed their disembodied virtual feet as small or large and judged both their ability to step over a gap and the size of gaps shown in the virtual world. Foot size affected both affordance judgments and size estimates such that those with enlarged virtual feet estimated they could step over larger gaps and that the extent of the gap was smaller. Shrunken feet led to the perception of a reduced ability to step over a gap and smaller estimates of width. The results suggest that people use their visually perceived foot size to scale virtual spaces. Regardless of foot size, participants felt that they owned the feet rendered in the virtual world. 
Seeing disembodied, but motion-tracked, virtual feet affected spatial judgments, suggesting that the presentation of a single tracked body part is sufficient to produce similar effects on perception, as has been observed with the presence of fully co-located virtual self-avatars or other body parts in the past.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published15Big Foot: Using the Size of a Virtual Foot to Scale Gap Width1501715422CleijVPPBM20157DCleijJVenrooijPPrettoDMPoolMMulderHHBülthoffTübingen, Germany2015-09-00191198Motion cueing algorithms (MCAs) are used in motion simulation to map the inertial vehicle motions onto the simulator motion space. To increase the fidelity of the motion simulation, these MCAs are tuned to minimize the perceived incoherence between the visual and inertial motion cues. Despite time-invariant MCA dynamics, the incoherence is not constant, but changes over time. Currently used methods to measure the quality of an MCA focus on the overall differences between MCAs, but lack the ability to detect how quality varies over time and how this influences the overall quality judgement. This paper describes a continuous subjective rating method with which perceived motion incoherence can be detected over time. An experiment was performed to show the suitability of this method for measuring motion incoherence. The experiment results were used to validate the continuous rating method and showed it provides important additional information on the perceived motion incoherence during a simulation compared to an offline rating method.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published7Continuous rating of perceived visual-inertial motion incoherence during driving simulation1501715422deWinkelKB2015_27KNde WinkelMKatliarHHBülthoffTübingen, Germany2015-09-006770The set of physically incoherent combinations of visual and inertial motions that are nonetheless judged coherent by human observers is referred to as the 'Coherence Zone' (CZ). 
Here we propose that Causal Inference (CI) models of self-motion perception may offer a more comprehensive alternative to the CZ. CI models include an assessment of the probability of competing causal structures. This probability may be interpreted as a CZ.
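The causal-inference idea can be illustrated with a toy Bayesian model that weighs a common-cause explanation of a visual-inertial heading discrepancy against independent causes; the Gaussian assumptions, prior width, and parameter values below are ours, not the authors':

```python
# Toy causal-inference sketch: posterior probability that visual and
# inertial heading cues share a common cause, given Gaussian cue noise.
# All parameter values are illustrative assumptions.
import math

def p_common(delta, sigma_vis, sigma_ine, sigma_prior=45.0, prior=0.5):
    """Posterior probability that a heading discrepancy `delta` (deg)
    arose from a single common cause."""
    def gauss(x, var):
        return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)
    var_noise = sigma_vis**2 + sigma_ine**2
    like_common = gauss(delta, var_noise)                      # noise only
    like_indep = gauss(delta, var_noise + 2 * sigma_prior**2)  # two prior draws
    num = prior * like_common
    return num / (num + (1 - prior) * like_indep)

print(p_common(5.0, 5.0, 10.0))   # small discrepancy: high probability
print(p_common(90.0, 5.0, 10.0))  # large discrepancy: low probability
```

Reading the posterior as a function of the discrepancy yields exactly the kind of graded "coherence zone" interpretation proposed above.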
In an experiment, nine participants were presented with horizontal linear visual-only, inertial-only, and combined visual-inertial motion stimuli with heading discrepancies up to 90°, and were asked to provide heading estimates. Model predictions were compared to the obtained data to assess model tenability.
The CI model accounted well for the data of one participant; for five others the results imply that discrepancies do not affect heading perception. Results for the remaining participants were inconclusive.
We conclude that CI models can offer a more comprehensive interpretation of the CZ, but that more research is needed to identify when discrepancies are detected. The methodology proposed here may be adapted to account for characteristics of self-motion other than heading, such as amplitude and phase.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published3Heading Coherence Zone from Causal Inference Modelling1501715422KatliardPVB20157MKatliarKNde WinkelJVenrooijPPrettoHHBülthoffTübingen, Germany2015-09-00219222Motion cueing algorithms (MCAs) based on Model Predictive Control (MPC) are becoming increasingly popular. The MPC approach consists of solving an optimization problem to find a feasible simulator motion that minimizes the difference between the sensed motions in the real vehicle and in the simulator for some time interval. The length of this time interval, which is called the prediction horizon, is an important parameter that needs to be selected. Longer prediction horizons generally lead to better motion cueing but require more computational power because of the larger optimization problem. Consequently the selection of an appropriate prediction horizon for MPC-based MCAs is a compromise between motion cueing fidelity and computational load.
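As an illustration of the fidelity/computation trade-off described above, one might fit a simple parametric cost curve to (horizon, cost) data; the functional form below, with a "minimal useful horizon" parameter T0, is our own assumption for demonstration, not the model proposed in the paper:

```python
# Illustrative sketch only: fit a hypothetical parametric model of
# simulation cost vs. MPC prediction horizon. The functional form (cost
# flattening out beyond a minimal useful horizon T0) is an assumption.
import numpy as np

def cost_model(T, c_inf, a, T0):
    """Hypothetical cost curve: high for short horizons, flat beyond T0."""
    T = np.asarray(T, dtype=float)
    return c_inf + a * np.maximum(0.0, T0 - T) ** 2

def fit_T0(horizons, costs, grid):
    """Grid-search T0; solve c_inf and a by linear least squares per T0."""
    best = None
    for T0 in grid:
        basis = np.maximum(0.0, T0 - np.asarray(horizons)) ** 2
        A = np.column_stack([np.ones_like(basis), basis])
        params, *_ = np.linalg.lstsq(A, np.asarray(costs), rcond=None)
        sse = float(np.sum((A @ params - costs) ** 2))
        if best is None or sse < best[0]:
            best = (sse, T0)
    return best[1]  # estimated minimal useful horizon

# Synthetic data generated from the model with T0 = 1.0 s
Ts = np.linspace(0.2, 3.0, 15)
data = cost_model(Ts, c_inf=0.5, a=2.0, T0=1.0)
print(fit_T0(Ts, data, grid=np.linspace(0.5, 2.0, 16)))
```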
In this work, the effect of the prediction horizon on motion cueing fidelity was studied by computing the simulation cost, i.e., the average error between desired and reproduced sensory stimulation (specific forces and rotational velocities), for a range of typical car and helicopter maneuvers, while varying the prediction horizon. We propose a simple parametric model that describes the effect of prediction horizon on the simulation cost. The proposed model provides an accurate description of the data (coefficient of determination R² > 0.99) for horizons longer than 1 s for 11 out of 13 tested maneuvers. One of the model’s parameters can be interpreted as the minimal prediction horizon needed to achieve reasonable quality of simulation. The simulation cost appears to decrease roughly quadratically with the prediction horizon.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published3Impact of MPC Prediction Horizon on Motion Cueing Fidelity1501715422AhmadB20157AAhmadHHBülthoffLincoln, UK2015-09-0018In this article, we present an online estimator for multirobot cooperative localization and target tracking based on nonlinear least squares minimization. Our method not only makes the rigorous optimization-based approach applicable online but also allows the estimator to be stable and convergent. We do so by applying a moving horizon technique to nonlinear least squares minimization and a novel design of the arrival cost function that ensures stability and convergence of the estimator. Through an extensive set of real robot experiments, we demonstrate the robustness of our method as well as the optimality of the arrival cost function. 
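The moving-horizon least-squares idea with an arrival cost can be sketched in a linear toy setting; the 1-D constant-velocity model, weights, and window length are assumptions, not the authors' formulation:

```python
# Toy sketch of moving-horizon least-squares state estimation with an
# arrival-cost term summarizing discarded older data. Linear 1-D
# constant-velocity model; all parameters are assumptions.
import numpy as np

def mhe_step(z_window, prior, prior_weight, dt=0.1):
    """Estimate [position, velocity] at the start of the window from noisy
    position measurements z_window, penalizing deviation from `prior`
    (the arrival cost)."""
    N = len(z_window)
    # Measurement rows: z_k ~ p0 + v * k*dt
    H = np.column_stack([np.ones(N), dt * np.arange(N)])
    # Arrival cost as two extra weighted rows anchoring the prior estimate
    A = np.vstack([H, prior_weight * np.eye(2)])
    b = np.concatenate([z_window, prior_weight * np.asarray(prior)])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x  # [p0, v]

# Noise-free check: measurements from p=2.0, v=1.0 with a consistent prior
z = 2.0 + 1.0 * 0.1 * np.arange(10)
print(mhe_step(z, prior=[2.0, 1.0], prior_weight=1.0))
```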
The experiments include comparisons of our method with (i) an extended Kalman filter-based online estimator and (ii) an offline estimator based on full-trajectory nonlinear least squares.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published7Moving-horizon Nonlinear Least Squares-based Multirobot Cooperative Perception1501715422LacheleVPZB20157JLächeleJVenrooijPPrettoAZellHHBülthoffLincoln, UK2015-09-0016In this paper, we present a method for calculating inertial motion feedback in a teleoperation setup. For this, we make a distinction between vehicle-state feedback that depends on the physical motion of the remote vehicle, and task-related motion feedback that provides information about the teleoperation task. By providing motion feedback that is independent of vehicle motion we exploit the spatial decoupling between the operator and the controlled vehicle.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published5Novel approach for calculating motion feedback in teleoperation1501715422ScheerBC2015_27MScheerHHBülthoffLLChuangLos Angeles, CA, USA2015-09-0010421046The cognitive workload of a steering task could reflect its demand on attentional as well as working memory resources under different conditions. These respective demands could be differentiated by evaluating components of the event-related potential (ERP) response to different types of stimulus probes, which are claimed to reflect the availability of either attention (i.e., novelty-P3) or working memory (i.e., target-P3) resources. Here, a within-subject analysis is employed to evaluate the robustness of ERP measurements in discriminating the cognitive demands of different steering conditions. We find that the amplitude of novelty-P3 ERPs to task-irrelevant environmental sounds is diminished when participants are required to perform a steering task. This indicates that steering places a demand on attentional resources. 
In addition, target-P3 ERPs to a secondary auditory detection task vary when the controller dynamics in the steering task are manipulated. This indicates that differences in controller dynamics vary in their working memory demands.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published4On the Cognitive Demands of Different Controller Dynamics: A within-subject P300 Analysis1501715422WellerdiekBGSKBM20157ACWellerdiekMBreidtMNGeussSStreuberUKloosMJBlackBJMohlerTübingen, Germany2015-09-00714We investigated the influence of body shape and pose on the perception of physical strength and social power for male virtual characters. In the first experiment, participants judged the physical strength of varying body shapes, derived from a statistical 3D body model. Based on these ratings, we determined three body shapes (weak, average, and strong) and animated them with a set of power poses for the second experiment. Participants rated how strong or powerful they perceived virtual characters of varying body shapes that were displayed in different poses. Our results show that perception of physical strength was mainly driven by the shape of the body. However, the social attribute of power was influenced by an interaction between pose and shape. Specifically, the effect of pose on power ratings was greater for weak body shapes. These results demonstrate that a character with a weak shape can be perceived as more powerful when in a high-power pose.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/SAP-2015-Wellerdiek.pdfpublished7Perception of Strength and Power of Realistic Male Characters150171542215017VenrooijPKNNLdCB20157JVenrooijPPrettoMKatliarSAENooijANestiMLächeleKNde WinkelDCleijHHBülthoffTübingen, Germany2015-09-00153161This paper describes a perception-based motion cueing (PBMC) algorithm, which aims to bridge the gap between what is known about human self-motion perception and what is currently used in motion simulation. 
In PBMC, motion perception knowledge is explicitly incorporated by means of a perception model and a cost function. PBMC has the potential of improving the realism of the motion simulation by exploiting the limitations and ambiguities of human self-motion perception and increasing the utilization of the simulator envelope, while reducing the need for parameter tuning. The PBMC algorithm was compared to a classical filter-based approach in an experimental study. To allow for a robust and reliable comparison, an evaluation method for motion cueing algorithms (MCAs) based on psychophysical techniques was developed. Results show that the PBMC approach received significantly higher ratings than the filter-based approach. This demonstrates the potential of the PBMC approach to improve motion cueing in vehicle simulation.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/DSC-2015-Venrooij.pdfpublished8Perception-based motion cueing: validation in driving
simulation1501715422NooijPB20157SAENooijPPrettoHHBülthoffTübingen, Germany2015-09-003338During curve driving a lateral force, coupled to the off-center yaw rotation, is acting on the driver. In simulation, however, the lateral force is often not generated using off-centric rotation, thereby uncoupling translational and rotational motion cues. This may cause misalignment of the lateral force w.r.t. the motion direction along the curve. In the present study we investigated how sensitive humans are to such misalignment. We performed a psychophysical study where participants were repeatedly moved along circular trajectories. The participants’ physical orientation with respect to the motion path was systematically varied, and the participants’ task was to indicate whether they felt facing to the inside or the outside of the curve, in a two-alternative forced choice. The experiment was performed in darkness and with a congruent visual motion stimulus. Heading JND, i.e. the smallest detectable difference in yaw orientation w.r.t. the direction of motion, was measured. The results show a considerably lower sensitivity to the misalignment of the lateral force than what is commonly found for heading sensitivity along straight paths, with better performance when congruent visual information was presented. This indicates that for simulated curve driving some misalignment of the lateral force is acceptable, without affecting perceptual realism.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published5Sensitivity to lateral force is affected by concurrent yaw
rotation during curve driving1501715422GlatzBC2015_27CGlatzHHBülthoffLLChuangLos Angeles, CA, USA2015-09-001011Auditory warnings are often used to direct a user’s attention from a primary task to critical peripheral events. In the context of traffic, in-vehicle collision avoidance systems could, for example, employ spatially relevant sounds to alert the driver to the possible presence of a crossing pedestrian. This raises the question: What is an effective auditory alert in a steering environment? Ideally, such warning signals should not only arouse the driver but also result in deeper processing of the event that the driver is being alerted to. Warning signals can be designed to convey the time to contact with an approaching object (Gray, 2011). That is, sounds can rise in intensity in accordance with the physical velocity of an approaching threat. The current experiment was a manual steering task in which participants were occasionally required to recognize peripheral visual targets. These visual targets were sometimes preceded by a spatially congruent auditory warning signal. This was either a sound with constant intensity, linearly rising intensity, or non-linearly rising intensity that conveyed time-to-contact. To study the influence of warning cues on the arousal state, different features of electroencephalography (EEG) were measured. Alpha frequency, which ranges from 7.5 to 12.5 Hz, is believed to represent different cognitive processes, in particular arousal (Klimesch, 1999). That is, greater desynchronization in the alpha frequency reflects higher levels of attention as well as alertness. Our results showed a significant decrease in alpha power for sounds with rising intensity profiles, indicating increased alertness and expectancy for an event to occur. To analyze whether the increased arousal for rising sounds resulted in deeper processing of the visual target, we analyzed the event-related potential P3. 
It is a positive component that occurs approximately 300 ms after an event and is known to be associated with recognition performance of a stimulus (Parasuraman & Beatty, 1980). In other words, smaller P3 amplitudes indicate worse identification than larger amplitudes. Our results show that sounds with time-to-contact properties induced larger P3 responses to the targets that they cued compared to targets cued by constant or linearly rising sounds. This suggests that rising sounds with time-to-contact intensity profiles evoke deeper processing of the visual target and therefore result in better identification than events cued by sounds with linearly rising or constant intensity.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-1011Warning Signals With Rising Profiles Increase Arousal1501715422FladBC2015_37NFladHHBülthoffLLChuangRostock, Germany2015-08-00115124nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published9Combined use of eye-tracking and EEG to understand visual information processing1501715422NestmeyerFBR20157TNestmeyerAFranchiHHBülthoffPRobuffo GiordanoRoma, Italy2015-07-1718This paper presents a novel distributed control strategy that enables multi-target exploration while ensuring a time-varying connected topology in both 2D and 3D cluttered environments. Flexible continuous connectivity is guaranteed by gradient descent on a monotonic potential function applied on the algebraic connectivity (or Fiedler eigenvalue) of a generalized interaction graph. Limited range, line-of-sight visibility, and collision avoidance are taken into account simultaneously by weighting of the graph Laplacian. Completeness of the multi-target visiting algorithm is guaranteed by using a decentralized adaptive leader selection strategy and a suitable scaling of the exploration force based on the direction alignment between exploration and connectivity force and the traveling efficiency of the current leader. 
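The algebraic connectivity that the connectivity-maintenance strategy above preserves can be computed directly from the weighted graph Laplacian; the example graphs below are illustrative assumptions:

```python
# Sketch of the algebraic connectivity (Fiedler eigenvalue): the
# second-smallest eigenvalue of the graph Laplacian is positive iff
# the graph is connected. Example graphs are assumptions.
import numpy as np

def algebraic_connectivity(W):
    """Fiedler eigenvalue lambda_2 of a weighted adjacency matrix W."""
    W = np.asarray(W, dtype=float)
    L = np.diag(W.sum(axis=1)) - W   # graph Laplacian
    eig = np.sort(np.linalg.eigvalsh(L))
    return eig[1]

# A connected 3-robot line graph vs. a disconnected pair plus a singleton
line = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
split = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
print(algebraic_connectivity(line))   # > 0: connected
print(algebraic_connectivity(split))  # 0: disconnected
```

Gradient descent on a monotonic function of this eigenvalue, as described above, keeps it bounded away from zero and hence keeps the interaction graph connected.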
Extensive Monte Carlo simulations with a group of several quadrotor UAVs show the practicability, scalability, and effectiveness of the proposed method.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published7Decentralized Multi-target Exploration and Connectivity Maintenance with a Multi-robot System1501715422GerboniGONBP20147CAGerboniSGeluardiMOlivariFMNieuwenhuizenHHBülthoffLPolliniSouthampton, UK2015-07-00615626This paper describes the different phases of realizing and validating a helicopter model for the MPI CyberMotion
Simulator (CMS). The considered helicopter is a UH-60 Black Hawk. The helicopter model was developed
based on equations and parameters available in the literature. First, the validity of the model was assessed by
performing tests based on ADS-33E-PRF criteria using closed-loop controllers and with a non-expert pilot.
Results on simulated data were similar to results obtained with the real helicopter. Second, the validity of the
model was assessed with a helicopter pilot in-the-loop in both a fixed-base simulator and the CMS. The pilot
performed a vertical remask maneuver defined in ADS-33E-PRF. Most metrics for performance were reached
adequately with both simulators. The motion cues in the CMS allowed for improvements in some of the metrics.
The pilot was also asked to give a subjective evaluation of the model by answering the Israel Aircraft Industries
Pilot Rating Scale (IAI PRS). In line with the results of ADS-33E-PRF, pilot responses confirmed that the motion
cues provided a more realistic flight experience.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/ERF-2014-Gerboni.pdfpublished11Development of a 6 dof nonlinear helicopter model for the MPI Cybermotion Simulator1501715422Chuang20157LLChuangLos Angeles, CA, USA2015-07-00311A control schema for a human-machine system allows the human operator to be integrated as a mathematical description in a closed-loop control system, i.e., a pilot in an aircraft. Such an approach typically assumes that error feedback is perfectly communicated to the pilot who is responsible for tracking a single flight variable. However, this is unlikely to be true in a flight simulator or a real flight environment. This paper discusses different aspects that pertain to error visualization and the pilot’s ability to seek out relevant information across a range of flight variables.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published8Error Visualization and Information-Seeking Behavior for Air-Vehicle Control1501715422GeluardiNPB20157SGeluardiFMNieuwenhuizenLPolliniHHBülthoffVirginia Beach, VA, USA2015-05-0614281436This paper presents the implementation of classic augmented control strategies applied to an identified civil light helicopter
model in hover. The aim of this study is to enhance the stability and controllability of the helicopter model and to
improve its Handling Qualities (HQs) in order to meet those defined for a new category of aircraft, Personal Aerial
Vehicles (PAVs). Two control methods were used to develop the augmented systems: H∞ control and μ-synthesis. The
resulting augmented systems were compared in terms of achieved robust stability, nominal performance and robust
performance. The robustness was evaluated against parametric uncertainties and external disturbances modeled as realistic atmospheric turbulence that might be experienced in hover and low-speed flight. The main result achieved in
this work is that classical control techniques can augment a linear helicopter model to match PAV responses at low
frequencies. As a consequence, the achieved HQ performance resembles that defined for PAV pilots. However, both control techniques performed poorly for some specific uncertainty conditions, demonstrating unsatisfactory performance robustness. The differences, advantages and limitations of the implemented control architectures with respect to the considered requirements are described in the paper.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published8Augmented Systems for a Personal Aerial Vehicle Using a Civil Light Helicopter Model1501715422GarsoffkyMHS20157BGarsoffkyTMeilingerCHoreisSSchwanZürich, Switzerland2015-05-0455Movies and especially animations, where cameras can move nearly without any restriction, often use moving cameras, thereby intensifying continuity [Bor02] and influencing the impression of cinematic space [Jon07]. Further studies effectively use moving cameras to explore the perception and processing of real-world action [HUGG14]. But what is the influence of simultaneous multiple movements of actors and camera on the basic perception and understanding of film sequences? It seems reasonable to expect that understanding of object movement is easiest from a static viewpoint, but that moving viewpoints can nevertheless be partialed out during perception.
The prototype can be extended to a light-weight variable stiffness actuator. The flexible-joint arm is mounted on a quadrotor, to be used in aerial physical interaction tasks, which implies that the elastic components can also be used for stable interaction, absorbing the interaction disturbances that might damage the flying system and its hardware. The design is validated through several experiments, and future developments are discussed in the paper.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/ICRA-2015-Yueksel.pdfpublished6Design, Identification and Experimental Testing of a Light-Weight Flexible-joint Arm for Aerial Physical Interaction1501715422RajappaRBF20157SRajappaMRyllHHBülthoffAFranchiSeattle, WA, USA2015-05-0040064013Mobility of a hexarotor UAV in its standard configuration is limited, since all the propeller force vectors are parallel and they achieve only 4-DoF actuation, similar, e.g., to quadrotors. As a consequence, the hexarotor pose cannot track an arbitrary trajectory while the center of mass is tracking a position trajectory. In this paper, we consider a different hexarotor architecture where the propellers are tilted, without the need of any additional hardware. In this way, the hexarotor gains 6-DoF actuation, which allows it to independently reach positions and orientations in free space and to exert forces on the environment to resist any wrench for aerial manipulation tasks. After deriving the dynamical model of the proposed hexarotor, we discuss the controllability and the tilt-angle optimization to reduce the control effort for the specific task. An exact feedback linearization and decoupling control law is proposed based on the input-output mapping, considering the Jacobian and task acceleration, for non-linear trajectory tracking.
The capabilities of our approach are shown by simulation results.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/ICRA-2015-Rajappa.pdfpublished7Modeling, Control and Design Optimization for a Fully-actuated Hexarotor Aerial Vehicle with Tilted Propellers1501715422StegagnoMB20157PStegagnoCMassiddaHHBülthoffSalamanca, Spain2015-04-00307313The ability to identify the target of a common action is fundamental for the development of a multi-robot team able to interact with the environment. In most existing systems,
the identification is carried out individually, based on color coding, shape identification or complex vision systems. These methods usually assume a broad point of view over the objects, which are observed in their entirety. This assumption is sometimes difficult to fulfil in practice, particularly in swarm systems, which consist of a multitude of small robots with limited sensing and computational capabilities.
In this paper, we propose a method for target identification with a heterogeneous swarm of low-informative,
spatially-distributed sensors, employing a distributed version of the naive Bayes classifier. Despite limited individual sensing capabilities, the recursive application of Bayes' law allows identification if the robots cooperate, sharing the information that they are able to gather from their limited points of view. Simulation results show the effectiveness of this approach, highlighting some properties of the developed algorithm.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/SAC-2015-Stegagno.pdfpublished6Distributed Target Identification in Robotic Swarms1501715422SoykaLKBSM20157FSoykaEKokkinaraMLeyrerHHBülthoffMSlaterBJMohlerArles, France2015-03-253340The International Air Transport Association forecasts that there will be at least a 30% increase in passenger demand for flights over the next five years. In these circumstances the aircraft industry is looking for new ways to keep passengers occupied, entertained and healthy, and one of the methods under consideration is immersive virtual reality. It is therefore becoming important to understand how motion sickness and presence in virtual reality are influenced by physical motion. We were specifically interested in the use of head-mounted displays (HMD) while experiencing in-flight motions such as turbulence. 50 people were tested in different virtual environments varying in their context (virtual airplane versus magic carpet ride over tropical islands) and the way the physical motion was incorporated into the virtual world (matching visual and auditory stimuli versus no incorporation). Participants were subjected to three brief periods of turbulent motions realized with a motion simulator. Physiological signals (postural stability, heart rate and skin conductance) as well as subjective experiences (sickness and presence questionnaires) were measured.
None of our participants experienced severe motion sickness during the experiment, and although there were only small differences between conditions, we found indications that it is beneficial for both wellbeing and presence to choose a virtual environment in which turbulent motions could be plausible and perceived as part of the scenario. Therefore, we can conclude that brief exposure to turbulent motions does not make participants sick.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published7Turbulent Motions Cannot Shake VR150171542215017PaulM20157SPaulBMohlerArles, France2015-03-0012So far in my research studies with virtual reality I have focused on using body and hand motion tracking systems in order to animate different 3D self-avatars in immersive virtual reality environments (head-mounted displays or desktop virtual reality). We are using self-avatars to explore the following basic research question: what sensory information is used to perceive one's body dimensions? And
the applied question of how we can best create a calibrated self-avatar for efficient use in first-person immersive head-mounted display interaction scenarios. The self-avatar used for such research questions and applications has to be precise and easy to use, and must enable the virtual hand and body to interact with physical objects. This is what my research has focused on thus far and what I am developing for the completion of the first year of my graduate studies. We plan
to use LEAP motion for hand and arm movements and the Moven
Inertial Measurement suit for full body tracking and the Oculus DK2 head-mounted display. A several-step process of setting up and calibrating an animated self-avatar with full body motion and hand tracking is described in this paper. First, the user’s dimensions will be measured and they will be given a self-avatar with these dimensions; then they will be asked to perform pre-determined actions (e.g. touching objects, walking along a specific trajectory); then we
will estimate in real time how precise the animated body and body parts are relative to the real-world reference objects; and finally a scaling of the avatar size or retargeting of the motion is performed in order to meet a specific minimum error requirement.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published1Animated self-avatars in immersive virtual reality for studying body perception and distortions150171542215017Chang20147D-SChangTübingen, Germany2015-02-005764nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published7Die Wahrnehmung von sozialen Signalen1501715422OlivariNBP20157MOlivariFMNieuwenhuizenHHBülthoffLPolliniKissimmee, FL, USA2015-01-00284298Methods for identifying the neuromuscular response commonly assume time-invariant neuromuscular dynamics. However, neuromuscular dynamics are likely to change during realistic control scenarios. In a previous paper we presented a method for identifying time-varying neuromuscular dynamics based on a Recursive Least Squares (RLS) algorithm. To date, this method has only been validated in a Monte Carlo simulation study. This paper presents an experimental validation of the same method. In the experiment, three different disturbance-rejection tasks were performed: a position task with the human instructed to minimize the stick deflection in the presence of an external force disturbance, a relax task with the instruction to relax the arm, and a time-varying task with the instruction to alternate between position and relax tasks. The position and relax tasks induce different time-invariant neuromuscular dynamics, whereas the time-varying task induces time-varying neuromuscular dynamics. The RLS-based method was used to estimate neuromuscular dynamics in the three tasks. The neuromuscular estimates were reliable in both time-invariant and time-varying tasks.
These findings indicate that the RLS-based method can be used to estimate time-varying neuromuscular responses in human-in-the-loop experiments.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published14Identifying Time-Varying Neuromuscular Response: Experimental Evaluation of a RLS-based Algorithm1501715422vonLassbergBC20152Cvon LassbergKABeykirchJLCamposNova BiomedicalNew York, NY, USA2015-00-006181Visual orientation during head- and self-motion without retinal image slips requires efficient gaze-stabilizing oculomotor functions that support unblurred retinal function. These mechanisms are driven by different sensor systems, such as vestibular afferents (vestibulo-ocular reflex - VOR) or retinal afferents (optokinetic reflex - OKR), to generate reflexive eye movements that continuously compensate for 3-dimensional head motions in space. High-level athletes trained in sports that involve fast and complex rotational movements (i.e., gymnasts) have a highly developed capability of efficiently orienting while executing such complex movements. This spatial orientation ability can, to a certain extent, be learned. However, one's intrinsic aptitude for easily coping with such multiaxial orientation challenges seems to be specific to each individual. It is not clear what role the individual level of VOR precision plays among the possible factors that determine such individual aptitudes. The aim of the present study is to examine to what extent the individual level of VOR correlates with the individual aptitude to cope with multiaxial spatial orientation challenges as required in gymnastics. For this we used a method to evaluate the individual aptitude for multiaxial spatial orientation during actively performed maneuvers in competitive gymnasts by exploiting the accumulated expertise of coaches. We directly compared these expert-rating measures to individual VOR characteristics.
The results indicate relationships between these ratings and the response of the vertical VOR in gymnasts.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published20Comparing vestibulo-ocular eye movement characteristics with coaches' rankings of spatial orientation aptitudes in gymnasts1501715422PrettoVNB20152PPrettoJVenrooijANestiHHBülthoffSpringerDordrecht, The Netherlands2015-00-00131152The goal of vehicle motion simulation is the realistic reproduction of the perception a human observer would have inside the moving vehicle by providing realistic motion cues inside a motion simulator. Motion cueing algorithms play a central role in this process by converting the desired vehicle motion into simulator input commands with maximal perceptual fidelity, while remaining within the limited workspace of the motion simulator. By understanding how one’s own body motion through the environment is transduced into neural information by the visual, vestibular and somatosensory systems, and how this information is processed in order to create a whole percept of self-motion, we can qualify the perceptual fidelity of the simulation. In this chapter, we address how a deep understanding of the functional principles underlying self-motion perception can be exploited to develop new motion cueing algorithms and, in turn, how motion simulation can increase our understanding of the brain’s perceptual processes. We propose a perception-based motion cueing algorithm that relies on knowledge about human self-motion perception and uses it to calculate the vehicle motion percept, i.e. how the motion of a vehicle is perceived by a human observer. The calculation is possible through the use of a self-motion perception model, which simulates the brain’s motion perception processes. The goal of the perception-based algorithm is then to reproduce the simulator motion that minimizes the difference between the vehicle’s desired percept and the actual simulator percept, i.e. the “perceptual error”.
Finally, we describe the first experimental validation of the new motion cueing algorithm and show that an improvement in the current standards of motion cueing is possible.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published21Perception-Based Motion Cueing: A Cybernetics Approach to Motion Simulation1501715422BulthoffALB20152IBülthoffRGMArmannRKLeeHHBülthoffSpringerDordrecht, The Netherlands2015-00-00153165The other-race effect refers to the observation that we perform better in tasks involving faces of our own race compared to faces of a race we are not familiar with. This is especially interesting as from a biological perspective, the category “race” does not in fact exist (Cosmides L, Tooby J, Kurzban R, Trends Cogn Sci 7(4):173–179, 2003); visually, however, we do group the people around us into such categories. Usually, the other-race effect is investigated in memory tasks where observers have to learn and subsequently recognize faces of individuals of different races (Meissner CA, Brigham JC, Psychol Public Policy Law 7(1):3–35, 2001), but it has also been demonstrated in perceptual tasks where observers compare one face to another on a screen (Walker PM, Tanaka J, Perception 32(9):1117–1125, 2003). In all tasks (and primarily for technical reasons) the test faces differ in race and identity. To broaden our general understanding of the effect that the race of a face has on the observer, in the present study, we investigated whether an other-race effect is also observed when participants are confronted with faces that differ only in ethnicity but not in identity. To that end, using Asian and Caucasian faces and a morph algorithm (Blanz V, Vetter T, A morphable model for the synthesis of 3D faces.
In: Proceedings of the 26th annual conference on Computer graphics and interactive techniques – SIGGRAPH’99, pp 187–194, 1999), we manipulated each original Asian or Caucasian face to generate face “race morphs” that shared the same identity but whose race appearance was manipulated stepwise toward the other ethnicity. We presented each Asian or Caucasian face pair (original face and a race morph) to Asian (South Korea) and Caucasian (Germany) participants who had to judge which face in each pair looked “more Asian” or “more Caucasian”. In both groups, participants did not perform better for same-race pairs than for other-race pairs. These results point to the importance of identity information for the occurrence of an other-race effect.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published12The Other-Race Effect Revisited: No Effect for Faces Varying in Race Only1501715422HardlessMM20142GHardlessTMeilingerHAMallotElsevier ScienceAmsterdam, The Netherlands2015-00-00133137In this article, the significance of virtual reality within the field of spatial cognition is outlined. 
The role of virtual reality is grouped in three sections addressing (1) the current and latest technology of virtual reality regarding the two main functions within virtual reality, that is, technology to interact with virtuality (input devices used to record observer actions and output devices used to simulate sensory stimuli) and technology for presenting the virtual environments to the user, (2) the usage of this technology for the purpose of research in the field of spatial cognition regarding behavioral and neuronal processes (discussing advantages and disadvantages of virtual reality), and (3) virtual reality experiments and their results that are relevant in current research of spatial cognition covering place memory, wayfinding in large-scale spaces, and the neural representations of spatial features.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/hardiess_et_al_2015_virtual_reality_and_spatial_cognition.pdfpublished4Virtual Reality and Spatial Cognition1501715422SchuchardtLNP201546BISchuchardtPLehmannFNieuwenhuizenPPerfect2015-01-00VolkovaVolkmar201546EVolkova-Volkmar2015-01-00FosterB20157CFosterABartelsSchramberg, Germany2015-11-2342Our natural visual world contains a variety of different types of motion. Two of the most prominent are global
flow, the movement of the entire visual scene that occurs whenever we make an eye or head movement, and local motion, the real movement of people and objects in our environment. We constantly experience a mixture of these two kinds of motion, but generally have no problem distinguishing between them, even though they can produce similar movements on the retina. This ability of the visual system was explored in the present study. Subjects watched a feature movie, used as an approximation to the natural visual world, whilst functional magnetic resonance images (fMRI) were made of their brains. Relative amounts of global flow and local motion in the movie were determined using a motion algorithm and compared to blood-oxygenation-level-dependent (BOLD) activations in specific visual regions of interest, which were determined from standard retinotopic mapping and localizer techniques. A significant preference for local motion was identified in areas MST, V5/MT, V3A, V2 and V3. Furthermore, whole-brain analyses showed additional areas with a preference for local motion, as well as responses to global flow in areas commonly involved in the perception of our surrounding spatial environment. These findings further support the idea that different brain areas are involved in the processing of global flow and
local motion.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/NeNa-2015-Abstract-Book.pdfpublished-42Perception of Global Flow and Local Motion under Natural Conditions15017154211501715422StanglMPSBW20157MStanglTMeilingerA-APapeJSchultzHHBülthoffTWolbersChicago, IL, USA2015-10-19Navigating the environment requires the integration of distance, direction, and place information, which critically depends on hippocampal place and entorhinal grid cells. Studies in rodents have shown, however, that substantial changes in the environment’s surroundings can trigger a change in the set of active place cells, accompanied by a rotation of the grid cell firing pattern (Fyhn et al., 2007) - a phenomenon commonly referred to as global remapping. In the present study, we investigated whether human grid and place cells show a similar remapping behavior in response to environmental changes and whether different episodes in the same environment might cause remapping as well. In two experiments, participants underwent 3T fMRI scanning while they navigated a virtual environment, comprising two different rooms in which objects were placed in random locations. Participants explored the first room and learned these object-location conjunctions (learning-phase), after which the objects disappeared and participants were asked to navigate repeatedly to the different object locations (test-phase). This procedure (i.e. a learning- and test-phase within a room) was repeated several times, separated by different events, such as leaving and re-entering the same room, or moving to the second, different room. Indicators of grid cell firing were derived from the BOLD activation while participants moved within the virtual environment, whereas indicators of place cell firing were derived from the activation patterns while participants were standing at particular object locations. 
We compared these indicators between the different rooms and events to investigate how these manipulations influence remapping. Overall, our findings demonstrate entorhinal grid cell and hippocampal place cell remapping in humans. Furthermore, our results suggest that besides environmental changes, other events (e.g., re-entering the same environment) might also evoke remapping. We conclude that, in humans, remapping is not only environment-based but also event-based and might serve as a neural mechanism to create distinct memory traces for episodic memory formation.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Triggers of entorhinal grid cell and hippocampal place cell remapping in humans1501715422FademrechtBBd2015_27LFademrechtIBülthoffNEBarracloughSde la RosaChicago, IL, USA2015-10-18Actions often occur in the visual periphery. Here we measured the spatial extent of action sensitive perceptual channels across the visual field using a behavioral action adaptation paradigm. Participants viewed an action (punch or handshake) for a prolonged amount of time (adaptor) and subsequently categorized an ambiguous test action as either 'punch' or 'handshake'. The adaptation effect refers to the biased perception of the test stimulus due to the prolonged viewing of the adaptor and the resulting loss of sensitivity to that stimulus. Therefore, the more a channel responds to a specific stimulus, the higher the adaptation effect for that channel. We measured the size of the adaptation effect as a function of the spatial distance between adaptor and test stimuli in order to determine if actions can be processed in spatially distinct channels. Specifically, we adapted participants at 0° (fixation), 20° and 40° eccentricity in three separate conditions to measure the putative spatial extent of action channels at these positions. In each condition, we measured the size of the adaptation effect at −60°, −40°, −20°, 0°, 20°, 40° and 60° eccentricity.
We fitted Gaussian functions to describe the channel response of each condition and used the full width at half maximum (FWHM) of the Gaussians as a measure of the spatial extent of the action channels. In contrast to previous reports of an increase of midget ganglion cell dendritic field size with eccentricity (Dacey, 1993), our results showed that FWHM decreased with eccentricity (FWHM at 0°: 56°, FWHM at 20°: 29°, FWHM at 40°: 26°). We then asked whether the response of these action sensitive perceptual channels can be used to predict the average recognition performance (d') of social actions across the visual field obtained in a previous study (Fademrecht et al. 2014). We used G(x), the summed response of all three channels at eccentricity x, to predict recognition performance at eccentricity x. A simple linear transformation of the summed channel response of the form a+b*G(x) was able to predict 95.5% of the variation in the recognition performance. Taken together, these results demonstrate that actions can be processed in separate, spatially distinct perceptual channels, whose FWHM decreases with eccentricity and whose responses can be used to predict action recognition performance in the visual periphery.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0The spatial extent of action sensitive perceptual channels decrease with visual eccentricity1501715422ChangJBd20157D-SChangUJuHHBülthoffSde la RosaSt. Pete Beach, FL, USA2015-09-00493The way we use social actions in everyday life to interact with other people differs across various cultures. Can this cultural specificity of social interactions be already observed in perceptual processes underlying the visual recognition of actions? In the current study, we investigated whether there were any differences in action recognition between Germans and Koreans using a visual adaptation paradigm.
German (n=24, male=10, female=14) and Korean (n=24, male=13, female=11) participants first had to recognize and describe four different social actions (handshake, punch, wave, fist-bump) presented as brief movies of point-light-stimuli. The actions handshake, punch and wave are commonly known in both cultures, but fist-bump is largely unknown in Korea. In the subsequent adaptation experiment, participants were repeatedly exposed to each of the four actions as adaptors (40 seconds in the beginning, and 3 times before each trial) in separate experimental blocks. The order of actions was mixed and balanced across all participants. In each experimental block, participants had to categorize ambiguous actions in a 2-Alternatives-Forced-Choice task. The ambiguous test stimuli were created by linearly combining the kinematic patterns of two actions such as a punch and a handshake. We measured to what degree each of the four adaptors biased the perception of the subsequent test stimulus for German and Korean participants. The actions handshake, punch and wave were correctly recognized by both Germans and Koreans, but most Koreans failed to recognize the correct meaning of a fist-bump. However, Germans and Koreans showed a remarkable similarity regarding the relative perceptual biases that the adaptors induced in the perception of the test stimuli. This consistency extended even to the action (fist-bump) which was not accurately recognized by Koreans. These results imply a surprising consistency and robustness of action recognition processes across different cultures.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-493How different is Action Recognition across Cultures? Visual Adaptation to Social Actions in Germany vs. Korea1501715422DobsSBG20157KDobsJSchultzIBülthoffJLGardnerSt. Pete Beach, FL, USA2015-09-00684Humans can easily extract who someone is and what expression they are making from the complex interplay of invariant and changeable visual features of faces. 
Recent evidence suggests that the cortical mechanisms that selectively extract information about these two socially critical cues are segregated. Here we asked if these systems are independently controlled by task demands. We therefore had subjects attend to either the identity or the expression of the same dynamic face stimuli and examined cortical representations in topographically and functionally localized visual areas using fMRI. Six human subjects performed a task that involved detecting changes in the attended cue (expression or identity) of dynamic face stimuli (8 presentations per trial of 2s movie clips depicting 1 of 2 facial identities expressing happiness or anger) in 18-20 7min scans (20 trials/scan in pseudorandom order) in 2 sessions. Dorsal areas such as hMT and STS were dissociated from more ventral areas such as FFA and OFA by their modulation with task demands and their encoding of exemplars of expression and identity. In particular, dorsal areas showed higher activity during the expression task (hMT: p< 0.05, lSTS: p< 0.01; t-test), where subjects were cued to attend to the changeable aspects of the faces, whereas ventral areas showed higher activity during the identity task (lOFA: p< 0.05; lFFA: p< 0.05). Specific exemplars of identity could be reliably decoded (using linear classifiers) from responses of ventral areas (lFFA: p< 0.05; rFFA: p< 0.01; permutation-test). In contradistinction, dorsal area responses could be used to decode specific exemplars of expression (hMT: p< 0.01; rSTS: p< 0.01), but only if expression was attended by subjects. Our data support the notion that identity and expression are processed by segregated cortical areas and that the strength of the representations for particular exemplars is under independent task control.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-684Independent control of cortical representations for expression and identity of dynamic faces1501715422ZhaoB20157MZhaoIBülthoffSt.
Pete Beach, FL, USA2015-09-00698Does a face itself determine how well it will be recognized? Unlike many previous studies that have linked face recognition performance to individuals’ face processing ability (e.g., holistic processing), the present study investigated whether recognition of natural faces can be predicted by the faces themselves. Specifically, we examined whether short- and long-term recognition memory of both dynamic and static faces can be predicted according to face-based properties. Participants memorized either dynamic (Experiment 1) or static (Experiment 2) natural faces, and recognized them with both short- and long-term retention intervals (three minutes vs. seven days). We found that the intrinsic memorability of individual faces (i.e., the rate of correct recognition across a group of participants) consistently predicted an independent group of participants’ performance in recognizing the same faces, for both static and dynamic faces and for both short- and long-term face recognition memory. This result indicates that intrinsic memorability of faces is bound to face identity rather than image properties. Moreover, we also asked participants to judge the subjective memorability of the faces they had just learned, and to judge whether they would be able to recognize the faces in a later test. The result shows that participants can extract intrinsic face memorability at encoding. Together, these results provide compelling evidence for the hypothesis that intrinsic face memorability predicts natural face recognition, highlighting that face recognition performance is not only a function of individuals’ face processing ability, but also determined by intrinsic properties of faces.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-698Intrinsic Memorability Predicts Short- and Long-Term Memory of Static and Dynamic Faces1501715422delaRosaLSSMBC20157Sde la RosaMLubkullSStreuberASaultonTMeilingerHHBülthoffRCañal-BrulandSt.
Pete Beach, FL, USA2015-09-0052How do we control our bodily movements when socially interacting with others? Research on online motor control provides evidence that task-relevant visual information is used for guiding corrective movements of ongoing motor actions. In social interactions, observers have been shown to use their own motor system for predicting the outcome of another person's action (direct matching hypothesis), and it has been suggested that this information is used for the online control of their social interactions, such as when giving someone a high five. Because only human but not non-human (e.g. robot) movements can be simulated within the observer's motor system, the human-likeness of the interaction partner should affect both the planning and online control of movement execution. We examined this hypothesis by investigating the effect of the human-likeness of the interaction partner on motor planning and online motor control during natural social interactions. To this end, we employed a novel virtual reality paradigm in which participants naturally interacted with a life-sized virtual avatar. While 14 participants interacted with a human avatar, another 14 participants interacted with a robot avatar. All participants were instructed to give a high-five to the avatar. To test for online motor control, we randomly perturbed the avatar's hand trajectories during participants' motor execution. Importantly, the human- and robot-looking avatars executed identical movements. We used optical tracking to track participants' hand positions. The analysis of hand trajectories showed that participants were faster in carrying out the high-five movements with humans than with robots, suggesting that the human-likeness of the interaction partner indeed affected motor planning. However, there was little evidence for a substantial effect of human-likeness on online motor control.
Taken together, the results indicate that the human-likeness of the interaction partner influences motor planning but not online motor control.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-52Motor planning and control: Humans interact faster with a human than a robot avatar1501715422FademrechtBd20157LFademrechtIBülthoffSde la RosaSt. Pete Beach, FL, USA2015-09-00494Although actions often appear in the visual periphery, little is known about action recognition outside of the fovea. Our previous results have shown that action recognition of moving life-size human stick figures is surprisingly accurate even in far periphery and declines non-linearly with eccentricity. Here, our aim was (1) to investigate the influence of motion information on action recognition in the periphery by comparing static and dynamic stimulus recognition and (2) to assess whether the observed non-linearity in our previous study was caused by the presence of motion, because a linear decline of recognition performance with increasing eccentricity was reported with static presentations of objects and animals (Jebara et al. 2009; Thorpe et al. 2001). In our study, 16 participants saw life-size stick figure avatars that carried out six different social actions (three different greetings and three different aggressive actions). The avatars were shown dynamically and statically on a large screen at different positions in the visual field. In a 2AFC paradigm, participants performed 3 tasks with all actions: (a) They assessed their emotional valence; (b) they categorized each of them as greeting or attack and (c) they identified each of the six actions. (1) We found better recognition performance for dynamic stimuli at all eccentricities. Thus motion information helps recognition in the fovea as well as in far periphery. (2) We observed a non-linear decrease of recognition performance for both static and dynamic stimuli.
Power law functions with exponents of 3.4 and 2.9 described the non-linearity observed for dynamic and static actions, respectively. These non-linear functions describe the data significantly better (p=.002) than linear functions and suggest that human actions are processed differently from objects or animals.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-494Recognition of static and dynamic social actions in the visual periphery1501715422BulthoffZ20157IBülthoffMZhaoSt. Pete Beach, FL, USA2015-09-00145Holistic face processing is often referred to as the inability to selectively attend to part of a face without interference from irrelevant facial parts. While extensive research has sought the origin of holistic face processing in perceiver-based properties (e.g., expertise), the present study aimed to pinpoint face-based visual information that may support this hallmark indicator of face processing. Specifically, we used the composite face task, a standard task of holistic processing, to investigate whether facial surface information (e.g., texture) or facial shape information underlies holistic face processing, since both sources of information have been shown to support face recognition. In Experiment 1, participants performed two composite face tasks, one for normal faces (i.e., shape + surface information) and one for shape-only faces (i.e., without facial surface information). We found that facial shape information alone is sufficient to elicit holistic processing as strongly as normal faces do, indicating that facial surface information is not necessary for holistic processing. In Experiment 2, we tested whether facial surface information alone is sufficient to observe holistic face processing. We chose to control facial shape information instead of removing it by having all test faces share exactly the same facial shape, while exhibiting different facial surface information.
Participants performed two composite face tasks, one for normal faces and one for same-shape faces. We found a composite face effect in normal faces but not in same-shape faces, indicating that holistic processing is mediated predominantly by facial shape rather than surface information. Together, these results indicate that facial shape, but not surface information, underlies holistic face processing.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-145What Type of Facial Information Underlies Holistic Face Processing?1501715422BulthoffMT20157IBülthoffBMohlerIMThorntonLiverpool, UK2015-08-0051In most face recognition studies, learned faces are shown without a visible body to passive participants. Here, faces were attached to a body and participants were either actively or passively viewing them before their recognition performance was tested. 3D-laser scans of real faces were integrated onto sitting or standing full-bodied avatars placed in a virtual room. In the ‘active’ learning condition, participants viewed the virtual environment through a head-mounted display. Their head position was tracked to allow them to walk physically from one avatar to the next and to move their heads to look up or down to the standing or sitting avatars. In the ‘passive dynamic’ condition, participants saw a rendering of the visual explorations of the first group. In the ‘passive static’ condition, participants saw static screenshots of the upper bodies in the room. Face orientation congruency (up versus down) was manipulated at test. Faces were recognized more accurately when viewed in a familiar orientation for all learning conditions. 
While active viewing generally improved performance compared to viewing static faces, passive dynamic observers and active observers, who received the same visual information, performed similarly, despite the absence of volitional movements for the passive dynamic observers.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-51Active and passive exploration of faces150171542215017delaRosaB20157Sde la RosaHHBülthoffLiverpool, UK2015-08-00210Previous results showed that actions can be recognized in multiple ways, suggesting that several recognition levels exist in action recognition (e.g. a waving action can be recognized as a greeting or a wave). Categorization tasks suggest that the recognition of social interactions is more accurate at the basic level (e.g. greeting) than at the subordinate level (e.g. waving). What is the origin of the supremacy of basic-level recognition? Here we examined whether basic-level recognition relies to a larger degree on configural processing than subordinate social interaction recognition. To do so, we probed basic-level and subordinate recognition performance (RT and discrimination ability (d')) of 20 participants for upright and inverted social interactions. Larger inversion effects are typically associated with stronger configural processing. Participants saw one image at a time and reported whether it matched a predefined action. Our results showed that, contrary to our initial hypothesis, subordinate recognition of social interactions was significantly more affected by stimulus inversion than basic-level recognition. Moreover, recognition performance was better for subordinate than basic-level recognition.
We show that these results can be well explained by a top-down activation of snapshot templates.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-210Inversion effects are stronger for subordinate than for basic-level action recognition1501715422FademrechtBBd20157LFademrechtNEBarracloughIBülthoffSde la RosaLiverpool, UK2015-08-00214Although actions often appear in the visual periphery, little is known about action recognition away from fixation. We showed in previous studies that action recognition of moving stick-figures is surprisingly good in peripheral vision even at 75° eccentricity. Furthermore, there was no decline of performance up to 45° eccentricity. This finding could be explained by action sensitive units in the fovea also sampling action information from the periphery. To investigate this possibility, we assessed the horizontal extent of the spatial sampling area (SSA) of action sensitive units in the fovea by using an action adaptation paradigm. Fifteen participants adapted to an action (handshake, punch) at the fovea and were then tested with an ambiguous action stimulus at 0°, 20°, 40° and 60° eccentricity left and right of fixation. We used a large screen display to cover the whole horizontal visual field of view. An adaptation effect was present in the periphery up to 20° eccentricity (p<0.001), suggesting a large SSA of action sensitive units representing foveal space. Hence, action recognition in the visual periphery might benefit from a large SSA of foveal units.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-214Seeing actions in the fovea influences subsequent action recognition in the periphery1501715422Chang20157D-SChangBudapest, Hungary2015-07-0341When two people interact, they adjust their behavior
to each other. For this, they utilize verbal or non-verbal communicative signals which are in most cases either visual or auditory. But how do people adjust their behavior with a partner when there are no possibilities to
exchange visual or auditory cues? Furthermore, do people make social inferences about each other in such a situation? In a novel experimental setup, we connected two people with a rope and they had to accomplish a joint motor task together while being separated by a blind and not able
to see or hear each other. However, the participant’s confederate was always an experimenter who behaved either egoistically or cooperatively in a consistent manner. We
measured the point-collecting behavior and speed of coordination during the interaction, and person-related judgments about the confederate after the interaction (n=24). Results showed strong partner-dependent changes in
behavior depending on whether the partner was egoistic or
cooperative (t(23)=24.21, p<0.001).
In addition, an egoistic partner was more often judged to be male and bigger in size compared to a cooperative partner. These results demonstrate that partner-dependent changes in behavior and automatic judgments occur naturally even when possibilities for communication are minimal.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-41Blindly judging other people: Social interaction with an egoistic vs. cooperative person while being connected with a rope without seeing or hearing each other1501715422delaRosaWBFSMC20157Sde la RosaYWahnHHBülthoffLFademrechtASaultonTMeilingerD-SChangBudapest, Hungary2015-07-0253Associating sensory action information with the correct action interpretation (semantic action categorization (SAC)) is important for successful joint action, e.g. for the generation of an appropriate complementary response. Vision for perception and vision for action have been suggested to rely on different visual mechanisms (two
streams hypothesis). To better understand visual processes supporting joint actions, we compared SAC processes in passive observation and in joint actions. If passive observation and joint action tap into different SAC processes, then adapting SAC processes during passive observation should not affect the generation of complementary action responses. We used an action adaptation paradigm to selectively measure SAC processes
in a novel virtual reality set up, which allowed participants to naturally interact with a human looking avatar. Participants visually adapted to an action of an avatar and gave a SAC judgment about a subsequently presented ambiguous action in three different experimental conditions: (1) by pressing a button (passive condition) or
by creating an action response either (2) after the avatar's action (active condition) or (3) simultaneously with it (joint action condition). We found no significant difference between the three conditions, suggesting that SAC mechanisms for passive observation and joint action share similar processes.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-53Does the two streams hypothesis hold for joint actions?1501715422KimCCPBK20157JKimYGChungS-CChungJ-YParkHHBülthoffS-PKimHonolulu, HI, USA2015-06-17Introduction:
As the use of mobile devices (particularly wearable devices with vibrating alert features) is becoming more widespread, investigations of the perceptual grouping of vibrotactile stimuli with different features, such as vibration frequencies, are becoming more important for the design of effective haptic user interfaces. Previous psychophysical studies demonstrated that humans perceive vibration frequencies as three distinct groups: 'slow motion' ranging from 1 to 3 Hz, 'fluttering' ranging from 10 to 70 Hz, and 'smooth vibration' ranging from 100 to 300 Hz [1, 2]. This perceptual grouping pattern has mainly been explained by the different characteristics of the tactile sensory innervations [3, 4]. However, the characteristics of tactile innervations and sensory afferents do not fully describe the perceptual grouping of vibrotactile stimuli. For instance, a boundary frequency should lie between 40 and 50 Hz according to the afferent characteristics, but perception of vibrotactile stimuli is instead discriminated between 70 and 100 Hz. Furthermore, perceptual grouping is more likely to be affected by the neural encoding of vibration frequencies in the central nervous system, in addition to the characteristics of the afferents. Here, we therefore search for the brain regions carrying frequency-discriminative information using searchlight multi-voxel pattern analysis (MVPA) and compare the neural representations of different frequencies with the perceptual grouping patterns using multidimensional scaling (MDS).
Fourteen subjects participated in this study and the experimental procedures were approved by the Korea University IRB (KU-IRB-11-46-A-1). Vibrotactile stimuli whose frequency varied from 20 to 200 Hz in increments of 20 Hz were delivered to the tip of the index finger of the right hand by a vibrotactile stimulation device. Subjects performed two sessions of ten runs (one run for each frequency). Each run consisted of two consecutive periods: a 30 s resting period followed by a 30 s stimulation period. Functional images (T2*-weighted gradient EPI, TR = 3 s, voxel size = 2.0 × 2.0 × 2.0 mm³) were obtained using a 3T scanner.
An information-based analysis with a cubical searchlight was employed to find spatially localized neuronal patterns varying with tactile frequencies. Decoding accuracies evaluated by a 2-fold cross-validation procedure were allocated to the center voxel of each searchlight. Then, we computed a correlation-based dissimilarity matrix and used MDS to map the neural representations for each of ten different frequencies onto the 2D space.
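The per-searchlight scoring described above, a decoder evaluated by 2-fold cross-validation, can be illustrated with a minimal sketch in pure Python. The nearest-centroid classifier and the toy voxel patterns below are illustrative stand-ins of ours, not the study's actual analysis pipeline:

```python
def nearest_centroid_accuracy(train, test):
    """Fit one centroid per class on `train`, then classify every pattern in
    `test` by its nearest centroid. `train`/`test` map class labels to lists
    of voxel-pattern vectors (plain lists of floats)."""
    centroids = {
        label: [sum(voxel) / len(patterns) for voxel in zip(*patterns)]
        for label, patterns in train.items()
    }

    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    correct = total = 0
    for label, patterns in test.items():
        for p in patterns:
            guess = min(centroids, key=lambda c: sq_dist(p, centroids[c]))
            correct += (guess == label)
            total += 1
    return correct / total

def two_fold_accuracy(fold_a, fold_b):
    """2-fold cross-validation: train on each half, test on the other, and
    average the two accuracies (the score assigned to a searchlight center)."""
    return 0.5 * (nearest_centroid_accuracy(fold_a, fold_b)
                  + nearest_centroid_accuracy(fold_b, fold_a))

# Two toy "frequency" classes with well-separated 3-voxel patterns:
fold_a = {"20Hz": [[1.0, 0.1, 0.0], [0.9, 0.0, 0.1]],
          "200Hz": [[0.0, 1.0, 0.9], [0.1, 0.9, 1.0]]}
fold_b = {"20Hz": [[1.1, 0.0, 0.1], [0.8, 0.2, 0.0]],
          "200Hz": [[0.0, 0.8, 1.1], [0.2, 1.0, 0.9]]}
acc = two_fold_accuracy(fold_a, fold_b)
```

In the searchlight scheme, this accuracy is computed once per cube of voxels and written to the cube's center voxel, producing a whole-brain map of decodability.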
A random-effects group analysis revealed a cluster exhibiting statistically significant decoding capabilities to differentiate distinct frequencies (p<0.0001 uncorrected, cluster size>50). This cluster covered the contralateral postcentral gyrus (S1) and the supramarginal gyrus (SMG). Mean decoding accuracy was 77.7 ± 13.8 % and significantly exceeded the chance level (t13=7.5, p<0.01). The MDS analysis showed that the neural representations of 20 and 200 Hz were mapped to the farthest positions (i.e., located on opposite sides). Moreover, hierarchical cluster analyses revealed that the neural representations of the frequencies were grouped into two clusters, one for 20-100 Hz and the other for 120-200 Hz.
In this study, we statistically assessed each set of multi-voxel patterns and revealed that contralateral S1 and SMG exhibited neural activity patterns specific to vibration frequency discrimination. The MDS results indicated that the neural representations of 20-100 Hz and 120-200 Hz were divided into two distinct groups. This grouping pattern of neural representations is in line with the perceptual frequency categories suggested by previous studies [1, 2]. Our findings therefore suggest that the neural activity patterns in contralateral S1 and SMG may be closely related to the perceptual grouping of vibrotactile frequencies.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Multi-voxel patterns in the human brain associated with perceptual grouping of tactile frequencies1501715422RoheN2015_37TRoheUNoppeneyHonolulu, HI, USA2015-06-17Introduction:
To form a reliable percept of the multisensory environment, the brain integrates signals across the senses. To estimate for example an object's location from vision and audition, the optimal strategy is to integrate the object's audiovisual signals proportional to their reliability under the assumption that they were caused by a single source (i.e., maximum likelihood estimation, MLE). Behaviorally, it is well-established that humans integrate signals weighted by their reliability in a near-optimal fashion when integrating visual-haptic (Ernst and Banks, 2002) and audiovisual signals (Alais and Burr, 2004). Recently, elegant neurophysiological studies in macaques have shown that single neurons and neuronal populations implement reliability-weighted integration of visual-vestibular signals (Fetsch, et al., 2012; Morgan, et al., 2008). Yet, it is unclear how the human brain accomplishes this feat. Combining psychophysics and multivariate fMRI decoding in a spatial ventriloquist paradigm, we characterized the computational operations underlying audiovisual reliability-weighted integration at several cortical levels along the auditory and visual processing hierarchy.
In a spatial ventriloquist paradigm, participants (N = 5) were presented with auditory and visual signals that were independently sampled from four locations along the azimuth (Fig. 1). The signals were presented alone in unisensory conditions or jointly in bisensory conditions. The spatial reliability of the visual signal was high or low. Participants localized either the auditory or the visual spatial signal. The behavioral signal weights were estimated by fitting psychometric functions to participants' localization responses in bisensory conditions without (0°, i.e. congruent) or with a small spatial discrepancy (± 6°). These empirical weights were compared to weights which were predicted according to the MLE model from the signals' sensory reliabilities estimated in unisensory conditions. Similarly, neural signal weights were estimated by fitting 'neurometric' functions to the spatial locations decoded from regional fMRI activation patterns in bisensory conditions and compared to weight predictions from unisensory conditions. For decoding signal locations, a support vector machine was trained on activation patterns from congruent conditions and then generalized to data from discrepant conditions as well as unisensory conditions.
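The MLE prediction referenced above, in which each cue is weighted by its relative reliability (inverse variance), can be sketched as follows (a minimal illustration; the function name and toy values are ours, not the study's):

```python
import math

def mle_fusion(s_v, sigma_v, s_a, sigma_a):
    """Reliability-weighted (MLE) fusion of a visual and an auditory location
    estimate. Reliability is inverse variance; each cue is weighted by its
    relative reliability, and the fused variance is smaller than either cue's."""
    r_v = 1.0 / sigma_v ** 2   # visual reliability
    r_a = 1.0 / sigma_a ** 2   # auditory reliability
    w_v = r_v / (r_v + r_a)    # visual weight (auditory weight is 1 - w_v)
    s_hat = w_v * s_v + (1.0 - w_v) * s_a
    sigma_hat = math.sqrt(1.0 / (r_v + r_a))
    return s_hat, w_v, sigma_hat

# A reliable visual cue (sigma = 1 deg) paired with a noisy auditory cue
# (sigma = 3 deg): the fused location is pulled strongly toward the visual cue.
loc, w_v, sd = mle_fusion(6.0, 1.0, 0.0, 3.0)
```

With these toy numbers the visual weight is 0.9, so the fused estimate lies much closer to the visual signal; comparing such predicted weights against the empirically fitted behavioral and neural weights is the test of the MLE model described above.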
In summary, the results demonstrate that higher-order multisensory regions perform probabilistic computations such as reliability-weighting. However, despite the small signal discrepancy, the signals were not mandatorily integrated as predicted by the MLE model, because task-relevant signals attained larger weights. Thus, probabilistic multisensory computations might involve more complex processes than mandatory reliability-weighted integration, such as inferring whether the signals were caused by a common source or by independent sources (i.e., causal inference). Only under conditions in which the assumption of a common source is fostered (e.g., by presenting only correlated signals with a small discrepancy) might multisensory signals be fully integrated, weighted by their reliability.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Task-dependent reliability-weighted integration of audiovisual spatial signals in parietal cortex15017188261501715422deWinkelB20157Kde WinkelHHBülthoffPisa, Italy2015-06-157475It has been shown repeatedly that visual and inertial sensory information on the heading of self-motion is fused by the CNS in a manner consistent with Bayesian Integration (BI). However, a few studies report violations of BI predictions. This dichotomy in experimental findings previously led us to develop a Causal Inference model for multisensory heading estimation, which could account for different strategies of processing multisensory heading information, based on discrepancies between the headings of the visual and inertial cues. Surprisingly, the results of an assessment of this model showed that multisensory heading estimates were consistent with BI regardless of any discrepancy. Here, we hypothesized that Causal Inference is a slow top-down process, and that heading estimates for discrepant cues show less consistency with BI when motion duration increases.
Six participants were presented with unisensory visual and inertial horizontal linear motions with headings ranging between ±180°, and combinations thereof with discrepancies up to ±90°. Motion profiles followed a single period of a raised cosine bell with a maximum velocity of 0.3m/s, and had durations of two, four, and six seconds. For each stimulus, participants provided an estimate of the heading of self-motion. In general, the results showed that the probability that heading estimates are consistent with BI decreases as a function of stimulus duration, consistent with the hypothesis. We conclude that BI is likely to be a default mode of processing multisensory heading information, and that Causal Inference is a slow top-down process that interferes only given enough time.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published1Effects of Motion Duration on Causal Inference in Multisensory Heading Estimation1501715422YukselSSF20157BYükselNStaubCSecchiAFranchiSeattle WA, USA.2015-05-26nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2015/ICRA-2015-Workshop-Yueksel.pdfpublished0Aerial Physical Interaction: Design, Control, Identification and Estimation1501715422GeussS20157MNGeussJStefanucciAmsterdam, The Netherlands2015-03-153Fear is characterized by both state- and trait-level changes, both of which can temporally fluctuate to alter behavior. We observed an interaction between state and trait fear on perceptual estimates over time. When trait fear was low, estimates increased with state fear. High trait fear led to consistent overestimation.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-3Height Estimates Are Altered by State- and Trait-Levels of Fear1501715422SaultonDB20157ASaultonTDoddsHHBülthoffAmsterdam, The Netherlands2015-03-1318We demonstrate that German and South Korean cultures perceive the size of surrounding indoor spaces differently. 
While Koreans seem to attend to all aspects/dimensions of rooms when comparing their size, Germans anchor on a single dimension of the space (egocentric depth), resulting in biases in room size perception.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-18Holistic Versus Analytic Perception of Indoor Spaces: Korean and German Cultural Differences in Comparative Judgments of Room Size1501715422ZhaoB2015_27MZhaoIBülthoffAmsterdam, The Netherlands2015-03-1232We demonstrate that both encoding and memory processes affect recognition of own- and other-race faces differently. Static own-race faces are better recognized than static other-race faces, but this other-race effect is not found for rigidly moving faces. Further, this effect is larger in short-term memory than in long-term memory.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-32Memory of Own- and Other-Race Faces: Influences of Encoding and Retention Processes1501715422SymeonidouOBC20157E-RSymeonidouMOlivariHHBülthoffLLChuangHildesheim, Germany2015-03-10249250Haptic feedback can be introduced in control devices to improve steering performance, such as in driving and flying scenarios. For example, direct haptic feedback (DHF) can be employed to guide the operator towards an optimal trajectory. It remains unclear how DHF magnitude interacts with user performance. A weak DHF might not be perceptible to the user, while a large DHF could result in overreliance. To assess the influence of DHF, five naive participants performed a compensatory tracking task across different DHF magnitudes. During the task, participants were seated in front of an artificial horizon display and were asked to compensate for externally induced disturbances in the roll dimension by manipulating a control joystick. Our results indicate that haptic feedback benefits steering performance across all tested DHF levels. This benefit increases linearly with increasing DHF magnitude.
Interestingly, shared control performance was always inferior to that of the same DHF system without human input. This could be due to involuntary resistance resulting from arm dynamics.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published1Direct haptic feedback benefits control performance during steering1501715422HuffPMd20157MHuffHPapenmeierTMeilingerSde la RosaHildesheim, Germany2015-03-10122When processing the semantic relations in a picture, observers are faster in determining the agent (i.e. the acting person) than the patient of an action (i.e. the person receiving an action). This “agent advantage effect” was shown with static pictorial stimulus material (e.g., one fish biting another fish). We investigated whether this effect also holds true for dynamic social interactions (e.g. one person pushing another person). The most important difference between static and dynamic stimuli is the amount of change per time unit, which differs between agents and patients. Participants viewed dynamic animations depicting two stick figures, with one patting the other on the shoulder. The viewing angle on this interaction as well as the start frame of the movement were systematically varied and randomly presented. Participants were instructed to search for the agent (i.e. the person patting) and the patient (i.e. the person being patted; order counterbalanced across participants) in these interactions and to press the button corresponding to the location on the screen. Results indicated a reversed “agent advantage effect”, with participants being more accurate when searching for the patient.
This suggests that motion information derived from the dynamic interactions interacts with semantic processing.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-122Semantic Relations in Asymmetric Dynamic Social Interactions1501715422FladBC20157NFladHHBülthoffLLChuangHildesheim, Germany2015-03-1081Eye-movements can result in large artifacts in the EEG signal that could potentially obscure weaker cortically-based signals. Therefore, EEG studies are typically designed to minimize eye-movements [although see Plöchl et al., 2012; Dimigen et al., 2011]. We present methods for simultaneous EEG and eye-tracking recordings in a visual scanning task. Participants were required to serially attend to four areas-of-interest to detect a visual target. We compare EEG results, which were recorded either in the presence or absence of natural eye-movements. Furthermore, we demonstrate how natural eye-movement fixations can be reconstructed from the EOG signal, in a way that is comparable to the input from a simultaneous video-based eye-tracker. Based on these fixations, we address how EEG data can be segmented according to
eye-movements (as opposed to experimentally timed stimuli). Finally, we explain how eye-movement-induced artifacts can be effectively removed via independent component analysis (ICA), which allows EEG components to be classified as having either a 'cortical' or 'non-cortical' origin. These methods offer the potential of measuring robust EEG signals even in the presence of natural eye-movements.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-81Simultaneous EEG and eye-movement recording in a visual scanning task1501715422GlatzBC20157CGlatzHHBülthoffLLChuangHildesheim, Germany2015-03-1093Sounds with rising intensities are known to be more salient than their constant amplitude counterparts [Seifritz et al., 2002]. Incorporating a time-to-contact characteristic into the rising profile can further increase their perceived saliency [Gray, 2011]. We investigated whether
looming sounds with this time-to-contact profile might be especially effective as warning signals. Nine volunteers performed a primary steering task whilst occasionally discriminating oriented Gabor patches that were presented in their visual periphery. These visual stimuli could be preceded by an auditory warning cue, 1 second before they appeared. The 2000 Hz tone could have an intensity profile that was either constant (65 dB), linearly rising (60-75 dB, ramped tone), or exponentially increasing (looming tone). Overall, warning cues resulted in significantly faster and more sensitive detections of the visual targets. More importantly, we found that EEG potentials to the looming tone were significantly earlier and sustained for longer, compared to both the constant and ramped tones. This suggests that looming sounds are processed preferentially because of time-to-contact cues rather than rising intensity alone.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-93Sounds with time-to-contact properties are processed preferentially1501715422ScheerBC20157MScheerHHBülthoffLLChuangHildesheim, Germany2015-03-09220The workload of a given task, such as steering, can be defined as the demand that it places on the limited attentional and cognitive resources of a driver. Given this, an increase in workload should reduce the amount of resources that are available for other tasks. For example, increasing workload in a primary steering task can decrease attention to oddball targets in a secondary auditory detection task. This can diminish the amplitude of its event-related potential (i.e., P3; Wickens et al., 1984). Here, we present a novel approach that does not require the participant to perform a secondary task. During steering, participants experienced a three-stimulus oddball paradigm, where pure tones were intermixed with infrequently presented, unexpected environmental sounds (e.g., cat meowing).
Such sounds are known to elicit a subcomponent of the P3, namely novelty-P3. Novelty-P3 reflects a passive shift of attention, which also applies to task-irrelevant events, thus removing the need for a secondary task (Ullsperger et al., 2001). We found that performing a manual steering task attenuated the amplitude of the novelty-P3, elicited by task-irrelevant novel sounds. The presented paradigm could be a viable approach to estimate workload in real-world scenarios.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-220Measuring workload during steering: A novelty-P3 study1501715422Ryll201515MRyll2015-07-27nonotspecifiedpublishedA novel overactuated quadrotor UAV1501715422delaRosaM201641Sde la RosaTMeilingerMeilinger201510TMeilingerChuang2015_310LLChuangBulthoff2015_410JVenrooijHHBülthoffChang2015_510D-SChangBulthoff2015_310HHBülthoffAhmadL201510AAhmadPLimaReichenbach201510AReichenbachChuang2015_210LChuangCurioB201510CCurioMBreidtSaultonMBd201510ASaultonTMeilingerHHBülthoffSde la RosavanderHamWM201510Ivan der HamJWienerTMeilingerMeilingerRHBM201510TMeilingerJRebaneAHensonHHBülthoffHAMallotMeilingerTFWBd201510TMeilingerKTakahashiCFosterKWatanabeHHBülthoffSde la RosaCleijVPPMB201510DCleijJVenrooijPPrettoDMPoolMMulderHHBülthoffPaul201510SPaulFladBC2015_210NFladHHBülthoffLLChuangCanalBrulandd201510RCañal-BrulandSde la RosaChangBd201510D-SChangHHBülthoffSde la RosaChang2015_410D-SChangChang2015_310D-SChangBulthoff201510HHBülthoffChangJBd2015_210D-SChangUJuHHBülthoffSde la RosaChang2015_210D-SChangMeilingerd201510TMeilingerSde la RosaChangd201510D-SChangSde la RosaBreidt201510LTrutoiuMBreidtBMohlerASteedChangBBd201510D-SChangFBurgerHHBülthoffSde la RosaBulthoff2015_210HHBülthoffMeilingerHB201510TMeilingerAHensonHHBülthoffFosterTKHBdWBM201510CFosterKTakahashiSKurekCHoreisMJBäuerleSde la RosaKWatanabeMVButzTMeilingerdelaRosaLSMBC201510Sde la 
RosaMLubkollASaultonTMeilingerHHBülthoffCCañal-BrulandStickrodtM201510MStickrodtTMeilingerLeroyZBBM201510CLeroyMZhaoMVButzHHBülthoffTMeilingerMeilingerGHS201510TMeilingerBGarsoffkyCHoreisSSchwanChuangNWB201510LLChuangFMNieuwenhuizenJWalterHHBülthoffdeWinkel2015_210Kde WinkelNooij201510SAENooijdeWinkel201510Kde WinkelGiani20151AGianiLogos VerlagBerlin, Germany2014-00-00Throughout the day, our senses provide us with a rich stream of information about the environment: We see colours and shapes, hear music or smell food. With seemingly no effort, the human brain integrates these signals to create a conscious sensory experience of the external world. Yet, this sensory experience is not a truthful representation of the physical world. Instead, it is crucially shaped by a variety of processes, two of which are the focus of the current work: multisensory integration and awareness.
Yet, despite their importance, relatively little is known about the mechanisms that enable perceptual awareness within a multisensory world. For example, does multisensory integration occur automatically, or are higher-order cognitive processes (such as awareness) necessary to bind the information? And where does awareness emerge within the human brain? The current work describes three experimental studies, which were designed to provide further insights into auditory and visual perception and the human brain.Tübingen, Univ., Diss., 2014nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published120From multiple senses to perceptual awareness15017188261501715422Alaimo20141SAlaimoLogos VerlagBerlin, Germany2014-00-00Neither remote piloted systems for Unmanned Aerial Vehicles nor Fly-By-Wire systems for manned aircraft transfer to the pilot important information or cues regarding the state of the aircraft and the loads being imposed by the pilot's control actions. These cues have been shown to be crucial for pilot situational awareness; their absence has a negative impact on system performance, especially in the presence of remote and unforeseen environmental constraints and disturbances.
Extending the visual feedback with force feedback can complement the visual information when it is missing or limited. An artificially recreated sense of touch (haptics) may allow the operator to better perceive the state of the remote aircraft, the environment and its constraints, hopefully preventing dangerous situations. The dissertation first introduces a novel classification of haptic aid systems into two large classes: Direct Haptic Aid (DHA) and Indirect Haptic Aid (IHA). Then, after showing that almost all existing aid concepts belong to the first class, it focuses on IHA and shows that classical applications (which use a DHA approach) can be revised in an IHA fashion.
The novel IHA systems produce different sensations, which in most cases are exactly "opposite in sign" to those of the corresponding DHA; such sensations can provide valuable cues for the pilot, both in terms of performance improvement and "level of appreciation". Furthermore, the present dissertation shows that the novel IHA cueing algorithms, which were designed only to feel "natural" to the operator rather than to directly help the pilot in the task (as in the DHA cases), can outperform the corresponding DHA systems.
Three case studies are selected: obstacle avoidance, wind gust rejection, and a combination of the two. For all the cases, DHA and IHA systems are designed and compared against baseline performance with no haptic aid. Both professional pilots and naive subjects tested them in an extensive experimental campaign. Test results show that the IHA cues provide a net performance improvement over both the DHA cues and the visual cues alone.
Ultimately, the aim of this thesis is to show that the IHA philosophy is a valid and promising alternative to the commonly used approaches published in the scientific literature, which fall into the DHA category.
Finally, the haptic cue for the obstacle avoidance task was tested in the presence of time delay in the communication link, as in a classical bilateral teleoperation scheme. The Master was provided with an admittance controller, and an observer of the force exerted by the human on the stick was developed. Experiments have shown that the proposed system is capable of withstanding substantial communication delays.Zugl.: Università di Pisa & Max Planck Institute for Biological Cybernetics, Diss., 2013nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published241Novel Haptic Cues for UAV Tele-Operation1501715422Bieg20141H-JBiegLogos VerlagBerlin, Germany2014-00-00Saccades are rapid eye movements that relocate the fovea, the retinal area with highest acuity, to fixate different points in the visual field in turn. Where and when the eyes shift needs to be tightly coordinated with our behavior. The current thesis investigates how this coordination is achieved.
Part I examines the coordination of eye and hand movements. Previous studies suggest that the neural processes that coordinate saccades and hand movements do so by adjusting the onset time and movement speed of saccades. I argue against this hypothesis by showing that the need to process task-relevant visual information at the saccade endpoint is sufficient to cause such adjustments. Rather than a mechanism to coordinate the eyes with the hands, changes in saccade onset time and speed may reflect the increased importance of vision at a saccade's target location.
Part II examines the coordination of smooth pursuit and saccadic eye movements. Smooth pursuit eye movements are slow eye movements that follow a moving object of interest. The eyes frequently alternate between smooth pursuit and saccadic eye movements, which suggests that their control processes are closely coupled. In support of this idea, smooth pursuit eye movements are shown to systematically influence the onset time of saccadic eye movements. This influence may rest on two different mechanisms: first, a bias in visual attention in the direction of pursuit for saccades that occur during smooth pursuit; second, a mechanism that inhibits the saccadic response in the case of saccades to a moving target. Evidence for the latter hypothesis is provided by the observation that both the probability of occurrence and the latency of saccades to a moving target depend on the target's eccentricity and velocity.Tübingen, Univ., Diss., 2014nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published130On the coordination of saccades with hand and smooth pursuit eye movements1501715422Leyrer20141MLeyrerLogos VerlagBerlin, Germany2014-00-00Today, virtual reality technology is a multi-purpose tool for diverse applications in various domains. However, research has shown that virtual worlds are often not perceived in scale, especially regarding egocentric distances, as the programmer intended them. While the main reason for this misperception of distances in virtual environments is still unknown, this dissertation investigates one specific aspect of fundamental importance to distance perception – eye height.
In human perception, the ability to determine eye height is essential, because eye height is used to perceive heights of objects, velocity, affordances and distances, all of which allow for successful environmental interaction. It is reasonably well understood how eye height is used to determine many of these percepts. Yet, how eye height itself is determined is still unknown. In multiple studies conducted in virtual reality and the real world, this dissertation investigates how eye height might be determined in common scenarios in virtual reality.
Using manipulations of the virtual eye height and distance perception tasks, the results suggest that humans rely more on their body-based information to determine their eye height if they have no possibility for calibration. This has major implications for many existing virtual reality setups. Because humans rely on their body-based eye height, this can be exploited to systematically alter the perceived space in immersive virtual environments, which might be sufficient to give every user an experience close to what the programmer intended.Tübingen, Univ., Diss., 2014nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published164Understanding and Manipulating Eye Height to Change the User's Experience of Perceived Space in Virtual Reality150171542215017BaileyKMS201428RBaileySKuhlBMohlerKSinghZhaoHB2014_23MZhaoWGHaywardIBülthoff2014-12-0010561–69Face recognition, holistic processing, and processing of configural and featural facial information are known to be influenced by face race, with better performance for own- than other-race faces. However, whether these various other-race effects (OREs) arise from the same underlying mechanisms or from different processes remains unclear. The present study addressed this question by measuring the OREs in a set of face recognition tasks, and testing whether these OREs are correlated with each other. Participants performed different tasks probing (1) face recognition, (2) holistic processing, (3) processing of configural information, and (4) processing of featural information for both own- and other-race faces. Their contact with other-race people was also assessed with a questionnaire. The results show significant OREs in tasks testing face memory and processing of configural information, but not in tasks testing either holistic processing or processing of featural information. Importantly, there was no cross-task correlation between any of the measured OREs. 
Moreover, the level of other-race contact predicted only the OREs obtained in tasks testing face memory and processing of configural information. These results indicate that these various cross-race differences originate from different aspects of face processing, contrary to the view that the ORE in face recognition is due to cross-race differences in terms of holistic processing.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-61Holistic processing, contact, and the other-race effect in face recognition1501715422LeeLLPCWBC20143I-SLeeA-RLeeHLeeH-JParkS-YChungCWallravenIBülthoffYChae2014-12-00619680686Acne vulgaris is a common inflammatory disease that manifests on the face and affects appearance. In general, facial acne has a wide-ranging negative impact on the psychosocial functioning of acne sufferers and leaves physical and emotional scars. In the present study, we investigated whether patients with acne vulgaris demonstrate enhanced psychological bias when assessing the attractiveness of faces with acne symptoms and whether they devote greater selective attention to acne lesions than acne-free (control) individuals do. Participants viewed images of faces under two different skin (acne vs. acne-free) and emotional facial expression (happy and neutral) conditions. They rated the attractiveness of the faces, and the time spent fixating on the acne lesions was recorded with an eye tracker. We found that the gap in perceived attractiveness between acne and acne-free faces was greater for acne sufferers. Furthermore, patients with acne fixated longer on facial regions exhibiting acne lesions than did control participants irrespective of the facial expression depicted. In summary, patients with acne have a stronger attentional bias for acne lesions and focus more on the skin lesions than do those without acne. 
Clinicians treating the skin problems of patients with acne should consider these psychological and emotional scars.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published6Psychological distress and attentional bias toward acne lesions in patients with acne1501715422VolkovadBM20143EVolkovaSde la RosaHHBülthoffBMohler2014-12-00129128Emotion expression in human-human interaction takes place via various types of information, including body motion. Research on the perceptual-cognitive mechanisms underlying the processing of natural emotional body language can benefit greatly from datasets of natural emotional body expressions that facilitate stimulus manipulation and analysis. The existing databases have so far focused on few emotion categories which display predominantly prototypical, exaggerated emotion expressions. Moreover, many of these databases consist of video recordings which limit the ability to manipulate and analyse the physical properties of these stimuli. We present a new database consisting of a large set (over 1400) of natural emotional body expressions typical of monologues. To achieve close-to-natural emotional body expressions, amateur actors were narrating coherent stories while their body movements were recorded with motion capture technology. The resulting 3-dimensional motion data recorded at a high frame rate (120 frames per second) provides fine-grained information about body movements and allows the manipulation of movement on a body joint basis. For each expression it gives the positions and orientations in space of 23 body joints for every frame. We report the results of physical motion properties analysis and of an emotion categorisation study. The reactions of observers from the emotion categorisation study are included in the database. Moreover, we recorded the intended emotion expression for each motion sequence from the actor to allow for investigations regarding the link between intended and perceived emotions. 
The motion sequences along with the accompanying information are made available in a searchable MPI Emotional Body Expression Database. We hope that this database will enable researchers to study expression and perception of naturally occurring emotional body expressions in greater depth.
nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published27The MPI Emotional Body Expressions Database for Narrative Scenarios150171542215017LinkenaugerGSLRPBM20143SALinkenaugerMNGeussJKStefanucciMLeyrerBHRichardsonDRProffittHHBülthoffBJMohler2014-11-00112520862094The hand is a reliable and ecologically useful perceptual ruler that can be used to scale the sizes of close, manipulatable objects in the world in a manner similar to the way in which eye height is used to scale the heights of objects on the ground plane. Certain objects are perceived proportionally to the size of the hand, and as a result, changes in the relationship between the sizes of objects in the world and the size of the hand are attributed to changes in object size rather than hand size. To illustrate this notion, we provide evidence from several experiments showing that people perceive their dominant hand as less magnified than other body parts or objects when these items are subjected to the same degree of magnification. These findings suggest that the hand is perceived as having a more constant size and, consequently, can serve as a reliable metric with which to measure objects of commensurate size.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published8Evidence for Hand-Size Constancy: The Dominant Hand as a Natural Perceptual Metric150171542215017SchecklmannGTLRPVHHF20143MSchecklmannAGianiSTupakBLangguthVRaabTPolakCVárallyayWHarnischMJHerrmannAJFallgatter2014-11-00894203201418Objective. Several neuroscience tools showed the involvement of auditory cortex in chronic tinnitus. In this proof-of-principle study we probed the capability of functional near-infrared spectroscopy (fNIRS) for the measurement of brain oxygenation in auditory cortex in dependence on chronic tinnitus and on intervention with transcranial magnetic stimulation. Methods. 
Twenty-three patients received continuous theta burst stimulation over the left primary auditory cortex in a randomized sham-controlled neuronavigated trial (verum = 12; placebo = 11). Before and after treatment, sound-evoked brain oxygenation in temporal areas was measured with fNIRS. Brain oxygenation was measured once in healthy controls. Results. Sound-evoked activity in right temporal areas was increased in the patients in contrast to healthy controls. Left-sided temporal activity under the stimulated area changed over the course of the trial; high baseline oxygenation was reduced and vice versa. Conclusions. By demonstrating that rTMS interacts with auditory evoked brain activity, our results confirm earlier electrophysiological findings and indicate the sensitivity of fNIRS for detecting rTMS-induced changes in brain activity. Moreover, our findings of trait- and state-related oxygenation changes indicate the potential of fNIRS for the investigation of tinnitus pathophysiology and treatment response.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published7Functional Near-Infrared Spectroscopy to Probe State- and Trait-Like Conditions in Chronic Tinnitus: A Proof-of-Principle Study15017188261501715422OlivariNBP2014_23MOlivariFNieuwenhuizenHHBülthoffLPollini2014-11-0063717411753Haptic aids have been widely used in manual control tasks to complement the visual information through the sense of touch. To analytically design a haptic aid, adequate knowledge is needed about how pilots adapt their visual response and the biomechanical properties of their arm (i.e., admittance) to a generic haptic aid. In this work, two different haptic aids, a direct haptic aid and an indirect haptic aid, are designed for a target tracking task, with the aim of investigating the pilot response to these aids. 
The direct haptic aid provides forces on the control device that suggest the right control action to the pilot, whereas the indirect haptic aid provides forces opposite in sign with respect to the direct haptic aid. The direct haptic aid and the indirect haptic aid were tested in an experimental setup with nonpilot participants and compared to a condition without haptic support. It was found that control performance improved with haptic aids. Participants significantly adapted both their admittance and visual response to fully exploit the haptic aids. They were more compliant with the direct haptic aid force, whereas they showed stiffer neuromuscular settings with the indirect haptic aid, as this approach required opposing the haptic forces.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published12Pilot Adaptation to Different Classes of Haptic Aids in Tracking Tasks1501715422ZhaoCWRCCH20143MZhaoS-HCheungAC-NWongGRhodesEKSChanWWLChanWGHayward2014-11-003-45160167We investigated how face-selective cortical areas process configural and componential face information and how race of faces may influence these processes. Participants saw blurred (preserving configural information), scrambled (preserving componential information), and whole faces during fMRI scan, and performed a post-scan face recognition task using blurred or scrambled faces. The fusiform face area (FFA) showed stronger activation to blurred than to scrambled faces, and equivalent responses to blurred and whole faces. The occipital face area (OFA) showed stronger activation to whole than to blurred faces, which elicited similar responses to scrambled faces. Therefore, the FFA may be more tuned to process configural than componential information, whereas the OFA similarly participates in perception of both. 
Differences in recognizing own- and other-race blurred faces were correlated with differences in FFA activation to those faces, suggesting that configural processing within the FFA may underlie the other-race effect in face recognition.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published7Processing of configural and componential information in face-selective cortical areas1501715422MeilingerFB20143TMeilingerJFrankensteinHHBülthoff2014-11-001363517Route selection is governed by various strategies that often minimize the required memory capacity. Previous research showed that navigators primarily remember information at route decision points and at route turns, rather than at intersections that required straight walking. However, when actually navigating the route or indicating directional decisions, navigators make fewer errors when they are required to walk straight. This tradeoff between location memory and route-decision accuracy was interpreted as a “when in doubt follow your nose” strategy, which allows navigators to memorize only turns and walk straight by default, thus considerably reducing the number of intersections to memorize. These findings were based on newly learned routes. In the present study we show that such an asymmetry in route memory also prevails for planning routes within highly familiar environments. Participants planned route sequences between locations in their city of residency by pressing arrow keys on a keyboard. They tended to ignore straight-walking intersections but ignored turns far less often. However, for reported intersections participants were quicker at indicating straight walking than turning. 
Together with results described in the literature, these findings suggest that a “when in doubt follow your nose” strategy is applied also within highly familiar spaces and might originate from limited working memory capacity when planning a route.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/Frontiers-Psychol-2014-Meilinger.pdfpublished6When in doubt follow your nose: a wayfinding strategy1501715422BrowatzkiTMBW20143BBrowatzkiVTikhanoffGMettaHHBülthoffCWallraven2014-10-0053012601269For any robot, the ability to recognize and manipulate unknown objects is crucial to successfully work in natural environments. Object recognition and categorization is a very challenging problem, as 3-D objects often give rise to ambiguous 2-D views. Here, we present a perception-driven exploration and recognition scheme for in-hand object recognition implemented on the iCub humanoid robot. In this setup, the robot actively seeks out object views to optimize the exploration sequence. This is achieved by regarding the object recognition problem as a localization problem. We search for the most likely viewpoint position on the viewsphere of all objects. This problem can be solved efficiently using a particle filter that fuses visual cues with associated motor actions. Based on the state of the filter, we can predict the next best viewpoint after each recognition step by searching for the action that leads to the highest expected information gain. We conduct extensive evaluations of the proposed system in simulation as well as on the actual robot and show the benefit of perception-driven exploration over passive, vision-only processes at discriminating between highly similar objects. 
We demonstrate that objects are recognized faster and at the same time with a higher accuracy.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published9Active In-Hand Object Recognition on a Humanoid Robot1501715422CamposBB20143JLCamposJSButlerHHBülthoff2014-10-001023232773289Recent research has provided evidence that visual and body-based cues (vestibular, proprioceptive and efference copy) are integrated using a weighted linear sum during walking and passive transport. However, little is known about the specific weighting of visual information when combined with proprioceptive inputs alone, in the absence of vestibular information about forward self-motion. Therefore, in this study, participants walked in place on a stationary treadmill while dynamic visual information was updated in real time via a head-mounted display. The task required participants to travel a predefined distance and subsequently match this distance by adjusting an egocentric, in-depth target using a game controller. Travelled distance information was provided either through visual cues alone, proprioceptive cues alone or both cues combined. In the combined cue condition, the relationship between the two cues was manipulated by either changing the visual gain across trials (0.7×, 1.0×, 1.4×; Exp. 1) or the proprioceptive gain across trials (0.7×, 1.0×, 1.4×; Exp. 2). Results demonstrated an overall higher weighting of proprioception over vision. These weights were scaled, however, as a function of which sensory input provided more stable information across trials. Specifically, when visual gain was constantly manipulated, proprioceptive weights were higher than when proprioceptive gain was constantly manipulated. 
These results therefore reveal interesting characteristics of cue-weighting within the context of unfolding spatio-temporal cue dynamics.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published12Contributions of visual and proprioceptive information to travelled distance estimation during changing sensory congruencies1501715422ChaeLJCNLPW20143YChaeI-SLeeW-MJungD-SChangVNapadowHLeeH-JParkCWallraven2014-10-00109110Acupuncture stimulation increases local blood flow around the site of stimulation and induces signal changes in brain regions related to the body matrix. The rubber hand illusion (RHI) is an experimental paradigm that manipulates important aspects of bodily self-awareness. The present study aimed to investigate how modifications of body ownership using the RHI affect local blood flow and cerebral responses during acupuncture needle stimulation. During the RHI, acupuncture needle stimulation was applied to the real left hand while measuring blood microcirculation with a LASER Doppler imager (Experiment 1, N = 28) and concurrent brain signal changes using functional magnetic resonance imaging (fMRI; Experiment 2, N = 17). When the body ownership of participants was altered by the RHI, acupuncture stimulation resulted in a significantly lower increase in local blood flow (Experiment 1), and significantly less brain activation was detected in the right insula (Experiment 2). This study found changes in both local blood flow and brain responses during acupuncture needle stimulation following modification of body ownership. 
These findings suggest that physiological responses during acupuncture stimulation can be influenced by the modification of body ownership.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published9Decreased Peripheral and Central Responses to Acupuncture Stimulation following Modification of Body Ownership1501715422HeinrichdS2014_23AHeinrichSde la RosaBASchneider2014-10-004:1797136111Thresholds for detecting a gap between two complex tones were determined for young listeners with normal hearing and old listeners with mild age-related hearing loss. The leading tonal marker was always a 20-ms, 250-Hz complex tone with energy at 250, 500, 750, and 1000 Hz. The lagging marker, also tonal, could differ from the leading marker with respect to fundamental frequency (f0), the presence versus absence of energy at f0, and the degree to which it overlapped spectrally with the leading marker. All stimuli were presented with steeper (1 ms) and less steep (4 ms) envelope rise and fall times. F0 differences, decreases in the degree of spectral overlap between the markers, and shallower envelope shape all contributed to increases in gap-detection thresholds. Age differences for gap detection of complex sounds were generally small and constant when gap-detection thresholds were measured on a log scale. When comparing the results for complex sounds to thresholds obtained for pure-tones in a previous study by Heinrich and Schneider [(2006). J. Acoust. Soc. Am. 119, 2316–2326], thresholds increased in an orderly fashion from markers with identical (within-channel) pure tones to different (between-channel) pure tones to complex sounds. 
This pattern of results was true for listeners of both ages although younger listeners had smaller thresholds overall.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published10The role of stimulus complexity, spectral overlap, and pitch for gap-detection thresholds in young and old listeners1501715422VenrooijvMAvB20143JVenrooijMMvan PaassenMMulderDAAbbinkFCTvan der HelmHHBülthoff2014-09-0094416861698Biodynamic feedthrough (BDFT) is a complex phenomenon, which has been studied for several decades. However, there is little consensus on how to approach the BDFT problem in terms of definitions, nomenclature, and mathematical descriptions. In this paper, a framework for biodynamic feedthrough analysis is presented. The goal of this framework is two-fold. First, it provides some common ground between the seemingly large range of different approaches existing in the BDFT literature. Second, the framework itself allows for gaining new insights into BDFT phenomena. It will be shown how relevant signals can be obtained from measurement, how different BDFT dynamics can be derived from them, and how these different dynamics are related. Using the framework, BDFT can be dissected into several dynamical relationships, each relevant in understanding BDFT phenomena in more detail. The presentation of the BDFT framework is divided into two parts. This paper, Part I, addresses the theoretical foundations of the framework. Part II, which is also published in this issue, addresses the validation of the framework. 
The work is presented in two separate papers to allow for a detailed discussion of both the framework's theoretical background and its validation.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published12A Framework for Biodynamic Feedthrough Analysis Part I: Theoretical Foundations1501715422VenrooijvMAMvB20143JVenrooijMMvan PaassenMMulderDAAbbinkMMulderFCTvan der HelmHHBülthoff2014-09-0094416991710Biodynamic feedthrough (BDFT) is a complex phenomenon that has been studied for several decades. However, there is little consensus on how to approach the BDFT problem in terms of definitions, nomenclature, and mathematical descriptions. In this paper, the framework for BDFT analysis, as presented in Part I of this dual publication, is validated and applied. The goal of this framework is twofold. First of all, it provides some common ground between the seemingly large range of different approaches existing in BDFT literature. Secondly, the framework itself allows for gaining new insights into BDFT phenomena. Using recently obtained measurement data, parts of the framework that were not already addressed elsewhere are validated. As an example of a practical application of the framework, it will be demonstrated how the effects of control device dynamics on BDFT can be understood and accurately predicted. Other ways of employing the framework are illustrated by interpreting the results of three selected studies from the literature using the BDFT framework. The presentation of the BDFT framework is divided into two parts. This paper, Part II, addresses the validation and application of the framework. Part I, which is also published in this journal issue, addresses the theoretical foundations of the framework. 
The work is presented in two separate papers to allow for a detailed discussion of both the framework's theoretical background and its validation.
[A Framework for Biodynamic Feedthrough Analysis Part II: Validation and Application — http://www.kyb.tuebingen.mpg.de/published]

J. Esins, J. Schultz, C. Wallraven, I. Bülthoff (2014-09): Congenital prosopagnosia, an innate impairment in recognizing faces, as well as the other-race effect, a disadvantage in recognizing faces of foreign races, both affect face recognition abilities. Are the same face processing mechanisms affected in both situations? To investigate this question, we tested three groups of 21 participants: German congenital prosopagnosics, South Korean participants, and German controls, in three different tasks involving faces and objects. First, we tested all participants on the Cambridge Face Memory Test, in which they had to recognize Caucasian target faces in a 3-alternative forced-choice task. German controls performed better than Koreans, who performed better than prosopagnosics. In the second experiment, participants rated the similarity of Caucasian faces that differed parametrically in either features or second-order relations (configuration). Prosopagnosics were less sensitive to configuration changes than both other groups. In addition, while all groups were more sensitive to changes in features than in configuration, this difference was smaller in Koreans. In the third experiment, participants had to learn exemplars of artificial objects, natural objects, and faces and recognize them among distractors of the same category. Here prosopagnosics performed worse than participants in the other two groups only when they were tested on face stimuli. In sum, Koreans and prosopagnosic participants differed from German controls in different ways in all tests. This suggests that German congenital prosopagnosics perceive Caucasian faces differently than do Korean participants.
Importantly, our results suggest that different processing impairments underlie the other-race effect and congenital prosopagnosia.
[Do congenital prosopagnosia and the other-race effect affect the same face recognition mechanisms? — http://www.kyb.tuebingen.mpg.de/published]

J. Esins, J. Schultz, I. Bülthoff, I. Kennerknecht (2014-09): A woman in her early 40s with congenital prosopagnosia and attention deficit hyperactivity disorder observed for the first time a sudden and extensive improvement of her face recognition abilities, mental imagery, and sense of navigation after galactose intake. This effect of galactose on prosopagnosia has never been reported before. Even if this effect is restricted to a subform of congenital prosopagnosia, galactose might improve the condition of other prosopagnosics. Congenital prosopagnosia, the inability to recognize other people by their face, has an extensive negative impact on everyday life. It has a high prevalence of about 2.5%. Monosaccharides are known to have a positive impact on cognitive performance. Here, we report the case of a prosopagnosic woman for whom the daily intake of 5 g of galactose resulted in a remarkable improvement of her lifelong face blindness, along with an improved sense of orientation and more vivid mental imagery. All these improvements vanished after discontinuing galactose intake. The self-reported effects of galactose were wide-ranging and remarkably strong but could not be reproduced for 16 other prosopagnosics tested. Indications of heterogeneity within prosopagnosia have been reported; this could explain the difficulty of finding similar effects in other prosopagnosics. Detailed analyses of the effects of galactose in prosopagnosia might give more insight into the effects of galactose on human cognition in general.
Galactose is cheap and easy to obtain; therefore, a systematic test of its positive effects on other cases of congenital prosopagnosia may be warranted.
[Galactose uncovers face recognition and mental images in congenital prosopagnosia: The first case report — http://www.kyb.tuebingen.mpg.de/published]

M. Zhao, W. G. Hayward, I. Bülthoff (2014-08): Memory of own-race faces is generally better than memory of other-race faces. This other-race effect (ORE) in face memory has been attributed to differences in contact, holistic processing, and motivation to individuate faces. Since most studies demonstrate the ORE with participants learning and recognizing static, single-view faces, it remains unclear whether the ORE can be generalized to different face learning conditions. Using an old/new recognition task, we tested whether face format at encoding modulates the ORE. The results showed a significant ORE when participants learned static, single-view faces (Experiment 1). In contrast, the ORE disappeared when participants learned rigidly moving faces (Experiment 2). Moreover, learning faces displayed from four discrete views produced the same results as learning rigidly moving faces (Experiment 3). Contact with other-race faces was correlated with the magnitude of the ORE. Nonetheless, the absence of the ORE in Experiments 2 and 3 cannot be readily explained by either more frequent contact with other-race faces or stronger motivation to individuate them.
These results demonstrate that the ORE is sensitive to face format at encoding, supporting the hypothesis that the relative involvement of holistic and featural processing at encoding mediates the ORE observed in face memory.
[Face format at encoding affects the other-race effect in face memory — http://www.kyb.tuebingen.mpg.de/published]

H. Lee, U. Noppeney (2014-08): This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech, or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogs of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians had practiced piano in the past 3 years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practice fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class.
While the effect of piano practice was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and, to a marginally significant degree, to natural speech.
[Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech, and music — http://www.kyb.tuebingen.mpg.de/published]

I. V. Piryankova, H. Y. Wong, S. A. Linkenauger, C. Stinson, M. R. Longo, H. H. Bülthoff, B. J. Mohler (2014-08): Our bodies are the most intimately familiar objects we encounter in our perceptual environment. Virtual reality provides a unique method to allow us to experience having a very different body from our own, thereby providing a valuable method to explore the plasticity of body representation. In this paper, we show that women can experience ownership over a whole virtual body that is considerably smaller or larger than their physical body. In order to gain a better understanding of the mechanisms underlying body ownership, we use an embodiment questionnaire and introduce two new behavioral response measures: an affordance estimation task (an indirect measure of body size) and a body size estimation task (a direct measure of body size). Interestingly, after viewing the virtual body from a first-person perspective, both the affordance and the body size estimation tasks indicate a change in the perception of the size of the participant's experienced body. The change is biased by the size of the virtual body (overweight or underweight). Another novel aspect of our study is that we distinguish between the physical, experienced and virtual bodies by asking participants to provide affordance and body size estimations for each of the three bodies separately. This methodological point is important for virtual reality experiments investigating body ownership of a virtual body, because it offers a better understanding of which cues (e.g.
visual, proprioceptive, memory, or a combination thereof) influence body perception, and whether the impact of these cues can vary between different setups.
[Owning an Overweight or Underweight Body: Distinguishing the Physical, Experienced and Virtual Body — http://www.kyb.tuebingen.mpg.de/published]

C. Wallraven, H. H. Bülthoff, S. Waterkamp, L. van Dam, N. Gaissert (2014-08): Categorization of seen objects is often determined by the shapes of objects. However, shape is not exclusive to the visual modality: the haptic system is also expert at identifying shapes. Hence, an important question for understanding shape processing is whether humans store separate modality-dependent shape representations, or whether information is integrated into one multisensory representation. To answer this question, we created a metric space of computer-generated novel objects varying in shape. These objects were then printed using a 3-D printer to generate tangible stimuli. In a categorization experiment, participants first explored the objects visually and haptically. We found that both modalities led to highly similar categorization behavior. Next, participants were trained either visually or haptically on shape categories within the metric space. As expected, visual training increased visual performance, and haptic training increased haptic performance. Importantly, however, we found that visual training also improved haptic performance, and vice versa. Two additional experiments showed that the location of the categorical boundary in the metric space also transferred across modalities, as did heightened discriminability of objects adjacent to the boundary.
This observed transfer of metric category knowledge across modalities indicates that visual and haptic forms of shape information are integrated into a shared multisensory representation.
[The eyes grasp, the hands see: Metric category knowledge transfers between vision and touch — http://www.kyb.tuebingen.mpg.de/published]

J. Venrooij, D. A. Abbink, M. Mulder, M. M. van Paassen, F. C. T. van der Helm, H. H. Bülthoff (2014-07): A biodynamic feedthrough (BDFT) model is proposed that describes how vehicle accelerations feed through the human body, causing involuntary limb motions and thus involuntary control inputs. BDFT dynamics strongly depend on limb dynamics, which can vary between persons (between-subject variability), but also within one person over time, e.g., due to the control task performed (within-subject variability). The proposed BDFT model is based on physical neuromuscular principles and is derived from an established admittance model (describing limb dynamics), which was extended to include control device dynamics and account for acceleration effects. The resulting BDFT model serves primarily the purpose of increasing the understanding of the relationship between neuromuscular admittance and biodynamic feedthrough. An added advantage of the proposed model is that its parameters can be estimated using a two-stage approach, making the parameter estimation more robust, as the procedure is largely based on the well-documented procedure required for the admittance model. To estimate the parameter values of the BDFT model, data are used from an experiment in which both neuromuscular admittance and biodynamic feedthrough were measured. The quality of the BDFT model is evaluated in the frequency and time domain.
The results provide strong evidence that the BDFT model and the parameter estimation method put forward in this paper allow for accurate BDFT modeling across different subjects (accounting for between-subject variability) and across control tasks (accounting for within-subject variability).
[A Biodynamic Feedthrough Model Based on Neuromuscular Principles — http://www.kyb.tuebingen.mpg.de/published]

A. A. Brielmann, I. Bülthoff, R. Armann (2014-07): Race categorization of faces is a fast and automatic process and is known to affect further face processing profoundly and at the earliest stages. Whether the processing of own- and other-race faces might rely on different facial cues, as indicated by diverging viewing behavior, is much debated. We therefore aimed to investigate two open questions in our study: (1) Do observers consider information from distinct facial features informative for race categorization, or do they prefer to gain global face information by fixating the geometrical center of the face? (2) Does the fixation pattern, or, if facial features are considered relevant, do these features differ between own- and other-race faces? We used eye tracking to test where European observers look when viewing Asian and Caucasian faces in a race categorization task. Importantly, in order to disentangle centrally located fixations from those towards individual facial features, we presented faces in frontal, half-profile and profile views. We found that observers showed no general bias towards looking at the geometrical center of faces, but rather directed their first fixations towards distinct facial features, regardless of face race. However, participants looked at the eyes more often in Caucasian faces than in Asian faces, and there were significantly more fixations to the nose for Asian compared to Caucasian faces.
Thus, observers rely on information from distinct facial features rather than on facial information gained by centrally fixating the face. The extent to which specific features are fixated is determined by the face's race.
[Looking at faces from different angles: Europeans fixate different features in Asian and Caucasian faces — http://www.kyb.tuebingen.mpg.de/published]

J. Venrooij, M. Mulder, D. A. Abbink, M. M. van Paassen, M. Mulder, F. C. T. van der Helm, H. H. Bülthoff (2014-07): Biodynamic feedthrough (BDFT) occurs when vehicle accelerations feed through the human body and cause involuntary control inputs. This paper proposes a model to quantitatively predict this effect in rotorcraft. This mathematical BDFT model aims to fill the gap between the currently existing black-box BDFT models and physical BDFT models. The model structure was systematically constructed using asymptote modeling, a procedure described in detail in this paper. The resulting model can easily be implemented in many typical rotorcraft BDFT studies, using the provided model parameters. The model's performance was validated in both the frequency and time domain. Furthermore, it was compared with several recent BDFT models. The results show that the proposed mathematical model performs better than typical black-box models and is easier to parameterize and implement than a recent physical model.
[Mathematical Biodynamic Feedthrough Model Applied to Rotorcraft — http://www.kyb.tuebingen.mpg.de/published]

K. Dobs, I. Bülthoff, M. Breidt, Q. C. Vuong, C. Curio, J. Schultz (2014-07): A great deal of perceptual and social information is conveyed by facial motion. Here, we investigated observers' sensitivity to the complex spatio-temporal information in facial expressions and what cues they use to judge the similarity of these movements.
We motion-captured four facial expressions and decomposed them into time courses of semantically meaningful local facial actions (e.g., eyebrow raise). We then generated approximations of the time courses, which differed in the amount of information about the natural facial motion they contained, and used these and the original time courses to animate an avatar head. Observers chose which of two animations based on approximations was more similar to the animation based on the original time course. We found that observers preferred animations containing more information about the natural facial motion dynamics. To explain observers' similarity judgments, we developed and used several measures of objective stimulus similarity. The time course of facial actions (e.g., onset and peak of eyebrow raise) explained observers' behavioral choices better than image-based measures (e.g., optic flow). Our results thus revealed observers' sensitivity to changes of natural facial dynamics. Importantly, our method allows a quantitative explanation of the perceived similarity of dynamic facial expressions, which suggests that sparse but meaningful spatio-temporal cues are used to process facial motion.
[Quantifying human sensitivity to spatio-temporal information in dynamic faces — http://www.kyb.tuebingen.mpg.de/published]

E. P. Volkova, B. J. Mohler, T. J. Dodds, J. Tesch, H. H. Bülthoff (2014-06): Humans can recognize emotions expressed through body motion with high accuracy, even when the stimuli are impoverished. However, most of the research on body motion has relied on exaggerated displays of emotions. In this paper we present two experiments in which we investigated whether emotional body expressions could be recognized when they were recorded during natural narration.
Our actors were free to use their entire body, face, and voice to express emotions, but our resulting visual stimuli used only the upper body motion trajectories in the form of animated stick figures. Observers were asked to perform an emotion recognition task on short motion sequences using a large and balanced set of emotions (amusement, joy, pride, relief, surprise, anger, disgust, fear, sadness, shame, and neutral). Even with only upper body motion available, our results show recognition accuracy significantly above chance level and high consistency rates among observers. In our first experiment, which used a more classic emotion induction setup, all emotions were well recognized. In the second study, which employed narrations, four basic emotion categories (joy, anger, fear, and sadness), three non-basic emotion categories (amusement, pride, and shame) and the "neutral" category were recognized above chance. Interestingly, especially in the second experiment, observers showed a bias toward anger when recognizing the motion sequences for emotions. We discovered that similarities between motion sequences across the emotions along such properties as mean motion speed, number of peaks in the motion trajectory and mean motion span can explain a large percentage of the variation in observers' responses. Overall, our results show that upper body motion is informative for emotion recognition in narrative scenarios.
[Emotion categorization of body expressions in narrative scenarios — http://www.kyb.tuebingen.mpg.de/published]

N. David, J. Schultz, E. Milne, O. Schunke, D. Schöttle, A. Münchau, M. Siegel, K. Vogeley, A. K. Engel (2014-06): Individuals with an autism spectrum disorder (ASD) show hallmark deficits in social perception. These difficulties might also reflect fundamental deficits in integrating visual signals. We contrasted predictions of a social perception and a spatial–temporal integration deficit account.
Participants with ASD and matched controls performed two tasks: the first required spatiotemporal integration of global motion signals without social meaning; the second required processing of socially relevant local motion. The ASD group differed from controls only in the evaluation of social motion. In addition, gray matter volume in the temporal–parietal junction correlated positively with accuracy in social motion perception in the ASD group. Our findings suggest that social–perceptual difficulties in ASD cannot be reduced to deficits in spatial–temporal integration.
[Right Temporoparietal Gray Matter Predicts Accuracy of Social Perception in the Autism Spectrum — http://www.kyb.tuebingen.mpg.de/published]

A. Caiti, V. Calabrò, S. Geluardi, S. Grammatico, C. Munafò (2014-05): A dynamic, control-oriented model of an underwater glider with independently controllable wings is presented. The vehicle's onboard actuators are a ballast tank and two hydrodynamic wings. A control strategy is proposed to improve the vehicle's maneuverability. In particular, a switching control law, together with a backstepping feedback scheme, is designed to limit the energy-inefficient actions of the ballast tank and hence to enforce efficient maneuvers. The case study considered here is an underwater vehicle with hydrodynamic wings behind its main hull. This unusual structure is motivated by the recently introduced concept of the underwater wave glider, which is a vehicle capable of both surface and underwater navigation.
The proposed control strategy is validated via numerical simulations, in which the simulated vehicle has to perform three-dimensional path-following maneuvers.
[Switching control of an underwater glider with independently controllable wings — http://www.kyb.tuebingen.mpg.de/published]

B. Browatzki, H. H. Bülthoff, L. L. Chuang (2014-04): Video-based gaze-tracking systems are typically restricted in terms of their effective tracking space. This constraint limits the use of eyetrackers in studying mobile human behavior. Here, we compare two possible approaches for estimating the gaze of participants who are free to walk in a large space whilst looking at different regions of a large display. Geometrically, we linearly combined eye-in-head rotations and head-in-world coordinates to derive a gaze vector and its intersection with a planar display, relying on a head-mounted eyetracker and a body-motion tracker. Alternatively, we employed Gaussian process regression to estimate the gaze intersection directly from the input data itself. Our evaluation of both methods indicates that a regression approach can deliver results comparable to a geometric approach. The regression approach is favored, given that it has the potential for further optimization, provides confidence bounds for its gaze estimates, and offers greater flexibility in its implementation. Open-source software for the methods reported here is also provided for user implementation.
[A comparison of geometric- and regression-based mobile gaze-tracking — http://www.kyb.tuebingen.mpg.de/published]

C. von Lassberg, K. A. Beykirch, B. J. Mohler, H. H. Bülthoff (2014-04): Using state-of-the-art technology, interactions of eye, head and intersegmental body movements were analyzed for the first time during multiple twisting somersaults of high-level gymnasts.
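The geometric approach described in the mobile gaze-tracking abstract above combines an eye-in-head gaze direction with the head-in-world pose and intersects the resulting ray with the display plane. A minimal sketch of that ray–plane intersection follows; the function name, argument layout, and all numeric values are illustrative assumptions, not the study's actual implementation:

```python
import numpy as np

def gaze_on_display(head_pos, head_rot, eye_dir, plane_point, plane_normal):
    """Intersect a world-space gaze ray with a planar display.

    head_pos:     head position in world coordinates, shape (3,)
    head_rot:     head orientation as a 3x3 rotation matrix (body tracker)
    eye_dir:      gaze direction in head coordinates (eyetracker), shape (3,)
    plane_point:  any point on the display plane, shape (3,)
    plane_normal: display plane normal, shape (3,)
    Returns the intersection point, or None if the gaze misses the plane.
    """
    # Combine eye-in-head with head-in-world to get the world gaze direction.
    gaze_dir = head_rot @ eye_dir
    denom = np.dot(gaze_dir, plane_normal)
    if abs(denom) < 1e-9:
        return None  # gaze ray is parallel to the display plane
    t = np.dot(plane_point - head_pos, plane_normal) / denom
    if t < 0:
        return None  # display lies behind the observer
    return head_pos + t * gaze_dir
```

For example, an observer at the origin looking straight ahead along +z at a display lying in the plane z = 2 would yield the gaze point (0, 0, 2).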
With this aim, we used a unique combination of a 16-channel infrared kinemetric system, a three-dimensional video kinemetric system, wireless electromyography, and a specialized wireless sport-video-oculography system that was able to capture and calculate precise oculomotor data under conditions of rapid multiaxial acceleration. All data were synchronized and integrated in a multimodal software tool for three-dimensional analysis. During specific phases of the recorded movements, a previously unknown eye-head-body interaction was observed. The phenomenon was marked by a prolonged and complete suppression of gaze-stabilizing eye movements in favor of a tight coupling with the head, spine and joint movements of the gymnasts. Potential reasons for these observations are discussed with regard to earlier findings and integrated within a functional model.
[Intersegmental Eye-Head-Body Interactions during Complex Whole Body Movements — http://www.kyb.tuebingen.mpg.de/published]

S. de la Rosa, H. H. Bülthoff (2014-04): Cook et al. suggest that motor-visual neurons originate from associative learning. This suggestion has interesting implications for the processing of socially relevant visual information in social interactions. Here, we discuss two aspects of the associative learning account that seem to have particular relevance for the visual recognition of social information in social interactions, namely context-specific and contingency-based learning.
[Motor-visual neurons and action recognition in social interactions — http://www.kyb.tuebingen.mpg.de/published]

C. V. Parise, K. Knorre, M. O. Ernst (2014-04): Human perception, cognition, and action are laced with seemingly arbitrary mappings. In particular, sound has a strong spatial connotation: sounds are high and low, melodies rise and fall, and pitch systematically biases perceived sound elevation. The origins of such mappings are unknown.
Are they the result of physiological constraints, do they reflect natural environmental statistics, or are they truly arbitrary? We recorded natural sounds from the environment, analyzed the elevation-dependent filtering of the outer ear, and measured frequency-dependent biases in human sound localization. We find that auditory scene statistics reveal a clear mapping between frequency and elevation. Perhaps more interestingly, this natural statistical mapping is tightly mirrored both in ear-filtering properties and in perceived sound location. This suggests that both sound localization behavior and ear anatomy are fine-tuned to the statistics of natural auditory scenes, likely providing the basis for the spatial connotation of human hearing.
[Natural auditory scene statistics shapes human spatial hearing — http://www.kyb.tuebingen.mpg.de/published]

A. Nesti, K. A. Beykirch, P. R. MacNeilage, M. Barnett-Cowan, H. H. Bülthoff (2014-04): Motion simulators are widely employed in basic and applied research to study the neural mechanisms of perception and action during inertial stimulation. In these studies, uncontrolled simulator-introduced noise inevitably leads to a disparity between the reproduced motion and the trajectories meticulously designed by the experimenter, possibly resulting in undesired motion cues to the investigated system. Understanding actual simulator responses to different motion commands is therefore a crucial yet often underestimated step towards the interpretation of experimental results. In this work, we developed analysis methods based on signal processing techniques to quantify the noise in the actual motion, and its deterministic and stochastic components. Our methods allow comparisons between commanded and actual motion as well as between different actual motion profiles.
A specific practical example from one of our studies is used to illustrate the methodologies and their relevance, but this does not detract from their general applicability. Analyses of the simulator's inertial recordings show direction-dependent noise and nonlinearity related to the command amplitude. The signal-to-noise ratio is one order of magnitude higher for the larger motion amplitudes we tested than for the smaller motion amplitudes. Simulator-introduced noise is found to be primarily of a deterministic nature, particularly for the stronger motion intensities. The effect of simulator noise on the quantification of animal/human motion sensitivity is discussed. We conclude that accurate recording and characterization of executed simulator motion are a crucial prerequisite for the investigation of uncertainty in self-motion perception.
[The importance of stimulus noise analysis for self-motion studies — http://www.kyb.tuebingen.mpg.de/published]

K. M. Mayer, M. Di Luca, M. O. Ernst (2014-03): How humans perform duration judgments with multisensory stimuli is an ongoing debate. Here, we investigated how sub-second duration judgments are achieved by asking participants to compare the duration of a continuous sound to the duration of an empty interval whose onset and offset were marked by signals of different modalities, using all combinations of visual, auditory and tactile stimuli. The pattern of perceived durations across five stimulus durations (ranging from 100 ms to 900 ms) follows Vierordt's law. Furthermore, intervals with a sound as onset (audio-visual, audio-tactile) are perceived as longer than intervals with a sound as offset. No modality ordering effect is found for visual-tactile intervals.
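The noise quantification described in the stimulus-noise abstract above compares commanded and recorded motion. One simple way to illustrate the idea is to treat the deviation of the actual motion from the commanded trajectory as noise and compute a signal-to-noise ratio from the mean-square powers; this is a hedged sketch of the general concept, not the authors' actual analysis pipeline:

```python
import numpy as np

def snr_db(commanded, actual):
    """Signal-to-noise ratio (in dB) of an executed motion profile.

    Noise is taken as the difference between the recorded (actual) motion
    and the commanded trajectory; power is computed as the mean square.
    """
    commanded = np.asarray(commanded, dtype=float)
    noise = np.asarray(actual, dtype=float) - commanded
    p_signal = np.mean(commanded ** 2)
    p_noise = np.mean(noise ** 2)
    return 10.0 * np.log10(p_signal / p_noise)
```

With a fixed additive noise level, scaling the commanded amplitude tenfold raises this SNR by 20 dB; the direction of that effect matches the amplitude dependence reported above.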
To infer whether a single modality-independent or multiple modality-dependent time-keeping mechanisms exist, we tested whether perceived duration follows a summative or a multiplicative distortion pattern by fitting a model to all modality combinations and durations. The results confirm that perceived duration depends on sensory latency (summative distortion); in contrast, we found no evidence for multiplicative distortions. The results of the model and the behavioural data support the concept of a single time-keeping mechanism that allows for judgments of durations marked by multisensory stimuli.
[Duration perception in crossmodally-defined intervals — http://www.kyb.tuebingen.mpg.de/published]

T. Meilinger, B. E. Riecke, H. H. Bülthoff (2014-03): Two experiments examined how locations in environmental spaces, which cannot be overseen from one location, are represented in memory: by global reference frames, by multiple local reference frames, or by orientation-free representations. After learning an immersive virtual environment by repeatedly walking a closed multisegment route, participants pointed to seven previously learned targets from different locations. Contrary to many conceptions of survey knowledge, local reference frames played an important role: participants performed better when their body or the pointing targets were aligned with the local reference frame (corridor). Moreover, most participants turned their head to align it with local reference frames. However, indications of global reference frames were also found: participants performed better when their body or current corridor was parallel/orthogonal to a global reference frame rather than oblique. Participants showing this pattern performed comparatively better. We conclude that survey tasks can be solved based on interconnected local reference frames.
Participants who pointed more accurately or quickly additionally used global reference frames.
[Local and global reference frames for environmental spaces — http://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/QJEP-2014.pdf]

I. Senna, A. Maravita, N. Bolognini, C. V. Parise (2014-03): Our body is made of flesh and bones. We know it, and in our daily lives all the senses constantly provide converging information about this simple, factual truth. But is this always the case? Here we report a surprising bodily illusion demonstrating that humans rapidly update their assumptions about the material qualities of their body, based on their recent multisensory perceptual experience. To induce a misperception of the material properties of the hand, we repeatedly and gently hit participants' hand with a small hammer, while progressively replacing the natural sound of the hammer against the skin with the sound of a hammer hitting a piece of marble. After five minutes, the hand started feeling stiffer, heavier, harder, less sensitive and unnatural, and showed an enhanced galvanic skin response (GSR) to threatening stimuli. Notably, this change in skin conductivity positively correlated with changes in perceived hand stiffness. Conversely, when hammer hits and impact sounds were temporally uncorrelated, participants did not spontaneously report any changes in the perceived properties of the hand, nor did they show any modulation in GSR. In two further experiments, we ruled out that mere audio-tactile synchrony is the causal factor triggering the illusion, further demonstrating the key role of the material information conveyed by impact sounds in modulating the perceived material properties of the hand.
This novel bodily illusion, the ‘Marble-Hand Illusion’, demonstrates that the perceived material of our body, surely the most stable attribute of our bodily self, can be quickly updated through multisensory integration.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published5The Marble-Hand Illusion15017154221501718824ThorntonBHRL20143IMThorntonHHBülthoffTSHorowitzARynningS-WLee2014-02-0029119We introduce a new task for exploring the relationship between action and attention. In this interactive multiple object tracking (iMOT) task, implemented as an iPad app, participants were presented with a display of multiple, visually identical disks which moved independently. The task was to prevent any collisions during a fixed duration. Participants could perturb object trajectories via the touchscreen. In Experiment 1, we used a staircase procedure to measure the ability to control moving objects. Object speed was set to 1°/s. On average, participants could control 8.4 items without collision. Individual control strategies were quite variable, but did not predict overall performance. In Experiment 2, we compared iMOT with standard MOT performance using identical displays. Object speed was set to 2°/s. Participants could reliably control more objects (M = 6.6) than they could track (M = 4.0), but performance in the two tasks was positively correlated. In Experiment 3, we used a dual-task design. Compared to the single-task baseline, iMOT performance decreased and MOT performance increased when the two tasks had to be completed together. 
Overall, these findings suggest: 1) there is a clear limit to the number of items that can be simultaneously controlled, for a given speed and display density; 2) participants can control more items than they can track; 3) task-relevant action appears not to disrupt MOT performance in the current experimental context.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published18Interactive Multiple Object Tracking (iMOT)1501715422ReichenbachTPBB20133AReichenbachAThielscherAPeerHHBülthoffJPBresciani2014-01-0084615–625Seemingly effortlessly, we adjust our movements to continuously changing environments. After initiation of a goal-directed movement, the motor command is under constant control of sensory feedback loops. The main sensory signals contributing to movement control are vision and proprioception. Recent neuroimaging studies have focused mainly on identifying the parts of the posterior parietal cortex (PPC) that contribute to visually guided movements. We used event-related TMS and force perturbations of the reaching hand to test whether the same sub-regions of the left PPC contribute to the processing of proprioceptive-only and of multi-sensory information about hand position when reaching for a visual target. TMS over two distinct stimulation sites elicited differential effects: TMS applied over the posterior part of the medial intraparietal sulcus (mIPS) compromised reaching accuracy when proprioception was the only sensory information available for correcting the reaching error. When visual feedback of the hand was available, TMS over the anterior intraparietal sulcus (aIPS) prolonged reaching time. 
Our results show for the first time the causal involvement of the posterior mIPS in processing proprioceptive feedback for online reaching control, and demonstrate that distinct cortical areas process proprioceptive-only and multi-sensory information for fast feedback corrections.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-615A key region in the human parietal cortex for processing proprioceptive hand feedback during reaching movements15017154221501718821SonJLCB20133HISonHJungDYLeeJHChoHHBülthoff2014-01-00132117In this paper, human viscosity perception in haptic teleoperation systems is thoroughly analyzed. An accurate perception of viscoelastic environmental properties such as viscosity is a critical ability in several contexts, such as telesurgery, telerehabilitation, telemedicine, and soft-tissue interaction. We study and compare the ability to perceive viscosity from the standpoint of detection and discrimination using several relevant control methods for the teleoperator. The perception-based method, which was proposed by the authors to enhance the operator's kinesthetic perception, is compared with the conventional transparency-based control method for the teleoperation system. The fidelity-based method, which is a primary method among perception-centered control schemes in teleoperation, is also studied. We also examine the necessity and impact of the remote-site force information for each of the methods. The comparison is based on a series of psychophysical experiments measuring absolute threshold and just noticeable difference for all conditions. The results clearly show that the perception-based method enhances both detection and discrimination abilities compared with other control methods. The results further show that the fidelity-based method confers a better discrimination ability than the transparency-based method, although this is not true with respect to detection ability. 
In addition, we show that force information improves viscosity detection for all control methods, as predicted from previous theoretical analysis, but improves the discrimination threshold only for the perception-based method.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published16A psychophysical evaluation of haptic controllers: viscosity perception of soft environments1501715422NestiBMB20133ANestiMBarnett-CowanPRMacNeilageHHBülthoff2014-01-001232303314Perceiving vertical self-motion is crucial for maintaining balance as well as for controlling an aircraft. Whereas heave absolute thresholds have been exhaustively studied, little work has been done in investigating how vertical sensitivity depends on motion intensity (i.e., differential thresholds). Here we measure human sensitivity for 1-Hz sinusoidal accelerations for 10 participants in darkness. Absolute and differential thresholds are measured for upward and downward translations independently at 5 different peak amplitudes ranging from 0 to 2 m/s2. Overall vertical differential thresholds are higher than horizontal differential thresholds found in the literature. Psychometric functions are fit in linear and logarithmic space, with goodness of fit being similar in both cases. Differential thresholds are higher for upward as compared to downward motion and increase with stimulus intensity following a trend best described by two power laws. The power laws’ exponents of 0.60 and 0.42 for upward and downward motion, respectively, deviate from Weber’s Law in that thresholds increase less than expected at high stimulus intensity. 
We speculate that increased sensitivity at high accelerations and greater sensitivity to downward than upward self-motion may reflect adaptations to avoid falling.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published11Human sensitivity to vertical self-motion1501715422delaRosaSGBC2013_23Sde la RosaSStreuberMGieseHHBülthoffCCurio2014-01-0019110The social context in which an action is embedded provides important information for the interpretation of an action. Is this social context integrated during the visual recognition of an action? We used a behavioural visual adaptation paradigm to address this question and measured participants’ perceptual bias of a test action after they were adapted to one of two adaptors (adaptation after-effect). The action adaptation after-effect was measured for the same set of adaptors in two different social contexts. Our results indicate that the size of the adaptation effect varied with social context (social context modulation) although the physical appearance of the adaptors remained unchanged. Three additional experiments provided evidence that the observed social context modulation of the adaptation effect is due to the adaptation of visual action recognition processes. We found that adaptation is critical for the social context modulation (experiment 2). Moreover, the effect is not mediated by the emotional content of the action alone (experiment 3) and visual information about the action seems to be critical for the emergence of action adaptation effects (experiment 4). 
Taken together, these results suggest that processes underlying visual action recognition are sensitive to the social context of an action.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published9Putting Actions in Context: Visual Action Adaptation Aftereffects Are Modulated by Social Contexts1501715422GutekunstGSKM20147MGutekunstMGeussGRauhoeftJKStefanucciUKloosBMohlerBremen, Germany2014-12-09912This paper compares the influence that a video self-avatar and the lack of any visual body representation have on height estimation when standing at a virtual visual cliff. A height estimation experiment was conducted using a custom augmented reality Oculus Rift hardware and software prototype, also described in this paper. The results are consistent with previous research demonstrating that the presence of a visual body influences height estimates, just as it has been shown to influence distance estimates and affordance estimates.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published3A Video Self-avatar Influences the Perception of Heights in an Augmented Reality Oculus Rift150171542215017VenrooijMAvMvB20147JVenrooijMMulderDAAbbinkMMvan PaassenMMulderFCTvan der HelmHHBülthoffSan Diego, CA, USA2014-10-0019461951Biodynamic feedthrough (BDFT) is the feedthrough of vehicle accelerations through the human body, leading to involuntary control device inputs. BDFT is a relevant problem as it reduces control performance in a large range of vehicles under various circumstances. This paper proposes an approach to mitigate BDFT. What differentiates this method from other mitigation approaches is that it accounts for adaptations in the neuromuscular dynamics of the human body. It is known that BDFT is strongly dependent on these dynamics. The approach was tested, as a proof of concept, in an experiment in a motion simulator where participants were asked to fly a simulated vehicle through a virtual tunnel. 
The cancellation approach was evaluated by comparing performance with and without the motion disturbance active and with and without cancellation active. Results showed that the cancellation approach was successful. The detrimental effects of BDFT, such as a decrease in control performance and an increase in control effort, were largely removed.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/IEEE-SMC-2014-Venrooij-Slides.pdfpublished5Admittance-adaptive model-based cancellation of biodynamic feedthrough1501715422OlivariNBP2014_37MOlivariFMNieuwenhuizenHHBülthoffLPolliniSan Diego, CA, USA2014-10-0035733578A human-centered design of haptic aids aims at tuning the force feedback based on the effect it has on human behavior. For this goal, a better understanding of the influence of haptic aids on the pilot neuromuscular response becomes crucial. In realistic scenarios, the neuromuscular response can continuously vary depending on many factors, such as environmental factors or pilot fatigue. This paper presents a method that estimates time-varying neuromuscular dynamics online during force-related tasks. The method is based on a Recursive Least Squares (RLS) algorithm and assumes that the neuromuscular response can be approximated by a Finite Impulse Response filter. The reliability and the robustness of the method were investigated by performing a set of Monte-Carlo simulations with increasing levels of remnant noise. Even with high levels of remnant noise, the RLS algorithm provided accurate estimates when the neuromuscular dynamics were constant or changed slowly. With instantaneous changes, the RLS algorithm needed almost 8 s to converge to a reliable estimate. 
These results indicate that the RLS algorithm is a valid tool for online estimation of time-varying admittance.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published5Identifying Time-Varying Neuromuscular System with a Recursive Least-Squares Algorithm: a Monte-Carlo Simulation Study1501715422CognettiOPRS20147MCognettiGOrioloPPelitiLRosaPStegagnoChicago, IL, USA2014-09-00350356We propose a cooperative control scheme for a
heterogeneous multi-robot system, consisting of an Unmanned
Aerial Vehicle (UAV) equipped with a camera and multiple
identical Unmanned Ground Vehicles (UGVs). Our control
scheme takes advantage of the different capabilities of the
robots. Since the system is highly redundant, the execution
of multiple different tasks is possible. The primary task is
aimed at keeping the UGVs well inside the camera field of
view, so as to allow our localization system to reconstruct the identity and relative pose of each UGV with respect to the UAV. Additional tasks include formation control, navigation and obstacle avoidance. We thoroughly discuss the feasibility of each task, proving convergence when possible. Simulation results are presented to validate the proposed method.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/IROS-2014-Cognetti.pdfpublished6Cooperative Control of a Heterogeneous Multi-Robot System
based on Relative Localization1501715422ScheerBC20147MScheerHHBülthoffLLChuangTübingen, Germany2014-09-00S135S137Difficulties experienced in steering a vehicle can be expected to place a demand on one’s mental resources (O’Donnell, Eggemeier 1986). While the extent of this mental workload (MWL) can be estimated by self-reports (e.g., NASA-TLX; Hart, Staveland 1988), it can also be
physiologically evaluated in terms of how a primary task taxes a common and limited pool of mental resources, to the extent that it reduces the electroencephalographic (EEG) responses to a secondary task (e.g. an auditory oddball task). For example, the participant could be primarily required to control a cursor to track a target while
attending to a series of auditory stimuli, which would infrequently present target tones that should be responded to with a button-press (e.g., Wickens, Kramer, Vanasse and Donchin 1983). Infrequently presented targets, termed oddballs, are known to elicit a large positive potential approximately 300 ms after their presentation (i.e., the P3). Indeed, increasing tracking difficulty either by decreasing the predictability of the tracked target or by changing the complexity of the controller dynamics has been shown to attenuate P3 responses in the secondary auditory monitoring task (Wickens et al. 1983; Wickens, Kramer and Donchin 1984). In contrast, increasing tracking difficulty by introducing more frequent direction changes of the tracked target (i.e., including higher frequencies in the function that describes the target’s motion trajectory) has been shown to bear little influence on the secondary
task’s P3 response (Wickens, Israel and Donchin 1977; Isreal, Chesney, Wickens and Donchin 1980). Overall, the added requirement of a steering task consistently results in a lower P3 amplitude, relative to performing auditory monitoring alone (Wickens et al. 1983; Wickens et al. 1977; Isreal et al. 1980).
Using a dual-task paradigm for indexing workload is not ideal. First, it requires participants to perform a secondary task. This prevents it from being applied in real-world scenarios; users cannot be expected to perform an unnecessary task that could compromise their critical work performance. Second, it can only be expected to work if
the performance of the secondary task relies on the same mental resources as those of the primary task (Wickens, Yeh 1983), requiring a deliberate choice of the secondary task. Thus, it is fortunate that more recent studies have demonstrated that P3 amplitudes can be sensitive to MWL, even if the auditory oddball is ignored (Ullsperger,
Freude and Erdmann 2001; Allison, Polich 2008). Such ignored oddballs are said to induce a momentary and involuntary shift of general attention, especially if recognizable sounds (e.g., a dog bark, as opposed to a pure tone) are used (Miller, Rietschel, McDonald and Hatfield 2011).
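The stimulus streams used in such oddball studies can be sketched as a simple probabilistic sequence generator. This is a minimal illustration only: the function name and stimulus labels are ours, while the probabilities for the 2- and 3-stimulus paradigms follow those reported later in this abstract.

```python
import random

def oddball_stream(n_trials, paradigm, seed=0):
    """Draw a task-irrelevant auditory stimulus sequence.

    paradigm 1: every stimulus is the recognizable oddball (e.g. a dog bark).
    paradigm 2: oddball (p=0.1) among frequent pure-tone standards (p=0.9).
    paradigm 3: oddball (p=0.1), frequent standard (p=0.8),
                infrequent standard (p=0.1).
    """
    rng = random.Random(seed)
    if paradigm == 1:
        # 1-stimulus paradigm: only oddballs, spaced by the scheduler
        return ["oddball"] * n_trials
    if paradigm == 2:
        names, probs = ("oddball", "standard"), (0.1, 0.9)
    elif paradigm == 3:
        names, probs = ("oddball", "standard", "rare_standard"), (0.1, 0.8, 0.1)
    else:
        raise ValueError("paradigm must be 1, 2 or 3")
    return rng.choices(names, weights=probs, k=n_trials)
```

Keeping the target-to-target interval constant (20 s in the study described below) would then be a matter of scheduling stimulus onsets, which is omitted here.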
The current work, comprising two experiments, investigates the conditions that would allow the ‘novelty-P3’, the P3 elicited by the ignored, recognizable oddball, to be an effective index of the MWL of compensatory tracking. Compensatory tracking is a basic steering task that generalizes to most implementations of vehicular control. In both experiments, participants were required to use a joystick to counteract disturbances of a horizontal plane. To evaluate the generalizability of this paradigm, we depicted this horizontal plane either as a line in a simplified visualization or as the horizon in a real-world environment. In the latter, participants experienced a large
field-of-view perspective of the outside world from the cockpit of an aircraft that rotated erratically about its heading axis. The task was the same regardless of the visualization. In both experiments, we employed a full factorial design for the visualization (instrument,
world) and 3 oddball paradigms (in experiment 1) or 4 levels of task difficulty (in experiment 2) respectively. Two sessions were conducted on separate days for the different visualizations, which were counter-balanced for order. Three trials were presented per oddball paradigm (experiment 1) or level of task difficulty (experiment 2) in
blocks, which were randomized for order. Overall, we found that steering performance was worse with the realistic world visualization, in both experiment 1 (F(1, 11) = 42.8, p < 0.01) and experiment 2 (F(1, 13) = 35.0, p < 0.01). Nonetheless, this manipulation of the visualization had no effect on our participants’ MWL as evaluated by a post-experimental questionnaire (i.e., the NASA-TLX) and EEG responses. This suggests that MWL was unaffected by our choice of visualization.
The first experiment, with 12 participants, was designed to identify the optimal presentation paradigm of the auditory oddball. For the EEG analysis, two participants had to be excluded due to noisy electrophysiological recordings (more than 50 % of epochs rejected). Whilst performing the tracking task, participants were presented with a sequence of auditory stimuli that they were instructed to ignore.
This sequence would, in the 1-stimulus paradigm, contain only the infrequent oddball stimulus (i.e., the familiar sound of a dog’s bark; Fabiani, Kazmerski, Cycowicz and Friedmann 1996). In the 2-stimulus paradigm, this infrequently presented oddball (p = 0.1) is accompanied by a more frequently presented pure tone (p = 0.9); in the 3-stimulus paradigm, the infrequently presented oddball (p = 0.1) is accompanied by a frequently presented pure tone (p = 0.8) and an infrequently presented pure tone (p = 0.1). These three paradigms are widely used in P3 research (Katayama, Polich 1996). It should be noted, however, that the target-to-target interval is 20 s regardless of the paradigm. To obtain the ERPs, epochs from 100 ms before to 900 ms after the onset of the recognizable oddball stimulus were averaged. Mean amplitude measurements were obtained in a 60 ms window, centered at the group-mean peak latency of the largest positive component between 250 and 400 ms (the oddball P3), for each of the three mid-line electrode channels of interest (i.e., Fz, Cz, Pz). In agreement with previous work, the novelty-P3 response was smaller when participants had to perform the tracking task than when they were only presented with the task-irrelevant auditory stimuli, without the tracking task (F(1, 9) = 10.9, p < 0.01). However, the amplitude of the novelty-P3 differed significantly across the presentation paradigms (F(2, 18) = 5.3, p < 0.05), whereby the largest response to our task-irrelevant stimuli was elicited by the 1-stimulus oddball paradigm. This suggests that the 1-stimulus oddball paradigm is most likely to elicit novelty-P3s that are sensitive to changes in MWL. Finally, the attenuation of novelty-P3 amplitudes by the tracking task varied across the three mid-line electrodes (F(2, 18) = 28.0, p < 0.001). Pairwise comparisons, Bonferroni-corrected for multiple comparisons, revealed the P3 amplitude to be largest at Cz, followed by Fz, and smallest at Pz (all p < 0.05). This stands in contrast with previous work that found control difficulty to attenuate P3 responses at parietal electrodes (cf. Isreal et al. 1980; Wickens et al. 1983). Thus, the current paradigm, which uses a recognizable, ignored sound, is likely to reflect an underlying process different from that of previous studies, one that may be more sensitive to the MWL demands of a tracking task.
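The ERP measurement described above (averaging epochs from 100 ms before to 900 ms after stimulus onset, then taking the mean amplitude in a 60 ms window centered on the 250–400 ms peak) can be sketched in a few lines of NumPy. The function name and the synthetic data shapes are ours; this is an illustrative sketch, not the authors' analysis code.

```python
import numpy as np

def mean_amplitude(epochs, times, search=(0.250, 0.400), win=0.060):
    """Average epochs into an ERP and measure the mean amplitude in a
    window centred on the positive peak found inside `search`.

    epochs : (n_trials, n_samples) array of baseline-corrected EEG, in µV
    times  : (n_samples,) array of epoch time stamps in seconds
             (here -0.100 .. 0.900 s around stimulus onset)
    """
    erp = epochs.mean(axis=0)                   # per-subject average waveform
    mask = (times >= search[0]) & (times <= search[1])
    peak_t = times[mask][np.argmax(erp[mask])]  # latency of the P3-like peak
    half = win / 2
    wmask = (times >= peak_t - half) & (times <= peak_t + half)
    return erp[wmask].mean(), peak_t
```

In the study, the window is centered on the group-mean peak latency rather than each subject's own peak; that only changes where `peak_t` comes from.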
Given the results of experiment 1, the second experiment, with 14 participants, investigated whether the 1-stimulus oddball paradigm would be sufficiently sensitive to index tracking difficulty, defined here by the bandwidth of frequencies contributing to the disturbance of the horizontal plane (cf. Isreal et al. 1980). Three different bandwidth profiles (easy, medium, hard) defined the linear increase in the amount of disturbance that had to be compensated for.
This manipulation was effective in increasing subjective MWL, according to the results of a post-experimental NASA-TLX questionnaire (F(2, 26) = 14.9, p < 0.001), and demonstrated the expected linear trend (F(1, 13) = 23.2, p < 0.001). This increase in control effort was also reflected in the amount of joystick activity, which grew linearly across the difficulty conditions (F(1, 13) = 42.2, p < 0.001). For the EEG analysis, two participants had to be excluded due to noisy electrophysiological recordings (more than 50 % of epochs rejected). A planned contrast revealed that the novelty-P3 was significantly lower in the most difficult condition than in the baseline viewing condition, where no tracking was done (F(1, 11) = 5.2, p < 0.05; see Fig. 1a). Nonetheless, the novelty-P3 did not differ significantly between the difficulty conditions (F(2, 22) = 0.13, p = 0.88), nor did it show the expected linear trend (F(1, 11) = 0.02, p = 0.91). Like Isreal et al. (1980), we find that EEG responses do not discriminate the MWL associated with controlling increased disturbances. It remains to be investigated whether the novelty-P3 is sensitive to the complexity of controller dynamics, as has been shown for the P3.
The power spectral density of the EEG data around 10 Hz (i.e., alpha) has been suggested by Smith and Gevins (2005) to index MWL. A post hoc analysis of our current data, at electrode Pz, revealed that alpha power was significantly lower for the medium and hard conditions relative to the view-only condition (F(1, 11) = 6.081, p < 0.05, and F(1, 11) = 6.282, p < 0.05, respectively). Nonetheless, the expected linear trend across tracking difficulty was not significant (Fig. 1b).
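An alpha-band power index of the kind used in this post hoc analysis can be computed, for example, with a simple Hann-windowed periodogram (a lightweight stand-in for a Welch-style estimate; the function name, band limits as defaults, and normalization details are ours, not taken from the study).

```python
import numpy as np

def alpha_power(x, fs, band=(8.0, 12.0)):
    """Estimate band power around 10 Hz (alpha) in one EEG channel,
    e.g. at electrode Pz, via a Hann-windowed periodogram."""
    x = np.asarray(x, dtype=float)
    w = np.hanning(len(x))
    xw = (x - x.mean()) * w                      # detrend and taper
    # one-sided power spectral density, normalized by the window energy
    psd = np.abs(np.fft.rfft(xw)) ** 2 / (fs * (w ** 2).sum())
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (f >= band[0]) & (f <= band[1])
    return psd[mask].sum() * (f[1] - f[0])       # integrate PSD over the band
```

A sanity check on synthetic data: a 10 Hz sinusoid yields orders of magnitude more alpha power than a 3 Hz sinusoid of the same amplitude.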
To conclude, the current results suggest that a 1-stimulus oddball task ought to be preferred when measuring general MWL with the novelty-P3. Although the novelty-P3 can indicate the control effort invested in our compensatory tracking task, it is not sufficiently sensitive to provide a graded response across different levels of disturbance. In this regard, it may not be as effective as self-reports and joystick activity in denoting control effort. Nonetheless, further research could improve the sensitivity of EEG metrics to MWL by investigating other measures that correlate better with the specific demands of a steering task.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Is the novelty-P3 suitable for indexing mental workload in steering tasks?1501715422PrettoNNLB2014_27PPrettoANestiSAENooijMLosertHHBülthoffParis, France2014-09-0040.140.7In driving simulation, simulator tilt is used to reproduce linear acceleration. In order to feel realistic, this tilt is performed at a rate below the tilt-rate detection threshold, which is usually assumed constant. However, it is known that many factors affect the threshold, such as visual information, simulator motion in additional directions, or active vehicle control. Here we investigated the effect of these factors on the roll-rate detection threshold during simulated curve driving.
Ten participants reported whether they detected roll in multiple trials on a driving simulator. Roll-rate detection thresholds were measured under four conditions. In the first three conditions, participants were moved passively through a curve with: (i) roll only in darkness; (ii) combined roll/sway in darkness; (iii) combined roll/sway and visual information. In the fourth condition, participants actively drove through the curve.
Results showed that roll-rate perception in vehicle simulation is affected by the presence of motion in additional directions. Moreover, an active control task seems to increase the detection threshold, i.e. impair motion sensitivity, but with large individual differences. We hypothesize that this is related to the level of immersion during the task.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/DSC-2014-Pretto.pdfpublished0.6Variable Roll-rate Perception in Driving Simulation1501715422PiryankovaSRdBM20147IVPiryankovaJKStefanucciJRomeroSde la RosaMJBlackBJMohlerVancouver, Canada2014-08-09118The goal of this research was to investigate women’s sensitivity to changes in their perceived weight by altering the body mass index (BMI) of the participants’ personalized avatars displayed on a large-screen immersive display. We created the personalized avatars with a full-body 3D scanner that records both the participants’ body geometry and texture. We altered the weight of the personalized avatars to produce changes in BMI while keeping height, arm length and inseam fixed and exploited the correlation between body geometry and anthropometric measurements encapsulated in a statistical body shape model created from thousands of body scans. In a 2x2 psychophysical experiment, we investigated the relative importance of visual cues, namely shape (own shape vs. an average female body shape with equivalent height and BMI to the participant) and texture (own photo-realistic texture or checkerboard pattern texture) on the ability to accurately perceive own current body weight (by asking them ‘Is the avatar the same weight as you?’). Our results indicate that shape (where height and BMI are fixed) had little effect on the perception of body weight. 
Interestingly, the participants perceived their body weight veridically when they saw their own photo-realistic texture and significantly underestimated their body weight when the avatar had a checkerboard patterned texture. The range that the participants accepted as their own current weight was approximately a 0.83 to −6.05 BMI% change tolerance range around their perceived weight. Both the shape and the texture had an effect on the reported similarity of the body parts and the whole avatar to the participant’s body. This work has implications for new measures for patients with body image disorders, as well as researchers interested in creating personalized avatars for games, training applications or virtual reality.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published17Can I recognize my body's weight? The influence of shape and texture on the perception of self150171542215017YukselSBF2014_27BYükselCSecchiHHBülthoffAFranchiBesançon, France2014-07-00433440In order to properly control the physical interactive behavior of a flying vehicle, the information about the forces acting on the robot is very useful. Force/torque sensors can be exploited for measuring such information but their use increases the cost of the equipment, the weight to be carried by the robot and, consequently, it reduces the flying autonomy. Furthermore, a sensor can measure only the force/torque applied to the point it is mounted in. In order to overcome these limitations, in this paper we introduce a Lyapunov based nonlinear observer for estimating the external forces applied to a quadrotor. Furthermore, we show how to exploit the estimated force for shaping the interactive behavior of the quadrotor using Interconnection and Damping Assignment Passivity Based Controller (IDA-PBC). 
The results of the paper are validated by means of simulations.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published7A nonlinear force observer for quadrotors and application to physical interactive tasks1501715422StegagnoMB20147PStegagnoCMassiddaHHBülthoffHong Kong, China2014-06-0113Object recognition is a fundamental topic for the development of robotic systems able to interact with the
environment. Most existing methods are based on vision systems and assume a broad point of view over the objects, which are observed in their entirety. This assumption is sometimes difficult to fulfill in practice, and in particular in swarm systems, constituted by a multitude of small robots with limited sensing and computational capabilities. We have developed a method for object recognition with a heterogeneous swarm of low-informative spatially-distributed sensors employing a distributed version of the naive Bayes classifier. Simulation results show the effectiveness of this approach highlighting some
notable properties of the developed algorithm.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/ICRA-2014-Stegagno2.pdfpublished2Object Recognition in Swarm Systems: Preliminary Results1501715422StegagnoBBF20147PStegagnoMBasileHHBülthoffAFranchiHong Kong, China2014-06-0038623869We present the development of a semi-autonomous quadrotor UAV platform for indoor teleoperation using RGB-D technology as an exteroceptive sensor. The platform integrates IMU and Dense Visual Odometry pose estimation in order to stabilize the UAV velocity and track the desired velocity commanded by a remote operator through a haptic interface. While being commanded, the quadrotor autonomously performs a persistent pan-scanning of the surrounding area in order to extend the intrinsically limited field of view. The RGB-D sensor is also used for collision-safe navigation based on a probabilistically updated local obstacle map. In the operator's visual feedback, the pan-scanning movement is compensated in real time by an IMU-based adaptive filtering algorithm that lets the operator drive in an oscillation-free frame. An additional sensory channel for the operator is provided by the haptic feedback, which is based on the obstacle map and the velocity tracking error in order to convey information about the environment and the quadrotor state. 
The effectiveness of the platform is validated by means of experiments performed without the aid of any external positioning system.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/2014e-SteBasBueFra-preprint.pdfpublished7A Semi-autonomous UAV Platform for Indoor Remote Operation with Visual and Haptic Feedback1501715422GagliardiOBF20147MGagliardiGOrioloHHBülthoffAFranchiStrasbourg, France2014-06-0019021908We address the problem of clearing an arbitrary and unknown network of roads using an organized team of Unmanned Aerial Vehicles (UAVs) equipped with a monocular down-facing camera, an altimeter, plus high-bandwidth short-range and low-bandwidth long-range communication systems. We allow the UAVs to possibly split into several subgroups. In each subgroup, a leader guides the motion employing hierarchical coordination. A feature/image-based algorithm guides the subgroup toward the unexplored region without any use of global localization or environmental mapping. At the same time, all the entry points of the explored region are kept under control, so that any moving object entering or exiting the previously cleared area is detected. Simulation results on real aerial images demonstrate the functionalities and the effectiveness of the proposed algorithm.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/2014g-GagOriBueFra-preprint.pdfpublished6Image-based road network clearing without localization and without maps using a team of UAVs1501715422YukselSBF20147BYükselCSecchiHHBülthoffAFranchiHong Kong, China2014-06-0062586265In this paper we propose a controller, based on an extension of the Interconnection and Damping Assignment-Passivity Based Control (IDA-PBC) framework, for shaping the whole physical characteristics of a quadrotor and for obtaining a desired interactive behavior between the robot and the environment. In the control design, we shape the total energy (kinetic and potential) of the undamped original system by first excluding external effects. In this way we can assign new dynamics to the system. We then apply damping injection to the new system to achieve a desired damped behavior, and show how to connect a high-level control input to such a system by taking advantage of the new desired physics. 
We support the theory with extensive simulations by changing the overall behavior of the UAV for different desired dynamics, and show the advantage of this method for surface-sliding tasks, such as ceiling painting, cleaning or surface inspection.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/2014d-YueSecBueFra-preprint.pdfpublished7Reshaping the physical properties of a quadrotor through IDA-PBC and its application to aerial physical interaction1501715422MasoneRBF20147CMasonePRobuffo GiordanoHHBülthoffAFranchiHong Kong, China2014-06-0064686475A new framework for semi-autonomous path planning for mobile robots that extends the classical paradigm of bilateral shared control is presented. The path is represented as a B-spline and the human operator can modify its shape by controlling the motion of a finite number of control points. An autonomous algorithm corrects the human directives in real time in order to facilitate path tracking for the mobile robot and ensures i) collision avoidance, ii) path regularity, and iii) attraction to nearby points of interest. A haptic feedback algorithm processes both the human's and the autonomous control terms, and their integrals, to provide information about the mismatch between the path specified by the operator and the one corrected by the autonomous algorithm. The framework is validated with extensive experiments using a quadrotor UAV and a human in the loop with two haptic interfaces.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/2014c-MasRobBueFra-preprint.pdfpublished7Semi-autonomous Trajectory Generation for Mobile Robots with Integral Haptic Shared Control1501715422FladNBC20147NFladFMNieuwenhuizenHHBülthoffLLChuangHeraklion, Greece2014-06-00311Delays between user input and the system’s reaction in control tasks have been shown to have a detrimental effect on performance. This is often accompanied by increases in self-reported workload.
In the current work, we sought to identify physiological measures that correlate with pilot workload in a conceptual aerial vehicle that suffered from varying time delays between control input and vehicle response. For this purpose, we measured the skin conductance and heart rate variability of 8 participants during flight maneuvers in a fixed-base simulator. Participants were instructed to land a vehicle while compensating for roll disturbances under different conditions of system delay. We found that control error and the self-reported workload increased with increasing time delay. Skin conductance and input behavior also reflected corresponding changes. Our results show that physiological measures are sufficiently robust for evaluating the adverse influence of system delays in a conceptual vehicle model.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published8System Delay in Flight Simulators Impairs Performance and Increases Physiological Workload1501715422GioiosoFSSP20147GGioiosoAFranchiGSalviettiSScheggiCPrattichizzoHong Kong, China2014-06-0043354341The flying hand is a robotic hand consisting of a swarm of UAVs able to grasp an object where each UAV contributes to the grasping task with a single contact point at the tooltip. The swarm of robots is teleoperated by a human hand whose fingertip motions are tracked, e.g., using an RGB-D camera. We solve the kinematic dissimilarity of this unique master-slave system using a multi-layered approach that includes: a hand interpreter that translates the fingertip motion in a desired motion for the object to be manipulated; a mapping algorithm that transforms the desired object motions into a suitable set of virtual points deviating from the planned contact points; a compliant force control for the case of quadrotor UAVs that allows using them as indirect 3D force effectors. Visual feedback is also used as a sensory substitution technique to provide a hint on the internal forces exerted on the object.
We validate the approach with several human-in-the-loop simulations including the full physical model of the object, contact points and UAVs.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/2014a-GioFraSalSchPra-preprint.pdfpublished6The flying hand: A formation of UAVs for cooperative aerial tele-manipulation1501715422ScheerNBC20147MScheerFMNieuwenhuizenHHBülthoffLLChuangHeraklion, Greece2014-06-00202211Flight simulators are often assessed in terms of how well they imitate the physical reality that they endeavor to recreate. Given that vehicle simulators are primarily used for training purposes, it is equally important to consider the implications of visualization in terms of its influence on the user’s control performance. In this paper, we report that a complex and realistic visual world environment can result in larger performance errors compared to a simplified, yet equivalent, visualization of the same control task. This is accompanied by an increase in subjective workload. A detailed analysis of control performance indicates that this is because the error perception is more variable in a real world environment.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published9The Influence of Visualization on Control Performance in a Flight Simulator1501715422GioiosoRPBF20147GGioiosoMRyllDPrattichizzoHHBülthoffAFranchiHong Kong, China2014-06-0062786284In this paper the problem of a quadrotor that physically interacts with the surrounding environment through a rigid tool is considered. We present a theoretical design that allows to exert an arbitrary 3D force by using a standard near-hovering controller that was originally developed for contact-free flight control. This is achieved by analytically solving the nonlinear system that relates the quadrotor state, the force exerted by the rigid tool on the environment, and the near-hovering controller action at the equilibrium points, during any generic contact. 
Stability of the equilibria for the most relevant actions (pushing, releasing, lifting, dropping, and left-right shifting) is proven by means of numerical analysis using the indirect Lyapunov method. An experimental platform, including a suitable tool design, has been developed and used to validate the theory with preliminary experiments.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/2014b-GioRylPraBueFra-preprint.pdfpublished6Turning a near-hovering controlled quadrotor into a 3D force effector1501715422GeluardiNPB20147SGeluardiFMNieuwenhuizenLPolliniHHBülthoffMontréal, QC, Canada2014-05-0017211731This paper presents the implementation of a Multi-Input Single-Output fully coupled transfer function model of a civil light helicopter in hover. A frequency domain identification method is implemented. It is discussed that the chosen frequency range of excitation allows capturing some important rotor dynamic modes. Therefore, studies that require coupled rotor/body models are possible. The pitch-rate response with respect to the longitudinal cyclic is considered in detail throughout the paper. Different transfer functions are evaluated to compare the capability to capture the main helicopter dynamic modes. It is concluded that models with order less than 6 are not able to model the lead-lag dynamics in the pitch axis. Nevertheless, a transfer function model of the 4th order can provide acceptable results for handling qualities evaluations. The identified transfer function models are validated in the time domain with input signals different from those used during the identification and show good predictive capabilities.
From the results it is possible to conclude that the identified transfer function models are able to capture the main dynamic characteristics of the considered light helicopter in hover.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/AHS-2014-Geluardi.pdfpublished10Frequency Domain System Identification of a Light Helicopter in Hover1501715422LacheleVPB20147JLächeleJVenrooijPPrettoHHBülthoffMontréal, QC, Canada2014-05-0017771785nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published8Motion Feedback Improves Performance in Teleoperating UAVs1501715422WiskemannDPvMB20147CMWiskemannFMDropDMPoolMMvan PaassenMMulderHHBülthoffMontréal, QC, Canada2014-05-0017061720This paper describes an experiment conducted to investigate the effects of roll-lateral motion cueing algorithm settings
on motion fidelity for helicopter roll-lateral repositioning tasks. A total of 13 motion conditions, comprising two roll gain settings, two degrees of roll-lateral coordination and three roll washout intensities, were tested by four pilots on the CyberMotion Simulator at the Max Planck Institute for Biological Cybernetics. An emphasis was put on the use of objective measurements for motion fidelity determination, in addition to collected subjective handling quality ratings (HQR) and motion fidelity ratings (MFS). Higher roll gains were found to have a beneficial effect on both the subjective and the objective metrics, which is in line with previous findings. Reducing the degree of coordination had a negative effect on subjective ratings, but did not show a consistent negative effect for the considered objective metrics. Stronger roll washout had a large and consistent negative effect on the subjective ratings. This is confirmed by the obtained objective measurements, which show high control activity and less realistic vehicle trajectories during
the deceleration and stabilization phase of the maneuver for conditions with strong roll washout. We conclude that roll and lateral gain are more effective than roll washout at attenuating the simulated motion.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/AHS-2014-Drop.pdfpublished14Subjective and Objective Metrics for the Evaluation of Motion Cueing Fidelity for a Roll-Lateral Reposition Maneuver1501715422D039CruzPLCBSGHVAFBKKKPFSBKKMLSGTOC20147MD'CruzHPatelLLewisSCobbMBuesOStefaniTGroblerKHelinJViitaniemiSAromaaBFrohlichSBeckAKunertAKulikIKaraseitanidisPPsonisNFrangakisMSlaterIBergstromKKilteniEKokkinaraBMohlerMLeyrerFSoykaEGaiaDTedoneMOlbertMCappitelliMinneapolis, MN, USA2014-02-00167168Our vision is that regardless of future variations in the interior of airplane cabins, we can utilize ever-advancing state-of-the-art virtual and mixed reality technologies with the latest research in neuroscience and psychology to achieve high levels of comfort for passengers. Current surveys on passengers' experience during air travel reveal that they are least satisfied with the amount and effectiveness of their personal space, and their ability to work, sleep or rest. Moreover, considering current trends, the amount of available space is likely to decrease and therefore passengers' physical comfort during a flight is likely to worsen significantly. Therefore, the main challenge is to enable the passengers to maintain a high level of comfort and satisfaction while being placed in a restricted physical space.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published1Demonstration: VR-HYPERSPACE - The innovative use of virtual reality to increase comfort by changing the perception of self and space150171501715422OlivariNBP20147MOlivariFMNieuwenhuizenHHBülthoffLPolliniNational Harbor, MD, USA2014-01-00163173External aids are required to increase safety and performance during the manual control of an aircraft.
Automated systems can surpass the performance usually achieved by pilots. However, they suffer from several issues caused by the pilot's unawareness of the automation's control command. Haptic aids can overcome these issues by conveying their control command through forces on the control device. To investigate how the transparency
of the haptic control action influences performance and pilot behavior, a quantitative comparison between haptic aids and automation is needed. An experiment was conducted in which pilots performed a compensatory tracking task with haptic aids and with automation. The haptic aid and the automation were designed to be equivalent when the pilot was
out-of-the-loop, i.e., to provide the same control command. Pilot performance and control effort were then evaluated with pilots in-the-loop and contrasted to a baseline condition without external aids. The haptic system allowed pilots to improve performance compared with the baseline condition. However, automation outperformed the other two conditions. Pilots' control effort was reduced by the haptic aid and the automation in a similar way. In addition, the pilot open-loop response was estimated with a non-parametric estimation method. Changes in the pilot response were observed in terms of increased crossover
frequency with automation, and decreased neuromuscular peak with haptics.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/SCITECH-2014-Olivari.pdfpublished10An Experimental Comparison of Haptic and Automated Pilot Support Systems1501715422NieuwenhuizenB20147FMNieuwenhuizenHHBülthoffNational Harbor, MD, USA2014-01-00154162Highway-in-the-sky displays and haptic shared control could provide an easy-to-use control interface for non-expert pilots. In this paper, various display and haptic approaches
are evaluated in a flight control task with a personal aerial vehicle. It is shown that a tunnel or a wall representation of the flight trajectory leads to the best performance and lowest control activity and effort. Similar results are obtained when haptic guidance cues are based on the error of a predicted position of the vehicle with respect to the flight trajectory. Such haptic cues are also subjectively preferred by the pilots. This study indicates that the combination of a haptic shared control framework and highway-in-the-sky display can provide non-expert pilots with an easy-to-use control interface for flying a personal aerial vehicle.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/SCITECH-2014-FMN.pdfpublished8Evaluation of Haptic Shared Control and a Highway-in-the-Sky Display for Personal Aerial Vehicles1501715422ThorntonHB20147IMThorntonTSHorowitzHHBülthoffLeuven, Belgium2014-10-001128Multiple Object Tracking (MOT) has proven to be a very useful laboratory tool for exploring the
limits of divided attention. Compared to many other attention tasks, MOT appears to capture much
of the complexity of our day-to-day environment. Often though, for example when driving or playing
sport, we need to act on the environment as well as simply monitor it. In the current work, we asked
whether the need to make focused, task-relevant movements, would interfere with the ability to
track multiple objects. Sixteen participants completed single-task versions of standard MOT and a
new collision-avoidance task that we call interactive multiple object tracking (iMOT). In the iMOT
task, which is based on the popular mobile app games Flight Controller and Harbor Master, the goal
is to stop objects colliding by using touch control to perturb trajectories. Compared to single-task
baseline, iMOT performance decreased and MOT performance increased when the two tasks had to be
performed together. Although strategic allocation of resources may partly account for this pattern of
costs and benefits, it seems clear that actions can be planned and executed at the same time as tracking
multiple objects.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-1128Does action disrupt multiple object tracking?1501715422ChangBd2014_37D-SChangHHBülthoffSde la RosaTübingen, Germany2014-09-00S33S34Introduction
Human actions contain an extensive array of socially relevant information. Previous studies have shown that even brief exposure to visually-observed human actions can lead to accurate predictions of goals or intentions accompanying human actions. For example, motion kinematics can enable predicting the success of a basketball shot, or whether a hand movement is carried out with cooperative or competitive
intentions. It has also been reported that gestures accompanying a conversation can serve as a rich source of information for judging the trustworthiness of another person. Based on these previous findings we wondered whether humans could actually predict the cooperativeness of another individual by identifying visible social cues. Would it be possible to predict the cooperativeness of a person by just observing everyday actions such as walking or running? We hypothesized that even brief excerpts of human actions depicted and
presented as biological motion cues (i.e. point-light figures) would provide sufficient information to predict cooperativeness. Using a motion-capture technique and a game-theoretical interaction setup, we explored whether prediction of cooperation was possible merely by
observing biological motion cues of everyday actions, and which actions enabled these predictions.
We recorded six different human actions—walking, running, greeting, table tennis playing, choreographed dancing (Macarena) and spontaneous dancing—in normal participants using an inertia-based motion capture system (MVN Motion Capture Suit from XSense, Netherlands). A total
number of 12 participants (6 male, 6 female) participated in motion recording. All actions were then post-processed to short movies (ca. 5 s) showing point light stimuli. These actions were then evaluated by 24 other participants in terms of personality traits such as cooperativeness
and trustworthiness, on a Likert scale ranging from 1 to 7. The original participants who provided the recorded actions then returned a few months later to be tested for their actual cooperativeness performance. They were given standard social dilemmas used in game theory such as the give-some game, stag hunt game, and public goods game. In those interaction games, they were asked to exchange or give
tokens to another player, and depending on their choices they were able to win or lose an additional amount of money. The choice of behavior for each participant was then recorded and coded for cooperativeness. This cooperativeness performance was then compared with the perceived cooperativeness based on the different ratings of
their actions performed and evaluated by other participants.
Results and Discussion
Preliminary results showed a significant correlation between cooperativeness ratings and actual cooperativeness performance. The actions showing a consistent correlation were Walking, Running and Choreographed Dancing (Macarena). No significant correlation was observed for actions such as Greeting, Table tennis playing or Spontaneous Dancing. A similar tendency was consistently observed across all actions, although no significant correlations were found for
all social dilemmas. The ratings of different actors and actions were highly consistent across different raters and high inter-rater-reliability was achieved. It seems possible that natural and constrained actions carry more social cues enabling prediction of cooperation than actions
showing more variance across different participants. Further studies with a larger number of actors and raters are planned to confirm whether accurate prediction of cooperation is really possible.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Actions revealing cooperation: predicting cooperativeness in social dilemmas from the observation of everyday actions1501715422MeilingerFMB20147TMeilingerJFrankensteinBJMohlerHHBülthoffTübingen, Germany2014-09-00S53S54Knowledge underlying everyday navigation is distinguished into route and survey knowledge (Golledge 1999). Route knowledge allows re-combining and navigating familiar routes. Survey knowledge is used for pointing to distant locations or finding novel shortcuts. We show that within one’s city of residency route and survey knowledge root in separate memories of the same environment and are represented within different reference frames. Twenty-six Tübingen residents who lived there for seven years on average faced a photorealistic virtual model of Tübingen and completed a survey task in which they pointed to familiar target locations from various locations and orientations. Each participant’s performance was most accurate when facing north, and errors increased as participants’ deviation from a north-facing orientation
increased. This suggests that participants’ survey knowledge was organized within a single, north-oriented reference frame. One week later, 23 of the same participants conducted route knowledge tasks comprising the very same start and goal locations used in the survey task before. Now participants did not point to a goal location, but used the arrow keys of a keyboard to enter
route decisions along an imagined route leading to the goal.
Deviations from the correct number of left, straight, etc. decisions and response latencies were completely uncorrelated with errors and latencies in pointing. This suggests that participants employed different and independent representations for the matched route and survey tasks. Furthermore, participants made fewer route errors when asked to respond from an imagined horizontal walking perspective rather than from an imagined constant aerial perspective, which replaced left, straight, right decisions by up, left, right, down as in a map, with task order balanced. This performance advantage suggests that participants did not rely on the single, north-up reference used for pointing. Route and survey knowledge were organized along different reference frames. We conclude that our participants’ route knowledge employed
multiple local reference frames acquired from navigation whereas their survey knowledge relied on a single north-oriented reference frame learned from maps. Within their everyday environment, people seem to use map or navigation-based knowledge according to which best suits the task.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0How to remember Tübingen? Reference frames in route and survey knowledge of one’s city of residency150171542215017GlatzBC20147CGlatzHHBülthoffLLChuangTübingen, Germany2014-09-00S38Automated collision avoidance systems promise to reduce accidents and relieve the driver from the demands of constant vigilance. Such systems direct the operator’s attention to potentially critical regions of the environment without compromising steering performance. This
raises the question: What is an effective warning cue?
Sounds with rising intensities are claimed to be especially salient. By evoking the percept of an approaching object, they engage a neural network that supports auditory space perception and attention (Bach et al. 2008). Indeed, we are aroused by and faster to respond to ‘looming’ auditory tones, which increase heart rate and skin conductance
activity (Bach et al. 2009). Looming sounds can differ in terms of their rising intensity profiles. While looming can be approximated by a sound whose amplitude increases linearly with time, an approaching object that emits a constant tone is better described as having an amplitude that increases
exponentially with time. In a driving simulator study, warning cues that had a veridical looming profile induced earlier braking responses than ramped profiles with linearly increasing loudness (Gray 2011). In the current work, we investigated how looming sounds might serve, during a primary steering task, to alert participants to the
appearance of visual targets. Nine volunteers performed a primary steering task whilst occasionally discriminating visual targets. Their primary task was to minimize the vertical distance between an erratically moving cursor and the horizontal mid-line, by steering a joystick towards the latter. Occasionally, diagonally oriented Gabor patches (10° tilt; 1° diameter; 3.1 cycles/deg; 70 ms duration) would appear on either the left or right of the cursor. Participants were instructed to respond with a button-press whenever a pre-defined target appeared. Seventy percent of the time, these visual stimuli were preceded by a 1,500 ms warning tone, 1,000 ms before they appeared. Overall, warning cues resulted in significantly faster and more sensitive detections of the visual target stimuli (F1,8 = 7.72, p < 0.05; F1,8 = 9.63, p < 0.05). Each trial would present one of three possible warning cues. Thus,
a warning cue (2,000 Hz) could be a constant tone with an intensity of 65 dB, a ramped tone with linearly increasing intensity from 60 dB to approximately 75 dB, or a comparable looming tone with an exponentially increasing intensity profile. The different warning cues did not differ in their influence on response times to the visual targets
and recognition sensitivity (F2,16 = 3.32, p = 0.06; F2,16 = 0.10, p = 0.90). However, this might be due to our small sample size. It is noteworthy that the different warning tones did not adversely affect steering performance (F2,16 = 1.65, p = 0.22). Nonetheless, electroencephalographic
potentials to the offset of the warning cues were
significantly earlier for the looming tone, compared to both the constant and ramped tone. More specifically, the positive component of the event-related potential was significantly earlier for the looming tone by about 200 ms, relative to the constant and ramped tone, and sustained for a longer duration (see Fig. 1). The current findings highlight the behavioral benefits of auditory warning cues. More importantly, we find that a veridical looming tone
induces earlier event-related potentials than one with a linearly increasing intensity. Future work will investigate how this benefit might diminish with increasing time between the warning tone and the event that is cued for.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Looming auditory warnings initiate earlier event-related potentials in a manual steering task1501715422MeilingerFWBH20147TMeilingerJFrankensteinKWatanabeHHBülthoffCHölscherBremen, Germany2014-09-00nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Map-based Reference Frames Are Used to Organize Memory of Subsequent Navigation Experience1501715422HohmanndB20147MRHohmannSde la RosaHHBülthoffTübingen, Germany2014-09-00S46S47Action recognition research has mainly focused on investigating the perceptual processes in the recognition of isolated actions from biological motion patterns. Surprisingly little is known about the cognitive representation underlying action recognition. A fundamental
question concerns whether actions are represented
independently or interdependently. Here we examined whether cognitive representations of static (action image) and dynamic (action movie) actions depend on each other and whether cognitive representations for static and dynamic actions overlap. Adaptation paradigms are an elegant way to examine the relationship between different cognitive representations. In an adaptation experiment, participants view a stimulus, the adaptor, for a
prolonged amount of time and afterwards report their perception of a second, ambiguous test stimulus. Typically, the perception of the second stimulus will be biased away from the adaptor stimulus. The presence of an antagonistic perceptual bias (adaptation effect) is often taken as evidence for the interdependency of the cognitive representation between test and adaptor stimulus. We manipulated the dynamic content (dynamic vs. static) of the
test and adaptor stimulus independently. The ambiguous test stimulus was created by a weighted linear morph between the spatial positions of the two adapting actions (hand shake, high five). 30 participants categorized the ambiguous dynamic or static action stimuli after being adapted to dynamic or static actions. Afterwards, we calculated the
perceptual bias for each participant by fitting a psychometric function to the data. We found an action-adaptation after-effect in some but not all experimental conditions. Specifically, the effect was only present
if the presentation of the adaptor and the test stimulus was congruent, i.e. if both were presented in either a dynamic or a static manner (p < 0.001). This action-adaptation after-effect indicates a dependency between cognitive representations when adaptor and test
stimuli have the same dynamic content (i.e. both static or dynamic). Future studies are needed to relate those results to other findings in the field of action recognition and to incorporate a neurophysiological perspective.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0On the perception and processing of social actions1501715422SymeonidouOBC20147E-RSymeonidouMOlivariHHBülthoffLLChuangTübingen, Germany2014-09-00S71Haptic feedback systems can be designed to assist vehicular steering by sharing manual control with the human operator. For example, direct haptic feedback (DHF) forces, that are applied over the control device, can guide the operator towards an optimized trajectory, which he can either augment, comply with or resist according to his preferences.
DHF has been shown to improve performance (Olivari et al.
submitted) and increase safety (Tsoi et al. 2010). Nonetheless, the human operator may not always benefit from the haptic support system. Depending on the amount of the haptic feedback, the operator might demonstrate an over-reliance on or an opposition to this haptic assistance (Forsyth and MacLean 2006). Thus, it is worthwhile to
investigate how different levels of haptic assistance influence shared control performance. The current study investigates how different gain levels of DHF influence performance in a compensatory tracking task. For this
purpose, 6 participants were evenly divided into two groups according to their previous tracking experience. During the task, they had to compensate for externally induced disturbances that were visualized as the difference between a moving line and a horizontal reference standard. Briefly, participants observed how an unstable aircraft symbol, located in the middle of the screen, deviated in the roll axis from a stable artificial horizon. In order to compensate for the roll angle, participants were instructed to use the control joystick. Meanwhile, different DHF forces were presented over the control joystick for gain levels of 0, 12.5, 25, 50 and 100 %. The maximal DHF level was chosen according to the procedure described in
(Olivari et al. 2014) and represents the best stable performance of skilled human operators. The participants’ performance was defined as the reciprocal of the median of the root mean square error (RMSE) in each condition.
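The performance metric described above (the reciprocal of the median root-mean-square error per condition) can be sketched as follows; this is an illustrative sketch, and the function and variable names are assumptions, not taken from the study's own analysis code:

```python
import numpy as np

def condition_performance(error_runs):
    """Performance for one DHF gain condition, defined as the
    reciprocal of the median root-mean-square error (RMSE)
    across the runs recorded in that condition.

    `error_runs` is a sequence of per-run tracking-error time
    series (illustrative names, not the study's code).
    """
    rmse_per_run = [np.sqrt(np.mean(np.square(run))) for run in error_runs]
    return 1.0 / np.median(rmse_per_run)
```

For example, three runs with a constant error of 2.0 each have an RMSE of 2.0, so the condition's performance is 1/2.0 = 0.5; smaller errors yield larger performance values.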
Figure 1a shows that performance improved with increasing
DHF gain, regardless of experience levels. To evaluate the operator’s contribution, relative to the DHF contribution, we calculated the ratio of overall performance to estimated DHF performance without human input. Figure 1b shows that the subject’s contribution in both groups decreased with increasing DHF up to the 50 % condition. The contribution of experienced subjects plateaued between the 50 and 100 % DHF levels. Thus, the increase in performance for the 100 %
condition can mainly be attributed to the higher DHF forces alone. In contrast, the inexperienced subjects seemed to completely rely on the DHF during the 50 % condition, since the operator’s contribution approximated 1. However, this changed for the 100 % DHF level. Here, the participants started to actively contribute to the task (operator’s contribution > 1). This change in behavior resulted in
performance values similar to those of the experienced group Our findings suggest that the increase of haptic support with our DHF system does not necessarily result in over-reliance and can improve performance for both experienced and inexperienced subjects.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0The Role of Direct Haptic Feedback in a Compensatory
Tracking Task1501715422FademrechtBd2014_27LFademrechtIBülthoffSde la RosaBeograd, Serbia2014-08-00103Recognizing actions of others in the periphery is required for fast and appropriate reactions to events in our environment (e.g. seeing kids running towards the street when driving). Previous results show that action recognition is surprisingly accurate even in far periphery (<=60° visual angle (VA)) when actions were directed towards the observer (front view). The front view of a person is considered to be critical for social cognitive processes (Schilbach et al., 2013). To what degree does the orientation of the observed action (front vs. profile view) influence the identification of the action and the recognition of the action's valence across the horizontal visual field? Participants saw life-size stick figure avatars that carried out one of six motion-captured actions (greeting actions: handshake, hugging, waving and aggressive actions: slapping, punching and kicking). The avatar was shown on a large screen display at different positions up to 75° VA. Participants either assessed the emotional valence of the action or identified the action either as ‘greeting’ or as ‘attack’. Orientation had no significant effect on accuracy. Reaction times were significantly faster for profile than for front views (p=0.003) for both tasks, which is surprising in light of recent suggestionsnonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-103A matter of perspective: action recognition depends on stimulus orientation in the periphery1501715422delaRosaHB20147Sde la RosaMHohmannHHBülthoffBeograd, Serbia2014-08-0071Visual action recognition is a prerequisite for humans to physically interact with other humans. Do we use similar perceptual mechanisms when recognizing actions from a photo (static) or a movie (dynamic)? We used an adaptation paradigm to explore whether static and dynamic action information is processed in separate or interdependent action-sensitive channels.
In an adaptation paradigm participants' perception of an ambiguous test stimulus is biased after the prolonged exposure to an adapting stimulus (adaptation aftereffect (AA)). This is often taken as evidence for the existence of interdependent perceptual channels. We used a novel action morphing technique to produce ambiguous test actions that were a weighted linear combination of the two adaptor actions. We varied the dynamics of the content (i.e. static vs. dynamic) of the test and adaptor stimuli independently and were interested in whether the AA was modulated by the congruency of motion information between test and adaptor. The results indicated that the AA only occurred when the dynamics of the content between the test and adaptor were congruent (p<0.05) but not when they were incongruent (p>0.05). These results provide evidence that static and dynamic action information are processed to some degree separately.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-71Actions in motion: Separate perceptual channels for processing dynamic and static action information1501715422PiryankovaSRdBM2014_37IVPiryankovaJKStefanucciJRomeroSde la RosaMJBlackBJMohlerVancouver, Canada2014-08-00nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/SIGGRAPH-2014-Piryankova.pdfpublished0Can I recognize my body's weight? The influence of shape and texture on the perception of self150171542215017ChangBd2014_27D-SChangHHBülthoffSde la RosaBeograd, Serbia2014-08-00103How does the brain discriminate between different actions? Action recognition has been an active field of research for a long time, but still little is known about how the representations of different actions in the brain are related to each other. We wanted to find out whether different actions were ordered according to their semantic meaning or kinematic motion by employing a novel visual action adaptation paradigm.
A total of 24 participants rated four different social actions in terms of their perceived differences in either semantic meaning or kinematic motion. Then, the specific perceptual bias for each action was determined by measuring the size of the action adaptation aftereffect in each participant. Finally, the meaning and motion ratings were used to predict the measured adaptation aftereffect for each action using linear regression. Semantic meaning and the interaction of meaning and motion significantly predicted the adaptation aftereffects, but kinematic motion alone was not a significant predictor. These results imply that differences between distinct actions are encoded in the brain in terms of their meaning rather than their motion. The current experimental paradigm could be a useful method for further mapping the relationship between different actions in the human brain.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-103Does Action Recognition Depend more on the Meaning or Motion of Different Actions?1501715422ZhaoB2014_27MZhaoIBülthoffBeograd, Serbia2014-08-0076Many studies have demonstrated better recognition of own- than other-race faces. However, little is known about whether memories of unfamiliar own- and other-race faces decay similarly with time. We addressed this question by probing participants’ memory about own- and other-race faces both immediately after learning (immediate test) and one week later (delayed test). In both learning and test phases, participants saw a short movie wherein a person was talking in front of the camera (the sound was turned off). Two main results emerged. First, we observed a cross-race deficit in recognizing other-race faces in both immediate and delayed tests, but the cross-race deficit was reduced in the latter. Second, recognizing faces immediately after learning was not better than recognizing them one week later. Instead, overall performance was even better at delayed test than at immediate test.
This result was mainly due to improved recognition for other-race female faces, which showed comparatively low performance when tested immediately. These results demonstrate that memories of both own- and other-race faces persist for a relatively long time. Although other-race faces are less well recognized than own-race faces, they seem to be maintained in long-term memory as well as, and even better than, own-race faces.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-76Long-term memory for own- and other-race faces1501715422NooijPBN20147SAENooijPPrettoHHBülthoffANestiAmsterdam, The Netherlands2014-06-13235Vestibular models can predict many aspects of self-motion perception. However, it is still not completely understood how linear and angular cues combine to form the overall perception of 3D motion in space. Here, we investigated the perception of heading and travelled path during a circular trajectory. According to model predictions (Merfeld et al. 1993) we expected a bias in perceived heading (i.e., facing outward the curve) and a distorted travelled path when in darkness, but close to veridical perception when visual information was also provided. Participants were moved along a circular trajectory using the MPI CyberMotion Simulator (www.cyberneum.de), either blindfolded or viewing congruent visual motion (random dot cloud). The orientation of the body midline with respect to the motion path (heading) was varied using an adaptive procedure. Participants indicated whether they were facing inward or outward the travelled curve. In a separate session aiming at collecting continuous measures (darkness only), participants continuously pointed towards a distant imaginary earth-fixed target, or towards the direction of perceived motion. They also provided drawings of the perceived travelled trajectory. Results show that heading sensitivity in darkness for curved trajectories was significantly lower than generally found for straight paths.
Most of the participants showed a heading bias, but its direction was opposite to the predictions of the Merfeld-model. Perceived heading based on continuous pointing and drawings was not always consistent. These results call for changes to the current model and show that the various components of perception are not always consistent when investigated in isolation.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-235Nonveridical perception of heading and travelled path during curved trajectories1501715422KimBKCHCP20147JKimHHBülthoffS-PKimYGChungSWHanS-CChungJ-YParkHamburg, Germany2014-06-11Introduction:
Recently, multi-voxel pattern analysis (MVPA) has been introduced in the analysis of functional magnetic resonance imaging (fMRI) and allows us to examine distributed spatial patterns of neural activity in response to various tactile stimuli. Taking advantage of its higher sensitivity, MVPA has been employed in a wide range of somatosensory research fields as a complement to the traditional univariate analysis. However, current research on tactile MVPA is mostly focused on delineating neuronal activation patterns in response to the tactile stimuli. Relatively less attention has been devoted towards understanding how neural activation patterns underlie diverse human behavioral outcomes during tactile manipulation tasks. In this study, we aim to investigate how the multi-voxel neural patterns varied with the behavioral discriminative performance in a roughness discrimination task. For this purpose, we search for the brain regions carrying roughness discriminative information using searchlight MVPA and examine how each region correlates with the human behavioral performance.
Sixteen subjects participated in this study approved by Korea University Institutional Review Board (KU-IRB-11-46-A-0). Anatomical (T1-weighted 3D MPRAGE) and functional images (T2*-weighted gradient EPI, TR = 3,000 ms, voxel size = 2.0×2.0×2.0 mm) were obtained using a Siemens 3T scanner (Magnetom TrioTim). Before the fMRI scanning, all the participants performed the behavioral roughness discrimination task. Five different roughness levels of aluminum-oxide abrasive papers (Sumitomo 3-M), which were validated and employed in a previous study, were used. In each trial of the task, the participants explored two randomly presented abrasive papers with the index fingertip of the right hand and reported which of them felt rougher. Behavioral discriminative sensitivity was measured as the difference of roughness values between the 25th and 75th percentile of a psychometric function. This was referred to as the just noticeable difference (JND). The fMRI scanning consisted of five blocks of twenty trials. Each trial was made up of two consecutive periods: exploration for 6 s followed by rest for 15 s. Following instructions, the participants explored a presented abrasive paper with the index fingertip of their right hand. Brain signals were analyzed using a searchlight MVPA approach and the decoding accuracy of each significant cluster was obtained. Finally, we evaluated the correlation between the JND and the decoding accuracy using the Pearson correlation coefficient.
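The JND computation described above, the difference between the roughness values at the 25 % and 75 % points of the psychometric function, might look like the following sketch. The linear interpolation between measured points and the toy response proportions are our assumptions; the study presumably fitted a parametric psychometric function instead.

```python
def level_at(p_target, levels, props):
    """Stimulus level at which the psychometric function crosses p_target,
    found by linear interpolation between measured points (props must increase)."""
    for (x0, p0), (x1, p1) in zip(zip(levels, props), zip(levels[1:], props[1:])):
        if p0 <= p_target <= p1:
            return x0 + (p_target - p0) / (p1 - p0) * (x1 - x0)
    raise ValueError("target proportion outside the measured range")

def jnd(levels, props):
    """JND as the difference between the 75% and 25% points, as in the abstract."""
    return level_at(0.75, levels, props) - level_at(0.25, levels, props)

# Toy data: proportion of "rougher" responses at five comparison roughness levels
levels = [1, 2, 3, 4, 5]
props = [0.05, 0.25, 0.50, 0.75, 0.95]
print(jnd(levels, props))  # -> 2.0
```

A smaller JND means a steeper psychometric function, i.e. finer roughness discrimination, which is why the negative correlation with decoding accuracy reported below corresponds to better behavioral performance.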
A random-effects group analysis revealed that four clusters exhibited statistically significant decoding capabilities to differentiate five distinct roughness levels (p<0.0001 uncorr., cluster size>30). These four clusters were located in the superior portion of the bilateral temporal pole (STP), supplementary motor area (SMA), and contralateral postcentral gyrus (S1). Decoding accuracies for roughness discrimination significantly exceeded the chance level (20%) for every cluster (SMA: 40.3±4.4%; contralateral S1: 38.0±6.7%; contralateral STP: 33.6±4.7%; ipsilateral STP: 33.1±3.7%). Among these clusters, a significant Pearson correlation coefficient was obtained only for SMA (r=-0.547, p<0.05).
In this study, we statistically assessed each set of multi-voxel patterns across the whole brain and revealed that the bilateral STP, SMA, and contralateral S1 exhibited neural activity patterns specific to roughness discrimination. Remarkably, decoding performance using SMA activity showed a significant correlation with the behavioral performance. The negative correlation in SMA indicates that individuals with higher decoding accuracy of roughness from SMA also show better performance (i.e. a smaller JND) in the roughness discrimination task. Our findings suggest that the pattern of activity in SMA may be closely related to the ability to discriminate tactile roughness.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0A correlation study of behavioral and neural decoding performance for roughness discrimination1501715422RoheN20147TRoheUNoppeneyHamburg, Germany2014-06-11Introduction:
To form a reliable percept of the multisensory environment, the brain integrates signals across the senses. However, it should integrate signals only when caused by a common source, but segregate those from different sources (Shams and Beierholm, 2010). Bayesian Causal Inference provides a rational strategy to arbitrate between information integration and segregation: In the case of a common source, signals should be integrated weighted by their sensory reliability (Ernst and Banks, 2002; Alais and Burr, 2004; Fetsch et al., 2012). In case of separate sources, they should be processed independently. Yet, in everyday life, the brain does not know whether signals come from common or different sources, but needs to infer the probabilities of these causal structures from the sensory signals. A final estimate can then be obtained by averaging the estimates under the two causal structures weighted by their posterior probabilities (i.e. model averaging). Indeed, human observers locate audiovisual signal sources by combining the spatial estimates under the assumptions of common and separate sources weighted by their probabilities (Kording et al., 2007). Yet, the neural basis of Bayesian Causal Inference during spatial localization remains unknown. This study combines Bayesian Modeling and multivariate fMRI decoding to characterize how Bayesian Causal Inference is performed by the auditory and visual cortical hierarchies (Fig. 1A-C).
Participants (N = 5) were presented with auditory and visual signals that were independently sampled from four locations along the azimuth. The spatial reliability of the visual signal was high or low. In a selective attention paradigm, participants localized either the auditory or the visual spatial signal. After fitting the Bayesian causal inference model to participants' localization responses, we obtained condition-specific auditory and visual spatial estimates under the assumption of (i) common (SAV,C=1) and (ii) separate sources (SA,C=2, SV,C=2) and (iii) the final combined spatial estimate after model averaging (SA, SV), i.e. five spatial estimates in total (Fig. 1C). Using cross-validation, we trained a support vector regression model to decode these auditory or visual spatial estimates from fMRI voxel response patterns in regions along the visual and auditory cortical hierarchies. We evaluated the decoding accuracy for each spatial estimate in terms of the correlation coefficient between the spatial estimate decoded from fMRI and predicted from the Bayesian Causal Inference model. To determine the spatial estimate that is primarily encoded in a region, we next computed the exceedance probability that a correlation coefficient of one spatial estimate was greater than any of the other spatial estimates (Fig. 1D).
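The model-averaging step described above can be sketched for the auditory estimate as follows. The closed-form Gaussian likelihoods and the zero-mean spatial prior follow the standard formulation of Kording et al. (2007); all parameter values below are illustrative and not fitted to this study's data.

```python
import math

def fuse(xs, variances):
    """Precision-weighted (reliability-weighted) average of Gaussian estimates."""
    weights = [1.0 / v for v in variances]
    return sum(w * x for w, x in zip(weights, xs)) / sum(weights)

def bci_estimate(xA, xV, varA, varV, varP, p_common):
    """Model-averaged auditory location estimate under Bayesian Causal Inference
    with a zero-mean Gaussian spatial prior (variance varP).
    Returns (posterior probability of a common cause, final auditory estimate)."""
    # Likelihood of both signals arising from one common source (closed form)
    d1 = varA * varV + varA * varP + varV * varP
    L1 = math.exp(-0.5 * ((xA - xV) ** 2 * varP + xA ** 2 * varV + xV ** 2 * varA) / d1) / (
        2 * math.pi * math.sqrt(d1))
    # Likelihood of two independent sources
    d2a, d2v = varA + varP, varV + varP
    L2 = math.exp(-0.5 * (xA ** 2 / d2a + xV ** 2 / d2v)) / (
        2 * math.pi * math.sqrt(d2a * d2v))
    # Posterior probability of the common-cause structure
    pC1 = p_common * L1 / (p_common * L1 + (1.0 - p_common) * L2)
    s_common = fuse([xA, xV, 0.0], [varA, varV, varP])  # forced-fusion estimate
    s_segregated = fuse([xA, 0.0], [varA, varP])        # auditory-only estimate
    # Model averaging: weight the structural estimates by their posteriors
    return pC1, pC1 * s_common + (1.0 - pC1) * s_segregated
```

Spatially close audiovisual signals yield a high posterior for the common-cause structure and hence an estimate close to the reliability-weighted fusion; discrepant signals push the estimate toward the segregated, auditory-only value.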
Bayesian Causal Inference emerged along the auditory and visual hierarchies: Lower level visual and auditory areas encoded auditory and visual estimates under the assumption of separate sources (i.e. information segregation). Posterior intraparietal sulcus (IPS1-2) represented the reliability-weighted average of the signals under common source assumptions. Anterior IPS (IPS3-4) represented the task-relevant auditory or visual spatial estimate obtained from model averaging.
This is the first demonstration that the computational operations underlying Bayesian Causal Inference are performed by the human brain in a hierarchical fashion. Critically, the brain explicitly encodes not only the spatial estimates under the assumption of full segregation (primary visual and auditory areas), but also under forced fusion (IPS1-2). These spatial estimates under the causal structures of common and separate sources are then averaged into task-relevant auditory or visual estimates according to model averaging (IPS3-4). Our study provides a novel hierarchical perspective on multisensory integration in human neocortex.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0A cortical hierarchy performs Bayesian Causal Inference for multisensory perception15017154221501718826LeitaoTTN20147JLeitaoAThielscherJTuennerhoffUNoppeneyHamburg, Germany2014-06-11Introduction:
Despite sustained attention, weak sensory events often evade our perceptual awareness. The neural mechanisms that determine whether a stimulus is consciously perceived remain poorly understood. Conscious visual perception is thought to rely on a widespread neural system encompassing primary and higher order visual areas, frontoparietal areas and subcortical regions such as the thalamus. This concurrent TMS-fMRI study applied TMS to the right anterior intraparietal sulcus (IPS) and in a sham control condition to investigate how perturbations of IPS influence the neural systems underlying visual perception of weak sensory events.
Seven subjects took part in the concurrent TMS-fMRI experiment (3T Siemens Magnetom Tim Trio System, GE-EPI, TR = 3290ms, TE = 35ms, 40 axial slices, size = 3mm x 3mm x 3.3mm). The 2x2x2 factorial design manipulated: (i) visual target (present, absent), (ii) visual percept (yes, no) and (iii) TMS condition (IPS, Sham). In a visual target detection task, subjects fixated a cross in the centre of the screen. On 50% of the trials a weak visual target was presented in their left lower visual field. Subjects were instructed to answer 'yes' only when completely sure. Visual stimuli were individually tailored to yield a detection threshold of 70% in visual present trials. Bursts of 4 TMS pulses (10Hz) were applied in image acquisition gaps at 100ms after each trial onset over the right IPS (x=42.3, y=-50.3, z=64.4) and during a sham condition using a MagPro X100 stimulator (MagVenture, Denmark) and a MR-compatible figure of eight TMS coil (MRi-B88). Stimulation intensity was 69% for IPS and was adjusted during Sham stimulation to evoke similar side effects. Trials were presented in blocks of 12 that were interleaved with baseline periods of 13s. Each run consisted of 7 blocks with 4 runs per TMS condition, giving a total of 168 trials per condition. Each TMS condition was performed in different sessions and all conditions were counterbalanced across subjects.
Behavioral responses were categorized in hit, miss, false alarm and correct rejection (CR). Performance measures for each category were computed separately for IPS- and Sham-TMS and averaged across subjects. While each condition was modelled at the 1st level (using SPM8), 2nd level random effects analyses (one-sample t-tests) were restricted to target present trials (i.e. hits, misses). We tested for the main effects of TMS, visual percept and their interaction. Results are reported at p<0.05 at cluster level corrected for the whole brain using an auxiliary uncorrected voxel threshold of p=0.01.
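The four response categories used in the behavioral analysis follow standard signal-detection terminology; a minimal sketch (function names are ours, not from the study):

```python
def categorize(target_present, said_yes):
    """Map one detection trial to its signal-detection outcome category."""
    if target_present:
        return "hit" if said_yes else "miss"
    return "false alarm" if said_yes else "correct rejection"

def tally(trials):
    """Count outcomes over a list of (target_present, said_yes) trial tuples."""
    counts = {"hit": 0, "miss": 0, "false alarm": 0, "correct rejection": 0}
    for present, yes in trials:
        counts[categorize(present, yes)] += 1
    return counts
```

Per-condition counts of this kind are what feed the hit/miss contrasts tested in the fMRI analysis.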
Visual detection involves perceptual decisions based on uncertain sensory representations. As participants set a high criterion for determining whether they are aware of targets, missed trials were associated with more uncertainty as indexed by long response times and thereby placed more demands on decisional processes. TMS to IPS perturbed this neural system involved in perceptual decisions and awareness. Critically, while the right precentral/middle frontal gyrus associated with the frontal eye field usually discriminates between hits and misses, TMS-IPS abolishes this difference in activation indicating that IPS-FEF closely interact in perceptual awareness and decisions.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Using TMS-fMRI to investigate the neural correlates of visual perception150171542215017188261501718821FademrechtBd2014_37LFademrechtIBülthoffSde la RosaTübingen, Germany2014-06-00nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Peripheral Vision and Action Recognition1501715422ZhaoHB2014_37MZhaoWGHaywardIBülthoffTübingen, Germany2014-06-00nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Race of Face Affects Various Face Processing Tasks Differently1501715422KimSRWWB20147JKimJSchultzTRoheCWallravenSWHHBülthoffTübingen, Germany2014-06-00nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Supramodal Representations of Associated Emotions1501715422SymeonidouOBC2014_27E-RSymeonidouMOlivariHHBülthoffLLChuangTübingen, Germany2014-06-00nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0The Role of Direct Haptic Feedback in a Compensatory Tracking Task1501715422JuW20147LJuCWallravenTübingen, Germany2014-06-00nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0User Experience in Stereoscopic Driving Games1501715422Chang2014_27D-SChangHHBülthoffSde la RosaTübingen, Germany2014-06-00nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Visual Adaptation to Social Actions: The Role of Meaning vs. 
Motion for Action Recognition1501715422Yuksel20147BYükselCSecchiAFranchiHong Kong, China2014-05-31nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/fileadmin/user_upload/files/publications/2014/ICRA-WS-2014-Yueksel.pdfpublished0Aerial Physical Interaction via Reshaping of the Physical Properties: Passivity-based Control Methods and Nonlinear Force Observers1501715422EsinsBS20147JEsinsIBülthoffJSchultzSt. Pete Beach, FL, USA2014-05-211436Humans rely strongly on the shape of other people's faces to recognize them. However, faces also change appearance between encounters, for example when people put on glasses or change their hair-do. This can affect face recognition in certain situations, e.g. when recognizing faces that we do not know very well or for congenital prosopagnosics. However, additional cues can be used to recognize faces: faces move as we speak, smile, or shift gaze, and this dynamic information can help to recognize other faces (Hill & Johnston, 2001). Here we tested if and to what extent such dynamic information can help congenital prosopagnosics to improve their face recognition. We tested 15 congenital prosopagnosics and 15 age- and gender-matched controls with a test created by Raboy et al. (2010). Participants learned 18 target identities and then performed an old-new-judgment on the learned faces and 18 distractor faces. During the test phase, half the target faces exhibited everyday changes (e.g. modified hairdo, glasses added, etc.) while the other targets did not change. Crucially, half the faces were presented as short film sequences (dynamic stimuli) while the other half were presented as five random frames (static stimuli) during learning and test. Controls and prosopagnosics recognized identical targets better than changed ones. While controls recognized faces better in the dynamic than in the static condition, prosopagnosics' performance was not better for dynamic compared to static stimuli. This difference between groups was significant.
The absence of a dynamic advantage in prosopagnosics suggests that dysfunctions in congenital prosopagnosia might not only be restricted to ventral face-processing regions, but might also involve lateral temporal regions where facial motion is known to be processed (e.g. Haxby et al., 2000).nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-1436Facial motion does not help face recognition in congenital prosopagnosics1501715422ZhaoB20147MZhaoIBülthoffSt. Pete Beach, FL, USA2014-05-201262Previous studies have shown that face race influences various aspects of face processing, including face identification (Meissner & Brigham, 2001), holistic processing (Michel et al., 2006), and processing of featural and configural information (Hayward et al., 2008). However, whether these various aspects of other-race effects (ORE) arise from the same underlying mechanism or from independent ones remains unclear. To address this question, we measured those manifestations of ORE with different tasks, and tested whether the magnitudes of those OREs are related to each other. Each participant performed three tasks. (1) The original and a Chinese version of Cambridge Face Memory Tests (CFMT, Duchaine & Nakayama, 2006; McKone et al., 2012), which were used to measure the ORE in face memory. (2) A part/whole sequential matching task (Tanaka et al., 2004), which was used to measure the ORE in face perception and in holistic processing. (3) A scrambled/blurred face recognition task (Hayward et al., 2008), which was used to measure the ORE in featural and configural processing. We found a better recognition performance for own-race than other-race faces in all three tasks, confirming the existence of an ORE across various tasks. However, the size of the ORE measured in all three tasks differed; we found no correlation between the OREs in the three tasks.
More importantly, the two measures of the ORE in configural and holistic processing tasks could not account for the individual differences in the ORE in face memory. These results indicate that although face race always influences face recognition as well as configural and featural processing, different underlying mechanisms are responsible for the occurrence of the ORE for each aspect of face processing tested here.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-1262Face Race Affects Various Types of Face Processing, but Affects Them Differently1501715422FademrechtBd20147LFademrechtIBülthoffSde la RosaSt. Pete Beach, FL, USA2014-05-201006The recognition of actions is critical for human social functioning and provides insight into both the active and the inner states (e.g. valence) of another person. Although actions often appear in the visual periphery, little is known about action recognition beyond foveal vision. Related previous research showed that object recognition and object valence (i.e. positive or negative valence) judgments are relatively unaffected by presentations up to 13° visual angle (VA) (Calvo et al. 2010). This is somewhat surprising given that recognition performance of words and letters sharply declines in the visual periphery. Here participants recognized an action and evaluated its valence as a function of eccentricity. We used a large screen display that allowed presentation of stimuli over a visual field from -60 to +60° VA. A life-size stick figure avatar carried out one of six motion captured actions (3 positive actions: handshake, hugging, waving; 3 negative actions: slapping, punching and kicking). 15 participants assessed the valence of the action (positive or negative action) and another 15 participants identified the action (as fast and as accurately as possible). We found that reaction times increased with eccentricity to a similar degree for the valence and the recognition task.
In contrast, accuracy performance declined significantly with eccentricity for both tasks but declined more sharply for the action recognition task. These declines were observed for eccentricities larger than 15° VA. Thus, we replicate the findings of Calvo et al. (2010) that recognition is little affected by extra-foveal presentations smaller than 15° VA. Yet, we additionally demonstrate that visual recognition performance of actions declined significantly at larger eccentricities. We conclude that large eccentricities are required to assess the effect of peripheral presentation on visual recognition.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-1006Influence of eccentricity on action recognition1501715422delaRosaFB20147Sde la RosaGFullerHHBülthoffSt. Pete Beach, FL, USA2014-05-201005Physical interactions with other people (social interactions) are an integral part of human social life. Surprisingly, little is known about the visual processes underlying social interaction recognition. Many studies have examined visual processes underlying the recognition of individual actions and only a few examined the visual recognition of social interactions (Dittrich, 1993; de la Rosa et al. 2013, Neri et al. 2007; Manera et al. 2011a,b). An important question concerns to what degree the recognition of individual actions and social interactions share visual processes. We addressed this question in two experiments (15 participants each) using a visual adaptation paradigm in which participants saw an action (handshake or high 5) carried out by one individual (individual action) for a prolonged amount of time during the adaptation period. According to previous adaptation results, we expected that the subsequent perception of an ambiguous test stimulus (an action-morph between handshake and high 5) would be biased away from the adapting stimulus (action adaptation aftereffect (AAA)). 
Using these stimuli, participants were adapted to individual actions and tested on individual actions in experiment 1. In line with previous studies, we expected an adaptation effect in experiment 1. In experiment 2, participants were adapted to individual actions and tested on social interactions (two instead of one individual carrying out the actions of experiment 1). If social interaction recognition requires completely different or additional visual processes to the ones employed in the recognition of individual actions, we expected the AAA in experiment 2 to be absent or smaller than in experiment 1. In contrast, we found a significant AAA in both experiments (p<0.001) that did not differ across the two experiments (p=0.130). Social interaction and individual action recognition seem to be based on similar visual processes if paying attention to the interaction is not enforced.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-1005Social interaction recognition: the whole is not greater than the sum of its parts1501715422SaultonDBd20147ASaultonTDoddsHHBülthoffSde la RosaSt. Pete Beach, FL, USA2014-05-19845Stored representations of body size and shape as derived from somatosensation (body model) are considered to be critical components of perception and action. It is commonly believed that the body model can be measured using a localization task and be distinguished from other visual representations of the body using a visual template matching task. Specifically, localization tasks have shown distorted hand representations consisting of an overestimation of hand width and an underestimation of finger length [Longo and Haggard, 2010, PNAS,107 (26), 11727-11732]. In contrast, template matching tasks indicate that visual hand representations (body image) do not show such distortions [Longo and Haggard, 2012, Acta Psychologica, 141, 164-168]. 
We examined the specificity of the localization and visual template matching tasks to measure body related representations. Participants conducted a localization and template matching task with objects (box, post-it, rake) and their own hand. The localization task revealed that all items' dimensions were significantly distorted (all p <.0018) except for the width of the hand and rake. In contrast, the template matching task indicated no significant differences between the estimated and actual item's shape for all items (all p>0.05) except for the box (p<0.01) suggesting that the visual representation of items is almost veridical. Moreover, the performance across these tasks was significantly correlated for the hand and rake (p<.001). Overall, these results show that effects considered to be body-specific, i.e. distortions of the body model, are actually more general than previously thought as they are also observed with objects. Because localizing points on an object is unlikely to be aided by somatosensation, the assessed representations are unlikely to be mainly based on somatosensation but might reflect more general cognitive processes e.g. visual memory. These findings have important implications for the nature of the body image and the body model.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-845Body and objects representations are associated with similar distortions1501715422NestiBPB2014_27ANestiKBeykirchPPrettoHHBülthoffSt. Pete Beach, FL, USA2014-05-18485Whilst moving through the environment humans use vision to discriminate different self-motion intensities and to control their action, e.g. maintaining balance or controlling a vehicle. Yet, the way different intensities of the visual sensory stimulus affect motion sensitivity is still an open question. In this study we investigate human sensitivity to visually induced circular self-motion perception (vection) around the vertical (yaw) axis. 
The experiment is conducted on a motion platform equipped with a projection screen (70 x 90 degrees FoV). Stimuli consist of a realistic virtual environment (360 degrees panoramic color picture of a forest) rotating at constant velocity around the participants’ head. Visual rotations are terminated by participants only after vection arises. Vection is facilitated by mechanical vibrations of the participant’s seat. In a two-interval forced choice task, participants discriminate a reference velocity from a comparison velocity (adjusted in amplitude after every presentation) by indicating which rotation felt stronger. Motion sensitivity is measured as the smallest perceivable change in stimulus velocity (differential threshold) for 8 participants at 5 rotation velocities (5, 15, 30, 45 and 60 deg/s). Differential thresholds for circular vection increase with stimulus intensity, following a trend best described by a power law with an exponent of 0.64. The time necessary for vection to arise is significantly longer for the first stimulus presentation (average 11.6 s) than for the second (9.1 s), and does not depend on stimulus velocity. Results suggest that lower sensitivity (i.e., higher differential thresholds) for increasing velocities reflects prior expectations of small rotations, which are more common than large rotations in everyday experience. A probabilistic model is proposed that combines sensory information with prior knowledge of the expected motion in a statistically optimal fashion. Results also suggest that vection rise is facilitated by recent exposure.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-485Human self-motion sensitivity to visual yaw rotations1501715422PrettoNNLB20147PPrettoANestiSAENooijMLosertHHBülthoffSt. Pete Beach, FL, USA2014-05-17279In vehicle simulation (flight, driving), simulator tilt is used to reproduce sustained acceleration.
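The power-law relation between differential threshold and stimulus velocity reported in the yaw-vection abstract above can be sketched with a short fit. This is a minimal illustration, not the study's analysis: the constant k and the threshold data are invented, and only the exponent 0.64 and the tested velocities are taken from the abstract.

```python
import numpy as np

def fit_power_law(velocities, thresholds):
    """Fit dT = k * v**a by linear regression in log-log space."""
    a, log_k = np.polyfit(np.log(velocities), np.log(thresholds), 1)
    return float(np.exp(log_k)), float(a)

# Velocities tested in the study (deg/s); thresholds are synthetic,
# generated from the reported exponent 0.64 with an arbitrary k = 0.5.
v = np.array([5.0, 15.0, 30.0, 45.0, 60.0])
dT = 0.5 * v ** 0.64
k, a = fit_power_law(v, dT)
print(round(k, 2), round(a, 2))  # recovers k = 0.5 and a = 0.64
```

Fitting in log-log space turns the power law into a straight line, so an exponent near 1 would indicate Weber-like behavior, while the sub-unity exponent reported in the abstract implies sensitivity that degrades more slowly than Weber's law predicts.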
In order to feel realistic, this tilt is performed at a rate below the tilt-rate detection threshold, which is usually measured in darkness and assumed constant. However, it is known that many factors affect the threshold, such as visual information, simulator motion in additional directions, or active vehicle control. Since all these factors come together in vehicle simulation, we investigated the effect of each of these factors on the roll-rate detection threshold during simulated curve driving. The experiment was conducted on a motion-based driving simulator. Roll-rate detection thresholds were determined under four conditions: (i) roll only in darkness; (ii) combined roll/sway in darkness; (iii) combined roll/sway and visual information whilst passively moved through a curve; (iv) combined roll/sway and visual information whilst actively driving around a curve. For all conditions, motion was repeatedly provided and ten participants reported the detection of roll in a yes-no task. Thresholds were measured by adjusting the roll-rate saturation value according to a single-interval adjustment matrix (SIAM) at every trial. The mean detection threshold for roll rate increased from 0.7 deg/s with roll only (i) to 6.3 deg/s in active driving (iv) (mean thresholds were 3.9 deg/s and 3.3 deg/s in conditions (ii) and (iii), respectively). However, large differences between participants were observed: for some, the threshold did not increase from passive to active driving, while for others a threshold about three times higher was measured, together with a lower level of attention reported on questionnaires. We conclude that tilt-rate perception in vehicle simulation is affected by the combination of different simulator motions. Similarly, an active control task seems to increase the detection threshold for tilt rate, i.e., impair motion sensitivity.
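The single-interval adjustment matrix (SIAM) staircase used above to track roll-rate detection thresholds adjusts the stimulus level after every yes-no trial depending on the trial outcome. The sketch below is a hedged illustration of the general idea, using the commonly cited step weights for a 50%-hit target (hit −1, miss +1, false alarm +2, correct rejection 0); the step size, starting level, trial count, and simulated observer are invented, not the study's settings.

```python
import random

# SIAM adjustment weights targeting 50% hits, keyed by (signal present, "yes" response):
# a hit lowers the level (harder), a miss raises it (easier), a false alarm
# raises it twice as much to offset a liberal criterion, a correct rejection
# leaves it unchanged. These weights are an assumed standard parameterization.
STEPS = {(True, True): -1, (True, False): +1, (False, True): +2, (False, False): 0}

def run_siam(detects, start=8.0, step=0.5, trials=200, seed=0):
    """Track a roll-rate detection threshold (deg/s) with a SIAM yes-no staircase."""
    rng = random.Random(seed)
    level = start
    for _ in range(trials):
        signal = rng.random() < 0.5                 # signal on half of the trials
        response = detects(level if signal else 0.0)
        level = max(0.0, level + step * STEPS[(signal, response)])
    return level

# Hypothetical noiseless observer whose true threshold is 3.0 deg/s;
# the staircase descends from 8.0 and settles near that threshold.
final = run_siam(lambda stimulus: stimulus > 3.0)
```

With a noiseless observer the track simply oscillates one step around the true threshold; with a real observer the false-alarm penalty is what keeps the converged level unbiased.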
Results suggest that this is related to the level of attention during the task.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published-279Tilt-rate perception in vehicle simulation: the role of motion, vision and attention1501715422NoainIWSVPMLMSSMGSB20147DNoainLLImbachEWerthSRSchreglmannPOValkoMPennerMMorawskaTLiAMaricESymeonidouJStoverLMicaYVGavrilovTEScammellCRBaumannLuzern, Switzerland2014-05-16nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Increased sleep need after traumatic brain injury: A comparative behavioural and histological study in rats and humans1501715422ChangBd20147D-SChangHHBülthoffSde la RosaNew York, NY, USA2014-05-00nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Visual Adaptation to Social Actions: The Role of Meaning vs. Motion for Action Recognition1501715422NestiBPB20147ANestiKABeykirchPPrettoHHBülthoffAmsterdam, The Netherlands2014-04-00nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Human sensitivity to visual-inertial self-motion1501715422NooijPNB20147SAENooijPPrettoANestiHHBülthoffAmsterdam, The Netherlands2014-04-00nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Perception of heading and travelled path during curvilinear trajectories1501715422ReichenbachBBT20147AReichenbachJ-PBrescianiHHBülthoffAThielscherAmsterdam, The Netherlands2014-04-00nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Reaching with the sixth sense: vestibulomotor control in the human right parietal cortex15017154221501718821FladC20147NFladLLChuangGünne, Germany2014-03-00nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Setting up a high-fidelity flight simulator to study closed-loop control and physiological workload1501715422Geluardi20147SGeluardiPisa, Italy2014-01-09Congestion problems in the transportation system have led to regulators considering implementing drastic changes in methods of transportation for the general public. 
A possible solution would be to combine the best of ground-based and air-based transportation and produce a personal air transport system. This project aims to investigate the interaction between a pilot with limited flying skills and augmented vehicles that are part of the personal air transport system, and to verify whether performance similar to that of a highly trained pilot can be reached, even in dangerous or demanding environmental conditions.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Augmented Systems for Personal Air Vehicles1501715422Olivari20147MOlivariPisa, Italy2014-01-09Haptic aids have been widely used in manual control tasks to complement visual information through the sense of touch. To analytically design the haptic aid, adequate knowledge is needed about how pilots adapt their visual response and the biomechanical properties of their arm to a generic haptic aid. Two novel identification methods were proposed to estimate the pilot dynamic responses. The two methods were applied to experimental data from closed-loop control tasks with pilots, with the aim of estimating the pilot responses to different external aids. Different haptic aids were designed and tested during the experiments: a Direct Haptic Aid (DHA) and an Indirect Haptic Aid (IHA). Furthermore, an automated system was designed to be equivalent to the haptic aids when the pilot was out-of-the-loop, i.e., to provide the same control command as the haptic aid.
All the experimental conditions with the external aids were contrasted to a baseline condition without external aids.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published0Human-Centered Design of Haptic Aids for Aerial Vehicles1501715422Dobs201415KDobs2014-12-00nonotspecifiedpublishedBehavioral and Neural Mechanisms Underlying Dynamic Face Perception1501715422Rohe201415TRohe2014-12-00nonotspecifiedpublishedCausal inference in multisensory perception and the brain1501715422Kaulard201415KKaulard2014-12-00nonotspecifiedpublishedVisual perception of emotional and conversational facial expressions1501715422Piryankova201415IPiryankova2014-11-28nonotspecifiedpublishedThe influence of a self-avatar on space and body perception in immersive virtual reality150171542215017Volkova201415EVolkova2014-11-00nonotspecifiedpublishedPerception of Emotional Body Expressions in Narrative Scenarios and Across Cultures150171542215017Browatzki201415BBrowatzki2014-10-00nonotspecifiedpublishedMultimodal object perception for robotics1501715422Leyrer2014_215MLeyrer2014-10-00nonotspecifiedpublishedUnderstanding and Manipulating Eye Height to Change the User's Experience of Perceived Space in Virtual Reality150171542215017Masone201415CMasone2014-07-16nonotspecifiedpublishedPlanning and control for robotic tasks with a human-in-the-loop [Planung und Steuerung von Roboter-Mensch Systemen]1501715422Giani201415AGiani2014-05-00nonotspecifiedpublishedFrom multiple senses to perceptual awareness1501715422Venrooij201415JVenrooij2014-03-21nonotspecifiedpublishedMeasuring, modeling and mitigating biodynamic feedthrough1501715422Grabe201415VGrabe2014-03-00nonotspecifiedpublishedTowards Robust Visual-Controlled Flight of Single and Multiple UAVs in GPS-Denied Indoor Environments1501715422Bieg2014_215H-JBieg2014-02-00nonotspecifiedpublishedOn the coordination of saccades with hand and smooth pursuit eye 
movements1501715422Bulthoff2014_1141HHBülthoffBulthoff2014_1010IBülthoffBulthoff2014_810HHBülthoffBulthoff2014_910HHBülthoffNieuwenhuizenB2014_210FMNieuwenhuizenHHBülthoffKatliar201410MKatliarBulthoff2014_510HHBülthoffVenrooij2014_310JVenrooijBulthoff2014_610HHBülthoffNieuwenhuizen201410FMNieuwenhuizendelaRosa201410Sde la RosadelaRosa2014_210Sde la RosaTesch201410JTeschdelaRosa2016_210Sde la RosaMeilinger2014_210TMeilingerChangBd2014_410D-SChangHHBülthoffSde la RosaBulthoff2014_710HHBülthoffBulthoff2014_410HHBülthoffMeilingerHBM201410TMeilingerAHensonHHBülthoffHAMallotMeilinger2014_410TMeilingerChuang2014_210LChuangdelaRosa2014_310Sde la RosaChuang201410LLChuangBulthoffZ201410IBülthoffMZhaoWallraven2014_210CWallravenMeilinger2014_310TMeilingerVenrooij2014_410JVenrooijVenrooij2014_210JVenrooijBulthoff2014_310HHBülthoffSoyka201410FSoykaChiovettoCEG201410EChiovettoCCurioDEndresMGiesedelaRosaSB201410Sde la RosaSStreuberHHBülthoffdelaRosaB201410Sde la RosaHHBülthoffBreidt201410MBreidtChuangFSNB201410LLChuangNFladMScheerFMNieuwenhuizenHHBülthoffO039MalleyBM201410MO'MalleyHHBülthoffTMeilingerMeilinger201410TMeilingerKWatanabeBulthoff2014_210HHBülthoffBulthoff201410IBülthoffDobricki2014_210MDobrickidelaRosaC201410D-SChangSde la RosaDobricki201410MDobrickiBulthoffR2014HHBülthoffPRobuffo Giordano2014-01-21The invention relates to a teleoperation method and a human robot interface for remote control of a machine by a human operator (5) using a remote control unit, particularly for remote control of a drone, wherein a vestibular feedback is provided to the operator (5) to enhance the situational awareness of the operator (5), wherein the vestibular feedback represents a real motion of the remote-controlled machine.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/publishedTeleoperation method and human robot interface for remote control of a machine by a human operator1501715422Soyka20131FSoykaLogos VerlagBerlin, Germany2013-00-00Self-motion describes the motion of our body 
through the environment and is an essential part of our everyday life. The aim of this thesis is to improve our understanding of how humans perceive self-motion, mainly focusing on the role of the vestibular system. Following a cybernetic approach, this is achieved by systematically gathering psychophysical data and then describing it with mathematical models of the vestibular sensors. Three studies were performed investigating perceptual thresholds for translational and rotational motions and reaction times to self-motion stimuli. Based on these studies, a model is introduced that can describe thresholds for arbitrary motion stimuli varying in duration and acceleration profile shape. This constitutes a significant addition to the existing literature, since previous models only took into account the effect of stimulus duration, neglecting the actual time course of the acceleration profile. In the first and second studies, model parameters were identified based on measurements of direction discrimination thresholds for translational and rotational motions. These models were used in the third study to successfully predict differences in reaction times between varying motion stimuli, demonstrating the validity of the modeling approach. This work can allow for optimizing motion simulator control algorithms based on self-motion perception models and developing perception-based diagnostics for patients suffering from vestibular disorders.Tübingen, Univ., Diss.
2013nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published96A Cybernetic Approach to Self-Motion Perception1501715422SteinickeVCL20131FSteinickeYVisellJCamposALécuyerSpringerNew York, NY, USA2013-00-00nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published402Human walking in virtual environments: perception, technology, and applications15017188241501715422Streuber20131SStreuberLogos VerlagBerlin, Germany2013-00-00Humans are social beings and often act jointly with other humans (joint actions) rather than alone. Prominent theories of joint action agree that visual information is critical for successful joint action coordination but are vague about the exact source of visual information used during a joint action. Knowing which sources of visual information are used, however, is important for a more detailed characterization of how action coordination functions in joint actions.
The current Ph.D. research examines the importance of different sources of visual information for joint action coordination in realistic settings. In three studies I examined the influence of different sources of visual information (Study 1), the functional role of different sources of visual information (Study 2), and the effect of social context on the use of visual information (Study 3) in a table tennis game. The results of these studies revealed that (1) visual anticipation of the interaction partner and the interaction object is critical in natural joint actions, (2) different sources of visual information are critical at different temporal phases during the joint action, and (3) the social context modulates the importance of different sources of visual information. In sum, this work provides new empirical evidence about the importance of different sources of visual information in close-to-natural joint actions.Tübingen, Univ., Diss., 2013nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published114The Influence of Different Sources of Visual Information on Joint Action Performance1501715422MohlerRSS201328BMohlerBRaffinHSaitoOStaadtVenrooijPMvB20133JVenrooijMDPavelMMulderFCTvan der HelmHHBülthoff2013-12-0044421432Biodynamic feedthrough (BDFT) occurs when vehicle accelerations feed through the pilot’s body and cause involuntary motions of the limbs, resulting in involuntary control inputs. BDFT can severely reduce ride comfort, control accuracy and, above all, safety during the operation of rotorcraft. Furthermore, BDFT can cause and sustain rotorcraft-pilot couplings. Despite the many studies conducted in past decades, both within and outside the rotorcraft community, BDFT is still a poorly understood phenomenon. The complexities involved in BDFT have kept researchers and manufacturers in the rotorcraft domain from developing robust ways of dealing with its effects.
A practical BDFT pilot model, describing the amount of involuntary control input as a function of acceleration, could pave the way to accounting for adverse BDFT effects. In the current paper, such a model is proposed. Its structure is based on the model proposed by Mayo (15th European Rotorcraft Forum, Amsterdam, pp. 81-001–81-012, 1989), and its accuracy and usability are improved by incorporating insights from recently obtained experimental data. An evaluation of the model performance shows that the model describes the measured data well and provides a considerable improvement over the original Mayo model. Furthermore, the results indicate that the neuromuscular dynamics have an important influence on the BDFT model parameters.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published11A practical biodynamic feedthrough model for helicopters1501715422WallravenH20133CHerdtweckCWallraven2013-12-00128114We present three experiments on horizon estimation. In Experiment 1 we verify the human ability to estimate the horizon in static images from visual input alone. Estimates are given without time constraints, with an emphasis on precision. The resulting estimates are used as a baseline to evaluate horizon estimates from early visual processes. In Experiment 2, stimuli are presented only briefly and then masked to purge visual short-term memory, forcing estimates to rely on early processes only. The high agreement between estimates and the lack of a training effect show that enough information about viewpoint is extracted in the first few hundred milliseconds to make accurate horizon estimation possible.
In Experiment 3 we investigate several strategies for estimating the horizon computationally and compare human with machine “behavior” for different image manipulations and image scene types.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published13Estimation of the Horizon in Photographed Outdoor Scenes by Human and Machine1501715422DropPDVM20123FMDropDMPoolHJDamveldMMvan PaassenMMulder2013-12-0064319361949In the manual control of a dynamic system, the human controller (HC) often follows a visible and predictable reference path. Compared with a purely feedback control strategy, performance can be improved by making use of this knowledge of the reference. The operator could effectively introduce feedforward control in conjunction with a feedback path to compensate for errors, as hypothesized in the literature. However, feedforward behavior has never been identified from experimental data, nor have the hypothesized models been validated. This paper investigates human control behavior in pursuit tracking of a predictable reference signal while being perturbed by a quasi-random multisine disturbance signal. An experiment was conducted in which the relative strengths of the target and disturbance signals were systematically varied. The anticipated changes in control behavior were studied by means of an ARX model analysis and by fitting three parametric HC models: two different feedback models and a combined feedforward and feedback model. The ARX analysis shows that the experiment participants employed control action on both the error and the target signal. The control action on the target was similar to the inverse of the system dynamics.
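The control organization hypothesized in the manual-control abstract above, a feedback response to the tracking error plus a feedforward response on the target resembling the inverse of the system dynamics, can be sketched in a toy simulation. This is an illustration under invented assumptions (an integrator plant, a ramp target, an arbitrary gain), not the identified pilot model from the paper.

```python
def track(target, dt=0.1, kp=2.0, feedforward=True):
    """Simulate plant y' = u with u = kp*e plus optional inverse-plant feedforward.

    Returns the mean absolute tracking error over the run.
    """
    y, errors = 0.0, []
    for k in range(len(target) - 1):
        e = target[k] - y
        u = kp * e                                   # feedback on the error
        if feedforward:
            # inverse dynamics of an integrator plant: differentiate the target
            u += (target[k + 1] - target[k]) / dt
        y += u * dt
        errors.append(abs(e))
    return sum(errors) / len(errors)

ramp = [0.1 * k for k in range(100)]                 # predictable reference signal
err_fb = track(ramp, feedforward=False)              # feedback only: lags the ramp
err_both = track(ramp, feedforward=True)             # feedback plus feedforward
```

With feedback alone the integrator plant trails the ramp with a steady-state error, while adding the inverse-dynamics term on the known target drives the error to essentially zero, which is the benefit of feedforward on a predictable reference that the abstract describes.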
Model fits show that this behavior can be modeled best by the combined feedforward and feedback model.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published13Identification of the Feedforward Component of Manual Control in Tasks with Predictable Target Signals1501715422MichelRBHV20133CMichelBRossionIBülthoffWGHaywardQCVuong2013-12-009-102112021223Faces from another race are generally more difficult to recognize than faces from one's own race. However, faces provide multiple cues for recognition, and the relative contribution of these cues to this “other-race effect” remains unknown. In the current study, we used three-dimensional laser-scanned head models, which allowed us to independently manipulate two prominent cues for face recognition: the facial shape morphology and the facial surface properties (texture and colour). In Experiment 1, Asian and Caucasian participants implicitly learned a set of Asian and Caucasian faces that had both shape and surface cues to facial identity. Their recognition of these encoded faces was then tested in an old/new recognition task. For these face stimuli, we found a robust other-race effect: both groups were more accurate at recognizing own-race than other-race faces. Having established the other-race effect, in Experiment 2 we provided only shape cues for recognition and in Experiment 3 we provided only surface cues for recognition. Caucasian participants continued to show the other-race effect when only shape information was available, whereas Asian participants showed no effect. When only surface information was available, there was a weak pattern of the other-race effect in Asians. Performance was poor in this latter experiment, so this pattern needs to be interpreted with caution.
Overall, these findings suggest that Asian and Caucasian participants rely differently on shape and surface cues to recognize own-race faces, and that they continue to use the same cues for other-race faces, which may be suboptimal for these faces.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published21The contribution of shape and surface information in the other-race face effect1501715422Dobrickid20133MDobrickiSde la Rosa2013-12-0012819Previous research suggests that bodily self-identification, bodily self-localization, agency, and the sense of being present in space are critical aspects of conscious full-body self-perception. However, none of the existing studies have investigated the relationship of these aspects to each other, i.e., whether they can be identified as distinguishable components of the structure of conscious full-body self-perception. Therefore, the objective of the present investigation is to elucidate the structure of conscious full-body self-perception. We performed two studies in which we stroked the back of healthy individuals for three minutes while they watched the back of a distant virtual body being synchronously stroked with a virtual stick. After visuo-tactile stimulation, participants assessed changes in their bodily self-perception with a custom-made self-report questionnaire. In the first study, we investigated the structure of conscious full-body self-perception by analyzing the responses to the questionnaire by means of multidimensional scaling combined with cluster analysis. In the second study, we then extended the questionnaire and validated the stability of the structure of conscious full-body self-perception found in the first study within a larger sample of individuals by performing a principal components analysis of the questionnaire responses.
The results of the two studies converge in suggesting that the structure of conscious full-body self-perception consists of the following three distinct components: bodily self-identification, space-related self-perception (spatial presence), and agency.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published8The structure of conscious bodily self-perception during full-body illusions1501715422HeydrichDAHBMB2013_23LHeydrichTJDoddsJEAspellBHerbelinHHBülthoffBJMohlerOBlanke2013-12-009464115In neurology and psychiatry the detailed study of illusory own body perceptions has suggested close links between bodily processing and self-consciousness. One such illusory own body perception is heautoscopy, where patients have the sensation of being reduplicated and of existing at two or even more locations. In previous experiments, using a video head-mounted display, self-location and self-identification were manipulated by applying conflicting visuo-tactile information. Yet the experienced singularity of the self was not affected, i.e., participants did not experience having multiple bodies or selves. In two experiments presented in this paper, we investigated self-location and self-identification while participants saw two virtual bodies (video-generated in study 1 and 3D computer-generated in study 2) that were stroked either synchronously or asynchronously with their own body. In both experiments, we report that self-identification with two virtual bodies was stronger during synchronous stroking. Furthermore, in the video-generated setup with synchronous stroking, participants reported a greater feeling of having multiple bodies than in the control conditions. In study 1, but not in study 2, we report that self-location, measured by anterior-posterior drift, was significantly shifted towards the two bodies in the synchronous condition only.
Self-identification with two bodies, the sensation of having multiple bodies, and the changes in self-location show that the experienced singularity of the self can be studied experimentally. We discuss our data with respect to ownership for supernumerary hands and heautoscopy. Finally, we compare the effects of the video-based and 3D computer-generated head-mounted display technologies and discuss the possible benefits of using either technology to induce changes in illusory self-identification with a virtual body.nonotspecifiedhttp://www.kyb.tuebingen.mpg.de/published14Visual capture and the experience of having two bodies: evidence from two different virtual reality techniques1501715422deWinkelSBBGW20133KNde WinkelFSoykaMBarnett-CowanHHBülthoffELGroenPJWerkhoven2013-11-002231209218