Michael Barnett-Cowan

Alumni of the Department Human Perception, Cognition and Action
Alumni of the Group Cybernetics Approach to Perception and Action

Main Focus

NEWS! I have moved to Western University (formerly The University of Western Ontario) in London, Ontario, Canada. Please visit my website for the latest updates and publications.

While at the MPI I led a research group and was a project leader for an FP7 EU research grant at the Max Planck Institute. My main research interests concern how information from different senses affects the perception of space and time. I am specifically interested in the vestibular (balance) system and in how information about head movement is combined with sensory information from the visual and kinesthetic systems.

Combining psychophysical, computational modeling and neuroimaging techniques, I measure perceived spatial orientation in complex multisensory environments. This approach identifies the relative influence of the vestibular, visual and kinesthetic cues that underlie spatial perception. Working with differently abled populations, this research program is dedicated to determining how different neural circuits in the normal, ageing and diseased brain analyze multisensory information, generate perceptions of the external world, make decisions, and execute movements. The ultimate goal of this research is to identify sensitive markers of disease and to test the effectiveness of therapeutic and rehabilitation efforts to combat disorientation.

I also use psychophysical and neuroimaging techniques to measure the perceived timing of multisensory events. Converging evidence from different experiments on the perceived timing of vestibular stimulation suggests that vestibular perception is slow compared to the other senses. This result is surprising given the speed with which the vestibular system detects and responds to self-motion, and it may have practical applications in calibrating virtual reality environments and vestibular prostheses.

Vestibular perception is slow

Involuntary physical responses to vestibular stimulation are very fast. The vestibulo-ocular reflex, for example, occurs approximately 20 ms after the onset of vestibular stimulation (Lorente de No, 1933). Surprisingly, despite these fast responses, reaction time (RT) to the perceived onset of vestibular stimulation occurs as late as 438 ms after galvanic vestibular stimulation, approximately 220 ms later than RTs to visual, somatosensory and auditory stimuli (Barnett-Cowan & Harris, 2009). Here we investigate why vestibular perception is slow. An initial investigation tested the hypothesis that RTs to natural vestibular stimulation (as opposed to galvanic vestibular stimulation) are also slow. Participants were passively moved forwards on a Stewart motion platform and were asked to press a button in response to the onset of physical motion. RTs to auditory and visual stimuli were also collected. RTs to physical motion occurred significantly later (by about 100 ms) than RTs to auditory and visual stimuli. Event-related potentials (ERPs) were recorded simultaneously: the onset of the vestibular ERP in both RT and non-RT trials occurred about 200 ms or more after stimulus onset, whereas the onsets of the auditory and visual ERPs occurred less than 100 ms after stimulus onset. For all stimuli, ERP onsets preceded RTs by approximately 135 ms. These results provide further evidence that vestibular perception is slow compared to the other senses and that this perceptual latency may be related to latent cortical responses to physical motion. The next phase of our investigations will assess reaction time and ERP responses to passive motion while manipulating peak velocity, acceleration and jerk. Perceived simultaneity of passive motion paired with moving visual stimuli will also be assessed.

REFERENCES:

Lorente de No R (1933) Vestibulo-ocular reflex arc. Arch Neurol Psychiat 30:245–291

Barnett-Cowan, M., H. Nolan, J. S. Butler, J. J. Foxe, R. B. Reilly and H. H. Bülthoff: Reaction time and event-related potentials to visual, auditory and vestibular stimuli. Vision Sciences Society 2010

Principal Investigator:

Collaborators: Hugh Nolan

Facilities:

Perceived object stability

Knowing an object's physical stability affects our expectations about its behaviour and our interactions with it. Objects topple over when the gravity-projected centre of mass (COM) lies outside the support area. The critical angle (CA) is the orientation at which an object is perceived to be equally likely to topple over or right itself; it is influenced by global shape information about an object's COM and by the object's orientation relative to gravity. When observers lie on their sides, the perceived direction of gravity is tilted towards the body. Here we investigate the contribution of the orientation of the body to estimates of the stability of objects. Our initial investigation tested the hypothesis that the CA of falling objects is affected by the internal representation of gravity rather than by the direction of physical gravity. Observers sat upright or lay left- or right-side-down, and viewed images of objects with different 3D mass distributions placed close to the right edge of a table in various orientations. Observers indicated whether the objects were more likely to fall back onto or off the table. The subjective visual vertical was also tested as a measure of perceived gravity. Our results show that the CA increases when lying right-side-down and decreases when lying left-side-down relative to an upright posture, consistent with observers estimating the stability of rightward-falling objects relative to perceived rather than physical gravity. The next phase of our investigations will assess the extent to which physical and perceived gravity affect the CA in the absence of visual orientation cues and when the body is placed in multiple orientations relative to gravity using the MPI CyberMotion Simulator.
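The toppling rule above (an object falls when its gravity-projected COM leaves the support area) fixes the critical angle geometrically: for an object resting on a flat surface, it is the tilt at which the COM passes over the edge of the base. A minimal sketch, with illustrative dimensions rather than the actual experimental stimuli:

```python
import math

def critical_angle_deg(com_height, base_half_width):
    """Tilt angle (deg) at which a rigid object on a flat surface is on
    the verge of toppling: the angle at which the gravity-projected
    centre of mass reaches the edge of the support area."""
    return math.degrees(math.atan2(base_half_width, com_height))

# A tall object (high COM, narrow base) topples at a smaller tilt than
# a squat one -- the geometric basis of perceived object stability.
# Dimensions in metres are hypothetical, for illustration only.
tall = critical_angle_deg(com_height=0.30, base_half_width=0.05)
squat = critical_angle_deg(com_height=0.05, base_half_width=0.05)
```

Shifting the perceived direction of gravity, as lying on one's side does, rotates the direction onto which the COM is projected, which is why the measured CA shifts with posture.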


Principal Investigator:

Collaborators:

Facilities:

Shape from shading

In environments where orientation is ambiguous, the visual system uses prior knowledge that lighting comes from above to recognize objects, reorient the body, and determine which way is up (where is the sun?). It has been shown that when observers are tilted to the side relative to gravity, the orientation of the light-from-above prior shifts to a direction intermediate between the orientations of the body, gravity and the visual surround. The contribution of ocular torsion to this shift has been acknowledged but not specifically addressed, and is what we investigate here. Our initial investigation tested the hypothesis that, when lighting direction is the only available visual orientation cue, the change in orientation of the light-from-above prior is accounted for by ocular torsion. In this experiment observers made convex-concave judgments of a central shaded disk, flanked by three similarly- and three oppositely-shaded disks. Lighting direction was tested every 15° of roll in the fronto-parallel plane. Using the MPI CyberMotion Simulator to move observers into different orientations, we tested observers when upright, supine, and tilted every 30° in roll relative to gravity. Our results show that the change of the light-from-above prior is well predicted by a sum of two sines: one consistent with predicted ocular torsion, the other consistent with an additional component varying at twice the frequency of body tilt. The next phase of our investigations will address the nature of this second component and assess the relative contribution of additional lighting cues added to the surrounding environment.
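A sum-of-two-sines model of the kind described above can be fitted by linear least squares, since A·sin(t + φ) expands into a sine and a cosine term at the same frequency. A minimal sketch with made-up shift values (the amplitudes and phase are hypothetical, not the measured data):

```python
import numpy as np

# Body-tilt angles (deg -> rad) and hypothetical shifts of the
# light-from-above prior at each tilt; illustrative values only.
tilt = np.radians(np.arange(0, 360, 30))
shift = 6.0 * np.sin(tilt) + 2.0 * np.sin(2 * tilt + 0.5)

# Sum-of-two-sines model, linearized as
#   a1*sin(t) + b1*cos(t) + a2*sin(2t) + b2*cos(2t)
X = np.column_stack([np.sin(tilt), np.cos(tilt),
                     np.sin(2 * tilt), np.cos(2 * tilt)])
coef, *_ = np.linalg.lstsq(X, shift, rcond=None)

# Amplitude of each component: the first would correspond to ocular
# torsion, the second varies at twice the body-tilt frequency.
amp1 = np.hypot(coef[0], coef[1])
amp2 = np.hypot(coef[2], coef[3])
```

Because the two frequencies are orthogonal over evenly spaced tilts, the fit cleanly separates the torsion-like component from the double-frequency component.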

REFERENCES:

Barnett-Cowan, M., M. O. Ernst and H. H. Bülthoff: “Where is the sun?” The sun is ‘up’ in the eye of the beholder. European Conference on Visual Perception 2010

Principal Investigator:

Collaborators:

Facilities:

Three Dimensional Path Integration

Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point. Humans can perform path integration based exclusively on visual, auditory, or inertial cues. However, with multiple cues present, inertial cues - particularly kinaesthetic cues - seem to dominate. Extensive work has evaluated path integration in the horizontal plane, but little is known about vertical movement. One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator with a large range of motion to assess whether path integration is similar in the horizontal and vertical planes. Using the MPI CyberMotion Simulator, we have found that observers were more likely to underestimate angle size for movement in the horizontal plane than in the vertical planes. In the frontal plane observers were more likely to overestimate angle size, while there was no such bias in the sagittal plane. Finally, observers responded more slowly when answering based on vestibular-kinaesthetic information alone (Barnett-Cowan et al., 2010; In Press). These results suggest that the neural representation of self-motion through space is asymmetrical, which may relate to the fact that humans experience movement mostly within the horizontal plane.
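The computation that inertial path integration must approximate is dead reckoning: integrating sensed acceleration twice over time to recover displacement from the starting point. A minimal sketch (a toy discrete integration, not a model of the experiment):

```python
import numpy as np

def path_integrate(acc, dt):
    """Dead reckoning: integrate sensed acceleration (N x 3 array of
    accelerations in m/s^2) twice over time steps of dt seconds to
    estimate displacement from the starting point."""
    vel = np.cumsum(acc * dt, axis=0)   # first integral: velocity
    pos = np.cumsum(vel * dt, axis=0)   # second integral: position
    return pos[-1]

# Constant 1 m/s^2 forward acceleration for 2 s -> roughly 2 m
# travelled along the first axis.
acc = np.tile([1.0, 0.0, 0.0], (2000, 1))
end = path_integrate(acc, dt=0.001)
```

Any bias in how acceleration is sensed in a given plane propagates through both integrals, so plane-dependent sensory asymmetries would produce exactly the kind of plane-dependent angle errors reported above.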

REFERENCES:

Barnett-Cowan M, Meilinger T, Vidal M & Bülthoff HH (2010) Path navigation in the third dimension. Journal of Vestibular Research 20: 282-283

Barnett-Cowan M, Meilinger T, Vidal M, Teufel H & Bülthoff HH (In Press) MPI CyberMotion Simulator: Implementation of a novel motion simulator to investigate path integration in three dimensions. Journal of Visualized Experiments

Principal Investigator:

Collaborators:

Facilities:

The Investigation of Perceptual Thresholds

We want to determine how to move people without their perceiving it, because this would help us drive our simulator back to a neutral position without the pilot noticing. An open question is how perceptual thresholds depend on the specific motion profile. Using the MPI CyberMotion Simulator, we are investigating how the rate of change of acceleration, called jerk, influences perceptual thresholds and the perceived onset of self-motion.
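Jerk is simply the time derivative of acceleration, so the two can be manipulated independently by stretching or compressing a motion profile. A minimal sketch using a raised-cosine acceleration profile (an assumed smooth profile for illustration, not necessarily the one used in these experiments):

```python
import numpy as np

def raised_cosine_accel(peak_acc, duration, n=1000):
    """Raised-cosine acceleration profile, a(t) = A/2 * (1 - cos(2*pi*t/T)),
    a common choice for smooth passive-motion stimuli. Returns time,
    acceleration, and jerk (the numerical time derivative of acceleration).
    """
    t = np.linspace(0.0, duration, n)
    acc = 0.5 * peak_acc * (1.0 - np.cos(2.0 * np.pi * t / duration))
    jerk = np.gradient(acc, t)
    return t, acc, jerk

# Same peak acceleration but half the duration -> twice the peak jerk:
# the kind of decoupling needed to isolate jerk's effect on thresholds.
_, _, j_slow = raised_cosine_accel(peak_acc=0.5, duration=2.0)
_, _, j_fast = raised_cosine_accel(peak_acc=0.5, duration=1.0)
```

For this profile the peak jerk is A·π/T, so halving the duration at fixed peak acceleration doubles peak jerk while leaving the acceleration ceiling untouched.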

Principal Investigator:

Collaborators:

Facilities:

The Integration of Visual and Vestibular Sensory Information

In collaboration with scientists from TNO, Soesterberg (Netherlands), we investigated whether humans integrate visual and vestibular information in a statistically optimal fashion when discriminating rotational self-motion stimuli. Participants were consecutively rotated twice (2 s sinusoidal acceleration) on a chair about an earth-vertical axis in vestibular-only, visual-only and visual-vestibular trials. The task was to report which rotation was perceived as faster, and just-noticeable differences (JNDs) were estimated by fitting psychometric functions. Predictions for the visual-vestibular JNDs were calculated from the unisensory JND measurements using optimal integration theory.

The measured visual-vestibular JNDs were higher than predicted, and there was no JND reduction between the visual-vestibular and visual-alone estimates. These findings may be explained by visual capture. Alternatively, visual precision may not be equal between the visual-vestibular and visual-alone conditions, since visual motion sensitivity has been shown to be reduced during inertial self-motion. Measuring visual-alone JNDs with an underlying uncorrelated inertial motion might therefore yield higher visual-alone JNDs than the stationary measurement. Theoretical calculations show that higher visual-alone JNDs would result in predictions consistent with the measured JNDs in the visual-vestibular condition.
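Under optimal (maximum-likelihood) integration, unisensory variances combine reciprocally, so the predicted bimodal JND is never larger than the better unisensory JND. A minimal sketch of the prediction, with hypothetical JND values:

```python
def predicted_bimodal_jnd(jnd_visual, jnd_vestibular):
    """Maximum-likelihood cue integration: since JND is proportional
    to the noise s.d., the predicted bimodal JND is
    sqrt(1 / (1/jv**2 + 1/jves**2)), which is always at most the
    smaller of the two unisensory JNDs."""
    return (jnd_visual ** -2 + jnd_vestibular ** -2) ** -0.5

# Hypothetical unisensory JNDs (e.g. in deg/s); illustrative only.
jnd_vv = predicted_bimodal_jnd(2.0, 3.0)
```

This is why the absence of any JND reduction in the bimodal condition is diagnostic: optimal integration necessarily predicts some improvement over the visual-alone estimate unless the visual-alone noise is itself larger than the stationary measurement suggests.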

REFERENCES:

Soyka F, de Winkel K, Barnett-Cowan M, Groen E, Bülthoff HH (2011) Integration of visual and vestibular information used to discriminate rotational self-motion, 12th International Multisensory Research Forum (IMRF 2011), Fukuoka, Japan, i-Perception, 2(8) 855.

de Winkel K, Soyka F, Barnett-Cowan M, Groen E, Bülthoff HH (2011) Multisensory integration in the perception of self-motion about an Earth-vertical yaw axis, 34th European Conference on Visual Perception (ECVP 2011), Toulouse, France, Perception, 40(ECVP Abstract Supplement) 183.

Principal Investigator:

Collaborators: Eric Groen

Facilities:

Roll rate thresholds and perceived realism in driving simulation

Active driving simulation provides a variety of visual and vestibular cues as well as demands on attention that vary with task difficulty. It is thus important to measure vestibular perceptual thresholds in conditions that closely resemble typical driving simulation, to determine how different sensory and cognitive factors contribute to the sensation of realistic driving. Knowing the relative contribution of these components will lead to better-optimized driving simulation.

Principal Investigator:

Collaborators:

Facilities:
