Perception & Learning
Learning is the key to survival. Because the environment and the human body constantly change, the human brain needs mechanisms to adapt to these changes in order to guarantee robust perception. Here we investigate several aspects of learning as they concern human perception and action. Learning is interpreted in the framework of statistically optimal models.
Learning to integrate
When different perceptual signals of the same physical property are integrated, e.g., an object's size, which can be seen and felt, they form a more reliable sensory estimate (e.g., Ernst & Banks, 2002). This, however, implies that the sensory system already knows which signals belong together and how they relate. In other words, the system has to know the mapping between the signals. Can such a mapping between two arbitrary sensory signals from vision and touch be learned from their statistical co-occurrence such that they become integrated? In general, how adaptive is human multisensory integration, and what are the conditions under which it will occur? One of our recent publications starts answering some of these questions: Ernst, Journal of Vision (2007).
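The reliability-weighted integration scheme referenced above (Ernst & Banks, 2002) can be sketched in a few lines. This is a minimal illustration, assuming independent Gaussian noise on each cue; the function name and the example numbers are hypothetical:

```python
def integrate_cues(estimates, variances):
    """Reliability-weighted (maximum-likelihood) cue integration.

    Each cue's weight is proportional to its reliability (the inverse
    of its noise variance); the combined estimate is more reliable
    (lower variance) than either cue alone.
    """
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    combined = sum(w * s for w, s in zip(weights, estimates))
    combined_var = 1.0 / total  # always below the smallest cue variance
    return combined, combined_var

# Illustrative numbers: visual size estimate 10.0 (variance 1.0),
# haptic size estimate 12.0 (variance 4.0)
size, var = integrate_cues([10.0, 12.0], [1.0, 4.0])
# weights are 0.8 (visual) and 0.2 (haptic) -> size = 10.4, var = 0.8
```

Note that the combined variance (0.8) is lower than that of the better single cue (1.0), which is the empirical signature of integration.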
Statistical Determinants of Visuomotor Adaptation
Rapid reaching to a target is generally accurate, but contains random and systematic errors. Random error is due to noise in visual measurement, motor planning, and reach execution. Systematic error is caused by systematic changes in the mapping between the visual estimate of target location and the reach endpoint. Systematic errors occur, for example, when the visual image is distorted by new spectacles or when the reach is affected by external forces on the arm. Humans minimize systematic errors (i.e., maintain accurate reaching) by recalibrating the visuomotor system. We investigated how different sorts of error affect the recalibration rate by manipulating the statistical properties of the systematic error and the reliability with which the error could be measured. We modeled the process using an optimal predictive filter, the Kalman filter. Model and human behavior were similar: less reliable measurements decreased the recalibration rate; more variation in the systematic error increased it.
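The qualitative predictions at the end of this paragraph follow directly from the Kalman gain. The scalar sketch below (parameter values are illustrative, not the fitted model from the study) iterates the filter's variance update to steady state; the resulting gain is the fraction of each observed reach error corrected on the next trial, i.e. the recalibration rate:

```python
def kalman_gain_steady_state(process_var, measurement_var, n_iter=200):
    """Steady-state gain of a scalar Kalman filter tracking a
    drifting visuomotor offset.

    process_var: trial-to-trial variability of the systematic error
    measurement_var: noise in measuring the reach error
    """
    p = process_var  # initial uncertainty about the offset
    for _ in range(n_iter):
        p_pred = p + process_var                 # offset may drift each trial
        k = p_pred / (p_pred + measurement_var)  # Kalman gain
        p = (1.0 - k) * p_pred                   # posterior uncertainty
    return k

# Noisier error measurements -> smaller gain -> slower recalibration
slow = kalman_gain_steady_state(process_var=0.1, measurement_var=4.0)
fast = kalman_gain_steady_state(process_var=0.1, measurement_var=0.5)
# More variable systematic error -> larger gain -> faster recalibration
drifty = kalman_gain_steady_state(process_var=1.0, measurement_var=4.0)
```

The gain depends only on the ratio of process variance to measurement variance, which is why the two experimental manipulations push the recalibration rate in opposite directions.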
Perceptual Learning
Here we investigate perceptual learning in the broad sense. Generally, perceptual learning is understood as the improvement in performance on a perceptual task due to prolonged exposure to the learned signal and/or training on the task. An example would be that we are better able to discriminate a certain motion direction from others after training with that specific motion direction. But perceptual learning incorporates more than just the improvement or adjustment in the perception of signals that we are already well aware of. One question we are pursuing at the moment is to what degree a person has to be aware of the stimulus for perceptual learning to occur. Another question concerns the transfer of learning to novel situations and tasks. For details on these projects see the personal page of Loes van Dam.
Learning Cue Reliability and Bayesian Priors
To interpret complex and ambiguous input, the human visual system uses prior knowledge or assumptions about the world. We show that the 'light-from-above' prior, which is used to extract information about shape from shading, is modified in response to active experience with the scene. The resulting adaptation is not specific to the learned scene but generalizes to a different task, demonstrating that priors are constantly adapted by interactive experience with the environment. Adams et al., Nature Neuroscience (2004)
The visual system uses several signals to deduce the three-dimensional structure of the environment, including binocular disparity, texture gradients, shading, and motion parallax. Although each of these sources of information is independently insufficient to yield reliable three-dimensional structure from everyday scenes, the visual system combines them by weighting the available information; altering the weights would therefore change the perceived structure. We report that haptic feedback (active touch) increases the weight of a consistent surface-slant signal relative to inconsistent signals. Thus, the appearance of a subsequently viewed surface changes: the surface appears slanted in the direction specified by the haptically reinforced signal. Ernst et al., Nature Neuroscience (2000)
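The claim that altering cue weights changes the perceived structure can be made concrete with a toy weighted combination of two slant cues. All numbers here are illustrative, not data from the study:

```python
def perceived_slant(slant_texture, slant_disparity, w_texture):
    """Linear weighted combination of two slant cues (degrees).

    Weights sum to 1; increasing one cue's weight pulls the
    combined percept toward the slant that cue specifies.
    """
    return w_texture * slant_texture + (1.0 - w_texture) * slant_disparity

# Texture specifies 30 deg of slant, disparity specifies 10 deg.
before = perceived_slant(30.0, 10.0, w_texture=0.5)  # 20.0 deg
# If haptic feedback consistent with the texture cue raises its weight:
after = perceived_slant(30.0, 10.0, w_texture=0.7)   # 24.0 deg
```

With equal weights the percept lies midway between the cues; after reweighting, it shifts toward the haptically reinforced signal, mirroring the reported change in surface appearance.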