During natural behavior, much of the motion signal entering our eyes is caused by our own movements. In order to correctly perceive motion in the environment, the visual system must therefore parse visual motion signals into those due to self-motion and those due to external motion. To do so, it needs to combine retinal signals with extraretinal signals, such as eye movements, in order to achieve a stable percept of the environment. Using functional magnetic resonance imaging (fMRI) and human psychophysics, I investigate the functional involvement of known and novel brain regions in various aspects of visual motion perception. The central question of my current work is how the visual system, and thereby various brain regions, manages to disentangle self-motion from objective (object) motion by processing and integrating self-induced and externally induced visual signals. To address this question, in a series of human fMRI experiments I systematically investigated various visual and motion-responsive regions and their responses to three motion cues: objective ('real') motion, smooth pursuit eye movements, and retinal motion, both in the context of 2D planar translation and in the context of 3D visual flow.
1) Investigation of brain regions involved in self-induced planar visual motion
Many visual areas are responsive to visual motion. However, comparatively little is known about the degree to which these regions distinguish between retinal motion and real, objective motion. The two motion cues are frequently combined in real-life situations during visual pursuit, when self-induced motion and real motion combine into a summed retinal motion signal. In this study we used a paradigm that combined physical planar motion with pursuit in such a way that responses to objective as well as to retinal motion could be separated without confounds related to eye movements. We analyzed responses in the individually localized areas V3A, V3B, V5/MT, MST, V6 and VPS, and additionally examined voxel-wise responses across the whole brain.
2) Predictive coding in the context of retinal motion and eye-movements
Predictive coding describes a model of visual processing in which lower-order visual areas function as residual-error detectors, signaling the difference between an input signal and its statistical prediction derived from top-down feedback from higher-order visual areas. This suggests a hierarchical cycle in which top-down information modulates lower-level estimates while bottom-up signals concurrently update higher-level estimates (Rao and Ballard, 1999). Various studies have supported this account, showing that activity in early visual area V1 increases for unpredictable stimuli such as random/incoherent motion when compared to coherent motion (McKeefry et al., 1997, Braddick et al., 2001, Bartels et al., 2008). However, a higher response in V1 to incoherent motion is also compatible with alternative accounts, e.g. a preference for differential motion. In line with predictive coding theory, Murray et al. (2002) showed that when bars move in a predictable manner, for example forming a diamond shape, the shape-processing lateral occipital complex (LOC) increased its responses while area V1 decreased its activity, compared to non-predictable configurations. Similarly, a recent study suggested that predictable motion directions or motion onsets lead to smaller responses in V1 than unpredictable ones (Alink et al., 2010). In this study we examine whether the parafoveal activity observed at the occipital poles during random motion is also compatible with a predictive coding account in the context of the processing of self-induced visual motion.
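The residual-error logic of this scheme can be illustrated with a minimal numerical sketch of the Rao-and-Ballard-style update: a lower level signals the mismatch between the input and a top-down prediction, and the higher level adjusts its estimate to reduce that mismatch. The dimensions, weights, and learning rate below are illustrative assumptions and do not correspond to any quantity in the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(size=(16, 4))   # generative weights: higher-level causes -> predicted input
x = rng.normal(size=16)        # bottom-up input (e.g. a retinal motion signal)
r = np.zeros(4)                # higher-level estimate (top-down cause)

lr = 0.02
for _ in range(300):
    prediction = W @ r         # top-down prediction of the input
    error = x - prediction     # residual error signaled by the lower area
    r += lr * (W.T @ error)    # higher area updates its estimate to reduce the error

# What remains is the part of the input the higher level cannot predict;
# a predictable input therefore leaves only a small lower-level signal.
residual = np.linalg.norm(x - W @ r)
```

In this toy picture, the increased V1 response to incoherent motion corresponds to a large `residual` (no higher-level cause accounts for the input), whereas coherent or otherwise predictable motion is largely explained away by the top-down prediction.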
In future experiments, we plan to combine TMS with fMRI in order to test for connectivity and the causal involvement of various motion-sensitive regions such as V3A and V6.
Alink A, Schwiedrzik CM, Kohler A, Singer W, Muckli L (2010) Stimulus predictability reduces responses in primary visual cortex. J Neurosci 30:2960-2966.
Bartels A, Zeki S, Logothetis NK (2008) Natural vision reveals regional specialization to local motion and to contrast-invariant, global flow in the human brain. Cereb Cortex 18:705-717.
Braddick OJ, O'Brien JM, Wattam-Bell J, Atkinson J, Hartley T, Turner R (2001) Brain areas sensitive to coherent visual motion. Perception 30:61-72.
McKeefry DJ, Watson JDG, Frackowiak RSJ, Fong K, Zeki S (1997) The activity in human areas V1/V2, V3 and V5 during the perception of coherent and incoherent motion. NeuroImage 5:1-12.
Murray SO, Kersten D, Olshausen BA, Schrater P, Woods DL (2002) Shape perception reduces activity in human primary visual cortex. Proc Natl Acad Sci U S A 99:15164-15169.