Contact

Dr. Cesare Valerio Parise

E-Mail: Cesare.Parise

 


Cesare Valerio Parise

Position: Scientist  Department: Alumni Ernst

The physical properties of the distal stimuli activating our senses are often correlated in nature. Stimulus correlations can be contingent and readily available to the senses (like the temporal correlation between mouth movements and vocal sounds in speech), or can be the result of the statistical co-occurrence of certain stimulus properties that can be learnt over time (like the relation between the frequency of acoustic resonance and the size of the resonator). My research, funded by the Bernstein Center for Computational Neuroscience, focuses on the role of signal compatibility as a modulatory factor for multisensory integration and interaction.

 

Selected publications

• Parise CV and Spence C (in press) Audiovisual crossmodal correspondences. In: Oxford Handbook of Synaesthesia, (Ed) J. Simner, Oxford University Press.

• Parise CV, Spence C and Ernst MO (2012) When Correlation Implies Causation in Multisensory Integration. Current Biology 22(1) 46-49.

• Parise CV and Pavani F (2011) Evidence of sound symbolism in simple vocalizations. Experimental Brain Research 214(3) 373-380.

• Parise CV and Spence C (2009) "When Birds of a Feather Flock Together": Synesthetic Correspondences Modulate Audiovisual Integration in Non-Synesthetes. PLoS ONE 4(5) 1-7.

 

The full list of publications appears below.

 

In the media

Max Planck Institute news When correlation implies causation
BBC news People may be able to taste words
University of Oxford news The synaesthete in all of us.
Suite101.com People Can Hear Shapes and Taste Words
BCCN news What you say is what you see
Moebius, Radio 24 Quello strano comportamento umano che chiamiamo sinestesia [That strange human behavior we call synesthesia]
POPSCI
CSMonitor Language that is in good taste
Le Scienze Web News Quel che dici è quel che vedi [What you say is what you see] (1)
Le Scienze Web News Quel che dici è quel che vedi [What you say is what you see] (2)
Le Scienze Web News Quando chi si somiglia si piglia: associazioni sinestetiche in non sinesteti [When birds of a feather flock together: synesthetic associations in non-synesthetes]

 

Education

• 2012 D.Phil. in Experimental Psychology, University of Oxford

• 2006 Laurea in General and Experimental Psychology, University of Milano-Bicocca

 

Academic appointments

• since 2011 Visiting postdoctoral scientist, University of Bielefeld

• since 2010 Research scientist, Max Planck Institute for Biological Cybernetics

• 2009-2010 Research fellow, University of Trento

 

Awards

• 2013 Best Ph.D. dissertation awarded by the Italian Association of Psychology

• 2011 Student award at the International Multisensory Research Forum

Talks (15):

Parise C and Ernst M (June-12-2014) Abstract Talk: What is the origin of cross-modal correspondences?, 15th International Multisensory Research Forum (IMRF 2014), Amsterdam, The Netherlands.
There are many seemingly arbitrary associations between different perceptual properties across modalities, such as the frequency of a tone and spatial elevation, or the color of an object and temperature. Such associations are often termed crossmodal correspondences, and they represent a hallmark of human and animal perception. The pervasiveness of crossmodal correspondences, however, is at odds with their apparent arbitrariness: why encode arbitrary mappings across sensory attributes in such a consistent manner? Aren't they misleading unless they represent some fundamental properties of the world around us? Over the last few years, a number of studies have demonstrated that crossmodal correspondences are not arbitrary at all: they faithfully represent the statistics of natural scenes, which can be learnt over time and exploited to better process multisensory information. Here, we provide an overview of the most recent evidence, with a particular emphasis on the mapping between auditory pitch and spatial elevation, a celebrated case of crossmodal correspondence whose apparent arbitrariness has baffled neuroscientists for more than a century. By combining a direct measurement of environmental statistics with bioacoustics, psychophysics, and Bayesian modeling, we have recently shown that this mapping is not only already present in the environment: it is also directly encoded in the bioacoustics, that is, in the shape of the human outer ear. Taken together, current evidence calls for a thorough characterization of the environmental statistics as the most critical step towards a comprehensive understanding of the origins and the roles of cross-modal correspondences in perception, cognition, and action.

Parise CV (September-18-2013) Abstract Talk: Signal compatibility as a modulatory factor for audiovisual multisensory integration, XIX Congresso di Psicologia sperimentale (AIP 2015), Rome, Italy.
The physical properties of the signals activating our senses are often correlated in nature; it would therefore be advantageous to exploit such correlations to better process sensory information. Stimulus correlations can be contingent and readily available to the senses (like the temporal correlation between mouth movements and vocal sounds in speech), or can be the result of the statistical co-occurrence of certain stimulus properties that can be learnt over time (like the relation between the frequency of acoustic resonance and the size of the resonator). Over the last century, a large body of research on multisensory processing has demonstrated the existence of compatibility effects between individual features of stimuli from different sensory modalities. Such compatibility effects, termed crossmodal correspondences, possibly reflect the internalization of the natural correlation between stimulus properties. The present dissertation assesses the effects of crossmodal correspondences on multisensory processing and reports a series of experiments demonstrating that crossmodal correspondences influence the processing rate of sensory information, distort perceptual experiences and lead to stronger multisensory integration. After a brief introduction to the topic of multisensory processing and crossmodal correspondence in Chapter 1, the literature on crossmodal correspondences is critically reviewed in Chapter 2. Based on a large body of research on crossmodal correspondences accumulated over more than a century, an inventory of the defining features of crossmodal correspondence is provided. Next, a taxonomy of crossmodal correspondence is developed. Finally, the literature on the effects of audiovisual correspondence on human information processing is reviewed. In Chapter 3, novel evidence for the effect of crossmodal correspondences on the speed and accuracy of human behavior is presented. A number of well-known examples of crossmodal correspondence, including the Mil-Mal effect, the Takete-Maluma effect, and the correspondence between auditory pitch and visual size, are investigated using a modified version of the Implicit Association Test (IAT). Moreover, evidence is provided for two new crossmodal correspondences, namely the association between pitch and size of angles, and between the waveform of auditory signals and the roundedness of visual shapes. In Chapter 4, psychophysical evidence is presented that crossmodal correspondences operate on a perceptual level, and systematically distort perceptual experiences. Human observers sometimes find it easier to judge the temporal order in which two visual stimuli have been presented if one sound is presented before the first visual stimulus and a second sound is presented after the second visual stimulus. This phenomenon has been termed temporal ventriloquism. A manipulation of the crossmodal congruency between the visual and the auditory stimuli revealed a systematic modulation of the magnitude of this perceptual effect: Temporal sensitivity was higher for pairs of congruent auditory and visual stimuli than for incongruent pairs of stimuli. These results therefore provide the first empirical evidence that crossmodal correspondences operate on a perceptual level, and systematically distort perceptual experiences. In Chapter 5, a series of psychophysical experiments showing that crossmodal correspondences modulate multisensory integration is described.
Observers were presented with pairs of asynchronous or spatially discrepant visual and auditory stimuli that were either crossmodally congruent or incongruent, and had to report the relative temporal order of presentation or the relative spatial locations of the two stimuli. Sensitivity to spatial and temporal offsets between auditory and visual stimuli was lower for pairs of congruent as compared to incongruent audiovisual stimuli. Recent studies of multisensory integration have demonstrated that the reduced reliability of perceptual estimates regarding intersensory conflicts constitutes the marker of a stronger coupling between unisensory signals. These results therefore indicate a stronger coupling of congruent vs. incongruent stimuli and provide the first psychophysical evidence that crossmodal correspondences promote multisensory integration. In Chapter 6, an experiment investigating the role of the similarity of the temporal structure of visual and auditory signals for multisensory integration is presented. Inferring which signals have a common underlying cause, and hence should be integrated (i.e., solving the correspondence problem), is a primary challenge for a perceptual system dealing with multiple sensory inputs. Here the role of correlation between the temporal structures of auditory and visual signals in causal inference is explored. Specifically, it is tested whether correlated signals are inferred to originate from the same event and hence integrated optimally. In a pointing task with visual, auditory, and combined audiovisual targets, the improvement in precision for combined relative to unimodal targets was statistically optimal only when the audiovisual signals were correlated. These results therefore demonstrate for the first time that humans use the similarity in the temporal structure of multiple sensory signals to solve the crossmodal correspondence problem, hence inferring causation from correlation. In Chapter 7, a Bayesian framework is proposed to interpret the present results whereby stimulus correlations, represented in the prior distribution of expected crossmodal co-occurrence, operate as cues to solve the correspondence problem, that is, to bind those signals that likely originate from the same environmental source while keeping separate those signals that likely belong to different objects/events. Finally, the findings of the present thesis are interpreted in the light of multisensory perceptual learning and development, and the relation between crossmodal correspondences and synesthesia is thoroughly discussed. In spite of a century of research, the role of signal correlation in multisensory processing was largely unknown. Taken together, the present results demonstrate for the first time that human observers exploit the statistical correlation between multiple signals to solve the multisensory correspondence problem, and to more effectively process multisensory information.
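
For reference, the "statistically optimal" benchmark invoked in this abstract is the standard maximum-likelihood cue-combination scheme; the following is a sketch in standard notation (our gloss, not a formula quoted from the thesis), in which each unisensory estimate is weighted by its reliability and the combined estimate is never more variable than either cue alone:

```latex
% Reliability-weighted (MLE) combination of the auditory and visual
% estimates \hat{S}_A and \hat{S}_V with variances \sigma_A^2, \sigma_V^2.
\hat{S}_{AV} = w_A \hat{S}_A + w_V \hat{S}_V,
\qquad
w_i = \frac{1/\sigma_i^2}{1/\sigma_A^2 + 1/\sigma_V^2},
\qquad
\sigma_{AV}^2 = \frac{\sigma_A^2 \sigma_V^2}{\sigma_A^2 + \sigma_V^2}
```

The variance reduction in the last expression is also why reduced reliability of conflict estimates is taken as the marker of stronger coupling between the unisensory signals.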

Parise CV and Ernst M (August-29-2013) Abstract Talk: Multisensory mechanisms for perceptual disambiguation: A classification image study on the stream-bounce illusion, 36th European Conference on Visual Perception (ECVP 2013), Bremen, Germany, Perception, 42(ECVP Abstract Supplement) 239.
Sensory information is inherently ambiguous, and observers must resolve such ambiguity to infer the actual state of the world. Here, we take the stream-bounce illusion as a tool to investigate disambiguation from a cue-integration perspective, and explore how humans gather and combine sensory information to resolve ambiguity. In a classification task, we presented two bars moving in opposite directions along the same trajectory, meeting at the centre. Observers classified such ambiguous displays as streaming or bouncing. Stimuli were embedded in audiovisual noise to estimate the perceptual templates used for the classification. Such templates, the classification images, describe the spatiotemporal noise properties selectively associated with either percept. Results demonstrate that audiovisual noise strongly biased perception. Computationally, observers' performance is well explained by a simple model involving a matching stage, where the sensory signals are cross-correlated with the internal templates, and an integration stage, where matching estimates are linearly combined. These results reveal analogous integration principles for categorical stimulus properties (stream/bounce decisions) and continuous estimates (object size, position, etc.). Finally, the time-course of the templates reveals that most of the decisional weight is assigned to information gathered before the crossing of the stimuli, thus highlighting a predictive nature of perceptual disambiguation.

Parise CV and Ernst MO (June-6-2013) Abstract Talk: Multisensory mechanisms for perceptual disambiguation: A classification image study on the stream-bounce illusion, 14th International Multisensory Research Forum (IMRF 2013), Jerusalem, Israel.
Sensory information is inherently ambiguous, and a given signal can in principle correspond to infinite states of the world. A primary task for the observer is therefore to disambiguate sensory information and accurately infer the actual state of the world. Here, we take the stream-bounce illusion as a tool to investigate perceptual disambiguation from a cue-integration perspective, and explore how humans gather and combine sensory information to resolve ambiguity. In a classification task, we presented two bars moving in opposite directions along the same trajectory, meeting at the centre. We asked observers to classify such ambiguous displays as streaming or bouncing. Stimuli were embedded in dynamic audiovisual noise, so that through a reverse correlation analysis we could estimate the perceptual templates used for the classification. Such templates, the classification images, describe the spatiotemporal statistical properties of the noise which are selectively associated with either percept. Our results demonstrate that the features of both visual and auditory noise, and interactions thereof, strongly biased the final percept towards streaming or bouncing. Computationally, participants' performance is explained by a model involving a matching stage, where the perceptual systems cross-correlate the sensory signals with the internal templates, and an integration stage, where matching estimates are linearly combined to determine the final percept. These results demonstrate that observers use analogous MLE-like integration principles for categorical stimulus properties (stream/bounce decisions) as they do for continuous estimates (object size, position, etc.). Finally, the time-course of the classification images reveals that most of the decisional weight for disambiguation is assigned to information gathered before the physical crossing of the stimuli, thus highlighting a predictive nature of perceptual disambiguation.
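
The two-stage model described in this abstract lends itself to a compact computational summary. Below is a minimal sketch (illustrative only; the function names, the normalization, and the equal default weights are our assumptions, not the authors' code) of a matching stage that cross-correlates each signal with an internal template and an integration stage that linearly combines the match estimates:

```python
import numpy as np

def match(signal, template):
    """Matching stage: peak of the normalized cross-correlation
    between a noisy sensory signal and an internal template."""
    s = (signal - signal.mean()) / (signal.std() * len(signal))
    t = (template - template.mean()) / template.std()
    return np.correlate(s, t, mode="full").max()

def classify(visual, auditory, templates, w_v=0.5, w_a=0.5):
    """Integration stage: linearly combine the visual and auditory match
    estimates for each percept and report the better-supported one."""
    evidence = {
        label: w_v * match(visual, templates[label]["visual"])
             + w_a * match(auditory, templates[label]["auditory"])
        for label in ("stream", "bounce")
    }
    return max(evidence, key=evidence.get)
```

In this form, the "MLE-like" aspect would amount to setting w_v and w_a in proportion to the reliability of each modality's match estimate.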

Parise CV (April-2013) Invited Lecture: Metaphors in the ear and in the world, Meeting of the Experimental Psychology Society (EPS 2013), Lancaster, UK.

Dwarakanath A, Parise C, Hartcher-O'Brien J and Ernst M (November-2012) Abstract Talk: Motion parallax serves as an independent cue in sound source disambiguation, 13th Conference of the Junior Neuroscientists of Tübingen (NeNA 2012): Science and Education as Social Transforming Agents, Schramberg, Germany.
In the absence of dominant cues to the distance of a sound source from the observer, estimating absolute or relative distance becomes difficult. Motion parallax may contribute to this estimation; however, its role as an independent cue has not yet been investigated. To address this issue, we designed an experiment in which the distance of the sound source varied logarithmically along the observer's depth plane, distance-related loudness cues were eliminated using perceptual loudness equalization, and subjects moved laterally to and fro while the sounds were generated in three conditions: simultaneous playback, sequential playback, and simultaneous playback of phase-interrupted sounds. With sequential presentation of the low and high sounds, subjects showed a substantial improvement in distance estimates relative to the baseline static condition. Improvement was also observed for the simultaneous phase-interrupted sound condition. Here we demonstrate for the first time the existence of auditory motion parallax from lateral self-motion and show that it aids distance estimation of sound position. Interestingly, a bias to perceive low-frequency sounds as farther away was also observed. Auditory depth perception is improved by lateral observer motion, which alters the inter-aural difference cues available.

Parise CV (June-20-2012): Crossmodal correspondences, 13th International Multisensory Research Forum (IMRF 2012), Oxford, UK, Seeing and Perceiving, 25(0) 68.
For more than a century now, researchers have acknowledged the existence of crossmodal congruency effects between dimensions of sensory stimuli in the general (i.e., non-synesthetic) population. Such phenomena, known by a variety of terms including 'crossmodal correspondences', involve individual stimulus properties, rely on a crossmodal mapping of unisensory features, and appear to be shared by the majority of individuals. Over the last few years, a number of studies have shed light on many key aspects of crossmodal correspondences, ranging from their role in multisensory integration, their developmental trajectories, their occurrence in non-human mammals, their neural underpinnings and the role of learning. I will present a brief overview of the latest findings on crossmodal correspondences, highlight standing questions and provide directions for future research.

Spence C, Deroy O and Parise CV (December-12-2011) Abstract Talk: Crossmodal correspondences and Synaesthesia: Similarities and Differences, Redefining Synesthesia Symposium, London, UK.

Parise CV (October-24-2011) Invited Lecture: The role of signals’ correlation in multisensory integration, Kyoto University, Kyoto, Japan.
The physical properties of the distal stimuli activating our senses are often correlated in nature; it would therefore be advantageous to exploit such correlations to better process sensory information. Stimulus correlations can be contingent and readily available to the sensory systems (like the temporal correlation between mouth movements and vocal sounds in speech), or can be the result of the statistical co-occurrence of certain stimulus properties that can be learnt over time (like the relation between the frequency of acoustic resonance and the size of the resonator). Over the last century, a large body of research on multisensory processing has demonstrated the existence of compatibility effects between individual features of stimuli presented in different sensory modalities. Such compatibility effects, termed crossmodal correspondences, possibly reflect the internalization of the natural correlation between stimulus properties. During this talk, I will assess the effects of crossmodal correspondences on multisensory processing and report experiments demonstrating that crossmodal correspondences influence the processing rate of sensory information, distort perceptual experiences and lead to stronger multisensory integration. Moreover, a final experiment will be described investigating the effects of contingent signal correlation on multisensory processing, the results of which demonstrate the key role that temporal correlation plays in inferring whether or not two signals have a common physical cause (i.e., the correspondence problem). A Bayesian framework is proposed to interpret the present results whereby stimulus correlations, represented in the prior distribution of expected crossmodal co-occurrence, operate as cues to solve the correspondence problem.
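
As a compact rendering of such a framework (our own gloss in standard causal-inference notation, not a formula from the talk), the posterior probability that auditory and visual measurements share a common cause weighs the likelihood of their observed correlation against the prior on crossmodal co-occurrence:

```latex
% Posterior probability that the measurements x_A and x_V share a common
% cause C; the prior P(C) encodes the expected crossmodal co-occurrence.
P(C \mid x_A, x_V)
  = \frac{P(x_A, x_V \mid C)\,P(C)}
         {P(x_A, x_V \mid C)\,P(C) + P(x_A, x_V \mid \neg C)\,\bigl(1 - P(C)\bigr)}
```

Signals with highly correlated temporal structure make the common-cause likelihood large, so they tend to be bound and integrated, while uncorrelated signals are kept separate.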

Spence C, Parise CV and Deroy O (October-19-2011) Abstract Talk: Crossmodal correspondences, 12th International Multisensory Research Forum (IMRF 2011), Fukuoka, Japan, i-Perception, 2(8) 887.
In many everyday situations, our senses are bombarded by numerous different unisensory signals at any given time. In order to gain the most veridical, and least variable, estimate of environmental stimuli/properties, we need to combine the individual noisy unisensory perceptual estimates that refer to the same object, while keeping those estimates belonging to different objects or events separate. How, though, does the brain ‘know’ which stimuli to combine? Traditionally, researchers interested in the crossmodal binding problem have focused on the role that spatial and temporal factors play in modulating multisensory integration. However, crossmodal correspondences between various unisensory features (such as between auditory pitch and visual size) may provide yet another important means of constraining the crossmodal binding problem. A large body of research now shows that people exhibit consistent crossmodal correspondences between many stimulus features in different sensory modalities. So, for example, people will consistently match high-pitched sounds with small, bright objects that are located high up in space. In this talk, the latest literature is reviewed. We will argue that crossmodal correspondences need to be considered alongside semantic and spatiotemporal congruency, among the key constraints that help our brains to solve the crossmodal binding problem. Crossmodal correspondences will also be distinguished from synaesthesia.

Parise CV and Spence C (September-2009) Abstract Talk: 'Quando chi si somiglia si piglia': corrispondenze sinestetiche modulano l'integrazione audiovisiva in soggetti non-sinesteti ["When birds of a feather flock together": synesthetic correspondences modulate audiovisual integration in non-synesthetes], Congresso Nazionale della Sezione Sperimentale dell'Associazione Italiana di Psicologia (AIP 2009), Chieti, Italy.

Spence CJ, Navarra J, Vatakis A, Hartcher-O'Brien J and Parise CV (August-2009) Abstract Talk: The multisensory perception of synchrony, 32nd European Conference on Visual Perception, Regensburg, Germany, Perception, 38(ECVP Abstract Supplement) 113.
The last few years have seen a rapid growth of interest in issues related to the temporal aspects of multisensory perception. We will highlight recent research that has investigated people's sensitivity to temporal asynchrony for both simple (eg, beeps, flashes, punctuate touch, laser pain) and more complex stimuli (eg, speech, music, object action video clips) using both simultaneity and temporal order judgment tasks. We will review some of the latest findings to have emerged from our laboratory looking at how the brain responds (ie, adapts) to various kinds of on-going asynchronous stimulation (again using both simple and complex stimuli). Recent findings demonstrating the effect of the "unity effect" on multisensory temporal perception will be outlined, as will research showing that synesthetic correspondences can modulate multisensory integration (both temporal and spatial) in normal participants.

Parise CV and Spence C (July-2009) Abstract Talk: "When Birds of a Feather Flock Together": Synesthetic Correspondences Modulate Audiovisual Integration in Non-Synesthetes, 10th International Multisensory Research Forum (IMRF 2009), New York, NY, USA.
Synesthesia is a condition in which the stimulation of one sensory modality elicits an additional experience, often in a different (i.e., unstimulated) sense. Although only a small proportion of the population is synesthetic, there is growing evidence to suggest that neurocognitively-normal individuals also experience some form of synesthetic association between the stimuli presented to different sensory modalities (e.g., between auditory pitch and visual size, where lower frequency tones are associated with large objects and higher frequency tones with small objects). While previous research has highlighted crossmodal interactions between synesthetically corresponding dimensions, the possible role of synesthetic associations in multisensory integration has not been considered previously. Here we investigate the effects of synesthetic associations by presenting pairs of temporally asynchronous or spatially discrepant visual and auditory stimuli that were either synesthetically matched or mismatched. In a series of three psychophysical experiments, involving unspeeded two-alternative forced-choice discrimination tasks, participants reported the relative temporal order of presentation or the relative spatial locations of the two stimuli. The results showed that the reliability of non-synesthetic participants' estimates of both audiovisual temporal asynchrony and spatial discrepancy was lower for pairs of synesthetically matched (e.g., high-pitched tones and small visual stimuli) as compared to synesthetically mismatched (e.g., high-pitched tones and large visual stimuli) audiovisual stimuli. Recent studies of multisensory integration have shown that the reduced reliability of perceptual estimates regarding intersensory conflicts constitutes the marker of a stronger coupling between the unisensory signals. These results therefore indicate a stronger coupling between synesthetically matched vs. mismatched stimuli and provide the first psychophysical evidence that synesthetic congruency can promote multisensory integration. Synesthetic crossmodal correspondences therefore appear to play a crucial (if unacknowledged) role in the multisensory integration of auditory and visual information.

Spence C, Parise CV and Chen Y-C (July-2009) Abstract Talk: Explaining the Colavita visual dominance effect, 10th International Multisensory Research Forum (IMRF 2009), New York, NY, USA.
The last couple of years have seen a resurgence of interest in the Colavita visual dominance effect. In the basic experimental paradigm (Colavita, 1974), a random series of auditory, visual, and audiovisual stimuli are presented to participants who are instructed to make one response whenever they see a visual target and another response whenever they hear an auditory target. Many studies have now shown that participants sometimes fail to respond to auditory targets when they are presented at the same time as visual targets (i.e., on the bimodal trials), despite the fact that they have no problems in responding to the auditory and visual stimuli when they are presented individually. The existence of the Colavita visual dominance effect provides an intriguing contrast with the results of the many other recent studies showing the superiority of multisensory (over unisensory) information processing in humans. Various accounts have been put forward to try to explain the effect, including the suggestion that it reflects nothing more than an underlying bias to attend to the visual modality. In this presentation, the empirical literature on the Colavita visual dominance effect is reviewed, and some of the key factors modulating the effect are highlighted. The available research has now provided evidence against all previous accounts of the Colavita effect. We put forward a novel explanation of the Colavita effect, based on the latest findings highlighting the asymmetrical effect that auditory and visual stimuli exert on people's responses to stimuli presented in the other modality (see Sinnett et al., 2008).
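
As a concrete illustration of the paradigm's logic (a hypothetical sketch; the trial proportions and function names are ours, not Colavita's exact design), the effect is measured as the rate at which the auditory response is omitted specifically on bimodal trials:

```python
import random

def make_trials(n=100, p_bimodal=0.2, seed=0):
    """Generate a random trial sequence: mostly unimodal auditory or visual
    targets, with occasional bimodal (audiovisual) trials mixed in."""
    rng = random.Random(seed)
    kinds = ["audiovisual", "auditory", "visual"]
    weights = [p_bimodal, (1 - p_bimodal) / 2, (1 - p_bimodal) / 2]
    return rng.choices(kinds, weights=weights, k=n)

def colavita_rate(trials, responses):
    """Colavita effect: proportion of bimodal trials on which the auditory
    response was omitted despite the presence of an auditory target."""
    bimodal = [r for t, r in zip(trials, responses) if t == "audiovisual"]
    misses = sum(1 for r in bimodal if "auditory" not in r)
    return misses / len(bimodal) if bimodal else 0.0
```

Here each entry of responses would be the set of responses made on that trial, e.g. {"visual"} for a vision-only response on a bimodal trial.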

Petroni L and Parise CV (January-2007) Abstract Talk: Migliorare l'uso dei canali on-line verso gli utenti [Improving the use of online channels to reach users], La Vetrina delle Innovazioni, Marghera, Italy.


Articles (12):

Parise CV and Ernst MO (June-2016) Correlation detection as a general mechanism for multisensory integration Nature Communications 7(11543) 1-9.
Parise CV, Knorre K and Ernst MO (April-2014) Natural auditory scene statistics shapes human spatial hearing Proceedings of the National Academy of Sciences of the United States of America 111(16) 6104–6108.
Senna I, Maravita A, Bolognini N and Parise CV (March-2014) The Marble-Hand Illusion PLoS ONE 9(3) 1-6.
Parise CV, Harrar V, Ernst MO and Spence C (July-2013) Cross-correlation between Auditory and Visual Signals Promotes Multisensory Integration Multisensory Research 26(3) 307–316.
Parise CV and Spence C (August-2012) Audiovisual crossmodal correspondences and sound symbolism: a study using the implicit association test Experimental Brain Research 220(3-4) 319-333.
Spence C and Parise CV (June-2012) The cognitive neuroscience of crossmodal correspondences i-Perception 3(7) 410-412.
Parise CV and Spence C (April-2012) Assessing the associations between brand packaging and brand attributes using an indirect performance measure Food Quality and Preference 24(1) 17–23.
Parise CV, Spence C and Ernst MO (January-2012) When Correlation Implies Causation in Multisensory Integration Current Biology 22(1) 46-49.
Parise CV and Pavani F (October-2011) Evidence of sound symbolism in simple vocalizations Experimental Brain Research 214(3) 373-380.
Spence C and Parise C (March-2010) Prior-entry: A review Consciousness and Cognition 19(1) 364-379.
Parise CV and Spence C (May-2009) "When Birds of a Feather Flock Together": Synesthetic Correspondences Modulate Audiovisual Integration in Non-Synesthetes PLoS ONE 4(5) 1-7.
Parise CV and Spence C (September-2008) Synesthetic congruency modulates the temporal ventriloquism effect Neuroscience Letters 442(3) 257-261.

Book contributions (3):

Van Dam LCJ, Parise CV and Ernst MO: Modeling multisensory integration, 209-230. In: Sensory Integration and the Unity of Consciousness, (Ed) D. Bennett, MIT Press, Cambridge, MA, USA, (October-2014).
Parise CV and Spence C: Audiovisual Cross-Modal Correspondences in the General Population, 790-815. In: Oxford Handbook of Synesthesia, (Ed) J. Simner, Oxford University Press, Oxford, UK, (December-2013).
Spence C, Parise CV and Chen Y-C: The Colavita visual dominance effect, 529-556. In: The neural bases of multisensory processes, (Ed) M.T. Murray, CRC Press, Boca Raton, FL, USA, (January-2012).

Posters (13):

Senna I, Parise CV and Ernst MO (June-13-2014): Analogous motion illusion in vision and audition, 15th International Multisensory Research Forum (IMRF 2014), Amsterdam, The Netherlands.
Parise CV, Knorre K and Ernst M (September-2013): On pitch-elevation mapping: Nature, nurture and behaviour, Bernstein Conference 2013, Tübingen, Germany.
Parise CV, Knorre K and Ernst MO (June-4-2013): On pitch-elevation mapping: Nature, nurture and behaviour, 14th International Multisensory Research Forum (IMRF 2013), Jerusalem, Israel, Multisensory Research, 26(1) 190.
Senna I, Maravita A, Bolognini N and Parise CV (June-4-2013): The marble-hand illusion, 14th International Multisensory Research Forum (IMRF 2013), Jerusalem, Israel, Multisensory Research, 26(1) 203.
Parise CV, Spence C and Ernst MO (September-14-2012): When correlation implies causation in multisensory integration, Bernstein Conference 2012, Munich, Germany, Frontiers in Computational Neuroscience, Conference Abstract: Bernstein Conference 2012 177.
Parise C, Spence C and Ernst M (August-2012): When correlation implies causation in multisensory integration, 12th Annual Meeting of the Vision Sciences Society (VSS 2012), Naples, FL, USA, Journal of Vision, 12(9) 611.
Parise CV, Harrar V, Spence C and Ernst M (October-19-2011): Multisensory integration: When correlation implies causation, 12th International Multisensory Research Forum (IMRF 2011), Fukuoka, Japan, i-Perception, 2(8) 901.
Parise CV, Harrar V, Spence C and Ernst MO (September-2011): Multisensory integration: When correlation implies causation, Bernstein Cluster D Symposium: Multisensory Perception and Action, Tübingen, Germany.
Parise CV, Harrar V, Spence C and Ernst MO (September-2011): Multisensory integration: When correlation implies causation, 34th European Conference on Visual Perception, Toulouse, France, Perception, 40(ECVP Abstract Supplement) 185.
Parise CV, Stewart N, Föcker J, Ngo M, Browning M, Roeder B, Spence C and Rogers RD (June-2010): Emotional information enhances audiovisual speech integration, 11th International Multisensory Research Forum (IMRF 2010), Liverpool, UK.
Parise CV, Di Luca M and Ernst MO (June-2010): Multiple criteria for multisensory signals, 11th International Multisensory Research Forum (IMRF 2010), Liverpool, UK.
Parise CV and Spence C (July-2008): Synaesthetic links in audiovisual temporal integration, 9th International Multisensory Research Forum (IMRF 2008), Hamburg, Germany.
Parise CV, Bricolo E, Vezzani S and Zavagno D (September-2006): Somiglianze cross-modali fra stimoli visivi e stimoli uditivi [Cross-modal similarities between visual and auditory stimuli], Congresso Nazionale della Sezione Sperimentale dell'Associazione Italiana di Psicologia (AIP 2006), Rovereto, Italy.

Theses (1):

Parise CV: Signal compatibility as a modulatory factor for audiovisual multisensory integration, University of Oxford, (2013). PhD thesis


Last updated: Monday, 22.05.2017