RESEARCH ARTICLE

Impact of the spatial congruence of redundant targets on within-modal and cross-modal integration

S. Girard · M. Pelland · F. Lepore · O. Collignon

Received: 22 May 2012 / Accepted: 10 October 2012 / Published online: 25 November 2012
© Springer-Verlag Berlin Heidelberg 2012

Abstract  Although the topic of sensory integration has attracted increasing interest, the differing behavioral outcomes of combining unisensory versus multisensory inputs have, surprisingly, scarcely been investigated. In the present experiment, observers were required to respond as fast as possible to (1) lateralized visual or tactile targets presented alone, (2) double stimulation within the same modality or (3) double stimulation across modalities. Each combination was delivered either within the same hemispace (spatially aligned) or in different hemispaces (spatially misaligned). Results show that the redundancy gains (RG) obtained from the cross-modal conditions were far greater than those obtained from combinations of two visual or two tactile targets. Consistently, we observed that the reaction time distributions of cross-modal targets, but not those of within-modal targets, surpass the predicted reaction time distribution based on the summed probability distributions of each constituent stimulus presented alone. Moreover, we found that the spatial alignment of the targets did not influence the RG obtained in cross-modal conditions, whereas within-modal stimuli produced a greater RG when the targets were delivered in separate hemispaces. These results suggest that within-modal and cross-modal integration are distinguishable not only by the amount of facilitation they produce, but also by the spatial configuration under which this facilitation occurs. Our study strongly supports the notion that estimates of the same event that are more independent produce enhanced integrative gains.
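The abstract's comparison of redundant-target reaction times against "the summed probability distributions of each constituent stimulus presented alone" is the standard race-model inequality test (Miller 1982): integration is inferred where the redundant-target CDF exceeds the sum of the single-target CDFs. The sketch below is a minimal, generic version of that test; the function name and grid choice are illustrative and not taken from the paper's analysis code.

```python
import numpy as np

def race_model_violation(rt_a, rt_b, rt_redundant, t_grid):
    """Evaluate Miller's race-model inequality on a grid of times t:
        P(RT <= t | redundant) <= P(RT <= t | A) + P(RT <= t | B).
    Returns the redundant-target CDF minus the (capped) summed bound;
    positive values indicate violations, i.e. facilitation beyond what
    a race between independent channels can produce."""
    def ecdf(samples, t):
        # Empirical CDF: fraction of samples at or below each time in t.
        samples = np.sort(np.asarray(samples, dtype=float))
        return np.searchsorted(samples, t, side="right") / len(samples)

    f_a = ecdf(rt_a, t_grid)
    f_b = ecdf(rt_b, t_grid)
    f_red = ecdf(rt_redundant, t_grid)
    bound = np.minimum(f_a + f_b, 1.0)  # probabilities cannot exceed 1
    return f_red - bound
```

With fast redundant-target responses (e.g. 200–250 ms) against slower unimodal ones (300–450 ms), the early portion of the grid yields positive values, which is the signature the abstract reports for cross-modal but not within-modal pairs.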
Keywords  Multisensory integration · Visual · Tactile · Simple reaction time · Redundancy gain

Introduction

The brain's ability to integrate information coming from separate sensory estimates is critical for creating a unified and coherent representation of the environment. The integration of cross-modal (Spence and Driver 2004; Stein and Stanford 2008; Meredith and Stein 1986) and within-modal (Schröter et al. 2007; Murray et al. 2001; Savazzi and Marzi 2002, 2008) stimuli offers many benefits, such as enhanced discrimination and accelerated reaction to objects. Surprisingly, only very few studies have explored how the beneficial effects obtained in multisensory conditions differ from those obtained when combining redundant stimuli of the same sensory modality (Forster et al. 2002; Laurienti et al. 2006; Gingras et al. 2009). Whereas inputs derived from different senses provide independent estimates of the same event, inputs from the same modality can exhibit substantial covariance in the information they provide. Thus, it might be expected that two spatio-temporally concordant stimuli from two different modalities will produce a greater gain in performance than the combination of two concordant stimuli from the same modality (Ernst and Banks 2002; Stein et al. 2009). In contrast, one might assume that both multisensory and

S. Girard · M. Pelland · F. Lepore
Centre de Recherche en Neuropsychologie et Cognition (CERNEC), Université de Montréal, Montreal, Canada

O. Collignon
Centre de Recherche du CHU Sainte-Justine, Université de Montréal, Montreal, Canada

O. Collignon (✉)
Centre for Mind/Brain Sciences (CIMeC), University of Trento, via delle Regole, 101, Mattarello, TN, Italy
e-mail: olivier.collignon@unitn.it

Exp Brain Res (2013) 224:275–285
DOI 10.1007/s00221-012-3308-0
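The independence argument invoked above (Ernst and Banks 2002) can be made concrete with the textbook maximum-likelihood cue-combination result. The equations below are the standard derivation, reproduced here for clarity; they are not taken from this paper, and the correlated-cue case assumes equal variances for simplicity.

```latex
% Optimal (MLE) combination of two unbiased estimates \hat{S}_V, \hat{S}_T:
\hat{S}_{VT} = w_V \hat{S}_V + w_T \hat{S}_T, \qquad
w_V = \frac{\sigma_T^2}{\sigma_V^2 + \sigma_T^2}, \quad
w_T = \frac{\sigma_V^2}{\sigma_V^2 + \sigma_T^2}

% Independent estimates: combined variance falls below either single estimate,
\sigma_{VT}^2 = \frac{\sigma_V^2\,\sigma_T^2}{\sigma_V^2 + \sigma_T^2}
\;\le\; \min\!\left(\sigma_V^2, \sigma_T^2\right)

% Correlated estimates (equal variance \sigma^2, correlation \rho):
% the variance reduction shrinks as \rho \to 1, i.e. redundant same-modality
% inputs that covary strongly yield a smaller integrative benefit.
\sigma_{\mathrm{comb}}^2 = \frac{\sigma^2 (1 + \rho)}{2}
```

This is the sense in which more independent estimates of the same event are predicted to produce larger integrative gains, the prediction the present study tests behaviorally.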