On-line Orthographic Influences on Spoken Language in a Semantic Task

Chotiga Pattamadilok 1,2, Laetitia Perre 3,4, Stéphane Dufau 3,4, and Johannes C. Ziegler 3,4

Abstract

Literacy changes the way the brain processes spoken language. Most psycholinguists believe that orthographic effects on spoken language are either strategic or restricted to metaphonological tasks. We used event-related brain potentials (ERPs) to investigate the locus and the time course of orthographic effects on spoken word recognition in a semantic task. Participants were asked to decide whether a given word belonged to a semantic category (body parts). On no-go trials, words were presented that were either orthographically consistent or inconsistent. Orthographic inconsistency (i.e., multiple spellings of the same phonology) could occur either in the first or the second syllable. The ERP data showed a clear orthographic consistency effect that preceded lexical access and semantic effects. Moreover, the onset of the orthographic consistency effect was time-locked to the arrival of the inconsistency in a spoken word, which suggests that orthography influences spoken language in a time-dependent manner. The present data join recent evidence from brain imaging showing orthographic activation in spoken language tasks. Our results extend those findings by showing that orthographic activation occurs early and affects spoken word recognition in a semantic task that does not require the explicit processing of orthographic or phonological structure.

INTRODUCTION

Learning to read and write alters the way people process spoken words (Frith, 1998). This idea has been confirmed in several studies showing that performance in spoken language tasks was influenced by orthographic consistency (i.e., the fact that a phonological unit in a given language can have several spellings).
For example, the rhyme /-ʌk/ is consistent in English because it has only one possible spelling ("-uck"), whereas the rhyme /-aɪt/ is inconsistent because it has several possible spellings ("-ight," "-ite," or "-yte").

Much of the evidence for literacy effects in spoken language comes from metaphonological tasks. For instance, participants were faster in deciding whether two words rhymed when they had the same spellings, such as toast and roast, than when they had different spellings, such as toast and ghost (e.g., McPherson, Ackerman, & Dykman, 1997; Zecker, 1991; Zecker, Tanenhaus, Alderman, & Siqueland, 1986; Rack, 1985; Donnenwerth-Nolan, Tanenhaus, & Seidenberg, 1981; Seidenberg & Tanenhaus, 1979). In agreement with this finding, a functional magnetic resonance imaging (fMRI) study by Booth et al. (2004) found that rhyme decisions produced activation in the left fusiform gyrus, an area that is typically involved in processing orthographic information (Dehaene et al., 2001, 2004).

More recently, Bolger, Hornickel, Cone, Burman, and Booth (2007) studied the effects of orthographic and phonological inconsistency in the visual modality with fMRI, using a rhyming task (does jazz rhyme with has?) and a spelling task. They found greater activation for inconsistent compared with consistent words in several brain regions, including the left inferior temporal gyrus, the left superior temporal cortex, the left fusiform gyrus, and the bilateral medial frontal gyrus/anterior cingulate cortex. Higher-skill readers were more sensitive to the consistency manipulation than lower-skill readers in the fusiform gyrus and the precuneus/posterior cingulate cortex. Together, these data suggest that visual word recognition is best described as a coupling between orthographic and phonological information in a widely distributed cortical network.

Finally, literacy clearly changes the way the brain processes spoken language.
For example, Castro-Caldas, Petersson, Reis, Stone-Elander, and Ingvar (1998) found that illiterates who have never learned a written language do not activate the same brain areas as do literate people when processing spoken language (see also Petersson, Reis, Askelöf, Castro-Caldas, & Ingvar, 2000). Interestingly, the differences were restricted to the processing of pseudowords. The processing of pseudowords requires a fine-grained (sublexical) analysis of the speech signal. Thus, it

1 Université Libre de Bruxelles (ULB), Belgium; 2 Fonds de la Recherche Scientifique (F.N.R.S.), Belgium; 3 Aix-Marseille Université, France; 4 CNRS, Laboratoire de Psychologie Cognitive, France

© 2008 Massachusetts Institute of Technology — Journal of Cognitive Neuroscience 21:1, pp. 169–179