Journal of Psycholinguistic Research, Vol. 30, No. 3, 2001
Functional Anatomy of Speech Perception and Speech Production: Psycholinguistic Implications

Gregory Hickok
Department of Cognitive Sciences, University of California, Irvine, California 92697; email: [email protected]; http://www.lcbr.uci.edu
This work was supported by NIH Grant DC03681.

This paper presents evidence for a new model of the functional anatomy of speech/language (Hickok & Poeppel, 2000) which has, at its core, three central claims: (1) neural systems supporting the perception of sublexical aspects of speech are essentially bilaterally organized in posterior superior temporal lobe regions; (2) neural systems supporting the production of phonemic aspects of speech comprise a network of predominantly left hemisphere systems which includes not only frontal regions but also superior temporal lobe regions; and (3) the neural systems supporting speech perception and production partially overlap in the left superior temporal lobe. This model, which postulates nonidentical but partially overlapping systems involved in the perception and production of speech, explains why psycho- and neurolinguistic evidence is mixed on the question of whether input and output phonological systems involve a common network or distinct networks.

KEY WORDS: fMRI; aphasia; auditory cortex; sensory-motor integration.
INTRODUCTION AND PRELIMINARIES

Models of both speech perception and speech production typically postulate a processing level that involves some form of phonological encoding. There is disagreement, however, on the question of whether there are separate phonological encoding systems for perception versus production (Dell et al., 1997; Levelt et al., 1999), or whether a single system participates in phonological encoding for both the input and output of speech (Allport, 1984; Coleman, 1998; MacKay, 1987).
For example, in their recent review of lexical processing, Shelton and Caramazza (1999) state that ". . . the results that have been cited in support of separate input and output processing components can usually be accommodated by models specifying single processing components" (p. 6).

In what follows, we will review the evidence for a recent model of the functional anatomy of speech/language (Hickok, 2000; Hickok & Poeppel, 2000), which proposes, as one component, a system in the left posterior superior temporal lobe that participates in phoneme-level aspects of both speech perception and production. This region appears to be a site of overlap between the input and output systems, which are otherwise differently distributed: speech perception systems involve several superior temporal regions bilaterally, whereas speech production systems involve a network of predominantly left hemisphere regions, including posterior superior temporal, frontal, and, perhaps, parietal cortex. This claim of nonidentical but partially overlapping networks involved in speech perception and production is consistent both with evidence suggesting that input and output phonological processes can be dissociated (because portions of the two networks are distinct) and with evidence suggesting that there is computational overlap between input and output phonological processes (because portions of the two networks overlap). A summary of the model is provided in Figure 1. In this paper we will concentrate on evidence implicating the posterior superior temporal lobe in phoneme-level processes in speech perception and production. Details on other aspects of the model can be found in Hickok and Poeppel (2000) and Hickok (2000).

Before turning to the data, it is worth mentioning a pair of related issues. The first is terminological. The term "phoneme-level" (and similar terms) is used here very loosely to refer to any or all of a range of sublexical processes involved in speech perception and production. The current state of knowledge simply does not yet allow one to map, with confidence, the relation between the various linguistic processing levels (e.g., phonetic, phonemic) and brain systems. Second, for reasons detailed immediately below, the discussion will remain agnostic on the question of the linguistic specificity of the brain regions involved.
Fig. 1. A model of the functional neuroanatomy of speech perception and related language processes as proposed by Hickok and Poeppel (2000). It is proposed that systems supporting sound-based representations of speech are located in the pSTG bilaterally. From that point, two more strongly left-lateralized, divergent processing streams emerge: a dorsal parieto-frontal stream, which is involved in language processes that require relatively direct mappings between sound-based and articulatory-based representations (e.g., repetition, phonological working memory), and a ventral temporal-parietal-occipital stream, which is involved in language processes that require mappings between sound-based and meaning-based representations (at least at the lexical level). [See Hickok (2000) for a description of how this model relates to the classical aphasias.] From Hickok and Poeppel (2000), with permission.
Much research on the neural basis of language is aimed at discovering those brain regions that are language specific. This is an important issue, but it is important to bear in mind that studies of the neural basis of language should not be restricted to this goal. One can think of language processes in the simplest case as a set of transformations (not necessarily in series) which, on the input side, map acoustic signals onto a conceptual representation, or, on the output side, map a conceptual representation onto a motor representation. Some of these transformations are deemed "nonlinguistic" by virtue of their additional involvement in nonlinguistic sensory or motor events (e.g., frequency coding in auditory cortex) and are studied under the rubric of psychoacoustics or motor control; other transformational stages are deemed "linguistic" by virtue of their presumed unique role in processing language-related signals and are studied under the rubric of phonetics, psycholinguistics, or linguistics. Clearly, though, each transformational stage, whether "linguistic" or "nonlinguistic," interacts in important ways with other stages, and each stage is critical to the entire transformational process. A complete understanding of how language is processed in the brain, then, will involve an understanding of each processing stage involved in the translations between conceptual representations and the sensory and motor periphery. Viewed in this way, whether a processing stage is labeled "linguistic" or "nonlinguistic" is irrelevant to understanding the neural basis of language. Rather, such labels are relevant only to the question of whether or not a given processing stage is also involved in nonlinguistic computations: an interesting empirical question, but one that need not be answered to fully understand the neural basis of language. Finally, this view of language processes as a set of transformational mappings carries with it the implicit assumption that the particular neural network involved in any language operation will vary depending on what one is mapping to and from.
Acoustic speech input, for example, can be mapped onto a conceptual representation for comprehension, or onto a motor representation for verbatim repetition. The transformational operations involved in these two tasks are shared up to some point, but then must diverge in accordance with the different transformational requirements entailed by the endpoints of the mapping process (Hickok & Poeppel, 2000). This example, of course, uses radically different endpoints to illustrate the situation, but the issue surely arises also in cases where the difference between the endpoint representations is less clearly delineated, such as when mapping an acoustically presented sentence onto a judgment of grammaticality versus onto a conceptual representation. If this view is correct, careful consideration of the tasks (mapping relations) employed in psycho- and neurolinguistic research will be critical to understanding the functional organization of language.
EVIDENCE FOR BILATERAL ORGANIZATION OF SPEECH PERCEPTION

Several lines of evidence converge on the view that speech perception is mediated bilaterally in the posterior superior temporal lobe (Hickok & Poeppel, 2000). Evidence from aphasia appears to be contradictory on this question but, in fact, yields a clear picture when task considerations are taken into account. The apparent contradiction comes from the observation that left hemisphere-damaged aphasics of a variety of sorts can present with deficits in the ability to identify or discriminate syllables, yet none of the classic (i.e., unilateral) aphasia syndromes have auditory comprehension deficits that can be attributed to profound phonemic perception deficits (Bachman & Albert, 1988; Baker et al., 1981; Blumstein, 1995). The evidence indicates that auditory comprehension deficits in aphasia, even Wernicke's aphasia, do not arise primarily from prelexical perceptual deficits; indeed, postlexical deficits are a greater contributor (Barde et al., 2000). This pattern of results makes sense if we assume that the systems involved in mapping speech input onto meaning (auditory comprehension) are partially distinct from the systems involved in mapping speech input onto an explicit (i.e., conscious) representation of sublexical structure (syllable identification/discrimination). We have argued elsewhere that the latter relies on a left-dominant frontoparietal auditory-motor integration network, whereas the former relies on a more ventral-posterior network (Hickok & Poeppel, 2000). The fact that syllable identification/discrimination and auditory comprehension abilities are double-dissociable supports this view (Hickok & Poeppel, 2000).
The take-home message here is that no matter what part of the left hemisphere is damaged, one does not find profound phonemic perception deficits in auditory comprehension tasks, even when contextual cues are controlled for. This demonstrates that the right hemisphere has reasonably good speech perception ability.

If speech perception systems are organized bilaterally, then we should expect bilateral superior temporal lobe lesions to produce profound speech perception deficits. This is exactly the case in "pure word deafness." The auditory comprehension deficit in word deafness, unlike in Wernicke's aphasia, appears to reflect auditory speech perception difficulties (Albert & Bear, 1974; Yaqub et al., 1988). The vast majority of word-deaf cases present with bilateral lesions to the superior temporal gyrus, just as we would expect if speech perception is organized bilaterally (Buchman et al., 1986).

The isolated right hemisphere also has good speech perception ability. Studies of split-brain patients and carotid amobarbital injection studies both indicate that, in some cases at least, the isolated right hemisphere has the ability to understand simple (i.e., not syntactically complex) speech (McGlone, 1984; Wada & Rasmussen, 1960; Zaidel, 1985). We tested the speech perception ability of the isolated right hemisphere in a recent case study: split-brain patient JW listened to auditory words and then made a match/no-match decision on individual pictures, which were lateralized to one or the other visual field. JW performed well overall (LVF = 92%, RVF = 94% correct) and was able to accurately discriminate matching items from phonological foils in both hemispheres (A-prime: LVF = .96, RVF = .995; see the note on A' at the end of this section), indicating a very high level of speech perception ability in the isolated right hemisphere.

Finally, physiological evidence from several sources supports the view that speech perception involves the superior temporal lobe bilaterally. A range of functional imaging studies of passive perception of speech stimuli, including PET (Mazoyer et al., 1993; Petersen et al., 1988; Price et al., 1996; Zatorre et al., 1996), fMRI (Binder et al., 1994; Dhankhar et al., 1997; Schlosser et al., 1998), and MEG (Gage et al., 1998; Kuriki et al., 1995; Poeppel et al., 1996), have consistently found bilateral activation in the superior temporal gyrus. It is impossible to know which aspect of the speech stimulus is producing these activations from the functional imaging data alone. Nonetheless, it is clear that bilateral posterior superior temporal lobe activation is the most consistent finding across studies involving heard speech.
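A brief note on the A' (A-prime) index reported above: the paper does not spell out the formula, but the standard nonparametric formulation (Grier, 1971), given here under the assumption that it is the one used, expresses A' in terms of the hit rate H and false-alarm rate F (for H >= F):

```latex
% Nonparametric discriminability index A' (assumes H >= F).
% A' = .5 corresponds to chance discrimination; A' = 1.0 to perfect discrimination.
A' = \frac{1}{2} + \frac{(H - F)(1 + H - F)}{4H(1 - F)}
```

On this scale, JW's values of .96 (LVF) and .995 (RVF) both indicate near-ceiling discrimination of matching items from phonological foils.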
EVIDENCE FOR LEFT POSTERIOR SUPERIOR TEMPORAL GYRUS (pSTG) INVOLVEMENT IN SPEECH PRODUCTION

Conduction aphasia is a major source of evidence for the involvement of the left posterior superior temporal lobe in speech production (Hickok, 2000).
Conduction aphasia is characterized by good auditory comprehension and fluent production that nonetheless contains relatively frequent phonemic paraphasias, as well as by naming difficulty (Damasio, 1992). Although verbatim repetition of heard speech is also impaired, it is a mistake to assume that conduction aphasia is a disorder of repetition, as speech production difficulties are not limited to repetition tasks (Goodglass, 1992). The classic interpretation of conduction aphasia is that of a disconnection syndrome in which anterior and posterior language areas can no longer communicate because of damage to a white matter fiber bundle, the arcuate fasciculus (Geschwind, 1965). Recent work, however, has pointed out many empirical limitations of this account and has suggested that conduction aphasia is a disorder of cortical dysfunction rather than a disconnection syndrome (Anderson et al., 1999; Hickok, 2000; Hickok et al., 2000).

Contrary to common assumptions, there are two lesion patterns associated with conduction aphasia. The better known pattern involves the left supramarginal gyrus and underlying white matter. It is likely the better known of the two because it is consistent with the disconnection hypothesis: the white matter underlying the supramarginal gyrus contains fibers of the arcuate fasciculus. However, conduction aphasia has also been described following damage to the left posterior superior temporal gyrus with the supramarginal gyrus and arcuate fasciculus completely spared (Damasio & Damasio, 1980). Furthermore, a recent study has shown that conduction aphasia, complete with phonemic paraphasias (not just on repetition), can be induced by cortical stimulation of the lateral surface of the pSTG (Anderson et al., 1999). The fact that a defining symptom of conduction aphasia is a predominance of phonemic paraphasias (indeed, some authors view conduction aphasia as a disorder of phonemic encoding for production; Wilshire & McCarthy, 1996), together with the fact that conduction aphasia can be caused by lesions involving the pSTG, suggests that this cortical region is involved in phonemic aspects of speech production.

Additional evidence comes from functional imaging studies, which have reported activation in the pSTG in tasks with a speech production component (overt or covert), including reading (Price et al., 1996), word generation (Wise et al., 1991), object naming (Bookheimer et al., 1995; Hickok et al., 2000), and syllable rehearsal (Paus et al., 1996). A recent magnetoencephalography (MEG) study of object naming found that this region is activated in a time window consistent with the phonological encoding stage (Levelt et al., 1998), suggesting that it is involved in encoding prior to articulation and does not merely reflect articulatory feedback (e.g., internal monitoring). Based on an extensive review of functional imaging studies of word production, Indefrey and Levelt (2000) conclude that the left pSTG is part of a network important for phonological code retrieval. These authors further suggest that this region may play a similar role in speech comprehension.
DIRECT EVIDENCE FOR SPEECH PERCEPTION/PRODUCTION OVERLAP

We have reviewed evidence indicating that the left pSTG is a site important for phonemic-level aspects of both speech perception and production. Are the same cortical fields involved in both perception and production, or are the systems involved in perception and production supported by distinct subfields within this region? Two studies have looked at this directly.

In one study, Papathanassiou et al. (2000) examined overlap in activations associated with speech perception (listening to stories) and speech production (generating verbs associated with heard nouns) tasks. They identified a large region of overlap centered on the pSTG (as well as some additional areas). However, the production task in that study is a fairly complex one, and the data were averaged across subjects; both of these facts make interpretation difficult.

We have recently completed a study (Buchsbaum et al., 2001) which investigated this issue using an event-related fMRI paradigm (Buckner et al., 1996; Hickok et al., 1997). We used tasks that involved both perception and production components. Seven subjects were presented with 24 trials, each comprising the following sequence of events: three multisyllabic pseudowords were auditorily presented at a rate of one per second; subjects then rehearsed the list of three words silently for 27 s; subjects were cued to stop rehearsing by a 500-ms tone; the tone was followed by an 18-s rest period, and then a new trial was initiated. Our predictions were as follows: (1) primary auditory cortex (bilaterally) would respond predominantly to the perceptual component of the task; (2) left frontal cortex (particularly dorsal premotor areas) would respond predominantly to the production component of the task; and (3) portions of the left pSTG would respond to both task components. Using linear regression, we found at least two regions within the left pSTG in each subject with an activation pattern that was better explained by the linear combination of the auditory and motor activation patterns than by either of these activation patterns alone (Fig. 2). The strongest activation was consistently found on the dorsal posterior STG at the boundary of the temporal and parietal lobes. This experiment provides rather direct evidence in support of the view that the left pSTG contains systems that are involved in both the perception and production of speech.
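To make the logic of this model comparison concrete, the sketch below is our illustration, not the published analysis code: the sampling parameters, the toy hemodynamic response function, and the simulated voxel are all assumptions. It shows how a voxel time series can be tested against an auditory-only regressor, a motor-only regressor, and their linear combination, with a "dual-response" voxel identified by a better fit for the combined model.

```python
import numpy as np

TR = 1.0        # assumed sampling interval (s)
N_SCANS = 60    # assumed scans per trial epoch

def hrf(t):
    """Crude gamma-shaped hemodynamic response (illustrative; peaks ~5 s)."""
    return t ** 5 * np.exp(-t) / 120.0

def regressor(onset, duration):
    """Boxcar with the given onset/duration (s), convolved with the HRF."""
    box = np.zeros(N_SCANS)
    box[int(onset / TR):int((onset + duration) / TR)] = 1.0
    return np.convolve(box, hrf(np.arange(0.0, 20.0, TR)))[:N_SCANS]

# Trial structure from the text (approximate): 3 s of heard pseudowords,
# then 27 s of covert rehearsal, then rest.
aud = regressor(onset=0.0, duration=3.0)    # auditory (perception) component
mot = regressor(onset=3.0, duration=27.0)   # rehearsal (production) component

def r_squared(y, X):
    """Proportion of variance in y explained by a least-squares fit of X."""
    X = np.column_stack([X, np.ones(len(y))])        # add an intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1.0 - np.var(y - X @ beta) / np.var(y)

# Simulated voxel that responds to both task components, plus noise.
rng = np.random.default_rng(0)
voxel = aud + 0.8 * mot + 0.3 * rng.standard_normal(N_SCANS)

for name, X in [("auditory only", aud[:, None]),
                ("motor only", mot[:, None]),
                ("auditory + motor", np.column_stack([aud, mot]))]:
    print(f"{name:16s} R^2 = {r_squared(voxel, X):.3f}")
```

A voxel of the kind reported in the study would show a markedly higher R^2 for the combined model than for either single-regressor model, which is the pattern the text describes for the dorsal posterior STG regions.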
CONCLUSIONS

Results from a variety of studies suggest the existence of cortical fields in the left pSTG that participate in both speech perception and speech production. However, there is not complete overlap in the phonological input/output systems.
Fig. 2. Sample data from one subject showing the hemodynamic response pattern in three different brain regions to the sequence of events illustrated at the bottom of the graph (see text for further details). The yellow line, reflecting activation in primary auditory cortex, shows robust signal increases in response to the two auditory stimuli and no activation during the rehearsal period. The blue line, reflecting activation in a region of premotor cortex of the left frontal lobe, does not show an increase in activity until the onset of the rehearsal period. The green line, reflecting activation of a region at the posterior end of the Sylvian fissure (arrow in brain image), shows a robust increase in signal prior to the rehearsal period, indicating sensitivity to the auditory stimulus, but maintains a level of activation throughout the rehearsal period, indicating involvement in production-related processes.
While both the left and right pSTG appear to participate in speech perception, only the left pSTG plays a prominent role in production. Additional work from lesion (Dronkers, 1996) and functional imaging (Wise et al., 1999) studies has shown that the anterior insula is associated with speech production (not perception), possibly including phonological aspects, and frontal areas have also been implicated in phonemic aspects of speech production. Finally, Indefrey and Levelt (2000) have suggested that access to phonological codes may be supported by the left pSTG, whereas processes such as syllabification may be supported by left frontal structures.

In summary, a host of studies using a variety of methodologies suggest that subfields within the left pSTG are a site of overlap between systems involved in phonological aspects of speech perception and production. The evidence also indicates that the overlap between phonological input and output systems is partial. This arrangement of overlapping but nonidentical systems may explain why previous work has found evidence both for shared input/output phonemic-level systems and for separable input/output systems.

REFERENCES

Albert, M. L., & Bear, D. (1974). Time to understand: A case study of word deafness with reference to the role of time in auditory comprehension. Brain, 97, 373–384.
Allport, D. A. (1984). Speech production and comprehension: One lexicon or two? In W. Prinz & A. F. Sanders (Eds.), Cognition and motor processes (pp. 209–228). Berlin: Springer-Verlag.
Anderson, J. M., Gilmore, R., Roper, S., Crosson, B., Bauer, R. M., Nadeau, S., Beversdorf, D. Q., Cibula, J., Rogish III, M., Kortencamp, S., Hughes, J. D., Gonzalez Rothi, L. J., & Heilman, K. M. (1999). Conduction aphasia and the arcuate fasciculus: A reexamination of the Wernicke-Geschwind model. Brain and Language, 70, 1–12.
Bachman, D. L., & Albert, M. L. (1988). Auditory comprehension in aphasia. In F. Boller & J. Grafman (Eds.), Handbook of neuropsychology, Vol. 1 (pp. 281–306). New York: Elsevier.
Baker, E., Blumstein, S. E., & Goodglass, H. (1981). Interaction between phonological and semantic factors in auditory comprehension. Neuropsychologia, 19, 1–15.
Barde, L. F., Baynes, K., Gage, N., & Hickok, G. (2000). "Phonemic" perception in aphasia and in the isolated right hemisphere. Cognitive Neuroscience Society Annual Meeting Program, 2000, 43.
Binder, J. R., Rao, S. M., Hammeke, T. A., Yetkin, F. Z., Jesmanowicz, A., Bandettini, P. A., Wong, E. C., Estkowski, L. D., Goldstein, M. D., Haughton, V. M., & Hyde, J. S. (1994). Functional magnetic resonance imaging of human auditory cortex. Annals of Neurology, 35, 662–672.
Blumstein, S. (1995). The neurobiology of the sound structure of language. In M. S. Gazzaniga (Ed.), The cognitive neurosciences (pp. 913–929). Cambridge, MA: MIT Press.
Bookheimer, S. Y., Zeffiro, T. A., Blaxton, T., Gaillard, W., & Theodore, W. (1995). Regional cerebral blood flow during object naming and word reading. Human Brain Mapping, 3, 93–106.
Buchman, A. S., Garron, D. C., Trost-Cardamone, J. E., Wichter, M. D., & Schwartz, M. (1986). Word deafness: One hundred years later. Journal of Neurology, Neurosurgery, and Psychiatry, 49, 489–499.
Buchsbaum, B., Hickok, G., & Humphries, C. (2001). Role of left posterior superior temporal gyrus in phonological processing for speech perception and production. Cognitive Science, in press.
Buckner, R. L., Bandettini, P. A., O'Craven, K. M., Savoy, R. L., Petersen, S. E., Raichle, M. E., & Rosen, B. R. (1996). Detection of cortical activation during averaged single trials of a cognitive task using functional magnetic resonance imaging. Proceedings of the National Academy of Sciences, 93, 14878–14883.
Coleman, J. (1998). Cognitive reality and the phonological lexicon: A review. Journal of Neurolinguistics, 11(3), 295–320.
Damasio, A. R. (1992). Aphasia. New England Journal of Medicine, 326, 531–539.
Damasio, H., & Damasio, A. R. (1980). The anatomical basis of conduction aphasia. Brain, 103, 337–350.
Dell, G. S., Schwartz, M. F., Martin, N., Saffran, E. M., & Gagnon, D. A. (1997). Lexical access in aphasic and nonaphasic speakers. Psychological Review, 104, 801–838.
Dhankhar, A., Wexler, B. E., Fulbright, R. K., Halwes, T., Blamire, A. M., & Shulman, R. G. (1997). Functional magnetic resonance imaging assessment of the human brain auditory cortex response to increasing word presentation rates. Journal of Neurophysiology, 77, 476–483.
Dronkers, N. F. (1996). A new brain region for coordinating speech articulation. Nature, 384, 159–161.
Gage, N., Poeppel, D., Roberts, T. P. L., & Hickok, G. (1998). Auditory evoked M100 reflects onset acoustics of speech sounds. Brain Research, 814, 236–239.
Geschwind, N. (1965). Disconnexion syndromes in animals and man. Brain, 88, 237–294, 585–644.
Goodglass, H. (1992). Diagnosis of conduction aphasia. In S. E. Kohn (Ed.), Conduction aphasia (pp. 39–49). Hillsdale, NJ: Lawrence Erlbaum Associates.
Hickok, G. (2000). Speech perception, conduction aphasia, and the functional neuroanatomy of language. In Y. Grodzinsky, L. Shapiro, & D. Swinney (Eds.), Language and the brain (pp. 87–104). San Diego: Academic Press.
Hickok, G., Erhard, P., Kassubek, J., Helms-Tillery, A. K., Naeve-Velguth, S., Strupp, J. P., Strick, P. L., & Ugurbil, K. (2000). A functional magnetic resonance imaging study of the role of left posterior superior temporal gyrus in speech production: Implications for the explanation of conduction aphasia. Neuroscience Letters, 287, 156–160.
Hickok, G., Love, T., Swinney, D., Wong, E. C., & Buxton, R. B. (1997). Functional MR imaging of auditorily presented words: A single-item presentation paradigm. Brain and Language, 58, 197–201.
Hickok, G., & Poeppel, D. (2000). Towards a functional neuroanatomy of speech perception. Trends in Cognitive Sciences, 4, 131–138.
Indefrey, P., & Levelt, W. J. M. (2000). The neural correlates of language production. In M. S. Gazzaniga (Ed.), The new cognitive neurosciences (pp. 845–865). Cambridge, MA: MIT Press.
Kuriki, S., Okita, Y., & Hirata, Y. (1995). Source analysis of magnetic field responses from the human auditory cortex elicited by short speech sounds. Experimental Brain Research, 104, 144–152.
Levelt, W. J. M., Praamstra, P., Meyer, A. S., Helenius, P., & Salmelin, R. (1998). An MEG study of picture naming. Journal of Cognitive Neuroscience, 10, 553–567.
Levelt, W. J. M., Roelofs, A., & Meyer, A. S. (1999). A theory of lexical access in speech production. Behavioral & Brain Sciences, 22(1), 1–75.
MacKay, D. G. (1987). The organization of perception and action: A theory for language and other cognitive skills. New York: Springer-Verlag.
Mazoyer, B. M., Tzourio, N., Frak, V., Syrota, A., Murayama, N., Levrier, O., Salamon, G., Dehaene, S., Cohen, L., & Mehler, J. (1993). The cortical representation of speech. Journal of Cognitive Neuroscience, 5, 467–479.
McGlone, J. (1984). Speech comprehension after unilateral injection of sodium amytal. Brain and Language, 22, 150–157.
Papathanassiou, D., Etard, O., Mellet, E., Zago, L., Mazoyer, B., & Tzourio-Mazoyer, N. (2000). A common language network for comprehension and production: A contribution to the definition of language epicenters with PET. Neuroimage, 11, 347–357.
Paus, T., Perry, D. W., Zatorre, R. J., Worsley, K. J., & Evans, A. C. (1996). Modulation of cerebral blood flow in the human auditory cortex during speech: Role of motor-to-sensory discharges. European Journal of Neuroscience, 8, 2236–2246.
Petersen, S. E., Fox, P. T., Posner, M. I., Mintun, M., & Raichle, M. E. (1988). Positron emission tomographic studies of the cortical anatomy of single-word processing. Nature, 331, 585–589.
Poeppel, D., Yellin, E., Phillips, C., Roberts, T. P. L., Rowley, H., Wexler, K., & Marantz, A. (1996). Task-induced asymmetry of the auditory evoked M100 neuromagnetic field elicited by speech sounds. Cognitive Brain Research, 4, 231–242.
Price, C. J., Wise, R. J. S., Warburton, E. A., Moore, C. J., Howard, D., Patterson, K., Frackowiak, R. S. J., & Friston, K. J. (1996). Hearing and saying: The functional neuroanatomy of auditory word processing. Brain, 119, 919–931.
Schlosser, M. J., Aoyagi, N., Fulbright, R. K., Gore, J. C., & McCarthy, G. (1998). Functional MRI studies of auditory comprehension. Human Brain Mapping, 6, 1–13.
Shelton, J. R., & Caramazza, A. (1999). Deficits in lexical and semantic processing: Implications for models of normal language. Psychonomic Bulletin & Review, 6, 5–27.
Wada, J., & Rasmussen, T. (1960). Intracarotid injection of sodium amytal for the lateralization of cerebral speech dominance. Journal of Neurosurgery, 17, 266–282.
Wilshire, C. E., & McCarthy, R. A. (1996). Experimental investigations of an impairment in phonological encoding. Cognitive Neuropsychology, 13, 1059–1098.
Wise, R., Chollet, F., Hadar, U., Friston, K., Hoffner, E., & Frackowiak, R. (1991). Distribution of cortical neural networks involved in word comprehension and word retrieval. Brain, 114, 1803–1817.
Wise, R. J. S., Greene, J., Büchel, C., & Scott, S. K. (1999). Brain regions involved in articulation. The Lancet, 353, 1057–1061.
Yaqub, B. A., Gascon, G. G., Alnosha, M., & Whitaker, H. (1988). Pure word deafness (acquired verbal auditory agnosia) in an Arabic speaking patient. Brain, 111, 457–466.
Zaidel, E. (1985). Language in the right hemisphere. In D. F. Benson & E. Zaidel (Eds.), The dual brain: Hemispheric specialization in humans (pp. 205–231). New York: Guilford Press.
Zatorre, R. J., Meyer, E., Gjedde, A., & Evans, A. C. (1996). PET studies of phonetic processing of speech: Review, replication, and reanalysis. Cerebral Cortex, 6, 21–30.