Similar Articles
 Found 20 similar articles (search time: 453 ms)
1.
Damage to the anterior peri-intrasylvian cortex of the dominant hemisphere may give rise to a fairly consistent syndrome of articulatory deficits in the absence of relevant paresis of orofacial or laryngeal muscles (apraxia of speech, aphemia, or phonetic disintegration). The available clinical data are ambiguous with respect to the relevant lesion site, indicating either dysfunction of the premotor aspect of the lower precentral gyrus or the anterior insula in the depth of the Sylvian fissure. In order to further specify the functional anatomic substratum of this syndrome, functional magnetic resonance imaging (fMRI) was performed during reiteration of syllables differing in their demands on articulatory/phonetic sequencing (CV versus CCCV versus CVCVCV). Horizontal tongue movements and a polysyllabic lexical item served as control conditions. Repetition of the CV and CCCV monosyllables elicited a largely bilaterally symmetric hemodynamic response at the level of the anterior and posterior banks of the central sulcus (primary sensorimotor cortex), whereas a more limited area of neural activity arose within this domain during production of lexical and nonlexical polysyllables, significantly or exclusively lateralized toward the left hemisphere. There is neurophysiological evidence that primary sensorimotor cortex mediates the "fractionation" of movements. Assuming that the polysyllables considered are organized as coarticulated higher-order units, the observed restricted and lateralized cortical activation pattern most probably reflects a mode of "nonindividualized" motor control posing fewer demands on "movement fractionation." These findings may explain the clinical observation of disproportionately worse repetition of trisyllabic items as compared to monosyllables in apraxia of speech. The various test materials failed to elicit significant activation of the anterior insula. 
Only horizontal tongue movements, if anything, yielded a hemodynamic reaction extending beyond the sensorimotor cortex to premotor areas. Since limbic projections target the inferior dorsolateral frontal lobe, the enlarged region of activation during horizontal tongue movements might reflect the increased attentional requirements of this task.

2.
Previous research has shown that phonetic categories have a graded internal structure that is highly dependent on acoustic-phonetic contextual factors, such as speaking rate; these factors alter not only the location of phonetic category boundaries, but also the location of a category's best exemplars. The purpose of the present investigation, which focused on the voiceless category as specified by voice onset time (VOT), was to determine whether a higher order linguistic contextual factor, lexical status, which is known to alter the location of the voiced-voiceless phonetic category boundary, also alters the location of the best exemplars of the voiceless category. The results indicated that lexical status has a more limited and qualitatively different effect on the category's best exemplars than does the acoustic-phonetic factor of speaking rate. This dissociation is discussed in terms of a production-based account in which perceived best exemplars of a category track contextual variation in speech production.

3.
4.
Four experiments investigated acoustic-phonetic similarity in the mapping process between the speech signal and lexical representations (vertical similarity). Auditory stimuli were used where ambiguous initial phonemes rendered a phoneme sequence lexically ambiguous (perceptual-lexical ambiguities). A cross-modal priming paradigm (Experiments 1, 2, and 3) showed facilitation for targets related to both interpretations of the ambiguities, indicating multiple activation. Experiment 4 investigated individual differences and the role of sentence context in vertical similarity mapping. The results support a model where spoken word recognition proceeds via goodness-of-fit mapping between speech and lexical representations that is not influenced by sentence context.

5.
This study explored a number of temporal (durational) parameters of consonant and vowel production in order to determine whether the speech production impairments of aphasics are the result of the same or different underlying mechanisms and in particular whether they implicate deficits that are primarily phonetic or phonological in nature. Detailed analyses of CT scan lesion data were also conducted to explore whether more specific neuroanatomical correlations could be made with speech production deficits. A series of acoustic analyses were conducted including voice-onset time, intrinsic and contrastive fricative duration, and intrinsic and contrastive vowel duration as produced by Broca's aphasics with anterior lesions (A patients), nonfluent aphasics with anterior and posterior lesions (AP patients), and fluent aphasics with posterior lesions (P patients). The constellation of impairments for the anterior aphasics including both the A and AP patients suggests that their disorder primarily reflects an inability to implement particular types of articulatory gestures or articulatory parameters rather than an inability to implement particular phonetic features. They display impairments in the implementation of laryngeal gestures for both consonant and vowel production. These patterns seem to relate to particular anatomical sites involving Broca's area, the anterior limb of the internal capsule, and the lowest motor cortex areas for larynx and tongue. The posterior patients also show evidence of subtle phonetic impairments suggesting that the neural instantiation of speech may require more extensive involvement, including the perisylvian area, than previously suggested.

6.
Despite spectral and temporal discontinuities in the speech signal, listeners normally report coherent phonetic patterns corresponding to the phonemes of a language that they know. What is the basis for the internal coherence of phonetic segments? According to one account, listeners achieve coherence by extracting and integrating discrete cues; according to another, coherence arises automatically from general principles of auditory form perception; according to a third, listeners perceive speech patterns as coherent because they are the acoustic consequences of coordinated articulatory gestures in a familiar language. We tested these accounts in three experiments by training listeners to hear a continuum of three-tone, modulated sine wave patterns, modeled after a minimal pair contrast between three-formant synthetic speech syllables, either as distorted speech signals carrying a phonetic contrast (speech listeners) or as distorted musical chords carrying a nonspeech auditory contrast (music listeners). The music listeners could neither integrate the sine wave patterns nor perceive their auditory coherence to arrive at consistent, categorical percepts, whereas the speech listeners judged the patterns as speech almost as reliably as the synthetic syllables on which they were modeled. The outcome is consistent with the hypothesis that listeners perceive the phonetic coherence of a speech signal by recognizing acoustic patterns that reflect the coordinated articulatory gestures from which they arose.
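The three-tone stimuli described above replace each formant of a synthetic syllable with a single frequency-modulated sinusoid. A minimal sketch of how such a pattern can be generated is shown below; the sample rate, duration, and formant-track values are illustrative assumptions, not the stimulus parameters of the study, which tracked measured formant center frequencies.

```python
import math

SAMPLE_RATE = 16000  # Hz; an assumed rate for this sketch

def sine_wave_pattern(formant_tracks, dur=0.3):
    """Generate a three-tone sine wave pattern: one sinusoid per
    'formant', each gliding linearly from a start to an end
    frequency, summed with equal amplitudes.
    formant_tracks: list of (start_hz, end_hz) tuples."""
    n = int(SAMPLE_RATE * dur)
    phases = [0.0] * len(formant_tracks)
    samples = []
    for i in range(n):
        t = i / n  # normalized time, 0..1
        s = 0.0
        for k, (f0, f1) in enumerate(formant_tracks):
            f = f0 + (f1 - f0) * t  # linear frequency glide
            phases[k] += 2 * math.pi * f / SAMPLE_RATE
            s += math.sin(phases[k]) / len(formant_tracks)
        samples.append(s)
    return samples

# Illustrative (not measured) tracks for a /ba/-like pattern:
# rising F1 and F2 transitions, flat F3.
tones = sine_wave_pattern([(300, 700), (900, 1200), (2400, 2400)])
```

Because the tones track only the formant center frequencies, the result preserves the time-varying spectral pattern of the syllable while discarding the harmonic source structure that makes it sound like speech.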

7.
Previous research has shown that phonetic categories have a graded internal structure that is highly dependent on acoustic-phonetic contextual factors, such as speaking rate; these factors alter not only the location of phonetic category boundaries, but also the location of a category’s best exemplars. The purpose of the present investigation, which focused on the voiceless category as specified by voice onset time (VOT), was to determine whether a higher order linguistic contextual factor, lexical status, which is known to alter the location of the voiced-voiceless phonetic category boundary, also alters the location of the best exemplars of the voiceless category. The results indicated that lexical status has a more limited and qualitatively different effect on the category’s best exemplars than does the acoustic-phonetic factor of speaking rate. This dissociation is discussed in terms of a production-based account in which perceived best exemplars of a category track contextual variation in speech production.

8.
We examine the mechanisms that support interaction between lexical, phonological and phonetic processes during language production. Studies of the phonetics of speech errors have provided evidence that partially activated lexical and phonological representations influence phonetic processing. We examine how these interactive effects are modulated by lexical frequency. Previous research has demonstrated that during lexical access, the processing of high frequency words is facilitated; in contrast, during phonetic encoding, the properties of low frequency words are enhanced. These contrasting effects provide the opportunity to distinguish two theoretical perspectives on how interaction between processing levels can be increased. A theory in which cascading activation is used to increase interaction predicts that the facilitation of high frequency words will enhance their influence on the phonetic properties of speech errors. Alternatively, if interaction is increased by integrating levels of representation, the phonetics of speech errors will reflect the retrieval of enhanced phonetic properties for low frequency words. Utilizing a novel statistical analysis method, we show that in experimentally induced speech errors low lexical frequency targets and outcomes exhibit enhanced phonetic processing. We sketch an interactive model of lexical, phonological and phonetic processing that accounts for the conflicting effects of lexical frequency on lexical access and phonetic processing.

9.
A left-hemispheric cortico-cortical network involving areas of the temporoparietal junction (Tpj) and the posterior inferior frontal gyrus (pIFG) is thought to support sensorimotor integration of speech perception into articulatory motor activation, but how this network links with the lip area of the primary motor cortex (M1) during speech perception is unclear. Using paired-coil focal transcranial magnetic stimulation (TMS) in healthy subjects, we demonstrate that Tpj → M1 and pIFG → M1 effective connectivity increased when listening to speech compared to white noise. A virtual lesion induced by continuous theta-burst TMS (cTBS) of the pIFG abolished the task-dependent increase in pIFG → M1 but not Tpj → M1 effective connectivity during speech perception, whereas cTBS of Tpj abolished the task-dependent increase of both effective connectivities. We conclude that speech perception enhances effective connectivity between areas of the auditory dorsal stream and M1. Tpj is situated at a hierarchically high level, integrating speech perception into motor activation through the pIFG.

10.
The present study examined the contribution of lexically based sources of information to acoustic-phonetic processing in fluent and nonfluent aphasic subjects and age-matched normals. To this end, two phonetic identification experiments were conducted which required subjects to label syllable-initial bilabial stop consonants varying along a VOT continuum as either /b/ or /p/. Factors that were controlled included the lexical status (word/nonword) and neighborhood density values corresponding to the two possible syllable interpretations in each set of stimuli. Findings indicated that all subject groups were influenced by both lexical status and neighborhood density in making phonetic categorizations. Results are discussed with respect to theories of acoustic-phonetic perception and lexical access in normal and aphasic populations.
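Identification experiments of this kind typically summarize each listener's /b/-/p/ labeling curve by the category boundary: the VOT at which the two responses are equally likely. A minimal sketch of that boundary estimate follows, fitting a logistic function by gradient ascent; the identification proportions are hypothetical, not data from the study.

```python
import math

# Hypothetical identification data: VOT step (ms) -> proportion of /p/
# responses. Real boundaries are estimated per subject and stimulus set.
vot_ms = [0, 10, 20, 30, 40, 50, 60]
p_resp = [0.02, 0.05, 0.20, 0.55, 0.85, 0.96, 0.99]

def fit_boundary(xs, ys, lr=0.05, steps=20000):
    """Fit p = 1/(1 + exp(-(a + b*z))) by batch gradient ascent on the
    Bernoulli log-likelihood, with z a rescaled copy of x for numerical
    stability. Returns the VOT at which p = 0.5 (the category boundary)."""
    mean = sum(xs) / len(xs)
    zs = [(x - mean) / 10.0 for x in xs]  # rescale so steps stay stable
    a = b = 0.0
    for _ in range(steps):
        ga = gb = 0.0
        for z, y in zip(zs, ys):
            p = 1.0 / (1.0 + math.exp(-(a + b * z)))
            ga += y - p          # d logL / da
            gb += (y - p) * z    # d logL / db
        a += lr * ga
        b += lr * gb
    return mean + 10.0 * (-a / b)  # map back to the VOT (ms) scale

boundary = fit_boundary(vot_ms, p_resp)
```

Lexical-status and neighborhood-density effects would then show up as shifts of this boundary between stimulus sets, e.g. toward more /p/ responses when /p/ yields a word.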

11.
In a series of 5 experiments, we investigated whether the processing of phonologically assimilated utterances is influenced by language learning. Previous experiments had shown that phonological assimilations, such as /lean#bacon/ → [leam bacon], are compensated for in perception. In this article, we investigated whether compensation for assimilation can occur without experience with an assimilation rule using automatic event-related potentials. Our first experiment indicated that Dutch listeners compensate for a Hungarian assimilation rule. Two subsequent experiments, however, failed to show compensation for assimilation by both Dutch and Hungarian listeners. Two additional experiments showed that this was due to the acoustic properties of the assimilated utterance, confirming earlier reports that phonetic detail is important in compensation for assimilation. Our data indicate that compensation for assimilation can occur without experience with an assimilation rule, in line with phonetic-phonological theories that assume that speech production is influenced by speech-perception abilities.

12.
Inner speech is typically characterized as either the activation of abstract linguistic representations or a detailed articulatory simulation that lacks only the production of sound. We present a study of the speech errors that occur during the inner recitation of tongue-twister-like phrases. Two forms of inner speech were tested: inner speech without articulatory movements and articulated (mouthed) inner speech. Although mouthing one’s inner speech could reasonably be assumed to require more articulatory planning, prominent theories assume that such planning should not affect the experience of inner speech and, consequently, the errors that are “heard” during its production. The errors occurring in articulated inner speech exhibited the phonemic similarity effect and the lexical bias effect, two speech-error phenomena that, in overt speech, have been localized to an articulatory-feature-processing level and a lexical-phonological level, respectively. In contrast, errors in unarticulated inner speech did not exhibit the phonemic similarity effect, just the lexical bias effect. The results are interpreted as support for a flexible abstraction account of inner speech. This conclusion has ramifications for the embodiment of language and speech and for the theories of speech production.

13.
A series of three experiments examined children's sensitivity to probabilistic phonotactic structure as reflected in the relative frequencies with which speech sounds occur and co-occur in American English. Children, ages 2½ and 3½ years, participated in a nonword repetition task that examined their sensitivity to the frequency of individual phonetic segments and to the frequency of combinations of segments. After partialling out ease of articulation and lexical variables, both groups of children repeated higher phonotactic frequency nonwords more accurately than they did low phonotactic frequency nonwords, suggesting sensitivity to phoneme frequency. In addition, sensitivity to individual phonetic segments increased with age. Finally, older children, but not younger children, were sensitive to the frequency of larger (diphone) units. These results suggest not only that young children are sensitive to fine-grained acoustic-phonetic information in the developing lexicon but also that sensitivity to all aspects of the sound structure increases over development. Implications for the acoustic nature of both developing and mature lexical representations are discussed.
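The segment and diphone frequencies that such studies manipulate can be estimated by counting over a phonemically transcribed lexicon. The sketch below uses a toy lexicon of invented phoneme tuples for illustration; real norms come from a full transcribed dictionary, often weighted by word frequency.

```python
from collections import Counter

# Toy lexicon of phoneme tuples (invented for illustration).
lexicon = [("k", "ae", "t"), ("b", "ae", "t"), ("k", "ae", "p"), ("d", "aa", "g")]

seg_counts = Counter(s for w in lexicon for s in w)
di_counts = Counter(w[i:i + 2] for w in lexicon for i in range(len(w) - 1))
seg_total = sum(seg_counts.values())
di_total = sum(di_counts.values())

def phonotactic_probability(word):
    """Mean segment probability and mean diphone probability of a
    (non)word - a crude stand-in for the probabilistic phonotactic
    measures the repetition task manipulates."""
    seg_p = sum(seg_counts[s] / seg_total for s in word) / len(word)
    di_p = (sum(di_counts[word[i:i + 2]] / di_total
                for i in range(len(word) - 1)) / (len(word) - 1))
    return seg_p, di_p

high = phonotactic_probability(("k", "ae", "t"))  # frequent segments/diphones
low = phonotactic_probability(("d", "aa", "p"))   # rarer combinations
```

Separating the segment-level and diphone-level scores mirrors the study's finding that the two develop on different timetables: both age groups tracked segment frequency, but only the older children tracked diphone frequency.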

14.
The ability of Italian subjects to make phonological judgements was investigated in three experiments. The judgements comprised initial sound similarity and stress assignment on pairs of both written words and pictures. Stress assignment on both words and pictures as well as initial sound similarity on pictures required the activation of phonological lexical representations, but this was not necessarily the case for initial sound similarity judgements on word pairs. The first study assessed the effects of concurrent articulatory suppression on the judgements. Experiment 2 used a concomitant task (chewing), which shares with suppression the use of articulatory components but does not involve speech programming and production. The third experiment investigated the effects of unattended speech on the phonological judgements. The results of these three experiments showed that articulatory suppression had a significant disrupting effect on accuracy in all four conditions, while neither articulatory non-speech (chewing) nor unattended auditory speech had any effect on the subjects' performance. The results suggest that these phonological judgements involve the operation of an articulatory speech output component, which is not implemented peripherally and does not require the involvement of a non-articulatory input system.

15.
This study investigated the acoustic characteristics of voicing in English fricative consonants produced by anterior aphasics and the effects of phonetic context on these characteristics. Three patients produced voiced and voiceless fricative-vowel syllables in isolation, following a voiced velar stop, and following a voiceless velar stop. Acoustic analyses were conducted of the amplitude and patterning of glottal excitation, as well as fricative noise duration. Results showed that, although the patients are able to coordinate the articulatory gestures for voicing in fricative consonants, they demonstrated abnormal patterns of glottal excitation in the amplitude measures, owing to weaker amplitudes of glottal excitation in voiced fricatives. Context effects failed to emerge because of dysfluent speech. These results suggest that the locus of the speech production deficit of anterior aphasics is not at the higher stages of phoneme selection or planning but rather in articulatory implementation, specifically in laryngeal control.

16.
Learning new phonetic categories in a second language may be thought of in terms of learning to focus one's attention on those parts of the acoustic-phonetic structure of speech that are phonologically relevant in any given context. As yet, however, no study has demonstrated directly that training can shift listeners' attention between acoustic cues given feedback about the linguistic phonetic category alone. In this paper we discuss the results of a training study in which subjects learned to shift their attention from one acoustic cue to another using only category-level identification as feedback. Results demonstrate that training redirects listeners' attention to acoustic cues and that this shift of attention generalizes to novel (untrained) phonetic contexts.

17.
Mandarin Chinese is a tonal language with distinctive characteristics in spoken prosodic expression. Exploiting the features that distinguish Chinese from non-tonal languages, this project addresses questions that previous research has left unresolved: the role of Chinese lexical tone in semantic activation during early automatic processing, and the neural mechanisms of Chinese tone and intonation processing at different cognitive stages. These questions are a current research focus but remain the subject of considerable disagreement and intense debate. Using ERP methods combined with LORETA source localization across different experimental paradigms, the project examines: (1) the role of Chinese lexical tone in lexical-semantic activation during early automatic processing; (2) the patterns of brain activation for Chinese tone and intonation processing at the early stage; and (3) the patterns of brain activation for Chinese tone and intonation processing at the late stage. Addressing these questions should help resolve the current debates, extend the scope of spoken language processing theories built on studies of non-tonal languages, and provide new experimental evidence for refining theoretical models of speech processing.

18.
Three experiments demonstrated that the pattern of changes in articulatory rate in a precursor phrase can affect the perception of voicing in a syllable-initial prestress velar stop consonant. Fast and slow versions of a 10-word precursor phrase were recorded, and sections from each version were combined to produce several precursors with different patterns of change in articulatory rate. Listeners judged the identity of a target syllable, selected from a 7-member /gi/-/ki/ voice-onset-time (VOT) continuum, that followed each precursor phrase after a variable brief pause. The major results were: (a) articulatory-rate effects were not restricted to the target syllable's immediate context; (b) rate effects depended on the pattern of rate changes in the precursor and not the amount of fast or slow speech or the proximity of fast or slow speech to the target syllable; and (c) shortening of the pause (or closure) duration led to a shortening of VOT boundaries rather than a lengthening as previously found in this phonetic context. Results are explained in terms of the role of dynamic temporal expectancies in determining the response to temporal information in speech, and implications for theories of extrinsic vs. intrinsic timing are discussed.

19.
Models of speech perception attribute a different role to contextual information in the processing of assimilated speech. This study concerned perceptual processing of regressive voice assimilation in French. This phonological variation is asymmetric in that assimilation is partial for voiced stops and nearly complete for voiceless stops. Two auditory-visual cross-modal form priming experiments were used to examine perceptual compensation for assimilation in French words with voiceless versus voiced stop offsets. The results show that, for the former segments, assimilating context enhances underlying form recovery, whereas it does not for the latter. These results suggest that two sources of information, contextual information and bottom-up information from the assimilated forms themselves, are complementary and both come into play during the processing of fully or partially assimilated word forms.

20.
Speech imagery not only plays an important role in the brain's preprocessing mechanisms but is also a current focus of research in the brain-computer interface (BCI) field. Compared with normal speech production, speech imagery shows many similarities in its theoretical models, activated brain regions, and neural transmission pathways. However, the neural mechanisms of speech imagery in people with speech disorders, and of imagining meaningful words and sentences, differ from those of normal speech production. Given the complexity of the human speech system, research on the neural mechanisms of speech imagery still faces a series of challenges. Future research could further explore quality-assessment tools and neural decoding paradigms for speech imagery, brain control circuits, activation pathways, speech imagery mechanisms in people with speech disorders, and the neural signals associated with imagining words and sentences, providing a basis for effectively improving BCI recognition rates and facilitating communication for people with speech disorders.
