Similar Articles
20 similar articles retrieved.
1.
Profiles of right hemisphere language and speech following brain bisection
A variety of language tasks were administered to two patients who had undergone staged callosal section in an effort to control otherwise intractable epilepsy. Right hemisphere lexical capacity varied and preliminary results suggest that the case displaying greater semantic power also possessed some syntactic competence. This same case (V.P.) was also capable of expressive language from the right hemisphere. This rare capacity allowed for fresh observations on the dynamic interactions of conscious control that occur in this kind of patient.

2.
Twenty-two right hemisphere brain-damaged and 22 non-brain-damaged patients were given a multiple-choice recognition task which contained true statements, statements which were inferentially true but not actually heard before, and false statements. It was hypothesized that if right hemisphere brain damage disturbs the ability to comprehend inferences, these subjects, unlike their normal counterparts, would not falsely recognize true inferences as heard before. This hypothesis was not confirmed. However, the right hemisphere group was poorer than controls at rejecting false statements. This behavior was speculated to reflect a retrieval difficulty, exacerbated when the information contained spatial or semantically similar material.

3.
The present study investigated the abilities of left-hemisphere-damaged (LHD) non-fluent aphasic, right-hemisphere-damaged (RHD), and normal control individuals to access, in sentential biasing contexts, the multiple meanings of three types of ambiguous words, namely homonyms (e.g., "punch"), metonymies (e.g., "rabbit"), and metaphors (e.g., "star"). Furthermore, the predictions of the "suppression deficit" and "coarse semantic coding" hypotheses, which have been proposed to account for RH language function/dysfunction, were tested. Using an auditory semantic priming paradigm, ambiguous words were incorporated in dominant- or subordinate-biasing sentence-primes followed after a short (100 ms) or long (1,000 ms) interstimulus interval (ISI) by dominant-meaning-related, subordinate-meaning-related or unrelated target words. For all three types of ambiguous words, both the effects of context and ISI were obvious in the performance of normal control subjects, who showed multiple meaning activation at the short ISI, but eventually, at the long ISI, contextually appropriate meaning selection. Largely similar performance was exhibited by the LHD non-fluent aphasic patients as well. In contrast, RHD patients showed limited effects of context, and no effects of the time-course of processing. In addition, although homonymous and metonymous words showed similar patterns of activation (i.e., both meanings were activated at both ISIs), RHD patients had difficulties activating the subordinate meanings of metaphors, suggesting a selective problem with figurative meanings. Although the present findings do not provide strong support for either the "coarse semantic coding" or the "suppression deficit" hypotheses, they are viewed as being more consistent with the latter, according to which RH damage leads to deficits suppressing alternative meanings of ambiguous words that become incompatible with the context.
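To make the dependent measure of this kind of priming design concrete, the minimal sketch below (in Python, with invented reaction times) computes facilitation the standard way: mean RT to unrelated targets minus mean RT to related targets, per combination of context bias, ISI, and target type. The condition and values are assumptions for illustration, not data from the study.

```python
# Minimal sketch of a priming-effect calculation for an auditory semantic
# priming design. Reaction times (ms) below are invented for illustration.
from statistics import mean

def priming_effect(related_rts, unrelated_rts):
    """Facilitation in ms; positive values suggest the probed meaning was active."""
    return mean(unrelated_rts) - mean(related_rts)

# Hypothetical RTs for subordinate-meaning-related vs. unrelated targets
# following a subordinate-biasing sentence at the short (100 ms) ISI.
related = [612, 598, 630, 605]
unrelated = [655, 660, 642, 648]
print(priming_effect(related, unrelated))  # > 0 -> subordinate meaning activated
```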

4.
The ability of anterior aphasics and patients with right-hemisphere damage to comprehend both the literal and nonliteral readings of indirect speech acts was examined. Subjects viewed videotaped episodes in which one actor asked another “Can you X?” and the second actor responded with either an action or a simple “Yes.” Subjects judged whether the response was appropriate given its context. Anterior aphasics could comprehend the nonliteral but not the literal reading, supporting models that posit that people have direct access to nonliteral but conventional readings. Patients with right-hemisphere damage could appreciate the direct reading, but failed to distinguish between appropriate and inappropriate action-responses. This finding suggests that it may be possible to dissociate the pragmatic and syntactic aspects of comprehension of indirect speech acts.

5.
Cognitive Processing - Several speech models have been proposed to predict the abilities of non-native listeners or learners in perceiving and producing speech sounds. The present...

6.
Language scientists have broadly addressed the problem of explaining how language users recognize the kind of speech act performed by a speaker uttering a sentence in a particular context. They have done so by investigating the role played by illocutionary force indicating devices (IFIDs), i.e., all linguistic elements that indicate the illocutionary force of an utterance. The present work takes a first step toward an experimental investigation of non-verbal IFIDs by examining the role played by facial expressions and, in particular, by upper-face action units (AUs) in the comprehension of three basic types of illocutionary force: assertions, questions, and orders. The results from a pilot experiment on production and two comprehension experiments showed that (1) certain upper-face AUs seem to constitute non-verbal signals that contribute to the understanding of the illocutionary force of questions and orders; (2) assertions are not expected to be marked by any upper-face AU; and (3) some upper-face AUs can be associated, with different degrees of compatibility, with both questions and orders.
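As a toy illustration of the kind of cue-to-force mapping the abstract points to, the sketch below scores upper-face action units (FACS AU1/AU2 = inner/outer brow raiser, AU4 = brow lowerer) and maps their presence to an illocutionary-force guess. The specific AU-to-force rules are assumptions for illustration only; the study reports graded degrees of compatibility, not a deterministic mapping.

```python
# Hypothetical rule-based mapping from detected upper-face AUs to a force guess.
# The AU-to-force assignments below are illustrative assumptions, not findings.
def guess_force(aus: set[str]) -> str:
    """Guess illocutionary force from a set of detected upper-face AU codes."""
    if {"AU1", "AU2"} & aus:   # raised brows treated as a question cue (assumed)
        return "question"
    if "AU4" in aus:           # lowered brows treated as an order cue (assumed)
        return "order"
    return "assertion"         # assertions expected to be unmarked in the upper face

print(guess_force({"AU1", "AU2"}))  # question
print(guess_force({"AU4"}))         # order
print(guess_force(set()))           # assertion
```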

7.
This paper explores the extent of timing deficits in vowels produced by brain-damaged speakers of a language with a phonological contrast in vowel length. Short and long vowels in Thai were produced in isolated monosyllabic words by 20 normal adults, 14 right hemisphere patients, and 17 left hemisphere aphasics. Vowel durations were measured spectrographically. Although the phonological contrast was relatively preserved, as indicated by average duration, a subtle timing deficit in vowels produced by nonfluent aphasics was indicated by a compressed duration continuum and increased variability in vowel production.
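The two deficit measures named here, a compressed duration continuum and increased variability, can be made concrete with a small analysis sketch. The durations below are invented for illustration and the function name is hypothetical; this is not the study's analysis pipeline.

```python
# Minimal sketch: quantify a short/long vowel contrast from durations in ms.
# "Continuum size" is the long-short difference in mean duration; variability
# is expressed as the coefficient of variation within each vowel category.
from statistics import mean, stdev

def contrast_summary(short_ms, long_ms):
    """Return (continuum size in ms, CV of short vowels, CV of long vowels)."""
    continuum = mean(long_ms) - mean(short_ms)   # a compressed continuum shrinks this
    cv_short = stdev(short_ms) / mean(short_ms)  # increased variability raises these
    cv_long = stdev(long_ms) / mean(long_ms)
    return continuum, cv_short, cv_long

# Hypothetical durations for two speaker groups (values invented for illustration).
normal = contrast_summary(short_ms=[110, 120, 115], long_ms=[260, 250, 270])
nonfluent = contrast_summary(short_ms=[140, 180, 160], long_ms=[230, 270, 250])
print(normal, nonfluent)
```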

8.
9.
Models of speech perception have stressed the importance of investigating recognition of words in fluent speech. The effects of word length and the initial phonemes of words on the speech perception of foreign language learners were investigated. English-speaking subjects were asked to listen for target words in repeated presentations of a prose passage read in French by a native speaker. The four target words were either one or four syllables in length and began with either an initial stop or fricative consonant. Each of the four words was substituted 60 times in identical sentence contexts in place of nouns deleted from the original story. The results indicated that four-syllable words were more easily detected than one-syllable words. Contrary to expectation, stop-initial words were not more accurately detected than fricative-initial words. Based on these findings additional considerations that seem needed in order to apply current models of word recognition to naive listeners are discussed.

10.
The present study examined the differential contribution of cortical and subcortical brain structures in emotional processing by comparing patients with focal cortical lesions (n = 32) to those with primarily subcortical dysregulation of the basal ganglia (Parkinson's disease n = 14). A standardized measure of emotional perception (Tübingen Affect Battery) was used. Only patients in the more advanced stages of Parkinson's disease and patients with focal damage to the (right) frontal lobe differed significantly from controls in both facial expression and affective prosody recognition. The findings imply involvement of the fronto-striatal circuitry in emotional processing.

11.
Previous cross-language research has indicated that some speech contrasts present greater perceptual difficulty for adult non-native listeners than others do. It has been hypothesized that phonemic, phonetic, and acoustic factors contribute to this variability. Two experiments were conducted to evaluate systematically the role of phonemic status and phonetic familiarity in the perception of non-native speech contrasts and to test predictions derived from a model proposed by Best, McRoberts, and Sithole (1988). Experiment 1 showed that perception of an unfamiliar phonetic contrast was not less difficult for subjects who had experience with an analogous phonemic distinction in their native language than for subjects without such analogous experience. These results suggest that substantive phonetic experience influences the perception of non-native contrasts, and thus should contribute to a conceptualization of native language-processing skills. In Experiment 2, English listeners’ perception of two related nonphonemic place contrasts was not consistently different as had been expected on the basis of phonetic familiarity. A clear order effect in the perceptual data suggests that interactions between different perceptual assimilation patterns or acoustic properties of the two contrasts, or interactions involving both of these factors, underlie the perception of the two contrasts in this experiment. It was concluded that both phonetic familiarity and acoustic factors are potentially important to the explanation of variability in perception of nonphonemic contrasts. The explanation of how linguistic experience shapes speech perception will require characterizing the relative contribution of these factors, as well as other factors, including individual differences and variables that influence a listener’s orientation to speech stimuli.

12.
13.

14.
The signing brain: the neurobiology of sign language
Most of our knowledge about the neurobiological bases of language comes from studies of spoken languages. By studying signed languages, we can determine whether what we have learnt so far is characteristic of language per se or whether it is specific to languages that are spoken and heard. Overwhelmingly, lesion and neuroimaging studies indicate that the neural systems supporting signed and spoken language are very similar: both involve a predominantly left-lateralised perisylvian network. Recent studies have also highlighted processing differences between languages in these different modalities. These studies provide rich insights into language and communication processes in deaf and hearing people.

15.
16.
Speech intelligibility during performance of a second task (sorting small plates), and the frequency of sorting as a function of the phases of speech processing (input, processing, output), were investigated. A fixed speech level (65 dB) was combined with five noise levels (55, 60, 65, 70, and 75 dB). The speech material and the sorting task varied in difficulty (words, sentences, short texts; simple and complex sorting). The subjective quality of both tasks was assessed by ratings on three questions. Main results: speech intelligibility and sorting frequency varied with noise level; sorting frequency varied with the phase of speech processing and the speech material; and the subjective ratings corresponded with performance on both tasks.

17.
Transcoding Arabic numbers from and into verbal number words is one of the most basic number processing tasks commonly used to index the verbal representation of numbers. The inversion property, which is an important feature of some number word systems (e.g., German einundzwanzig [one and twenty]), might represent a major difficulty in transcoding and a challenge to current transcoding models. The mastery of inversion, and of transcoding in general, might be related to nonnumerical factors such as working memory resources given that different elements and their sequence need to be memorized and manipulated. In this study, transcoding skills and different working memory components in Austrian (German-speaking) 7-year-olds were assessed. We observed that inversion poses a major problem in transcoding for German-speaking children. In addition, different components of working memory skills were differentially correlated with particular transcoding error types. We discuss how current transcoding models could account for these results and how they might need to be adapted to accommodate inversion properties and their relation to different working memory components.
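The inversion property itself is easy to pin down with a short sketch: in German, the unit digit of a two-digit number is spoken before the tens digit (21 -> einundzwanzig, "one-and-twenty"), so transcoding has to reorder digits relative to the spoken sequence. The code below is a minimal illustration restricted to 21-99 with a nonzero unit digit, not a model of children's performance or of the study's task.

```python
# Minimal sketch of Arabic <-> verbal transcoding for German two-digit numbers,
# illustrating the inversion property. Covers 21-99 (nonzero unit digit) only.
UNITS = {1: "ein", 2: "zwei", 3: "drei", 4: "vier", 5: "fünf",
         6: "sechs", 7: "sieben", 8: "acht", 9: "neun"}
TENS = {2: "zwanzig", 3: "dreißig", 4: "vierzig", 5: "fünfzig",
        6: "sechzig", 7: "siebzig", 8: "achtzig", 9: "neunzig"}

def to_german(n: int) -> str:
    """Transcode a two-digit Arabic number into a German number word."""
    tens, units = divmod(n, 10)
    if not (2 <= tens <= 9 and 1 <= units <= 9):
        raise ValueError("sketch handles 21-99 with a nonzero unit digit only")
    # Inversion: the unit digit is produced BEFORE the tens digit.
    return f"{UNITS[units]}und{TENS[tens]}"

def from_german(word: str) -> int:
    """Transcode a German number word with inversion back into digits."""
    unit_part, tens_part = word.split("und", 1)
    units = next(d for d, w in UNITS.items() if w == unit_part)
    tens = next(d for d, w in TENS.items() if w == tens_part)
    # Ignoring inversion and writing digits in spoken order (e.g., 12 for 21)
    # is the classic inversion error discussed in the abstract.
    return 10 * tens + units

print(to_german(21))                  # einundzwanzig
print(from_german("einundzwanzig"))   # 21
```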

18.
19.
One of the basic goals of cognitive psychology is the analysis of the covert processes that occur between stimulus and response. In the past 20-30 years, the tools available to cognitive psychologists have been augmented by a number of imaging techniques for studying the 'brain in action' in a non-invasive manner. These techniques have their strength in either temporal or spatial information, but not both. We review here recent advances of a new approach, the event-related optical signal (EROS). This method allows measurements of the time course of neural activity in specific cortical structures, thus combining good spatial and temporal specificity. As an example, we show how EROS can be used to distinguish between serial and parallel models of information processing.

20.
The pronunciation of words is highly variable. This variation provides crucial information about the cognitive architecture of the language production system. This review summarizes key empirical findings about variation phenomena, integrating corpus, acoustic, articulatory, and chronometric data from phonetic and psycholinguistic studies. It examines how these data constrain our current understanding of word production processes and highlights major challenges and open issues that should be addressed in future research.
