Similar Documents
20 similar documents were retrieved.
1.
Brain event-related potentials (ERPs) elicited during the recognition of familiar versus unfamiliar human voices and familiar versus unfamiliar monkey voices were studied in four three-year-old male rhesus monkeys (Macaca mulatta). The ERPs evoked by the vocalization "er" from familiar and unfamiliar monkeys differed significantly over the left temporal region and over both the left and right parietal regions: familiar monkey voices evoked a P200 of longer latency over the left temporal region, an N100 of larger amplitude over the left parietal region, and a P300 of shorter latency over both parietal regions. The ERPs evoked by the human syllable "pa" from familiar and unfamiliar speakers differed significantly in P200 latency over the left and right parietal regions, with the familiar "pa" evoking responses of shorter latency, whereas the ERPs evoked by the human syllable "ba" did not differ between familiar and unfamiliar speakers. Compared with the ERPs evoked by the familiar human "pa", familiar monkey voices evoked a P300 of significantly larger amplitude over the right temporal region, an N100 of larger amplitude over both parietal regions, and P200 and P300 components of markedly shorter latency over the right parietal region; the N100 over the midparietal region also showed shorter latency and larger amplitude. The ERPs evoked by the "pa" voices of unfamiliar humans and unfamiliar monkeys did not differ, and differences in the probability of occurrence of familiar versus unfamiliar monkey voices produced no obvious ERP differences.

2.
Recent anatomo-clinical correlation studies have extended to the superior temporal gyrus, the right hemisphere lesion sites associated with the left unilateral spatial neglect, in addition to the traditional posterior-inferior-parietal localization of the responsible lesion (supramarginal gyrus, at the temporo-parietal junction). The study aimed at teasing apart, by means of repetitive transcranial magnetic stimulation (rTMS), the contribution of the inferior parietal lobule (angular gyrus versus supramarginal gyrus) and of the superior temporal gyrus of the right hemisphere, in making judgments about the mid-point of a horizontal line, a widely used task for detecting and investigating spatial neglect. rTMS trains at 25 Hz frequency were delivered over the inferior parietal lobule (angular gyrus and supramarginal gyrus), the superior temporal gyrus and the anterior parietal lobe of the right hemisphere, in 10 neurologically unimpaired participants, performing a line bisection judgment task. rTMS of the inferior parietal lobule at the level of the supramarginal gyrus brought about a rightward error in the bisection judgment, ipsilateral to the side of the rTMS, with stimulation over the other sites being ineffective. The neural correlates of computing the mid-point of a horizontal segment include the right supramarginal gyrus in the inferior parietal lobule and do not extend to the angular gyrus and the superior temporal gyrus. These rTMS data in unimpaired subjects constrain the evidence from lesion studies in brain-damaged patients, emphasizing the major role of a subset of relevant regions.
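The dependent measure in this paradigm reduces to a signed deviation of the judged midpoint from the true midpoint. A minimal sketch (the function name and millimetre units are illustrative assumptions, not taken from the study):

```python
def bisection_error_mm(line_start_mm: float, line_end_mm: float,
                       judged_mid_mm: float) -> float:
    """Signed line-bisection error: positive values indicate a rightward
    deviation from the true midpoint (the direction of error reported
    after rTMS over the supramarginal gyrus)."""
    true_mid = (line_start_mm + line_end_mm) / 2
    return judged_mid_mm - true_mid

# A 200 mm line whose midpoint is judged at 104 mm: a 4 mm rightward error.
print(bisection_error_mm(0, 200, 104))  # → 4.0
```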

3.
Positron emission tomography was used to investigate whether retrieval of perceptual knowledge from long-term memory activates unique cortical regions associated with the modality and/or attribute type retrieved. Knowledge about the typical color, size, and sound of common objects and animals was probed, in response to written words naming the objects. Relative to a nonsemantic control task, all the attribute judgments activated similar left temporal and frontal regions. Visual (color, size) knowledge selectively activated the right posterior inferior temporal (PIT) cortex, whereas sound judgments elicited selective activation in the left posterior superior temporal gyrus and the adjacent parietal cortex. All of the attribute judgments activated a left PIT region, but color retrieval generated more activation in this area. Size judgments activated the right medial parietal cortex. These results indicate that the retrieval of perceptual semantic information activates not only a general semantic network, but also cortical areas specialized for the modality and attribute type of the knowledge retrieved.

4.
Candidate brain regions constituting a neural network for preattentive phonetic perception were identified with fMRI and multivariate multiple regression of imaging data. Stimuli contrasted along speech/nonspeech, acoustic, or phonetic complexity (three levels each) and natural/synthetic dimensions. Seven distributed brain regions' activity correlated with speech and speech complexity dimensions, including five left-sided foci [posterior superior temporal gyrus (STG), angular gyrus, ventral occipitotemporal cortex, inferior/posterior supramarginal gyrus, and middle frontal gyrus (MFG)] and two right-sided foci (posterior STG and anterior insula). Only the left MFG discriminated natural and synthetic speech. The data also supported a parallel rather than serial model of auditory speech and nonspeech perception.

5.
To identify neural regions that automatically respond to linguistically structured, but meaningless manual gestures, 14 deaf native users of American Sign Language (ASL) and 14 hearing non-signers passively viewed pseudosigns (possible but non-existent ASL signs) and non-iconic ASL signs, in addition to a fixation baseline. For the contrast between pseudosigns and baseline, greater activation was observed in left posterior superior temporal sulcus (STS), but not in left inferior frontal gyrus (BA 44/45), for deaf signers compared to hearing non-signers, based on VOI analyses. We hypothesize that left STS is more engaged for signers because this region becomes tuned to human body movements that conform to the phonological constraints of sign language. For deaf signers, the contrast between pseudosigns and known ASL signs revealed increased activation for pseudosigns in left posterior superior temporal gyrus (STG) and in left inferior frontal cortex, but no regions were found to be more engaged for known signs than for pseudosigns. This contrast revealed no significant differences in activation for hearing non-signers. We hypothesize that left STG is involved in recognizing linguistic phonetic units within a dynamic visual or auditory signal, such that less familiar structural combinations produce increased neural activation in this region for both pseudosigns and pseudowords.

6.
We used a novel automatic camera, SenseCam, to create a recognition memory test for real-life events. Adapting a 'Remember/Know' paradigm, we asked healthy undergraduates, who wore SenseCam for 2 days, in their everyday environments, to classify images as strongly or weakly remembered, strongly or weakly familiar or novel, while brain activation was recorded with functional MRI. Overlapping, widely distributed sets of brain regions were activated by recollected and familiar stimuli. Within the medial temporal lobes, 'Remember' responses specifically elicited greater activity in the right anterior and posterior parahippocampal gyrus than 'Know' responses. 'New' responses activated anterior parahippocampal regions. A parametric analysis, across correctly recognised items, revealed increasing activation in the right hippocampus and posterior parahippocampal gyrus (pPHG). This may reflect modulation of these regions by the degree of recollection or, alternatively, by increasing memory strength. Strong recollection elicited greater activity in the left posterior hippocampus/pPHG than weak recollection, indicating that this region is specifically modulated by the degree of recollection.

7.
The brain mechanisms that subserve music recognition remain unclear despite increasing interest in this process. Here we report the results of a magnetoencephalography experiment to determine the temporal dynamics and spatial distribution of brain regions activated during listening to a familiar and unfamiliar instrumental melody in control adults and adults with Down syndrome (DS). In the control group, listening to the familiar melody relative to the unfamiliar melody revealed early and significant activations in the left primary auditory cortex, followed by activity in the limbic and sensory-motor regions and, finally, activation in the motor-related areas. In the DS group, listening to the familiar melody relative to the unfamiliar melody revealed increased significant activations in only three regions. Activity began in the left primary auditory cortex and the superior temporal gyrus and was followed by enhanced activity in the right precentral gyrus. These data suggest that familiar music is associated with auditory–motor coupling but does not activate brain areas involved in emotional processing in DS. These findings reveal new insights on the neural basis of music perception in DS as well as the temporal course of neural activity in control adults.

8.
Functional magnetic resonance imaging (fMRI) was used to examine differences between literate and illiterate Chinese adults in the brain mechanisms of orthographic and phonological processing of Chinese characters. Experiment 1 used Chinese characters and figures to compare left-hemisphere differences in orthographic processing between illiterate and literate subjects. Experiment 2 used the spoken sounds of Chinese characters and pure tones to compare bilateral differences in phonological processing between the two groups. The results indicate that the brain mechanisms underlying orthographic and phonological processing of Chinese characters differ between illiterates and literates, with stronger brain activation in literates.

9.
Listeners can perceive a person's age from their voice with above chance accuracy. Studies have usually established this by asking listeners to directly estimate the age of unfamiliar voices. The recordings used mostly include cross-sectional samples of voices, including people of different ages to cover the age range of interest. Such cross-sectional samples likely include not only cues to age in the sound of the voice but also socio-phonetic cues, encoded in how a person speaks. How age perception accuracy is affected when minimizing socio-phonetic cues by sampling the same voice at different time points remains largely unknown. Similarly, with the voices in age perception studies being usually unfamiliar to listeners, it is unclear how familiarity with a voice affects age perception. We asked listeners who were either familiar or unfamiliar with a set of four voices to complete an age discrimination task: listeners heard two recordings of the same person's voice, recorded 15 years apart, and were asked to indicate in which recording the person was younger. Accuracy for both familiar and unfamiliar listeners was above chance. While familiarity advantages were apparent, accuracy was not particularly high: familiar and unfamiliar listeners were correct for 68.2% and 62.7% of trials, respectively (chance = 50%). Familiarity furthermore interacted with the voices included. Overall, our findings indicate that age perception from voices is not a trivial task at all times – even when listeners are familiar with a voice. We discuss our findings in the light of how reliable the voice may be as a signal for age.
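Whether accuracies like 68.2% and 62.7% beat a 50% chance level can be checked with an exact binomial test. A minimal sketch; the trial count of 100 per condition is a hypothetical assumption for illustration, not the study's actual design:

```python
from math import comb

def binom_p_greater(k: int, n: int, p: float = 0.5) -> float:
    """One-sided exact binomial p-value: P(X >= k) for n trials at chance level p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical counts: 100 trials each, 68 correct (familiar), 63 correct (unfamiliar).
p_familiar = binom_p_greater(68, 100)
p_unfamiliar = binom_p_greater(63, 100)
print(p_familiar, p_unfamiliar)
```

Both hypothetical counts come out well below the conventional 0.05 threshold, consistent with the abstract's "above chance" claim.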

10.
Oral reading is a complex skill involving the interaction of orthographic, phonological, and semantic processes. Functional imaging studies with nonimpaired adult readers have identified a widely distributed network of frontal, inferior parietal, posterior temporal, and occipital brain regions involved in the task. However, while functional imaging can identify cortical regions engaged in the process under examination, it cannot identify those brain regions essential for the task. The current study aimed to identify those neuroanatomical regions critical for successful oral reading by examining the relationship between word and nonword oral reading deficits and areas of tissue dysfunction in acute stroke. We evaluated 91 patients with left hemisphere ischemic stroke with a test of oral word and nonword reading, and magnetic resonance diffusion-weighted and perfusion-weighted imaging, within 24-48 h of stroke onset. A voxel-wise statistical map showed that impairments in word and nonword reading were associated with a distributed network of brain regions, including the inferior and middle frontal gyri, the middle temporal gyrus, the supramarginal and angular gyri, and the middle occipital gyrus. In addition, lesions associated with word deficits were found to be distributed more frontally, while nonword deficits were associated with lesions distributed more posteriorly.

11.
The present functional magnetic resonance imaging study examined the neural response to familiar and unfamiliar, sport and non-sport environmental sounds in expert and novice athletes. Results revealed differential neural responses dependent on sports expertise. Experts had greater neural activation than novices in focal sensorimotor areas such as the supplementary motor area, and pre- and postcentral gyri. Novices showed greater activation than experts in widespread areas involved in perception (i.e. supramarginal, middle occipital, and calcarine gyri; precuneus; inferior and superior parietal lobules), and motor planning and processing (i.e. inferior frontal, middle frontal, and middle temporal gyri). These between-group neural differences also appeared as an expertise effect within specific conditions. Experts showed greater activation than novices during the sport familiar condition in regions responsible for auditory and motor planning, including the inferior frontal gyrus and the parietal operculum. Novices only showed greater activation than experts in the supramarginal gyrus and pons during the non-sport unfamiliar condition, and in the middle frontal gyrus during the sport unfamiliar condition. These results are consistent with the view that expert athletes are attuned to only the most familiar, highly relevant sounds and tune out unfamiliar, irrelevant sounds. Furthermore, the finding that athletes show activation in areas known to be involved in action planning when passively listening to sounds suggests that auditory perception of action can lead to the re-instantiation of neural areas involved in producing these actions, especially if someone has expertise performing the actions.

12.
Yang J  Shu H  Bi Y  Liu Y  Wang X 《Brain and language》2011,119(3):167-174
Embodied semantic theories suppose that representation of word meaning and actual sensory-motor processing are implemented in overlapping systems. According to this view, association and dissociation of different word meaning should correspond to dissociation and association of the described sensory-motor processing. Previous studies demonstrate that although tool-use actions and hand actions have overlapping neural substrates, tool-use actions show greater activations in frontal–parietal–temporal regions that are responsible for motor control and tool knowledge processing. In the present study, we examined the association and the dissociation of the semantic representation of tool-use verbs and hand action verbs. Chinese verbs describing tool-use or hand actions without tools were included, and a passive reading task was employed. All verb conditions showed common activations in areas of left middle frontal gyrus, left inferior frontal gyrus (BA 44/45) and left inferior parietal lobule relative to rest, and all conditions showed significant effects in premotor areas within the mask of hand motion effects. Contrasts between tool-use verbs and hand verbs demonstrated that tool verbs elicited stronger activity in left superior parietal lobule, left middle frontal gyrus and left posterior middle temporal gyrus. Additionally, psychophysiological interaction analyses demonstrated that tool verbs indicated greater connectivity among these regions. These results suggest that the brain regions involved in tool-use action processing also play more important roles in tool-use verb processing and that similar systems may be responsible for word meaning representation and actual sensory-motor processing.

13.
Text cues facilitate the perception of spoken sentences to which they are semantically related (Zekveld, Rudner, et al., 2011). In this study, semantically related and unrelated cues preceding sentences evoked more activation in middle temporal gyrus (MTG) and inferior frontal gyrus (IFG) than nonword cues, regardless of acoustic quality (speech in noise or speech in quiet). Larger verbal working memory (WM) capacity (reading span) was associated with greater intelligibility benefit obtained from related cues, with less speech-related activation in the left superior temporal gyrus and left anterior IFG, and with more activation in right medial frontal cortex for related versus unrelated cues. Better ability to comprehend masked text was associated with greater ability to disregard unrelated cues, and with more activation in left angular gyrus (AG). We conclude that individual differences in cognitive abilities are related to activation in a speech-sensitive network including left MTG, IFG and AG during cued speech perception.

14.
Using Dynamic Causal Modeling (DCM) and functional magnetic resonance imaging (fMRI), we examined effective connectivity between three left hemisphere brain regions (inferior frontal gyrus, inferior parietal lobule, fusiform gyrus) and bilateral medial frontal gyrus in 12 children with reading difficulties (M age=12.4, range: 8.11-14.10) and 12 control children (M age=12.3, range: 8.9-14.11) during rhyming judgments to visually presented words. More difficult conflicting trials either had similar orthography but different phonology (e.g. pint-mint) or similar phonology but different orthography (e.g. jazz-has). Easier non-conflicting trials had similar orthography and phonology (e.g. dime-lime) or different orthography and phonology (e.g. staff-gain). The modulatory effect from left fusiform gyrus to left inferior parietal lobule was stronger in controls than in children with reading difficulties only for conflicting trials. Modulatory effects from left fusiform gyrus and left inferior parietal lobule to left inferior frontal gyrus were stronger for conflicting trials than for non-conflicting trials only in control children but not in children with reading difficulties. Modulatory effects from left inferior frontal gyrus to inferior parietal lobule, from medial frontal gyrus to left inferior parietal lobule, and from left inferior parietal lobule to medial frontal gyrus were positively correlated with reading skill only in control children. These findings suggest that children with reading difficulties have deficits in integrating orthography and phonology utilizing left inferior parietal lobule, and in engaging phonological rehearsal/segmentation utilizing left inferior frontal gyrus possibly through the indirect pathway connecting posterior to anterior language processing regions, especially when the orthographic and phonological information is conflicting.

15.
The effects of perceptual learning of talker identity on the recognition of spoken words and sentences were investigated in three experiments. In each experiment, listeners were trained to learn a set of 10 talkers' voices and were then given an intelligibility test to assess the influence of learning the voices on the processing of the linguistic content of speech. In the first experiment, listeners learned voices from isolated words and were then tested with novel isolated words mixed in noise. The results showed that listeners who were given words produced by familiar talkers at test showed better identification performance than did listeners who were given words produced by unfamiliar talkers. In the second experiment, listeners learned novel voices from sentence-length utterances and were then presented with isolated words. The results showed that learning a talker's voice from sentences did not generalize well to identification of novel isolated words. In the third experiment, listeners learned voices from sentence-length utterances and were then given sentence-length utterances produced by familiar and unfamiliar talkers at test. We found that perceptual learning of novel voices from sentence-length utterances improved speech intelligibility for words in sentences. Generalization and transfer from voice learning to linguistic processing was found to be sensitive to the talker-specific information available during learning and test. These findings demonstrate that increased sensitivity to talker-specific information affects the perception of the linguistic properties of speech in isolated words and sentences.

16.
From only a single spoken word, listeners can form a wealth of first impressions of a person's character traits and personality based on their voice. However, due to the substantial within-person variability in voices, these trait judgements are likely to be highly stimulus-dependent for unfamiliar voices: The same person may sound very trustworthy in one recording but less trustworthy in another. How trait judgements differ when listeners are familiar with a voice is unclear: Are listeners who are familiar with the voices as susceptible to the effects of within-person variability? Does the semantic knowledge listeners have about a familiar person influence their judgements? In the current study, we tested the effect of familiarity on listeners' trait judgements from variable voices across 3 experiments. Using a between-subjects design, we contrasted trait judgements by listeners who were familiar with a set of voices – either through laboratory-based training or through watching a TV show – with listeners who were unfamiliar with the voices. We predicted that familiarity with the voices would reduce variability in trait judgements for variable voice recordings from the same identity (cf. Mileva, Kramer, & Burton, 2019, Perception, 48, 471, for faces). However, across the 3 studies and two types of measures to assess variability, we found no compelling evidence to suggest that trait impressions were systematically affected by familiarity.

17.
Our voices sound different depending on the context (laughing vs. talking to a child vs. giving a speech), making within-person variability an inherent feature of human voices. When perceiving speaker identities, listeners therefore need to not only 'tell people apart' (perceiving exemplars from two different speakers as separate identities) but also 'tell people together' (perceiving different exemplars from the same speaker as a single identity). In the current study, we investigated how such natural within-person variability affects voice identity perception. Using voices from a popular TV show, listeners, who were either familiar or unfamiliar with this show, sorted naturally varying voice clips from two speakers into clusters to represent perceived identities. Across three independent participant samples, unfamiliar listeners perceived more identities than familiar listeners and frequently mistook exemplars from the same speaker to be different identities. These findings point towards a selective failure in 'telling people together'. Our study highlights within-person variability as a key feature of voices that has striking effects on (unfamiliar) voice identity perception. Our findings not only open up a new line of enquiry in the field of voice perception but also call for a re-evaluation of theoretical models to account for natural variability during identity perception.
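A sorting task of this kind can be scored by counting the clusters a listener forms (perceived identities) and the same-speaker clip pairs placed in different clusters (failures of 'telling people together'). A minimal sketch under assumed data structures; the clip IDs and counts are illustrative, not the study's materials:

```python
from itertools import combinations

def sorting_metrics(clusters, speaker_of):
    """clusters: list of disjoint sets of clip IDs (one set per perceived identity).
    speaker_of: dict mapping clip ID -> true speaker.
    Returns (number of perceived identities,
             number of same-speaker clip pairs split across clusters)."""
    cluster_of = {clip: i for i, cl in enumerate(clusters) for clip in cl}
    clips = sorted(cluster_of)
    split_pairs = sum(
        1 for a, b in combinations(clips, 2)
        if speaker_of[a] == speaker_of[b] and cluster_of[a] != cluster_of[b]
    )
    return len(clusters), split_pairs

# Hypothetical data: two speakers with three clips each; an unfamiliar
# listener splits speaker A's clips into two perceived identities.
speaker_of = {"a1": "A", "a2": "A", "a3": "A", "b1": "B", "b2": "B", "b3": "B"}
clusters = [{"a1", "a2"}, {"a3"}, {"b1", "b2", "b3"}]
print(sorting_metrics(clusters, speaker_of))  # → (3, 2)
```

A perfect sorting of these clips would yield two clusters and zero split pairs, matching the true number of speakers.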

18.
This study aimed to investigate the mechanisms underlying joke comprehension using event-related potentials (ERPs). Fourteen healthy college students were presented with the context of a story without its joke or nonjoke ending, and then, when the story ending was presented, they were asked to make a funny/unfunny judgment about these endings. The behavioral results showed that there was no significant difference between funny and unfunny items, which meant that subjects could understand funny items as easily as unfunny ones. However, the ERP results showed that funny items initially elicited a more negative ERP deflection (N350–450) over frontocentral scalp regions. Dipole analysis localized the generators in the left temporal gyrus and the left medial frontal gyrus; it is suggested that these areas might be involved in detecting the incongruent element in joke comprehension. Between 600 and 800 ms, funny items subsequently elicited a more negative ERP deflection (N600–800) over frontocentral scalp regions and a more positive ERP deflection (P600–800) over posterior scalp regions. Dipole analysis localized the generator in the anterior cingulate cortex (ACC), an area involved in the breaking of mental set/expectation and the forming of novel associations. Finally, funny items elicited a more positive ERP deflection (P1250–1400) over anterior and posterior scalp regions. Dipole analysis localized the generators in the middle frontal gyrus and the fusiform gyrus, areas that might be related to the affective appreciation stage in joke processing. Unlike Coulson and Kutas (2001), the present study might support the hypothesis of a three-stage model of humor processing (humor detection, resolution of incongruity and humor appreciation).
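Deflections such as the N350–450 are conventionally quantified as the mean amplitude of the ERP waveform within the named time window. A minimal sketch; the 1000 Hz sampling rate and the toy waveform are illustrative assumptions, not the study's recording parameters:

```python
def mean_amplitude(erp, t_start_ms, t_end_ms, srate_hz=1000, epoch_start_ms=0):
    """Mean of an ERP waveform (a sequence of amplitude samples, e.g. in µV)
    within the half-open time window [t_start_ms, t_end_ms)."""
    i0 = int((t_start_ms - epoch_start_ms) * srate_hz / 1000)
    i1 = int((t_end_ms - epoch_start_ms) * srate_hz / 1000)
    window = erp[i0:i1]
    return sum(window) / len(window)

# Toy waveform sampled at 1000 Hz: flat except for a negative-going
# deflection between 350 and 450 ms, mimicking an N350-450 component.
erp = [0.0] * 350 + [-4.0] * 100 + [0.0] * 550
print(mean_amplitude(erp, 350, 450))  # → -4.0
```

Comparing this window mean between funny and unfunny conditions, trial-averaged per participant, is the standard way such "more negative deflection" claims are tested statistically.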

19.
20.
This review describes the functional anatomy of word comprehension and production. Data from functional neuroimaging studies of normal subjects are used to determine the distributed set of brain regions that are engaged during particular language tasks and data from studies of patients with neurological damage are used to determine which of these regions are necessary for task performance. This combination of techniques indicates that the left inferior temporal and left posterior inferior parietal cortices are required for accessing semantic knowledge; the left posterior basal temporal lobe and the left frontal operculum are required for translating semantics into phonological output and the left anterior inferior parietal cortex is required for translating orthography to phonology. Further studies are required to establish the specific functions of the different regions and how these functions interact to provide our sophisticated language system.

