Similar Documents
1.
Functional magnetic resonance imaging (fMRI) was used to examine differences between children (9–12 years) and adults (21–31 years) in the distribution of brain activation during word processing. Orthographic, phonologic, semantic and syntactic tasks were used in both the auditory and visual modalities. Our two principal results were consistent with the hypothesis that development is characterized by increasing specialization. Our first analysis compared activation in children versus adults separately for each modality. Adults showed more activation than children in the unimodal visual areas of middle temporal gyrus and fusiform gyrus for processing written word forms and in the unimodal auditory areas of superior temporal gyrus for processing spoken word forms. Children showed more activation than adults for written word forms in posterior heteromodal regions (Wernicke's area), presumably for the integration of orthographic and phonologic word forms. Our second analysis compared activation in the visual versus auditory modality separately for children and adults. Children showed primarily overlap of activation in brain regions for the visual and auditory tasks. Adults showed selective activation in the unimodal auditory areas of superior temporal gyrus when processing spoken word forms and selective activation in the unimodal visual areas of middle temporal gyrus and fusiform gyrus when processing written word forms.

2.
In this paper we examine the evidence for human brain areas dedicated to visual or auditory word form processing by comparing cortical activation for auditory word repetition, reading, picture naming, and environmental sound naming. Both reading and auditory word repetition activated left lateralised regions in the frontal operculum (Broca's area), posterior superior temporal gyrus (Wernicke's area), posterior inferior temporal cortex, and a region in the mid superior temporal sulcus relative to baseline conditions that controlled for sensory input and motor output processing. In addition, auditory word repetition increased activation in a lateral region of the left mid superior temporal gyrus, but critically, this area is not specific to auditory word processing: it is also activated in response to environmental sounds. There were no reading-specific activations, even in the areas previously claimed as visual word form areas: activations were either common to reading and auditory word repetition or common to reading and picture naming. We conclude that there is no current evidence for cortical sites dedicated to visual or auditory word form processing.

3.
Effects of presentation modality and response format were investigated using visual and auditory versions of the word stem completion task. Study presentation conditions (visual, auditory, non-studied) were manipulated within participants, while test conditions (visual/written, visual/spoken, auditory/written, auditory/spoken, recall-only) were manipulated between participants. Results showed evidence for same modality and cross modality priming on all four word stem completion tasks. Words from the visual study list led to comparable levels of priming across all test conditions. In contrast, words from the auditory study list led to relatively low levels of priming in the visual/written test condition and high levels of priming in the auditory/spoken test condition. Response format was found to influence priming performance following auditory study in particular. The findings confirm and extend previous research and suggest that, for implicit memory studies that require auditory presentation, it may be especially beneficial to use spoken rather than written responses.

4.
Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5-1.0s) or long (4.0-8.0s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results showed an age-related improvement in the ability to discriminate time regardless of the sensory modality and duration. However, this improvement was seen to occur more quickly for auditory signals than for visual signals and for short durations rather than for long durations. The younger children exhibited the poorest ability to discriminate time for long durations presented in the visual modality. Statistical analyses of the neuropsychological scores revealed that an increase in working memory and attentional capacities in the visuospatial modality was the best predictor of age-related changes in temporal bisection performance for both visual and auditory stimuli. In addition, the poorer time sensitivity for visual stimuli than for auditory stimuli, especially in the younger children, was explained by the fact that the temporal processing of visual stimuli requires more executive attention than that of auditory stimuli.

5.
Temporal processing in French children with dyslexia was evaluated in three tasks: a word identification task requiring implicit temporal processing, and two explicit temporal bisection tasks, one in the auditory and one in the visual modality. Normally developing children matched on chronological age and reading level served as a control group. Children with dyslexia exhibited robust deficits in temporal tasks whether they were explicit or implicit and whether they involved the auditory or the visual modality. First, they presented larger perceptual variability when performing temporal tasks, whereas they showed no such difficulties when performing the same task on a non-temporal dimension (intensity). This dissociation suggests that their difficulties were specific to temporal processing and could not be attributed to lapses of attention, reduced alertness, faulty anchoring, or overall noisy processing. In the framework of cognitive models of time perception, these data point to a dysfunction of the 'internal clock' of dyslexic children. These results are broadly compatible with the recent temporal sampling theory of dyslexia.

6.
Using fMRI we investigated the neural basis of audio–visual processing of speech and non-speech stimuli using physically similar auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses). Relative to uni-modal stimuli, the different multi-modal stimuli showed increased activation in largely non-overlapping areas. Ellipse-Speech, which most resembles naturalistic audio–visual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. Circle-Tone, an arbitrary audio–visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. Circle-Speech showed activation in lateral occipital cortex, and Ellipse-Tone did not show increased activation relative to uni-modal stimuli. Further analysis revealed that middle temporal regions, although identified as multi-modal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multi-modal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which multi-modal speech or non-speech percepts are evoked.

7.
Although children with language impairments, including those associated with reading, usually demonstrate deficits in phonological processing, there is minimal agreement as to the source of those deficits. This study examined two problems hypothesized to be possible sources: either poor auditory sensitivity to speech-relevant acoustic properties, mainly formant transitions, or enhanced masking of those properties. Adults and 8-year-olds with and without phonological processing deficits (PPD) participated. Children with PPD demonstrated weaker abilities than children with typical language development (TLD) in reading, sentence recall, and phonological awareness. Dependent measures were word recognition, discrimination of spectral glides, and phonetic judgments based on spectral and temporal cues. All tasks were conducted in quiet and in noise. Children with PPD showed neither poorer auditory sensitivity nor greater masking than adults and children with TLD, but they did demonstrate an unanticipated deficit in category formation for nonspeech sounds. These results suggest that these children may have an underlying deficit in perceptually organizing sensory information to form coherent categories.

8.
The dynamic neural network for visual word processing and its formation
Revealing the neural network mechanisms of brain processing has become the latest direction in cognitive neuroscience research. Taking the neural function of the visual word form area (VWFA) as its starting point, this project investigates the dynamic mechanisms of the visual word processing network and its formation. Study 1 examines the dynamic activation of the VWFA under stimulus-driven and task-modulated conditions, and the dynamic mechanisms of the network it forms with phonological and semantic brain regions. Study 2 uses cross-cultural comparisons and studies of children's reading development to clarify how language experience shapes the visual word processing network. Study 3 compares functional networks, resting-state networks, and white-matter fiber-tract connectivity to explore the dynamic connectivity of the visual word processing network and its formation. The results will help to build a neurophysiological model of visual word processing, lay a theoretical foundation for brain-science-based reading instruction and remediation of reading disorders, and offer a new approach for cognitive neuroscience research.

9.
By employing visual lexical decision and functional MRI, we studied the neural correlates of morphological decomposition in a highly inflected language (Finnish) where most inflected noun forms elicit a consistent processing cost during word recognition. This behavioral effect could reflect suffix stripping at the visual word form level and/or subsequent meaning integration at the semantic-syntactic level. The first alternative predicts increased activation for inflected vs. monomorphemic words in the left occipitotemporal cortex while the second alternative predicts left inferior frontal gyrus and/or left posterior temporal activation increases. The results show significant activation effects in the latter areas. This provides support for the second alternative, i.e., that the morphological processing cost stems from the semantic-syntactic level.

10.
Previous research indicates that word learning from auditory contexts may be more effective than written context at least through fourth grade. However, no study has examined contextual differences in word learning in older school-aged children when reading abilities are more developed. Here we examined developmental differences in children’s ability to deduce the meanings of unknown words from the surrounding linguistic context in the auditory and written modalities and sought to identify the most important predictors of success in each modality. A total of 89 children aged 8–15 years were randomly assigned to either read or listen to a narrative that included eight novel words, with five exposures to each novel word. They then completed three posttests to assess word meaning inferencing. Children across all ages performed better in the written modality. Vocabulary was the only significant predictor of success on the word inferencing task. Results indicate support for written stimuli as the most effective modality for novel word meaning deduction. Our findings suggest that the presence of orthographic information facilitates novel word learning even for early, less proficient readers.

11.
This study screened 11 studies that used functional magnetic resonance imaging to investigate word-meaning processing in verbal individuals with autism, and examined whether the differences in brain activation patterns between this population and typical individuals are stable across studies. The results show that the differential activation pattern is stable across studies, manifesting mainly as hypoactivation of typical regions involving the left superior frontal gyrus. These findings provide cross-study activation evidence from semantic processing for the neural mechanisms of language processing in verbal individuals with ASD, and, by establishing "reduced frontal activation" as a stable difference, underscore the need for meta-analytic studies targeting different language processing tasks.

12.
Functional MRI was used to investigate sex differences in brain activation during a paradigm similar to a lexical-decision task. Six males and six females performed two runs of the lexical visual field task (i.e., deciding which visual field a word compared with a pseudoword was presented to). A sex difference was noted behaviorally: the reaction time data showed males had a marginal right visual field advantage and women a left visual field advantage. Imaging results showed that men had a strongly left-lateralized pattern of activation, e.g., inferior frontal and fusiform gyrus, while women showed a more symmetrical pattern in language related areas with greater right-frontal and right-middle-temporal activation. The data show evidence of task-specific sex differences in the cerebral organization of language processing.

13.
Word recognition is generally assumed to be achieved via competition in the mental lexicon between phonetically similar word forms. However, this process has so far been examined only in the context of auditory phonetic similarity. In the present study, we investigated whether the influence of word-form similarity on word recognition holds in the visual modality and with the patterns of visual phonetic similarity. Deaf and hearing participants identified isolated spoken words presented visually on a video monitor. On the basis of computational modeling of the lexicon from visual confusion matrices of visual speech syllables, words were chosen to vary in visual phonetic distinctiveness, ranging from visually unambiguous (lexical equivalence class [LEC] size of 1) to highly confusable (LEC size greater than 10). Identification accuracy was found to be highly related to the word LEC size and frequency of occurrence in English. Deaf and hearing participants did not differ in their sensitivity to word LEC size and frequency. The results indicate that visual spoken word recognition shows strong similarities with its auditory counterpart in that the same dependencies on lexical similarity and word frequency are found to influence visual speech recognition accuracy. In particular, the results suggest that stimulus-based lexical distinctiveness is a valid construct to describe the underlying machinery of both visual and auditory spoken word recognition.
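The lexical equivalence classes described above can be sketched computationally. In this minimal illustration, words whose phoneme strings collapse to the same viseme string fall into one LEC, and LEC size is the number of words in that class. The phoneme-to-viseme mapping below is a hypothetical toy grouping for illustration only; the study derived its classes from empirical confusion matrices of visual speech syllables.

```python
# Sketch: computing LEC sizes from a phoneme-to-viseme mapping.
# The VISEME table is a toy assumption, not the study's empirically
# derived grouping.
from collections import defaultdict

# Phonemes that look alike on the lips collapse to one visual category.
VISEME = {
    "p": "B", "b": "B", "m": "B",   # bilabials are visually near-identical
    "f": "F", "v": "F",
    "t": "T", "d": "T", "n": "T",
    "k": "K", "g": "K",
    "ae": "A", "iy": "I",
}

def viseme_string(phonemes):
    """Collapse a phoneme sequence to its visual (viseme) form."""
    return "".join(VISEME[p] for p in phonemes)

def lec_sizes(lexicon):
    """Group words by identical viseme strings; return word -> LEC size."""
    classes = defaultdict(list)
    for word, phonemes in lexicon.items():
        classes[viseme_string(phonemes)].append(word)
    return {w: len(members)
            for members in classes.values() for w in members}

# Toy lexicon: word -> phoneme sequence (hypothetical transcriptions).
lexicon = {
    "pat":  ["p", "ae", "t"],
    "bat":  ["b", "ae", "t"],
    "mat":  ["m", "ae", "t"],   # pat/bat/mat are visually confusable
    "feet": ["f", "iy", "t"],   # visually distinct in this toy lexicon
}
sizes = lec_sizes(lexicon)      # e.g. sizes["pat"] == 3, sizes["feet"] == 1
```

A word with LEC size 1 is visually unambiguous; a large LEC means many lexical competitors share its visual form, which is the distinctiveness dimension the study manipulated.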

14.
In order to perceive the world coherently, we need to integrate features of objects and events that are presented to our senses. Here we investigated the temporal limit of integration in unimodal visual and auditory as well as crossmodal auditory-visual conditions. Participants were presented with alternating visual and auditory stimuli and were asked to match them either within or between modalities. At alternation rates of about 4 Hz and higher, participants were no longer able to match visual and auditory stimuli across modalities correctly, while matching within either modality showed higher temporal limits. Manipulating different temporal stimulus characteristics (stimulus offsets and/or auditory-visual SOAs) did not change performance. Interestingly, the difference in temporal limits between crossmodal and unimodal conditions appears strikingly similar to temporal limit differences between unimodal conditions when additional features have to be integrated. We suggest that adding a modality across which sensory input is integrated has the same effect as adding an extra feature to be integrated within a single modality.

15.
A functional region of left fusiform gyrus termed "the visual word form area" (VWFA) develops during reading acquisition to respond more strongly to printed words than to other visual stimuli. Here, we examined responses to letters among 5- and 6-year-old early kindergarten children (N = 48) with little or no school-based reading instruction who varied in their reading ability. We used functional magnetic resonance imaging (fMRI) to measure responses to individual letters, false fonts, and faces in left and right fusiform gyri. We then evaluated whether signal change and size (spatial extent) of letter-sensitive cortex (greater activation for letters versus faces) and letter-specific cortex (greater activation for letters versus false fonts) in these regions related to (a) standardized measures of word-reading ability and (b) signal change and size of face-sensitive cortex (fusiform face area or FFA; greater activation for faces versus letters). Greater letter specificity, but not letter sensitivity, in left fusiform gyrus correlated positively with word reading scores. Across children, in the left fusiform gyrus, greater size of letter-sensitive cortex correlated with lesser size of FFA. These findings are the first to suggest that in beginning readers, development of letter responsivity in left fusiform cortex is associated with both better reading ability and also a reduction of the size of left FFA that may result in right-hemisphere dominance for face perception.

16.
Children (7 to 10 years), young adults (17 to 24 years), and older adults (55 to 77 years) were asked to learn three lists of words that were of mixed modality (half the words were visual, and half the words were auditory). With one list the subjects were asked a semantic orienting question; with another, a nonsemantic orienting question; and with a third, no orienting question. Half the subjects in each age group were also asked to remember the presentation modality of each word. Older adults remembered less information about modality than children and young adults did, and the variation in the type of orienting question--or the lack of one--affected modality identification. However, there was no Orienting Task x Age interaction for modality identification. The results of this study suggest that encoding modality information does not take place automatically--in any age group--but that explanations focusing on encoding strategies and effort are not likely to account for older adults' difficulties in remembering presentation modality.

17.
Spoken word recognition by eye
Spoken word recognition is thought to be achieved via competition in the mental lexicon between perceptually similar word forms. A review of the development and initial behavioral validations of computational models of visual spoken word recognition is presented, followed by a report of new empirical evidence. Specifically, a replication and extension of Mattys, Bernstein & Auer's (2002) study was conducted with 20 deaf participants who varied widely in speechreading ability. Participants visually identified isolated spoken words. Accuracy of visual spoken word recognition was influenced by the number of visually similar words in the lexicon and by the frequency of occurrence of the stimulus words. The results are consistent with the common view held within auditory word recognition that this task is accomplished via a process of activation and competition in which frequently occurring units are favored. Finally, future directions for visual spoken word recognition are discussed.

18.
The relationship between auditory and visual processing modality and strategy instructions was examined in first- and second-grade children. A Pictograph Sentence Memory Test was used to determine dominant processing modality as well as to assess instructional effects. The pictograph task was given first followed by auditory or visual interference. Children who were disrupted more by visual interference were classed as visual processors and those more disrupted by auditory interference were classed as auditory processors. Auditory and visual processors were then assigned to one of three conditions: interactive imagery strategy, sentence strategy, or a control group. Children in the imagery and sentence strategy groups were briefly taught to integrate the pictographs in order to remember them better. The sentence strategy was found to be effective for both auditory and visual processors, whereas the interactive imagery strategy was effective only for auditory processors.

19.
The neighborhood activation model (NAM; P. A. Luce & Pisoni, 1998) of spoken word recognition was applied to the problem of predicting accuracy of visual spoken word identification. One hundred fifty-three spoken consonant-vowel-consonant words were identified by a group of 12 college-educated adults with normal hearing and a group of 12 college-educated deaf adults. In both groups, item identification accuracy was correlated with the computed NAM output values. Analysis of subsets of the stimulus set demonstrated that when stimulus intelligibility was controlled, words with fewer neighbors were easier to identify than words with many neighbors. However, when neighborhood density was controlled, variation in segmental intelligibility was minimally related to identification accuracy. The present study provides evidence of a common spoken word recognition system for both auditory and visual speech that retains sensitivity to the phonetic properties of the input.
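The neighborhood computation at the heart of NAM-style predictions can be sketched as follows. This is an illustrative simplification with a toy lexicon and hypothetical phoneme transcriptions, not Luce and Pisoni's full frequency-weighted model: a word's neighbors are taken to be lexicon entries whose phoneme string differs by exactly one substitution, addition, or deletion.

```python
# Sketch: neighborhood density in the spirit of NAM.
# Neighbors differ by one phoneme edit; lexicon and transcriptions
# are toy assumptions for illustration.

def one_phoneme_apart(a, b):
    """True if phoneme tuples a and b differ by exactly one edit."""
    if a == b:
        return False
    la, lb = len(a), len(b)
    if la == lb:                      # one substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(la - lb) != 1:
        return False
    short, long_ = (a, b) if la < lb else (b, a)
    for i in range(len(long_)):       # one deletion from the longer form
        if long_[:i] + long_[i + 1:] == short:
            return True
    return False

def neighborhood(word, lexicon):
    """All lexicon entries one phoneme edit away from `word`."""
    return [w for w in lexicon
            if w != word and one_phoneme_apart(lexicon[word], lexicon[w])]

# Toy lexicon: word -> phoneme tuple (hypothetical transcriptions).
lexicon = {
    "cat":  ("k", "ae", "t"),
    "bat":  ("b", "ae", "t"),   # substitution neighbor of "cat"
    "cut":  ("k", "ah", "t"),   # substitution neighbor of "cat"
    "cats": ("k", "ae", "t", "s"),  # addition neighbor of "cat"
    "dog":  ("d", "ao", "g"),   # not a neighbor
}
neighbors_of_cat = neighborhood("cat", lexicon)
```

A word with many neighbors ("dense neighborhood") faces more lexical competition, which is why, with intelligibility controlled, the study found such words harder to identify; the full NAM additionally weights each competitor by its frequency and stimulus similarity.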

20.
Using 12 participants we conducted an fMRI study involving two tasks, word reversal and rhyme judgment, based on pairs of natural speech stimuli, to study the neural correlates of manipulating auditory imagery under taxing conditions. Both tasks engaged the left anterior superior temporal gyrus, reflecting previously established perceptual mechanisms. Engagement of the left inferior frontal gyrus in both tasks relative to baseline could only be revealed by applying small volume corrections to the region of interest, suggesting that phonological segmentation played only a minor role and providing further support for factorial dissociation of rhyming and segmentation in phonological awareness. Most importantly, subtraction of rhyme judgment from word reversal revealed activation of the parietal lobes bilaterally and the right inferior frontal cortex, suggesting that the dynamic manipulation of auditory imagery involved in mental reversal of words seems to engage mechanisms similar to those involved in visuospatial working memory and mental rotation. This suggests that reversing spoken items is a matter of mind twisting rather than tongue twisting and provides support for a link between language processing and manipulation of mental imagery.

