Similar Literature (20 results)
1.
The effects of orthographic and phonological relatedness between distractor word and object name in a picture–word interference task were investigated. In Experiment 1 distractors were presented visually, and consistent with previous findings, priming effects arising from phonological overlap were modulated by the presence or absence of orthographic similarity between distractor and picture name. This pattern is interpreted as providing evidence for cascaded processing in visual word recognition. In Experiment 2 distractors were presented auditorily, and here priming was not affected by orthographic match or mismatch. These findings provide no evidence for orthographic effects in speech perception and production, contrary to a number of previous reports.

2.
The general magnocellular theory postulates that dyslexia is the consequence of a multimodal deficit in the processing of transient and dynamic stimuli. In the auditory modality, this deficit has been hypothesized to interfere with accurate speech perception, and subsequently disrupt the development of phonological and later reading and spelling skills. In the visual modality, an analogous problem might interfere with literacy development by affecting orthographic skills. In this prospective longitudinal study, we tested dynamic auditory and visual processing, speech-in-noise perception, phonological ability and orthographic ability in 62 five-year-old preschool children. Predictive relations toward first-grade reading and spelling measures were explored and the validity of the general magnocellular model was evaluated using causal path analysis. In particular, we demonstrated that dynamic auditory processing was related to speech perception, which itself was related to phonological awareness. Similarly, dynamic visual processing was related to orthographic ability. Subsequently, phonological awareness, orthographic ability and verbal short-term memory were unique predictors of reading and spelling development.
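As an illustration of the causal path logic described in this abstract, the sketch below estimates standardized path coefficients with ordinary least squares. The variable names, simulated data, and coefficient values are hypothetical, and this is only a minimal stand-in for the dedicated path-analysis software such a study would actually use.

```python
# Illustrative sketch of the causal path structure described above
# (not the authors' actual analysis). All data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 62  # sample size reported in the abstract

def standardize(x):
    return (x - x.mean()) / x.std()

def path_coefs(y, *predictors):
    """Standardized regression (path) coefficients via least squares."""
    X = np.column_stack([standardize(p) for p in predictors])
    X = np.column_stack([np.ones(len(y)), X])  # add intercept
    beta, *_ = np.linalg.lstsq(X, standardize(y), rcond=None)
    return beta[1:]  # drop the intercept term

# Simulated stand-ins for the measured variables.
dyn_aud = rng.normal(size=n)                    # dynamic auditory processing
speech = 0.5 * dyn_aud + rng.normal(size=n)     # speech-in-noise perception
phon = 0.5 * speech + rng.normal(size=n)        # phonological awareness
ortho = rng.normal(size=n)                      # orthographic ability
vstm = rng.normal(size=n)                       # verbal short-term memory
reading = 0.4 * phon + 0.3 * ortho + 0.2 * vstm + rng.normal(size=n)

print("aud -> speech:", path_coefs(speech, dyn_aud))
print("speech -> phon:", path_coefs(phon, speech))
print("phon/ortho/vstm -> reading:", path_coefs(reading, phon, ortho, vstm))
```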

3.
The role of central motor processes in rehearsal was investigated by studying a brain-damaged patient with a severe articulatory impairment. Evidence is presented that his articulatory impairment is due to a disruption of motor programming rather than to peripheral muscle weakness. Despite his motor programming deficit, the patient showed normal auditory span and evidence of rehearsal for auditorily presented sequences of words. For visual presentation, span was reduced and there was no evidence of rehearsal. Also, the patient showed excellent sentence comprehension for syntactically complex sentences for both auditory and visual presentation. The results imply that central motor processes are not critical for normal short-term memory, at least for auditory presentation, and that reading comprehension does not depend on inner rehearsal.

4.
Listeners perceive speech sounds relative to context. Contextual influences might differ over hemispheres if different types of auditory processing are lateralized. Hemispheric differences in contextual influences on vowel perception were investigated by presenting speech targets and both speech and non-speech contexts to listeners’ right or left ears (contexts and targets either to the same or to opposite ears). Listeners performed a discrimination task. Vowel perception was influenced by acoustic properties of the context signals. The strength of this influence depended on laterality of target presentation, and on the speech/non-speech status of the context signal. We conclude that contrastive contextual influences on vowel perception are stronger when targets are processed predominantly by the right hemisphere. In the left hemisphere, contrastive effects are smaller and largely restricted to speech contexts.

5.
Data from lesion studies suggest that the ability to perceive speech sounds, as measured by auditory comprehension tasks, is supported by temporal lobe systems in both the left and right hemisphere. For example, patients with left temporal lobe damage and auditory comprehension deficits (i.e., Wernicke's aphasics) nonetheless comprehend isolated words better than one would expect if their speech perception system had been largely destroyed (70-80% accuracy). Further, when comprehension fails in such patients, their errors are more often semantically based than phonemically based. The question addressed by the present study is whether this ability of the right hemisphere to process speech sounds is a result of plastic reorganization following chronic left hemisphere damage, or whether the ability exists in undamaged language systems. We sought to test these possibilities by studying auditory comprehension during acute left versus right hemisphere deactivation in Wada procedures. A series of 20 patients undergoing clinically indicated Wada procedures were asked to listen to an auditorily presented stimulus word, and then point to its matching picture on a card that contained the target picture, a semantic foil, a phonemic foil, and an unrelated foil. This task was performed under three conditions: baseline, during left carotid injection of sodium amytal, and during right carotid injection of sodium amytal. Overall, left hemisphere injection led to a significantly higher error rate than right hemisphere injection. However, consistent with lesion work, the majority (75%) of these errors were semantic in nature. These findings suggest that auditory comprehension deficits are predominantly semantic in nature, even following acute left hemisphere disruption. This, in turn, supports the hypothesis that the right hemisphere is capable of speech sound processing in the intact brain.

6.
Associating crossmodal auditory and visual stimuli is an important component of perception, with the posterior superior temporal sulcus (pSTS) hypothesized to support this. However, recent evidence has argued that the pSTS serves to associate two stimuli irrespective of modality. To examine the contribution of pSTS to crossmodal recognition, participants (N = 13) learned 12 abstract, non-linguistic pairs of stimuli over 3 weeks. These paired associates comprised four types: auditory–visual (AV), auditory–auditory (AA), visual–auditory (VA), and visual–visual (VV). At week four, participants were scanned using magnetoencephalography (MEG) while performing a correct/incorrect judgment on pairs of items. Using an implementation of synthetic aperture magnetometry that computes real statistics across trials (SAMspm), we directly contrasted crossmodal (AV and VA) with unimodal (AA and VV) pairs from stimulus onset to 2 s in theta (4–8 Hz), alpha (9–15 Hz), beta (16–30 Hz), and gamma (31–50 Hz) frequencies. We found that pSTS showed greater desynchronization in the beta frequency band for crossmodal compared with unimodal trials, suggesting greater activity during the crossmodal pairs, which was not influenced by congruency of the paired stimuli. Using a sliding window SAM analysis, we found that the timing of this difference began in a window from 250 to 750 ms after stimulus onset. Further, when we directly contrasted all sub-types of paired associates from stimulus onset to 2 s, we found that pSTS seemed to respond to dynamic, auditory stimuli, rather than crossmodal stimuli per se. These findings support an early role for pSTS in the processing of dynamic, auditory stimuli, and do not support claims that pSTS is responsible for associating two stimuli irrespective of their modality.
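For readers unfamiliar with desynchronization measures, the sketch below shows one conventional way to quantify band-limited power of the kind contrasted in this abstract (bandpass filter plus Hilbert envelope). It is not the SAMspm beamformer pipeline itself; the sampling rate, trial counts, and data are hypothetical placeholders.

```python
# Minimal sketch of a band-limited power contrast like the one above
# (not the SAMspm beamformer). All data are simulated placeholders.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 600  # Hz, assumed MEG sampling rate

def band_power(trials, lo, hi):
    """Per-trial envelope power in a frequency band.

    trials: array (n_trials, n_samples) of source-space time courses.
    """
    b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, trials, axis=-1)
    return np.abs(hilbert(filtered, axis=-1)) ** 2

# Hypothetical pSTS source time courses for the two trial types, 0-2 s.
rng = np.random.default_rng(1)
crossmodal = rng.normal(size=(120, 2 * fs))   # AV and VA trials
unimodal = rng.normal(size=(120, 2 * fs))     # AA and VV trials

beta_cross = band_power(crossmodal, 16, 30).mean()
beta_uni = band_power(unimodal, 16, 30).mean()

# Greater desynchronization = lower beta power on crossmodal trials.
print("beta power, crossmodal vs unimodal:", beta_cross, beta_uni)
```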

7.
For both adults and children, acoustic context plays an important role in speech perception. For adults, both speech and nonspeech acoustic contexts influence perception of subsequent speech items, consistent with the argument that effects of context are due to domain-general auditory processes. However, prior research examining the effects of context on children’s speech perception has focused on speech contexts; nonspeech contexts have not been explored previously. To better understand the developmental progression of children’s use of contexts in speech perception and the mechanisms underlying that development, we created a novel experimental paradigm testing 5-year-old children’s speech perception in several acoustic contexts. The results demonstrated that nonspeech context influences children’s speech perception, consistent with claims that context effects arise from general auditory system properties rather than speech-specific mechanisms. This supports theoretical accounts of language development suggesting that domain-general processes play a role across the lifespan.

8.
Phonological processing was investigated in nine Broca's aphasics. A receptive phonological task examined knowledge of phonotactic rules. Three lists of “word” pairs (phoneme sequences) were constructed such that one member was possible in English and the other was not. The lists varied in their distance from English and in phonemic structure (CCVC vs. CCVCC). Following auditory presentation, the aphasic patient was required to indicate which of the two “words” was possible in English. The productive task was an articulation test for monosyllabic and polysyllabic words. The high positive correlation between receptive and productive scores suggested that, rather than motor speech sequencing problems being exclusively involved, more general phonological-articulatory processes were disrupted. Several hypotheses were advanced to describe the nature of this disruption.

9.
Since Köhler’s experiments in the 1920s, researchers have demonstrated a correspondence between words and shapes. Dubbed the “Bouba–Kiki” effect, these auditory–visual associations extend across cultures and are thought to be universal. More recently the effect has been shown in other modalities including taste, suggesting the effect is independent of vision. The study presented here tested the “Bouba–Kiki” effect in the auditory–haptic modalities, using 2D cut-outs and 3D models based on Köhler’s original drawings. Presented with shapes they could feel but not see, sighted participants showed a robust “Bouba–Kiki” effect. However, in a sample of people with a range of visual impairments, from congenital total blindness to partial sight, the effect was significantly less pronounced. The findings suggest that, in the absence of a direct visual stimulus, visual imagery plays a role in crossmodal integration.

10.
Two aspects of visual speech processing in speechreading (word decoding and word discrimination) were tested in a group of 24 normal-hearing and a group of 20 hearing-impaired subjects. Word decoding and word discrimination performance were independent of factors related to the impairment, in both a quantitative and a qualitative sense. Decoding skill, but not discrimination skill, was associated with sentence-based speechreading. The results were interpreted as indicating that, in order to represent a critical component process in sentence-based speechreading, the visual speech perception task must entail lexically induced processing as a task demand. The theoretical status of the word decoding task as one operationalization of a speech decoding module was discussed (Fodor, 1983). An error analysis of performance in the word decoding/discrimination tasks suggested that the perception of heard stimuli, as well as the perception of lipped stimuli, was critically dependent on the same features; that is, the temporally initial phonetic segment of the word (cf. Marslen-Wilson, 1987). Implications for a theory of visual speech perception were discussed.

11.
The present study investigated to what extent sensorimotor synchronization is related to (i) musical specialization, (ii) perceptual discrimination, and (iii) the movement’s trajectory. To this end, musicians with different musical expertise (drummers, professional pianists, amateur pianists, singers, and non-musicians) performed auditory and visual synchronization tasks and a cross-modal temporal discrimination task. During auditory synchronization, drummers performed less variably than amateur pianists, singers and non-musicians. In the cross-modal discrimination task drummers showed superior discrimination abilities, which were correlated with synchronization variability as well as with the movement trajectory. These data suggest that (i) the type of specialized musical instrument affects synchronization abilities and that (ii) synchronization accuracy is related to perceptual discrimination abilities as well as (iii) to the movement’s trajectory. Since synchronization variability in particular was affected by musical expertise, the present data imply that the type of instrument played affects the accuracy of timekeeping mechanisms.
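The standard measures behind findings like these are the mean and standard deviation of tap-to-beat asynchronies (accuracy and variability, respectively). The sketch below computes both; the inter-onset interval and the simulated tap times are assumptions, not the study's data.

```python
# Illustrative computation of synchronization accuracy and variability
# from tap times, as in tapping studies like the one above.
import numpy as np

ioi = 0.5  # inter-onset interval of the pacing signal, in seconds (assumed)
beats = np.arange(0, 20, ioi)          # metronome onsets
rng = np.random.default_rng(2)
taps = beats + rng.normal(-0.03, 0.02, size=beats.size)  # simulated taps

# Match each tap to the nearest beat and compute asynchronies.
nearest = beats[np.argmin(np.abs(taps[:, None] - beats[None, :]), axis=1)]
asynchrony = taps - nearest

print(f"mean asynchrony: {asynchrony.mean() * 1000:.1f} ms")   # accuracy
print(f"SD of asynchrony: {asynchrony.std() * 1000:.1f} ms")   # variability
```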

12.
13.
Buchan JN, Munhall KG. Perception, 2011, 40(10): 1164-1182
Conflicting visual speech information can influence the perception of acoustic speech, causing an illusory percept of a sound not present in the actual acoustic speech (the McGurk effect). We examined whether participants can voluntarily selectively attend to either the auditory or visual modality by instructing participants to pay attention to the information in one modality and to ignore competing information from the other modality. We also examined how performance under these instructions was affected by weakening the influence of the visual information by manipulating the temporal offset between the audio and video channels (experiment 1), and the spatial frequency information present in the video (experiment 2). Gaze behaviour was also monitored to examine whether attentional instructions influenced the gathering of visual information. While task instructions did have an influence on the observed integration of auditory and visual speech information, participants were unable to completely ignore conflicting information, particularly information from the visual stream. Manipulating temporal offset had a more pronounced interaction with task instructions than manipulating the amount of visual information. Participants' gaze behaviour suggests that the attended modality influences the gathering of visual information in audiovisual speech perception.
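The manipulation in experiment 2 can be illustrated with a simple low-pass filter: removing high spatial frequencies from each video frame weakens the visual speech cues available for audiovisual integration. The Gaussian blur below is an assumed stand-in; the abstract does not specify the authors' exact filtering method.

```python
# Sketch of one way to reduce high spatial-frequency information in a
# video frame. The Gaussian blur and its parameters are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
frame = rng.random((480, 640))   # hypothetical grayscale video frame

# A larger sigma removes more high spatial frequencies, leaving coarser
# visual speech information.
blurred = gaussian_filter(frame, sigma=8)
print("frame variance before vs after blur:", frame.var(), blurred.var())
```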

14.
We report on three experiments that provide a real-time processing perspective on Broca’s aphasic patients’ poor comprehension of non-canonically structured sentences. In the first experiment we presented sentences (via a cross-modal lexical priming (CMLP) paradigm) to Broca’s patients at a normal rate of speech. Unlike the pattern found with unimpaired control participants, we observed a general slowing of lexical activation and a concomitant delay in the formation of syntactic dependencies involving “moved” constituents and empty elements. Our second experiment presented these same sentences at a slower rate of speech. In this circumstance, Broca’s patients formed syntactic dependencies as soon as they were structurally licensed (again, a pattern different from that demonstrated by the unimpaired control group). The third experiment used a sentence-picture matching paradigm to chart Broca’s patients’ comprehension of non-canonically structured sentences (presented at both normal and slow rates). Here we observed significantly better scores in the slow-rate condition. We discuss these findings in terms of the functional commitment of the left anterior cortical region implicated in Broca’s aphasia and conclude that this region is crucially involved in the formation of syntactically governed dependency relations, not because it supports knowledge of syntactic dependencies, but rather because it supports the real-time implementation of these specific representations by sustaining, at the least, a lexical activation rise-time parameter.
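The rise-time idea in the final sentence can be made concrete with a toy model: if lexical activation rises toward a threshold, a slower rise delays the moment at which a dependency can be formed. The exponential form, threshold, and time constants below are illustrative assumptions, not parameters from the study.

```python
# Toy illustration of a lexical activation rise-time parameter: slower
# rise delays threshold crossing. All values are invented assumptions.
import numpy as np

t = np.linspace(0, 1.0, 1000)            # time in seconds

def activation(t, tau):
    return 1.0 - np.exp(-t / tau)        # assumed exponential rise

threshold = 0.8
for label, tau in [("unimpaired", 0.1), ("slowed (Broca's)", 0.25)]:
    reach = t[np.argmax(activation(t, tau) >= threshold)]
    print(f"{label}: threshold reached at {reach * 1000:.0f} ms")
```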

15.
We tested categorical perception and speech-in-noise perception in a group of five-year-old preschool children genetically at risk for dyslexia, compared to a group of well-matched control children and a group of adults. Both groups of children differed significantly from the adults on all speech measures. Comparing both child groups, the risk group presented a slight but significant deficit in speech-in-noise perception, particularly in the most difficult listening condition. For categorical perception a marginally significant deficit was observed on the discrimination task but not on the identification task. Speech parameters were significantly related to phonological awareness and low-level auditory measures. Results are discussed within the framework of a causal model where low-level auditory problems are hypothesized to result in subtle speech perception problems that might interfere with the development of phonology and reading and spelling ability.
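Categorical perception on an identification task is conventionally quantified by fitting a logistic function to the proportion of one-category responses along the stimulus continuum, as sketched below. The continuum length and response proportions are invented for illustration.

```python
# Standard analysis of a categorical-perception identification task:
# fit a logistic function along the stimulus continuum. Data are made up.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, boundary, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

steps = np.arange(1, 8)                          # 7-step continuum (assumed)
p_response = np.array([0.02, 0.05, 0.15, 0.50, 0.85, 0.95, 0.98])

(boundary, slope), _ = curve_fit(logistic, steps, p_response, p0=[4.0, 1.0])
# A shallower slope indicates less sharply categorical perception.
print(f"category boundary: {boundary:.2f}, slope: {slope:.2f}")
```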

16.
Two studies investigated the effects of same-modality interference on the immediate serial recall of auditorily and visually presented stimuli. Typically, research in which this task is used has been conducted in quiet rooms, excluding auditory information that is extraneous to the auditorily presented stimuli. However, visual information, such as background items clearly within the subject's view, has not been excluded during visual presentation. Therefore, in both of the present studies, the authors used procedures that eliminated extra-list visual interference and introduced extra-list auditory interference. When same-modality interference was eliminated, weak visual recency effects were found, but they were smaller than those generated by auditorily presented items. Further, mid-list and end-of-list recall of visually presented stimuli was unaffected by the amount of interfering visual information. On the other hand, the introduction of auditory interference increased mid-list recall of auditory stimuli. The results of Experiment 2 showed that the mid-list effect occurred with a moderate, but not with a minimal or maximal, level of auditory interference, indicating that moderate amounts of auditory interference had an alerting effect that is not present with typical visual interference.
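Recency effects of the kind reported here are read off a serial position curve: the proportion of correct-in-position recall at each list position. The sketch below computes such a curve and a simple recency index from a hypothetical response matrix; both the data and the index are illustrative assumptions.

```python
# Sketch of a serial-position analysis for immediate serial recall.
# The response matrix is hypothetical: rows are trials, columns are
# list positions, entries mark correct-in-position recall.
import numpy as np

rng = np.random.default_rng(4)
n_trials, list_len = 40, 9
# Assumed per-position recall probabilities with primacy and a modest
# recency upturn, the pattern at issue in the studies above.
p = np.array([0.90, 0.80, 0.70, 0.60, 0.55, 0.50, 0.50, 0.55, 0.70])
correct = rng.random((n_trials, list_len)) < p

curve = correct.mean(axis=0)                 # serial position curve
recency = curve[-1] - curve[-3:-1].mean()    # >0 indicates a recency effect
print("serial position curve:", np.round(curve, 2))
print(f"recency index: {recency:.2f}")
```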

17.
This study reports on the auditory and visual comprehension of Japanese idioms having both literal and figurative meanings. Experiment I collected ratings of the semantic distance between the two meanings. Experiment II investigated differences in comprehension between semantically far and semantically close idioms, with the materials presented in isolation, both auditorily and visually. Experiment III conducted the same investigation as Experiment II, except that the idioms were presented embedded in literally and figuratively induced contexts. Experiment IV reinvestigated the findings obtained from the previous experiments. The results of these experiments show that, in isolation, visual presentation has an advantage over auditory presentation, and that in both the auditory and visual presentations semantically far idioms are comprehended more accurately than semantically close idioms.

18.
The main theoretical debate in the field of speech perception is the opposition between auditory theories and motor theories, and it centers on whether speech perception requires the mediation of motor representations. Research on the brain mechanisms of speech perception can help clarify this issue. Such research shows that speech perception mainly activates posterior auditory cortical regions, including the dorsal part of the superior temporal cortex (the transverse temporal gyrus and the planum temporale) and its lateral regions (the superior temporal gyrus and the superior temporal sulcus), whereas the anterior motor cortices related to speech production have not shown a consistent activation pattern. Motor representations related to speech production influence speech perception mainly through top-down feedback mechanisms in certain special task situations, and may not be necessary for normal speech perception.

19.
From the very first moments of their lives, infants selectively attend to the visible orofacial movements of their social partners and apply their exquisite speech perception skills to the service of lexical learning. Here we explore how early bilingual experience modulates children's ability to use visible speech as they form new lexical representations. Using a cross‐modal word‐learning task, bilingual children aged 30 months were tested on their ability to learn new lexical mappings in either the auditory or the visual modality. Lexical recognition was assessed either in the same modality as the one used at learning (‘same modality’ condition: auditory test after auditory learning, visual test after visual learning) or in the other modality (‘cross‐modality’ condition: visual test after auditory learning, auditory test after visual learning). The results revealed that like their monolingual peers, bilingual children successfully learn new words in either the auditory or the visual modality and show cross‐modal recognition of words following auditory learning. Interestingly, as opposed to monolinguals, they also demonstrate cross‐modal recognition of words upon visual learning. Collectively, these findings indicate a bilingual edge in visual word learning, expressed in the capacity to form a recoverable cross‐modal representation of visually learned words.

20.
We studied the influence of word frequency and orthographic depth on the interaction of orthographic and phonetic information in word perception. Native speakers of English and Serbo-Croatian were presented with simultaneous printed and spoken verbal stimuli and had to decide whether they were equivalent. Decision reaction time was measured in three experimental conditions: clear print and clear speech, degraded print and clear speech, and clear print and degraded speech. Within each language, the effects of visual and auditory degradation were measured relative to the undegraded presentation. Both effects of degradation were much stronger in English than in Serbo-Croatian. Moreover, they were the same for high- and low-frequency words in both languages. These results can be accounted for by a parallel interactive processing model that assumes lateral connections between the orthographic and phonological systems at all of their levels. The structure of these lateral connections is independent of word frequency and is determined by the relationship between spelling and phonology in the language: simple isomorphic connections between graphemes and phonemes in Serbo-Croatian, but more complex, many-to-one connections in English.
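The contrast between isomorphic and many-to-one lateral connections can be made concrete with a toy settling network, sketched below. The unit counts, weights, and dynamics are invented for illustration; the abstract describes a verbal model, not this implementation.

```python
# Toy sketch of lateral orthography-to-phonology connections: a
# one-to-one mapping (shallow orthography, e.g. Serbo-Croatian) versus
# a many-to-one mapping (deep orthography, e.g. English). All weights
# and dynamics are invented assumptions.
import numpy as np

n_units = 10
rng = np.random.default_rng(5)

W_shallow = np.eye(n_units)                 # one-to-one connections
W_deep = rng.random((n_units, n_units))     # each phonological unit pools
W_deep /= W_deep.sum(axis=1, keepdims=True) # many orthographic units

def settle(ortho, W, steps=20, rate=0.5):
    """Let phonological activation settle under lateral orthographic input."""
    phon = np.zeros(n_units)
    for _ in range(steps):
        phon += rate * (W @ ortho - phon)
    return phon

ortho = np.zeros(n_units)
ortho[[1, 3, 5]] = 1.0   # an active orthographic pattern

print("shallow mapping:", np.round(settle(ortho, W_shallow), 2))
print("deep mapping:   ", np.round(settle(ortho, W_deep), 2))
```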
