Similar literature
20 similar documents found (search time: 62 ms)
1.
Individual speechreading abilities have been linked with a range of cognitive and language-processing factors. The role of specifically visual abilities in relation to the processing of visible speech is less studied. Here we report that the detection of coherent visible motion in random-dot kinematogram displays is related to speechreading skill in deaf, but not in hearing, speechreaders. A control task requiring the detection of visual form showed no such relationship. Additionally, people born deaf were better speechreaders than hearing people on a new test of silent speechreading.

2.
The effects of talker variability on visual speech perception were tested by having subjects speechread sentences from either single-talker or mixed-talker sentence lists. Results revealed that changes in talker from trial to trial decreased speechreading performance. To help determine whether this decrement was due to the talker change itself, and not to a change in superficial characteristics of the stimuli, Experiment 2 tested speechreading from visual stimuli whose images were tinted by a single color, or mixed colors. Results revealed that the mixed-color lists did not inhibit speechreading performance relative to the single-color lists. These results are analogous to findings in the auditory speech literature and suggest that, like auditory speech, visual speech operations include a resource-demanding component that is influenced by talker variability.

3.
Motor theories of speech perception have been re-vitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence that is cited in favor of this claim is the observation from the early 1980s that individuals with Broca’s aphasia, and therefore inferred damage to Broca’s area, can have deficits in speech sound discrimination. Here we re-examine this issue in 24 patients with radiologically confirmed lesions to Broca’s area and various degrees of associated non-fluent speech production. Patients performed two same-different discrimination tasks involving pairs of CV syllables, one in which both CVs were presented auditorily, and the other in which one syllable was auditorily presented and the other visually presented as an orthographic form; word comprehension was also assessed using word-to-picture matching tasks in both auditory and visual forms. Discrimination performance on the all-auditory task was four standard deviations above chance, as measured using d′, and was unrelated to the degree of non-fluency in the patients’ speech production. Performance on the auditory–visual task, however, was worse than, and not correlated with, the all-auditory task. The auditory–visual task was related to the degree of speech non-fluency. Word comprehension was at ceiling for the auditory version (97% accuracy) and near ceiling for the orthographic version (90% accuracy). We conclude that the motor speech system is not necessary for speech perception as measured both by discrimination and comprehension paradigms, but may play a role in orthographic decoding or in auditory–visual matching of phonological forms.
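The sensitivity measure d′ used in the abstract above is the standard signal-detection statistic, computed as the difference between the z-transformed hit and false-alarm rates. A minimal sketch (the rates below are hypothetical illustrations, not the patients' data):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Signal-detection sensitivity: z(hits) - z(false alarms).

    In a same-different discrimination task, a 'hit' is a correct
    'different' response and a 'false alarm' is a 'different' response
    to an identical pair. Chance performance gives d' = 0.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical example: high hits, few false alarms -> strong sensitivity.
sensitivity = d_prime(0.95, 0.10)
```

Reporting performance as "standard deviations above chance" follows directly from this definition, since d′ is expressed in z-score units.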

4.
Vatakis A, Spence C. Perception, 2008, 37(1): 143-160
Research has shown that inversion is more detrimental to the perception of faces than to the perception of other types of visual stimuli. Inverting a face results in an impairment of configural information processing that leads to slowed early face processing and reduced accuracy when performance is tested in face recognition tasks. We investigated the effects of inverting speech and non-speech stimuli on audiovisual temporal perception. Upright and inverted audiovisual video clips of a person uttering syllables (experiments 1 and 2), playing musical notes on a piano (experiment 3), or a rhesus monkey producing vocalisations (experiment 4) were presented. Participants made unspeeded temporal-order judgments regarding which modality stream (auditory or visual) appeared to have been presented first. Inverting the visual stream did not have any effect on the sensitivity of temporal discrimination responses in any of the four experiments, thus implying that audiovisual temporal integration is resilient to the effects of orientation in the picture plane. By contrast, the point of subjective simultaneity differed significantly as a function of orientation only for the audiovisual speech stimuli but not for the non-speech stimuli or monkey calls. That is, smaller auditory leads were required for the inverted than for the upright visual speech stimuli. These results are consistent with the longer processing latencies reported previously when human faces are inverted, and demonstrate that the temporal perception of dynamic audiovisual speech can be modulated by changes in the physical properties of the visual speech (i.e., by changes in orientation).

5.
Models of how listeners understand speech must specify the types of representations that are computed, the nature of the flow of information, and the control structures that modify performance. Three experiments are reported that focus on the control processes in speech perception. Subjects in the experiments tried to discriminate stimuli in which a phoneme had been replaced with white noise from stimuli in which white noise was merely superimposed on a phoneme. In the first two experiments, subjects practiced the discrimination for thousands of trials but did not improve, suggesting that they have poor access to low-level representations of the speech signal. In the third experiment, each (auditory) stimulus was preceded by a visual cue that could potentially be used to focus attention in order to enhance performance. Only subjects who received information about both the identity of the impending word and the identity of the critical phoneme showed enhanced discrimination. Other cues, including syllabic plus phonemic information, were ineffective. The results indicate that attentional control of processing is difficult but possible, and that lexical representations play a central role in the allocation of attention.

6.
Skilled speechreading: A single-case study   Cited: 1 (self-citations: 1, other citations: 0)
In a case study, the cognitive characteristics of a skilled visual speechreader (SJ) were examined and compared to a control group. SJ is a 56-year-old woman, skilled in visual speechreading. She differs from most of the 119 individuals in the control group in that she uses a particular speechreading strategy: she attempts to repeat overtly each spoken word as soon as it has been uttered, and to summarize and fill in missing pieces of information whenever possible (e.g., during pauses) during the conversation.
SJ outperformed the control group on three types of tasks: a reading span task, performance at the asymptote level of the serial position curve, and verbal inference-making. SJ's results were discussed with respect to (a) how they relate to the general case (i.e., models based on group data) and (b) her speechreading strategy. From a clinical perspective, it was suggested that it might be possible to practice the strategy as such, but any possible improvement is dependent on the individual's capability to process information in this way.

7.
Children's working-memory processes: a response-timing analysis   Cited: 3 (self-citations: 0, other citations: 3)
Recall response durations were used to clarify processing in working-memory tasks. Experiment 1 examined children's performance in reading span, a task in which sentences were processed and the final word of each sentence was retained for subsequent recall. Experiment 2 examined the development of listening-, counting-, and digit-span task performance. Responses were much longer in the reading- and listening-span tasks than in the other span tasks, suggesting that participants in sentence-based span tasks take time to retrieve the semantic or linguistic structure as cues to recall of the sentence-final words. Response durations in working-memory tasks helped to predict academic skill and achievement, largely separate from the contributions of the memory spans themselves. Response durations thus are important in the interpretation of span task performance.

8.
Research on binocular rivalry and motion direction discrimination suggests that stochastic activity early in visual processing influences the perception of ambiguous stimuli. Here, we extend this to higher level tasks of word and face processing. In Experiment 1, we used blocked gender and word discrimination tasks, and in Experiment 2, we used a face versus word discrimination task. Stimuli were embedded in noise, and some trials contained only noise. In Experiment 1, we found a larger response in the N170, an ERP component associated with faces, to the noise-alone stimulus when observers were performing the gender discrimination task. The noise-alone trials in Experiment 2 were binned according to the observer’s behavioral response, and there was a greater response in the N170 when they reported seeing a face. After considering various top-down and priming-related explanations, we raise the possibility that seeing a face in noise may result from greater stochastic activity in neural face-processing regions.

9.
Four experiments are reported investigating previous findings that speech perception interferes with concurrent verbal memory but difficult nonverbal perceptual tasks do not, to any great degree. The forgetting produced by processing noisy speech could not be attributed to task difficulty, since equally difficult nonspeech tasks did not produce forgetting, and the extent of forgetting produced by speech could be manipulated independently of task difficulty. The forgetting could not be attributed to similarity between memory material and speech stimuli, since clear speech, analyzed in a simple and probably acoustically mediated discrimination task, produced little forgetting. The forgetting could not be attributed to a combination of similarity and difficulty, since a very easy speech task involving clear speech produced as much forgetting as noisy speech tasks, as long as overt reproduction of the stimuli was required. By assuming that noisy speech and overtly reproduced speech are processed at a phonetic level but that clear, repetitive speech can be processed at a purely acoustic level, the forgetting produced by speech perception could be entirely attributed to the level at which the speech was processed. In a final experiment, results were obtained which suggest that if prior set induces processing of noisy and clear speech at comparable levels, the difference between the effects of noisy speech processing and clear speech processing on concurrent memory is completely eliminated.

10.
It is known that deaf individuals usually outperform normal-hearing subjects in speechreading; however, the underlying reasons remain unclear. In the present study, speechreading performance was assessed in normal-hearing participants (NH), deaf participants who had been exposed to the Cued Speech (CS) system early and intensively, and deaf participants exposed to oral language without Cued Speech (NCS). Results show a gradation in performance, with the highest performance in CS, then NCS, and finally NH participants. Moreover, error analysis suggests that speechreading processing is more accurate in the CS group than in the other groups. Given that early and intensive CS has been shown to promote development of accurate phonological processing, we propose that the higher speechreading results in Cued Speech users are linked to a better capacity in phonological decoding of visual articulators.

11.
Speech perception without hearing   Cited: 6 (self-citations: 0, other citations: 6)
In this study of visual phonetic speech perception without accompanying auditory speech stimuli, adults with normal hearing (NH; n = 96) and with severely to profoundly impaired hearing (IH; n = 72) identified consonant-vowel (CV) nonsense syllables and words in isolation and in sentences. The measures of phonetic perception were the proportion of phonemes correct and the proportion of transmitted feature information for CVs, the proportion of phonemes correct for words, and the proportion of phonemes correct and the amount of phoneme substitution entropy for sentences. The results demonstrated greater sensitivity to phonetic information in the IH group. Transmitted feature information was related to isolated word scores for the IH group, but not for the NH group. Phoneme errors in sentences were more systematic in the IH than in the NH group. Individual differences in phonetic perception for CVs were more highly associated with word and sentence performance for the IH than for the NH group. The results suggest that the necessity to perceive speech without hearing can be associated with enhanced visual phonetic perception in some individuals.

12.
Current models of reading and speech perception differ widely in their assumptions regarding the interaction of orthographic and phonological information during language perception. The present experiments examined this interaction through a two-alternative, forced-choice paradigm, and explored the nature of the connections between graphemic and phonemic processing subsystems. Experiments 1 and 2 demonstrated a facilitation-dominant influence (i.e., benefits exceed costs) of graphemic contexts on phoneme discrimination, which is interpreted as a sensitivity effect. Experiments 3 and 4 demonstrated a symmetrical influence (i.e., benefits equal costs) of phonemic contexts on grapheme discrimination, which can be interpreted as either a bias effect, or an equally facilitative/inhibitory sensitivity effect. General implications for the functional architecture of language processing models are discussed, as well as specific implications for models of visual word recognition and speech perception.

13.
McCotter MV, Jordan TR. Perception, 2003, 32(8): 921-936
We conducted four experiments to investigate the role of colour and luminance information in visual and audiovisual speech perception. In experiments 1a (stimuli presented in quiet conditions) and 1b (stimuli presented in auditory noise), face display types comprised naturalistic colour (NC), grey-scale (GS), and luminance inverted (LI) faces. In experiments 2a (quiet) and 2b (noise), face display types comprised NC, colour inverted (CI), LI, and colour and luminance inverted (CLI) faces. Six syllables and twenty-two words were used to produce auditory and visual speech stimuli. Auditory and visual signals were combined to produce congruent and incongruent audiovisual speech stimuli. Experiments 1a and 1b showed that perception of visual speech, and its influence on identifying the auditory components of congruent and incongruent audiovisual speech, was less for LI than for either NC or GS faces, which produced identical results. Experiments 2a and 2b showed that perception of visual speech, and influences on perception of incongruent auditory speech, were less for LI and CLI faces than for NC and CI faces (which produced identical patterns of performance). Our findings for NC and CI faces suggest that colour is not critical for perception of visual and audiovisual speech. The effect of luminance inversion on performance accuracy was relatively small (5%), which suggests that the luminance information preserved in LI faces is important for the processing of visual and audiovisual speech.

14.
We used fMRI to examine patterns of brain activity associated with component processes of visual word recognition and their relationships to individual differences in reading skill. We manipulated both the judgments adults made on written stimuli and the characteristics of the stimuli. Phonological processing led to activation in left inferior frontal and temporal regions whereas semantic processing was associated with bilateral middle frontal activation. Individual differences in reading subskills were reflected in differences in the degree to which cortical regions were engaged during reading. Variation in sight word reading efficiency was associated with degree of activation in visual cortex. Increased phonological decoding skill was associated with greater activation in left temporo-parietal cortex. Greater reading comprehension ability was associated with decreased activation in anterior cingulate and temporal regions. Notably, associations between reading ability and neural activation indicate that brain/behavior relationships among skilled readers differ from patterns associated with dyslexia and reading development.

15.
Tasks assessing perception of a phonemic contrast based on voice onset time (VOT) and a nonspeech analog of a VOT contrast using tone onset time (TOT) were administered to children (ages 7.5 to 15.9 years) identified as having reading disability (RD; n = 21), attention deficit/hyperactivity disorder (ADHD; n = 22), comorbid RD and ADHD (n = 26), or no impairment (NI; n = 26). Children with RD, whether they had been identified as having ADHD or not, exhibited reduced perceptual skills on both tasks as indicated by shallower slopes on category labeling functions and reduced accuracy even at the endpoints of the series where cues are most salient. Correlations between performance on the VOT task and measures of single word decoding and phonemic awareness were significant only in the groups without ADHD. These findings suggest that (a) children with RD have difficulty in processing speech and nonspeech stimuli containing similar auditory temporal cues, (b) phoneme perception is related to phonemic awareness and decoding skills, and (c) the potential presence of ADHD needs to be taken into account in studies of perception in children with RD.
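The "shallower slopes on category labeling functions" described above can be made concrete with a logistic psychometric function, the standard way such VOT labeling data are modeled. A minimal sketch with hypothetical parameter values (the boundary and slopes below are illustrative, not the study's fitted estimates):

```python
import math

def labeling_prob(vot_ms: float, boundary_ms: float, slope: float) -> float:
    """Probability of labeling a stimulus 'voiceless' as a logistic
    function of voice onset time (VOT). A larger slope gives a sharper
    category boundary; a shallow slope blurs the two categories."""
    return 1.0 / (1.0 + math.exp(-slope * (vot_ms - boundary_ms)))

vots = [0, 10, 25, 40, 50]  # ms; the series endpoints carry the most salient cues
typical = [labeling_prob(v, 25, 0.8) for v in vots]  # steep labeling function
shallow = [labeling_prob(v, 25, 0.2) for v in vots]  # shallow (RD-like) function
```

Even at the endpoints (0 and 50 ms) the shallow function stays farther from 0 and 1, which mirrors the reduced endpoint accuracy reported for the RD groups.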

16.
In the context of face processing, the skill of processing speech from faces (speechreading) occupies a unique cognitive and neuropsychological niche. Neuropsychological dissociations in two cases (Campbell et al., 1986) suggested a very clear pattern: speechreading, but not face recognition, can be impaired by left-hemisphere damage, while face-recognition impairment consequent to right-hemisphere damage leaves speechreading unaffected. However, this story soon proved too simple, as neuroimaging techniques began to reveal further, more detailed patterns. These patterns, moreover, were readily accommodated within the Bruce and Young (1986) model. Speechreading requires structural encoding of faces as faces, but further analysis of visible speech is supported by a network comprising several lateral temporal regions and inferior frontal regions. Posterior superior temporal regions play a significant role in speechreading natural speech, including audiovisual binding in hearing people. In deaf people, similar regions and circuits are implicated. While these detailed developments were not predicted by Bruce and Young, nevertheless, their model has stood the test of time, affording a structural framework for exploring speechreading in terms of face processing.

17.
18.
Given that reading begins with basic visual processing, researchers have paid increasing attention to the visuospatial attention abilities of individuals with dyslexia. Visuospatial attention refers to attention to the spatial locations of visual stimuli, and can be assessed with visual tasks such as spatial cueing, visual search, and visual attention span. Many studies, both in China and abroad, have found that individuals with developmental dyslexia show abnormalities in behavior and neural activity on visuospatial attention tasks. The underlying neural mechanisms involve not only abnormal activation of parietal regions associated with visuospatial attention, but also abnormal functional connectivity between brain regions (e.g., between parietal regions and the visual word-form area). Future research should use cross-sectional and longitudinal designs to explore the mechanisms underlying the relationship between dyslexia and the development of visuospatial attention, and to examine the possible moderating role of language characteristics in the visuospatial attention deficits of individuals with dyslexia.

19.
An analysis of activation models of visual word processing suggests that frequency-sensitive forms of lexical processing should proceed normally while unattended. This hypothesis was tested by having participants perform a speeded pitch discrimination task followed by lexical decisions or word naming. As the stimulus onset asynchrony between the tasks was reduced, lexical-decision and naming latencies increased dramatically. Word-frequency effects were additive with the increase, indicating that frequency-sensitive processing was subject to postponement while attention was devoted to the other task. Either (a) the same neural hardware shares responsibility for lexical processing and central stages of choice reaction-time task processing and cannot perform both computations simultaneously, or (b) lexical processing is blocked in order to optimize performance on the pitch discrimination task. Either way, word processing is not as automatic as activation models suggest.

20.
Speaker variations influence speechreading speed for dynamic faces   Cited: 1 (self-citations: 0, other citations: 1)
We investigated the influence of task-irrelevant speaker variations on speechreading performance. In three experiments with digitised video faces presented in dynamic, static-sequential, or static mode, participants performed speeded classifications on vowel utterances (German vowels /u/ and /i/). A Garner interference paradigm was used, in which speaker identity was task-irrelevant but could be either correlated, constant, or orthogonal to the vowel uttered. Reaction times for facial speech classifications were slowed by task-irrelevant speaker variations for dynamic stimuli. The results are discussed with reference to distributed models of face perception (Haxby et al., 2000, Trends in Cognitive Sciences, 4, 223-233) and the relevance of both dynamic information and speaker characteristics for speechreading.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号