Similar Literature
20 similar documents found
1.
Speech perception is audiovisual, as demonstrated by the McGurk effect in which discrepant visual speech alters the auditory speech percept. We studied the role of visual attention in audiovisual speech perception by measuring the McGurk effect in two conditions. In the baseline condition, attention was focused on the talking face. In the distracted attention condition, subjects ignored the face and attended to a visual distractor, which was a leaf moving across the face. The McGurk effect was weaker in the latter condition, indicating that visual attention modulated audiovisual speech perception. This modulation may occur at an early, unisensory processing stage, or it may be due to changes at the stage where auditory and visual information is integrated. We investigated this issue by conventional statistical testing, and by fitting the Fuzzy Logical Model of Perception (Massaro, 1998) to the results. The two methods suggested different interpretations, revealing a paradox in the current methods of analysis.
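This entry fits the Fuzzy Logical Model of Perception (FLMP; Massaro, 1998) to audiovisual identification data. As a rough sketch of what such a fit involves (not the authors' code: the response proportions, parameter names, and optimiser choice below are all assumptions), the two-alternative FLMP combines unisensory support values multiplicatively, P = av / (av + (1 - a)(1 - v)), and the free parameters are estimated by minimising the deviation between predicted and observed response proportions.

```python
# A rough sketch (assumed data, parameter names, and optimiser; not the authors'
# code) of fitting the two-alternative FLMP to audiovisual identification data.
import numpy as np
from scipy.optimize import minimize

def flmp(a, v):
    """FLMP integration rule: multiplicative combination of auditory support a
    and visual support v for one response alternative, renormalised."""
    return (a * v) / (a * v + (1 - a) * (1 - v))

# Hypothetical proportions of one response (e.g. /da/) for each auditory level
# (rows) crossed with each visual level (columns).
observed = np.array([[0.05, 0.40],
                     [0.30, 0.85],
                     [0.60, 0.97]])
n_aud, n_vis = observed.shape

def rmsd(params):
    a = params[:n_aud]                       # auditory support per auditory level
    v = params[n_aud:]                       # visual support per visual level
    predicted = flmp(a[:, None], v[None, :])
    return np.sqrt(np.mean((predicted - observed) ** 2))

x0 = np.full(n_aud + n_vis, 0.5)                 # neutral starting values
bounds = [(1e-3, 1 - 1e-3)] * (n_aud + n_vis)    # support values stay inside (0, 1)
fit = minimize(rmsd, x0, bounds=bounds, method="L-BFGS-B")
print("RMSD:", round(fit.fun, 4))
print("auditory support:", np.round(fit.x[:n_aud], 2))
print("visual support:", np.round(fit.x[n_aud:], 2))
```

Under this kind of analysis, the question the abstract raises (unisensory versus integration-stage modulation) roughly maps onto whether attention changes the fitted support parameters or the adequacy of the integration rule itself.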

2.
McCotter MV, Jordan TR. Perception, 2003, 32(8): 921-936
We conducted four experiments to investigate the role of colour and luminance information in visual and audiovisual speech perception. In experiments 1a (stimuli presented in quiet conditions) and 1b (stimuli presented in auditory noise), face display types comprised naturalistic colour (NC), grey-scale (GS), and luminance inverted (LI) faces. In experiments 2a (quiet) and 2b (noise), face display types comprised NC, colour inverted (CI), LI, and colour and luminance inverted (CLI) faces. Six syllables and twenty-two words were used to produce auditory and visual speech stimuli. Auditory and visual signals were combined to produce congruent and incongruent audiovisual speech stimuli. Experiments 1a and 1b showed that perception of visual speech, and its influence on identifying the auditory components of congruent and incongruent audiovisual speech, was less for LI than for either NC or GS faces, which produced identical results. Experiments 2a and 2b showed that perception of visual speech, and influences on perception of incongruent auditory speech, was less for LI and CLI faces than for NC and CI faces (which produced identical patterns of performance). Our findings for NC and CI faces suggest that colour is not critical for perception of visual and audiovisual speech. The effect of luminance inversion on performance accuracy was relatively small (5%), which suggests that the luminance information preserved in LI faces is important for the processing of visual and audiovisual speech.

3.
Seeing a talker's face influences auditory speech recognition, but the visible input essential for this influence has yet to be established. Using a new seamless editing technique, the authors examined effects of restricting visible movement to oral or extraoral areas of a talking face. In Experiment 1, visual speech identification and visual influences on identifying auditory speech were compared across displays in which the whole face moved, the oral area moved, or the extraoral area moved. Visual speech influences on auditory speech recognition were substantial and unchanging across whole-face and oral-movement displays. However, extraoral movement also influenced identification of visual and audiovisual speech. Experiments 2 and 3 demonstrated that these results are dependent on intact and upright facial contexts, but only with extraoral movement displays.

4.
Phoneme identification with audiovisually discrepant stimuli is influenced by information in the visual signal (the McGurk effect). Additionally, lexical status affects identification of auditorily presented phonemes. The present study tested for lexical influences on the McGurk effect. Participants identified phonemes in audiovisually discrepant stimuli in which lexical status of the auditory component and of a visually influenced percept was independently varied. Visually influenced (McGurk) responses were more frequent when they formed a word and when the auditory signal was a nonword (Experiment 1). Lexical effects were larger for slow than for fast responses (Experiment 2), as with auditory speech, and were replicated with stimuli matched on physical properties (Experiment 3). These results are consistent with models in which lexical processing of speech is modality independent.

5.
Spatial frequency band-pass and low-pass filtered images of a talker were used in an audiovisual speech-in-noise task. Three experiments tested subjects' use of information contained in the different filter bands with center frequencies ranging from 2.7 to 44.1 cycles/face (c/face). Experiment 1 demonstrated that information from a broad range of spatial frequencies enhanced auditory intelligibility. The frequency bands differed in the degree of enhancement, with a peak being observed in a mid-range band (11-c/face center frequency). Experiment 2 showed that this pattern was not influenced by viewing distance and, thus, that the results are best interpreted in object spatial frequency, rather than in retinal coordinates. Experiment 3 showed that low-pass filtered images could produce a performance equivalent to that produced by unfiltered images. These experiments are consistent with the hypothesis that high spatial resolution information is not necessary for audiovisual speech perception and that a limited range of the spatial frequency spectrum is sufficient.
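This entry's stimuli are face images band-pass filtered at centre frequencies specified in cycles/face. Below is a minimal sketch of such filtering in the Fourier domain, assuming the face spans the full image, an ideal hard-edged annular filter, and a one-octave bandwidth; the actual filter shape and bandwidth used in the study may well differ.

```python
# A minimal Fourier-domain band-pass sketch (not the stimulus-generation code
# from the study): keep spatial frequencies around a centre value given in
# cycles/face, assuming the face spans the full image width/height.
import numpy as np

def bandpass_face(image, center_cpf, bandwidth_octaves=1.0):
    """Ideal annular band-pass around center_cpf cycles per image (= cycles/face
    under the assumption above), with an assumed one-octave total bandwidth."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h                    # vertical frequency, cycles/image
    fx = np.fft.fftfreq(w) * w                    # horizontal frequency, cycles/image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)

    lo = center_cpf / 2 ** (bandwidth_octaves / 2)
    hi = center_cpf * 2 ** (bandwidth_octaves / 2)
    mask = (radius >= lo) & (radius <= hi)

    spectrum = np.fft.fft2(image)
    return np.real(np.fft.ifft2(spectrum * mask))

# Usage with a random array standing in for one video frame of the talker.
frame = np.random.rand(256, 256)
band_11 = bandpass_face(frame, center_cpf=11.0)   # the mid-range band from the abstract
```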

6.
Three experiments examined whether image manipulations known to disrupt face perception also disrupt visual speech perception. Research has shown that an upright face with an inverted mouth looks strikingly grotesque whereas an inverted face and an inverted face containing an upright mouth look relatively normal. The current study examined whether a similar sensitivity to upright facial context plays a role in visual speech perception. Visual and audiovisual syllable identification tasks were tested under 4 presentation conditions: upright face-upright mouth, inverted face-inverted mouth, inverted face-upright mouth, and upright face-inverted mouth. Results revealed that for some visual syllables only the upright face-inverted mouth image disrupted identification. These results suggest that upright facial context can play a role in visual speech perception. A follow-up experiment testing isolated mouths supported this conclusion.

7.
An experiment is reported, the results of which confirm and extend an earlier observation that visual information for the speaker's lip movements profoundly modifies the auditory perception of natural speech by normally hearing subjects. The effect is most pronounced when there is auditory information for a bilabial utterance combined with visual information for a nonlabial utterance. However, the effect is also obtained with the reverse combination, although to a lesser extent. These findings are considered for their relevance to auditory theories of speech perception.

8.
9.
10.
Multisensory integration, the binding of sensory information from different sensory modalities, may contribute to perceptual symptomatology in schizophrenia, including hallucinations and aberrant speech perception. Differences in multisensory integration and temporal processing, an important component of multisensory integration, are consistently found in schizophrenia. Evidence is emerging that these differences extend across the schizophrenia spectrum, including individuals in the general population with higher schizotypal traits. In the current study, we investigated the relationship between schizotypal traits and perceptual functioning, using audiovisual speech-in-noise, McGurk, and ternary synchrony judgment tasks. We measured schizotypal traits using the Schizotypal Personality Questionnaire (SPQ), hypothesizing that higher scores on Unusual Perceptual Experiences and Odd Speech subscales would be associated with decreased multisensory integration, increased susceptibility to distracting auditory speech, and less precise temporal processing. Surprisingly, these measures were not associated with the predicted subscales, suggesting that these perceptual differences may not be present across the schizophrenia spectrum.

11.
People naturally move their heads when they speak, and our study shows that this rhythmic head motion conveys linguistic information. Three-dimensional head and face motion and the acoustics of a talker producing Japanese sentences were recorded and analyzed. The head movement correlated strongly with the pitch (fundamental frequency) and amplitude of the talker's voice. In a perception study, Japanese subjects viewed realistic talking-head animations based on these movement recordings in a speech-in-noise task. The animations allowed the head motion to be manipulated without changing other characteristics of the visual or acoustic speech. Subjects correctly identified more syllables when natural head motion was present in the animation than when it was eliminated or distorted. These results suggest that nonverbal gestures such as head movements play a more direct role in the perception of speech than previously known.
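This entry reports that head motion correlated strongly with voice pitch (F0) and amplitude. The sketch below illustrates that kind of frame-by-frame correlation analysis using synthetic series standing in for the recordings; all values and variable names are assumptions, not the authors' measurements.

```python
# An illustrative correlation analysis (synthetic data, assumed names) of the
# kind described here: relating a rigid head-motion channel to the F0 and
# amplitude of the voice, frame by frame.
import numpy as np

rng = np.random.default_rng(0)
n_frames = 500                                    # hypothetical number of analysis frames
t = np.linspace(0, 8, n_frames)

f0 = 120 + 20 * np.sin(t) + rng.normal(0, 2, n_frames)               # Hz (synthetic)
amplitude = 60 + 5 * np.sin(t) + rng.normal(0, 1, n_frames)          # dB (synthetic)
head_pitch = 0.05 * (f0 - f0.mean()) + rng.normal(0, 0.3, n_frames)  # degrees (synthetic)

r_f0 = np.corrcoef(head_pitch, f0)[0, 1]
r_amp = np.corrcoef(head_pitch, amplitude)[0, 1]
print(f"head pitch vs F0: r = {r_f0:.2f}; head pitch vs amplitude: r = {r_amp:.2f}")
```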

12.
The multistable perception of speech, or verbal transformation effect, refers to perceptual changes experienced while listening to a speech form that is repeated rapidly and continuously. In order to test whether visual information from the speaker's articulatory gestures may modify the emergence and stability of verbal auditory percepts, subjects were instructed to report any perceptual changes during unimodal, audiovisual, and incongruent audiovisual presentations of distinct repeated syllables. In a first experiment, the perceptual stability of reported auditory percepts was significantly modulated by the modality of presentation. In a second experiment, when audiovisual stimuli consisting of a stable audio track dubbed with a video track that alternated between congruent and incongruent stimuli were presented, a strong correlation between the timing of perceptual transitions and the timing of video switches was found. Finally, a third experiment showed that the vocal tract opening onset event provided by the visual input could play the role of a bootstrap mechanism in the search for transformations. Altogether, these results demonstrate the capacity of visual information to control the multistable perception of speech in its phonetic content and temporal course. The verbal transformation effect thus provides a useful experimental paradigm to explore audiovisual interactions in speech perception.

13.
14.
Recent research suggests that synesthesia results from a hypersensitive multimodal binding mechanism. To address the question of whether multimodal integration is altered in synesthetes in general, grapheme-colour and auditory-visual synesthetes were investigated using speech-related stimulation in two behavioural experiments. First, we used the McGurk illusion to test the strength and number of illusory perceptions in synesthesia. In a second step, we analysed the gain in speech perception coming from seen articulatory movements under acoustically noisy conditions. We used disyllabic nouns as stimulation and varied the signal-to-noise ratio of the auditory stream presented concurrently with a matching video of the speaker. We hypothesized that if synesthesia is due to a general hyperbinding mechanism, this group of subjects should be more susceptible to McGurk illusions and profit more from the visual information during audiovisual speech perception. The results indicate that there are differences between synesthetes and controls concerning multisensory integration, but in the direction opposite to that hypothesized. Synesthetes showed a reduced number of illusions and a reduced gain in comprehension from viewing matching articulatory movements in comparison to control subjects. Our results indicate that, rather than having a hypersensitive binding mechanism, synesthetes show weaker integration of vision and audition.

15.
Since both speech and color have perceptual ramifications in language, the present study was developed to study speech perception through color perception. With the recent advances in perception generally and color perception specifically, this nontraditional approach to studying speech perception appeared reasonable. The 12 consonants utilized in this study generated 144 pairs of nonsense CV-syllables. The consonant ensemble was selected because it accommodated a maximum number of phonological features with a minimum number of phonemes. The 46 subjects, who were in junior high school with an age range of 11 through 14 years, responded to each of the stimulus pairs by assigning (associating) to it one of the six primary colors. Because of the perceptual orderliness associated with subjects' judgments, the results indicated that color can be used to study speech perception. Specific findings included the retrieval of sameness, fronting (or place), and voicing. This study is based on a master's thesis presented to the Graduate School, Tennessee State University in Nashville. The authors gratefully acknowledge the cooperation and assistance of the Speech and Hearing Institute at the University of Texas Health Science Center-Houston.

16.
17.
We conducted three experiments in order to examine the influence of gaze behavior and fixation on audiovisual speech perception in a task that required subjects to report the speech sound they perceived during the presentation of congruent and incongruent (McGurk) audiovisual stimuli. Experiment 1 showed that the subjects' natural gaze behavior rarely involved gaze fixations beyond the oral and ocular regions of the talker's face and that these gaze fixations did not predict the likelihood of perceiving the McGurk effect. Experiments 2 and 3 showed that manipulation of the subjects' gaze fixations within the talker's face did not influence audiovisual speech perception substantially and that it was not until the gaze was displaced beyond 10-20 degrees from the talker's mouth that the McGurk effect was significantly lessened. Nevertheless, the effect persisted under such eccentric viewing conditions and became negligible only when the subject's gaze was directed 60 degrees eccentrically. These findings demonstrate that the analysis of high spatial frequency information afforded by direct oral foveation is not necessary for the successful processing of visual speech information.
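Gaze displacement in this entry is expressed in degrees of visual angle. To relate that to screen geometry, the small helper below converts an on-screen offset and a viewing distance into eccentricity in degrees; the formula is standard, but the example numbers are assumptions rather than values from the study.

```python
# A standard conversion (not taken from the study) from an on-screen offset
# relative to a fixated point into degrees of visual angle (eccentricity).
import math

def eccentricity_deg(offset_cm: float, viewing_distance_cm: float) -> float:
    """Angle between the line of sight and a point offset_cm away on the screen."""
    return math.degrees(math.atan(offset_cm / viewing_distance_cm))

# Example with assumed numbers: a fixation point 20 cm from the talker's mouth,
# viewed from 57 cm, lies roughly 19 degrees eccentric to the mouth.
print(round(eccentricity_deg(20, 57), 1))
```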

18.
Young infants are capable of integrating auditory and visual information, and their speech perception can be influenced by visual cues, while 5-month-olds detect mismatch between mouth articulations and speech sounds. From 6 months of age, infants gradually shift their attention away from the eyes and towards the mouth in articulating faces, potentially to benefit from intersensory redundancy of audiovisual (AV) cues. Using eye tracking, we investigated whether 6- to 9-month-olds showed a similar age-related increase of looking to the mouth, while observing congruent and/or redundant versus mismatched and non-redundant speech cues. Participants distinguished between congruent and incongruent AV cues as reflected by the amount of looking to the mouth. They showed an age-related increase in attention to the mouth, but only for non-redundant, mismatched AV speech cues. Our results highlight the role of intersensory redundancy and audiovisual mismatch mechanisms in facilitating the development of speech processing in infants under 12 months of age.

19.
20.