Similar Documents
20 similar documents found (search time: 15 ms)
1.
This study explored asymmetries for movement, expression and perception of visual speech. Sixteen dextral models were videoed as they articulated: 'bat,' 'cat,' 'fat,' and 'sat.' Measurements revealed that the right side of the mouth was opened wider and for a longer period than the left. The asymmetry was accentuated at the beginning and end of the vocalization and was attenuated for words where the lips did not articulate the first consonant. To measure asymmetries in expressivity, 20 dextral observers watched silent videos and reported what was said. The model's mouth was covered so that the left, right or both sides were visible. Fewer errors were made when the right mouth was visible compared to the left, suggesting that the right side is more visually expressive of speech. Investigation of asymmetries in perception using mirror-reversed clips revealed that participants did not preferentially attend to one side of the speaker's face. A correlational analysis revealed an association between movement and expressivity whereby a more motile right mouth led to stronger visual expressivity of the right mouth. The asymmetries are most likely driven by left hemisphere specialization for language, which causes a rightward motoric bias.

2.
The study of cerebral specialization in persons with Down syndrome (DS) has revealed an anomalous pattern of organization. Specifically, dichotic listening studies (e.g., Elliott & Weeks, 1993) have suggested a left ear/right hemisphere dominance for speech perception for persons with DS. In the current investigation, the cerebral dominance for speech production was examined using the mouth asymmetry technique. In right-handed, nonhandicapped subjects, mouth asymmetry methodology has shown that during speech, the right side of the mouth opens sooner and to a larger degree than the left side (Graves, Goodglass, & Landis, 1982). The phenomenon of right mouth asymmetry (RMA) is believed to reflect the direct access that the musculature on the right side of the face has to the left hemisphere's speech production systems. This direct access may facilitate the transfer of innervatory patterns to the muscles on the right side of the face. In the present study, the lateralization for speech production was investigated in 10 right-handed participants with DS and 10 nonhandicapped subjects. An RMA at the initiation and end of speech production occurred for subjects in both groups. Surprisingly, the degree of asymmetry between groups did not differ, suggesting that the lateralization of speech production is similar for persons with and persons without DS. These results support the biological dissociation model (Elliott, Weeks, & Elliott, 1987), which holds that persons with DS display a unique dissociation between speech perception (right hemisphere) and speech production (left hemisphere).

3.
When a normal subject is speaking, the right side of the mouth typically opens more widely or moves over a greater total distance. This asymmetry is most consistent during purely verbal word list generation and verbal recall tasks, less consistent when emotional expression and/or visual imagery is involved, and reversed during smiling. Aphasic patients also show the right bias during word lists, repetition, and conversation, but not during serial speech, singing and smiling. Since observable mouth asymmetry is presumed to result from hemispheric asymmetry in motor control, these observations confirm the major role of a left hemisphere control system for pure verbal expression and provide evidence for involvement of the right hemisphere in mouth motor control during emotional and prosodic expression or visual imagery. Therapeutic possibilities are also suggested.

4.
The effect of attention on cerebral dominance and the asymmetry between left and right ears was investigated using a selective listening task. Right-handed subjects were presented with simultaneous dichotic speech messages; they shadowed one message in either the right or left ear and at the same time tapped with either the right or the left hand when they heard a specified target word in either message. The ear asymmetry was shown only when subjects' attention was focused on some other aspect of the task: they tapped to more targets in the right ear, but only when these came in the non-shadowed message; they made more shadowing errors with the left ear message, but chiefly for non-target words. The verbal response of shadowing showed the right ear dominance more clearly than the manual response of tapping. Tapping with the left hand interfered more with shadowing than tapping with the right hand, but there was little correlation between the degree of hand and of ear asymmetry over individual subjects. The results support the idea that the right ear dominance is primarily a quantitative difference in the distribution of attention to left and right ear inputs reaching the left hemisphere speech areas. This affects both the efficiency of speech perception and the degree of response competition between simultaneous verbal and manual responses.

5.
Differential hemispheric contributions to the perceptual phenomenon known as the McGurk effect were examined in normal subjects, 1 callosotomy patient, and 4 patients with intractable epilepsy. Twenty-five right-handed subjects were more likely to demonstrate an influence of a mouthed word on identification of a dubbed acoustic word when the speaker’s face was lateralized to the LVF as compared with the RVF. In contrast, display of printed response alternatives in the RVF elicited a greater percentage of McGurk responses than display in the LVF. Visual field differences were absent in a group of 15 left-handed subjects. These results suggest that in right-handers, the two hemispheres may make distinct contributions to the McGurk effect. The callosotomy patient demonstrated reliable McGurk effects, but at a lower rate than the normal subjects and the epileptic control subjects. These data support the view that both the right and left hemispheres can make significant contributions to the McGurk effect.

6.
The present study examined interactions of speech production and finger-tapping movement, using a syncopated motor task with two movements in 10 male right-handed undergraduate students (M age = 21.0 yr.; SD = 1.4). On the syncopated task, participants were required to produce one movement exactly midway between two other movements (target interresponse interval: 250 msec.). They were divided into two groups, the tap-preceding group and speech-preceding group. The author observed that the right hand showed a more variable peak force and intertap interval than the left hand in the speech-preceding group, indicating an asymmetrical interference of two movements. On the other hand, the mean differences between onsets of speech and tapping movement were shorter than 250 msec. over all conditions (the shortest mean difference was 50 msec.), suggesting a mutual entrainment of two movements. An asymmetry of entrainment was observed in the speech-preceding group, in which speech production was more strongly entrained with movements of the right hand than with those of the left hand.

7.
We conducted three experiments in order to examine the influence of gaze behavior and fixation on audiovisual speech perception in a task that required subjects to report the speech sound they perceived during the presentation of congruent and incongruent (McGurk) audiovisual stimuli. Experiment 1 showed that the subjects' natural gaze behavior rarely involved gaze fixations beyond the oral and ocular regions of the talker's face and that these gaze fixations did not predict the likelihood of perceiving the McGurk effect. Experiments 2 and 3 showed that manipulation of the subjects' gaze fixations within the talker's face did not influence audiovisual speech perception substantially and that it was not until the gaze was displaced beyond 10-20 degrees from the talker's mouth that the McGurk effect was significantly lessened. Nevertheless, the effect persisted under such eccentric viewing conditions and became negligible only when the subject's gaze was directed 60 degrees eccentrically. These findings demonstrate that the analysis of high spatial frequency information afforded by direct oral foveation is not necessary for the successful processing of visual speech information.

8.
Phoneme identification with audiovisually discrepant stimuli is influenced by information in the visual signal (the McGurk effect). Additionally, lexical status affects identification of auditorily presented phonemes. The present study tested for lexical influences on the McGurk effect. Participants identified phonemes in audiovisually discrepant stimuli in which lexical status of the auditory component and of a visually influenced percept was independently varied. Visually influenced (McGurk) responses were more frequent when they formed a word and when the auditory signal was a nonword (Experiment 1). Lexical effects were larger for slow than for fast responses (Experiment 2), as with auditory speech, and were replicated with stimuli matched on physical properties (Experiment 3). These results are consistent with models in which lexical processing of speech is modality independent.

9.
The McGurk effect, where an incongruent visual syllable influences identification of an auditory syllable, does not always occur, suggesting that perceivers sometimes fail to use relevant visual phonetic information. We tested whether another visual phonetic effect, which involves the influence of visual speaking rate on perceived voicing (Green & Miller, 1985), would occur in instances when the McGurk effect does not. In Experiment 1, we established this visual rate effect using auditory and visual stimuli matching in place of articulation, finding a shift in the voicing boundary along an auditory voice-onset-time continuum with fast versus slow visual speech tokens. In Experiment 2, we used auditory and visual stimuli differing in place of articulation and found a shift in the voicing boundary due to visual rate when the McGurk effect occurred and, more critically, when it did not. The latter finding indicates that phonetically relevant visual information is used in speech perception even when the McGurk effect does not occur, suggesting that the incidence of the McGurk effect underestimates the extent of audio-visual integration.

10.
In noisy situations, visual information plays a critical role in the success of speech communication: listeners are better able to understand speech when they can see the speaker. Visual influence on auditory speech perception is also observed in the McGurk effect, in which discrepant visual information alters listeners’ auditory perception of a spoken syllable. When hearing /ba/ while seeing a person saying /ga/, for example, listeners may report hearing /da/. Because these two phenomena have been assumed to arise from a common integration mechanism, the McGurk effect has often been used as a measure of audiovisual integration in speech perception. In this study, we test whether this assumed relationship exists within individual listeners. We measured participants’ susceptibility to the McGurk illusion as well as their ability to identify sentences in noise across a range of signal-to-noise ratios in audio-only and audiovisual modalities. Our results do not show a relationship between listeners’ McGurk susceptibility and their ability to use visual cues to understand spoken sentences in noise, suggesting that McGurk susceptibility may not be a valid measure of audiovisual integration in everyday speech processing.

11.
Three experiments are reported on the influence of different timing relations on the McGurk effect. In the first experiment, it is shown that strict temporal synchrony between auditory and visual speech stimuli is not required for the McGurk effect. Subjects were strongly influenced by the visual stimuli when the auditory stimuli lagged the visual stimuli by as much as 180 msec. In addition, a stronger McGurk effect was found when the visual and auditory vowels matched. In the second experiment, we paired auditory and visual speech stimuli produced under different speaking conditions (fast, normal, clear). The results showed that the manipulations in both the visual and auditory speaking conditions independently influenced perception. In addition, there was a small but reliable tendency for the better matched stimuli to elicit more McGurk responses than unmatched conditions. In the third experiment, we combined auditory and visual stimuli produced under different speaking conditions (fast, clear) and delayed the acoustics with respect to the visual stimuli. The subjects showed the same pattern of results as in the second experiment. Finally, the delay did not cause different patterns of results for the different audiovisual speaking style combinations. The results suggest that perceivers may be sensitive to the concordance of the time-varying aspects of speech but they do not require temporal coincidence of that information.

12.
We report a 53-year-old patient (AWF) who has an acquired deficit of audiovisual speech integration, characterized by a perceived temporal mismatch between speech sounds and the sight of moving lips. AWF was less accurate on an auditory digit span task with vision of a speaker's face as compared to a condition in which no visual information from the lower face was available. He was slower in matching words to pictures when he saw congruent lip movements compared to no lip movements or non-speech lip movements. Unlike normal controls, he showed no McGurk effect. We propose that multisensory binding of audiovisual language cues can be selectively disrupted.

13.
In the McGurk effect, visual information specifying a speaker’s articulatory movements can influence auditory judgments of speech. In the present study, we attempted to find an analogue of the McGurk effect by using nonspeech stimuli—the discrepant audiovisual tokens of plucks and bows on a cello. The results of an initial experiment revealed that subjects’ auditory judgments were influenced significantly by the visual pluck and bow stimuli. However, a second experiment in which speech syllables were used demonstrated that the visual influence on consonants was significantly greater than the visual influence observed for pluck-bow stimuli. This result could be interpreted to suggest that the nonspeech visual influence was not a true McGurk effect. In a third experiment, visual stimuli consisting of the words pluck and bow were found to have no influence over auditory pluck and bow judgments. This result could suggest that the nonspeech effects found in Experiment 1 were based on the audio and visual information’s having an ostensive lawful relation to the specified event. These results are discussed in terms of motor-theory, ecological, and FLMP approaches to speech perception.

14.
Young adults talked to an experimenter about their emotional reactions to video episodes intended to evoke either negative or positive affect. Facial behavior was simultaneously videotaped from three perspectives (full-face, a 90° right profile, and a 90° left profile) without their awareness. Judges viewed a subset of dynamic expressions in one of the three facial perspectives in either normal or mirror-reversed orientation. While subjects talked about a negative affect elicitor, the left hemiface and the full-face were perceived as more expressive than the right hemiface. The left hemiface, in reversed orientation, was perceived to display more emotion than the same expression in original orientation for positive or negative affect. These results are discussed in the context of the literature exploring hemifacial differences in emotional expression and mouth asymmetry during propositional speech.

15.
Visual information provided by a talker's mouth movements can influence the perception of certain speech features. Thus, the "McGurk effect" shows that when the syllable /bi/ is presented audibly, in synchrony with the syllable /gi/ as it is presented visually, a person perceives the talker as saying /di/. Moreover, studies have shown that interactions occur between place and voicing features in phonetic perception, when information is presented audibly. In our first experiment, we asked whether feature interactions occur when place information is specified by a combination of auditory and visual information. Members of an auditory continuum ranging from /ibi/ to /ipi/ were paired with a video display of a talker saying /igi/. The auditory tokens were heard as ranging from /ibi/ to /ipi/, but the auditory-visual tokens were perceived as ranging from /idi/ to /iti/. The results demonstrated that the voicing boundary for the auditory-visual tokens was located at a significantly longer VOT value than the voicing boundary for the auditory continuum presented without the visual information. These results demonstrate that place-voice interactions are not limited to situations in which place information is specified audibly. (ABSTRACT TRUNCATED AT 250 WORDS)

16.
The importance of visual cues in speech perception is illustrated by the McGurk effect, whereby a speaker’s facial movements affect speech perception. The goal of the present study was to evaluate whether the McGurk effect is also observed for sung syllables. Participants heard and saw sung instances of the syllables /ba/ and /ga/ and then judged the syllable they perceived. Audio-visual stimuli were congruent or incongruent (e.g., auditory /ba/ presented with visual /ga/). The stimuli were presented as spoken, sung in an ascending and descending triad (C E G G E C), and sung in an ascending and descending triad that returned to a semitone above the tonic (C E G G E C#). Results revealed no differences in the proportion of fusion responses between spoken and sung conditions, confirming that cross-modal phonemic information is integrated similarly in speech and song.

17.
Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired (“unity assumption”). Participants made temporal order judgments (TOJ) and simultaneity judgments (SJ) about sine-wave speech (SWS) replicas of pseudowords and the corresponding video of the face. Listeners in speech and non-speech mode were equally sensitive when judging audiovisual temporal order. Yet, using the McGurk effect, we could demonstrate that the sound was more likely to be integrated with lipread speech if heard as speech than as non-speech. Judging temporal order in audiovisual speech is thus unaffected by whether the auditory and visual streams are paired. Conceivably, previously found differences between speech and non-speech stimuli are not due to the putative “special” nature of speech, but rather reflect low-level stimulus differences.

18.
Serial-verbal short-term memory is impaired by irrelevant sound, particularly when the sound changes acoustically (the changing-state effect). In contrast, short-term recall of semantic information is impaired only by the semanticity of irrelevant speech, particularly when it is semantically related to the target memory items (the between-sequence semantic similarity effect). Previous research indicates that the changing-state effect is larger when the sound is presented to the left ear in comparison to the right ear, the left ear disadvantage. In this paper, we report a novel finding whereby the between-sequence semantic similarity effect is larger when the irrelevant speech is presented to the right ear in comparison to the left ear, but this right ear disadvantage is found only when meaning is the basis of recall (Experiments 1 and 3), not when order is the basis of recall (Experiment 2). Our results complement previous research on hemispheric asymmetry effects in cross-modal auditory distraction by demonstrating a role for the left hemisphere in semantic auditory distraction.

19.
Recent research suggests that synesthesia results from a hypersensitive multimodal binding mechanism. To address the question of whether multimodal integration is altered in synesthetes in general, grapheme-colour and auditory-visual synesthetes were investigated using speech-related stimulation in two behavioural experiments. First, we used the McGurk illusion to test the strength and number of illusory perceptions in synesthesia. In a second step, we analysed the gain in speech perception coming from seen articulatory movements under acoustically noisy conditions. We used disyllabic nouns as stimuli and varied the signal-to-noise ratio of the auditory stream presented concurrently with a matching video of the speaker. We hypothesized that if synesthesia is due to a general hyperbinding mechanism, this group of subjects should be more susceptible to McGurk illusions and profit more from the visual information during audiovisual speech perception. The results indicate that there are differences between synesthetes and controls concerning multisensory integration, but in the opposite direction to that hypothesized. Synesthetes showed a reduced number of illusions and had a reduced gain in comprehension from viewing matching articulatory movements in comparison to control subjects. Our results indicate that rather than having a hypersensitive binding mechanism, synesthetes show weaker integration of vision and audition.

20.
More frequent appearance of herpes zoster infection on the left side of the body has been noted. In women, breast cancer occurs more frequently on the left side. It has been suggested that the left neocortex is involved in neuroimmunomodulation via the dopaminergic system. In this study, our purpose was to investigate the possible difference in cell-mediated hypersensitivity between the right and left sides of the body using the tuberculin test with 22 male and 36 female healthy high school students. In the present study, cell-mediated hypersensitivity was higher on the left side of the body than on the right. This difference was slightly more apparent in the girls and may be related to brain asymmetry in neuroimmunomodulation.

