Similar Literature
20 similar documents found (search time: 8 ms)
1.
Differential hemispheric contributions to the perceptual phenomenon known as the McGurk effect were examined in normal subjects, 1 callosotomy patient, and 4 patients with intractable epilepsy. Twenty-five right-handed subjects were more likely to demonstrate an influence of a mouthed word on identification of a dubbed acoustic word when the speaker’s face was lateralized to the LVF as compared with the RVF. In contrast, display of printed response alternatives in the RVF elicited a greater percentage of McGurk responses than display in the LVF. Visual field differences were absent in a group of 15 left-handed subjects. These results suggest that in right-handers, the two hemispheres may make distinct contributions to the McGurk effect. The callosotomy patient demonstrated reliable McGurk effects, but at a lower rate than the normal subjects and the epileptic control subjects. These data support the view that both the right and left hemispheres can make significant contributions to the McGurk effect.

2.
Spontaneous beat gestures are an integral part of the paralinguistic context during face-to-face conversations. Here we investigated the time course of beat-speech integration in speech perception by measuring ERPs evoked by words pronounced with or without an accompanying beat gesture, while participants watched a spoken discourse. Words accompanied by beats elicited a positive shift in ERPs at an early sensory stage (before 100 ms) and at a later time window coinciding with the auditory component P2. The same word tokens produced no ERP differences when participants listened to the discourse without view of the speaker. We conclude that beat gestures are integrated with speech early in time and modulate sensory/phonological levels of processing. The present results support the possible role of beats as highlighters, helping the listener to direct the focus of attention to important information and to modulate the parsing of the speech stream.

3.
4.
Buchan JN, Munhall KG. Perception, 2011, 40(10): 1164-1182
Conflicting visual speech information can influence the perception of acoustic speech, causing an illusory percept of a sound not present in the actual acoustic speech (the McGurk effect). We examined whether participants can voluntarily selectively attend to either the auditory or visual modality by instructing participants to pay attention to the information in one modality and to ignore competing information from the other modality. We also examined how performance under these instructions was affected by weakening the influence of the visual information by manipulating the temporal offset between the audio and video channels (experiment 1), and the spatial frequency information present in the video (experiment 2). Gaze behaviour was also monitored to examine whether attentional instructions influenced the gathering of visual information. While task instructions did have an influence on the observed integration of auditory and visual speech information, participants were unable to completely ignore conflicting information, particularly information from the visual stream. Manipulating temporal offset had a more pronounced interaction with task instructions than manipulating the amount of visual information. Participants' gaze behaviour suggests that the attended modality influences the gathering of visual information in audiovisual speech perception.

5.
6.
7.
Dupoux E, de Gardelle V, Kouider S. Cognition, 2008, 109(2): 267-273
Current theories of consciousness assume a qualitative dissociation between conscious and unconscious processing: while subliminal stimuli only elicit a transient activity, supraliminal stimuli have long-lasting influences. Nevertheless, the existence of this qualitative distinction remains controversial, as past studies confounded awareness and stimulus strength (energy, duration). Here, we used a masked speech priming method in conjunction with a submillisecond interaural delay manipulation to contrast subliminal and supraliminal processing at constant prime, mask and target strength. This delay induced a perceptual streaming effect, with the prime popping out in the supraliminal condition. By manipulating the prime-target interval (ISI), we show a qualitatively distinct profile of priming longevity as a function of prime awareness. While subliminal priming disappeared after half a second, supraliminal priming was independent of ISI. This shows that the distinction between conscious and unconscious processing depends on high-level perceptual streaming factors rather than low-level features (energy, duration).

8.
Results of auditory speech experiments show that reaction times (RTs) for place classification in a test condition in which stimuli vary along the dimensions of both place and voicing are longer than RTs in a control condition in which stimuli vary only in place. Similar results are obtained when subjects are asked to classify the stimuli along the voicing dimension. By taking advantage of the "McGurk" effect (McGurk & MacDonald, 1976), the present study investigated whether a similar pattern of interference extends to situations in which variation along the place dimension occurs in the visual modality. The results showed that RTs for classifying phonetic features in the test condition were significantly longer than in the control condition for both the place and voicing dimensions. These results indicate that mutual and symmetric interference exists in the classification of the two dimensions, even when the variation along the dimensions occurs in separate modalities.

9.
To what extent is simultaneous visual and auditory perception subject to capacity limitations and attentional control? Two experiments addressed this question by asking observers to recognize test tones and test letters under selective and divided attention. In Experiment 1, both stimuli occurred on each trial, but subjects were cued in advance to process just one or both of the stimuli. In Experiment 2, subjects processed one stimulus and then the other or processed both stimuli simultaneously. Processing time was controlled using a backward recognition masking task. A significant, but small, attention effect was found in both experiments. The present positive results weaken the interpretation that previous attentional effects were due to the particular duration judgment task that was employed. The answer to the question addressed by the experiments appears to be that the degree of capacity limitations and attentional control during visual and auditory perception is small but significant.

10.
Visual information provided by a talker's mouth movements can influence the perception of certain speech features. Thus, the "McGurk effect" shows that when the syllable /bi/ is presented audibly, in synchrony with the syllable /gi/ presented visually, a person perceives the talker as saying /di/. Moreover, studies have shown that interactions occur between place and voicing features in phonetic perception when information is presented audibly. In our first experiment, we asked whether feature interactions occur when place information is specified by a combination of auditory and visual information. Members of an auditory continuum ranging from /ibi/ to /ipi/ were paired with a video display of a talker saying /igi/. The auditory tokens were heard as ranging from /ibi/ to /ipi/, but the auditory-visual tokens were perceived as ranging from /idi/ to /iti/. The results demonstrated that the voicing boundary for the auditory-visual tokens was located at a significantly longer VOT value than the voicing boundary for the auditory continuum presented without the visual information. These results demonstrate that place-voice interactions are not limited to situations in which place information is specified audibly. (ABSTRACT TRUNCATED AT 250 WORDS)

11.
Several recent theories of visual information processing have postulated that errors in recognition may result not only from a failure in feature extraction, but also from a failure to correctly join features after they have been correctly extracted. Errors that result from incorrectly integrating features are called conjunction errors. The present study uses conjunction errors to investigate the principles used by the visual system to integrate features. The research tests whether the visual system is more likely to integrate features located close together in visual space (the location principle) or whether the visual system is more likely to integrate features from stimulus items that come from the same perceptual group or object (the perceptual group principle). In four target-detection experiments, stimuli were created so that feature integration by the location principle and feature integration by the perceptual group principle made different predictions for performance. In all of the experiments, the perceptual group principle predicted feature integration even though the distance between stimulus items and retinal eccentricity were strictly controlled.

12.
Summary. Perceptual organization during short tachistoscopic presentation of stimulus patterns formed by ten moving bright spots, representing a human body walking, running, etc., was investigated. Exposure times were 0.1 sec to 0.5 sec. The results reveal that in all Ss the dot pattern is perceptually organized into a gestalt (a walking, running, etc., person) at an exposure time of 0.2 sec. 40% of Ss perceived a human body in such motion at presentation times as short as 0.1 sec. Under the experimental conditions used, the track length of the bright spots at the threshold of integration into a moving unit was of the size order 1° of visual angle. This result is regarded as indicating that a complex vector analysis of the proximal motion pattern is accomplished at the initial stage of physiological signal recording and that it is a consequence of receptive field organization. It is discussed in terms of vector calculus.

13.
People naturally move their heads when they speak, and our study shows that this rhythmic head motion conveys linguistic information. Three-dimensional head and face motion and the acoustics of a talker producing Japanese sentences were recorded and analyzed. The head movement correlated strongly with the pitch (fundamental frequency) and amplitude of the talker's voice. In a perception study, Japanese subjects viewed realistic talking-head animations based on these movement recordings in a speech-in-noise task. The animations allowed the head motion to be manipulated without changing other characteristics of the visual or acoustic speech. Subjects correctly identified more syllables when natural head motion was present in the animation than when it was eliminated or distorted. These results suggest that nonverbal gestures such as head movements play a more direct role in the perception of speech than previously known.

14.
Speech alignment is the tendency for interlocutors to unconsciously imitate one another’s speaking style. Alignment also occurs when a talker is asked to shadow recorded words (e.g., Shockley, Sabadini, & Fowler, 2004). In two experiments, we examined whether alignment could be induced with visual (lipread) speech and with auditory speech. In Experiment 1, we asked subjects to lipread a model silently uttering words and to shadow the words out loud. The results indicate that shadowed utterances sounded more similar to the model’s utterances than did subjects’ nonshadowed read utterances. This suggests that speech alignment can be based on visual speech. In Experiment 2, we tested whether raters could perceive alignment across modalities. Raters were asked to judge the relative similarity between a model’s visual (silent video) utterance and subjects’ audio utterances. The subjects’ shadowed utterances were again judged as more similar to the model’s than were the read utterances, suggesting that raters are sensitive to cross-modal similarity between aligned words.

15.
A computer program capable of supporting auditory and speech perception experimentation is described. Given a continuum of acoustic stimuli, the program (Paradigm) allows the user to present those stimuli under different, well-known psychophysical paradigms (simple identification, identification with a rating scale, 2IAX, ABX, AXB, and oddity task). For discrimination tests, both high uncertainty (roving designs) and minimal psychophysical uncertainty (fixed designs) procedures are available. All the relevant time intervals can be precisely specified, and feedback is also available. Response times can be measured as well. Furthermore, the program stores subjects’ responses and provides summaries of experimental results for both individual subjects and groups. The program runs on Microsoft Windows (3.1 or 95) on personal computers equipped with any soundboard.
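As a rough illustration of the ABX discrimination logic that Paradigm supports: on each trial two reference stimuli, A and B, are played, followed by a probe X that is a copy of one of them, and the listener judges whether X matched the first or the second stimulus. The Python sketch below is not the program's own code (Paradigm is a Windows 3.1/95 application); the play and get_response callbacks are hypothetical stand-ins for soundboard output and keyboard input.

    import random

    def abx_trial(stim_a, stim_b, play, get_response, isi_fn=None):
        # Present A, then B, then X, where X is a copy of either A or B.
        # The listener reports whether X matched the first ("A") or the
        # second ("B") stimulus; the trial is scored for correctness.
        x_is_a = random.choice([True, False])
        sequence = [stim_a, stim_b, stim_a if x_is_a else stim_b]
        for stim in sequence:
            play(stim)
            if isi_fn is not None:
                isi_fn()  # pause implementing the inter-stimulus interval
        response = get_response()  # expected to return "A" or "B"
        return response == ("A" if x_is_a else "B")

Feedback, rating-scale identification, and the other paradigms the abstract lists would be variations on this trial loop with different stimulus sequences and response sets.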

16.
Responses are typically faster and more accurate when both auditory and visual modalities are stimulated than when only one is. This bimodal advantage is generally attributed to a speeding of responding on bimodal trials, relative to unimodal trials. It remains possible that this effect might be due to a performance decrement on unimodal ones. To investigate this, two levels of auditory and visual signal intensities were combined in a double-factorial paradigm. Responses to the onset of the imperative signal were measured under go/no-go conditions. Mean reaction times to the four types of bimodal stimuli exhibited a superadditive interaction. This is evidence for the parallel self-terminating processing of the two signal components. Violations of the race model inequality also occurred, and measures of processing capacity showed that efficiency was greater on the bimodal than on the unimodal trials. These data are discussed in terms of a possible underlying neural substrate.
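The race model inequality referenced above (Miller, 1982) bounds the bimodal reaction-time distribution by the sum of the unimodal ones: F_AV(t) <= F_A(t) + F_V(t) for every time t. Times at which the bimodal cumulative distribution exceeds this bound cannot be explained by a race between independent unimodal processes. A minimal sketch of how such a test might be computed from raw reaction times, assuming NumPy and hypothetical RT samples in milliseconds (not the study's analysis code):

    import numpy as np

    def ecdf(sample, t_grid):
        # Proportion of reaction times at or below each point of t_grid.
        sample = np.sort(np.asarray(sample, dtype=float))
        return np.searchsorted(sample, t_grid, side="right") / sample.size

    def race_model_violations(rt_av, rt_a, rt_v, t_grid):
        # Times at which the bimodal CDF exceeds the race model bound,
        # i.e., where F_AV(t) > min(F_A(t) + F_V(t), 1).
        t_grid = np.asarray(t_grid, dtype=float)
        bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
        return t_grid[ecdf(rt_av, t_grid) > bound]

Any time points returned mark violations of the inequality, which is the pattern this abstract reports as evidence against a simple race account.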

17.
Eye movements and the integration of visual memory and visual perception
Because visual perception has temporal extent, temporally discontinuous input must be linked in memory. Recent research has suggested that this may be accomplished by integrating the active contents of visual short-term memory (VSTM) with subsequently perceived information. In the present experiments, we explored the relationship between VSTM consolidation and maintenance and eye movements, in order to discover how attention selects the information that is to be integrated. Specifically, we addressed whether stimuli needed to be overtly attended in order to be included in the memory representation or whether covert attention was sufficient. Results demonstrated that in static displays in which the to-be-integrated information was presented in the same spatial location, VSTM consolidation proceeded independently of the eyes, since subjects made few eye movements. In dynamic displays, however, in which the to-be-integrated information was presented in different spatial locations, eye movements were directly related to task performance. We conclude that these differences are related to different encoding strategies. In the static display case, VSTM was maintained in the same spatial location as that in which it was generated. This could apparently be accomplished with covert deployments of attention. In the dynamic case, however, VSTM was generated in a location that did not overlap with one of the to-be-integrated percepts. In order to "move" the memory trace, overt shifts of attention were required.

18.
Cross-modal effects on visual and auditory object perception

19.
Attention is often conceived as a gateway to consciousness (Posner, 1994). Although endogenous spatial attention may be independent of conscious perception (CP) (Koch & Tsuchiya, 2007), exogenous spatial orienting seems instead to be an important modulator of CP (Chica et al., 2010; Chica, Lasaponara, et al., 2011). Here, we investigate the role of auditory alerting in CP in normal observers. We used a behavioral task in which phasic alerting tones were presented either at unpredictable or at predictable time intervals prior to the occurrence of a near-threshold visual target. We find, for the first time in neurologically intact observers, that phasic alertness increases CP, both objectively and subjectively. This result is consistent with evidence showing that phasic alerting can ameliorate the spatial bias exhibited by visual neglect patients (Robertson, Mattingley, Rorden, & Driver, 1998). The alerting network may increase the activity of fronto-parietal networks involved in the top-down amplification required to bring a stimulus into consciousness (Dehaene, Changeux, Naccache, Sackur, & Sergent, 2006).

20.
Two experiments involving memory retrieval of auditorily and visually presented materials were performed. In Experiment I, subjects were presented with memory sets of 1, 2, or 4 stimuli and then with a test item to be classified as belonging or not belonging to the memory set. In Condition 1, each memory stimulus was a single, auditorily presented letter. In Condition 2, each memory stimulus was a visually presented letter. In Conditions 3 and 4, each memory stimulus was a pair of letters, one presented visually and the other auditorily. Mean reaction time (RT) for the classification task increased as a function of the number of memory stimuli at equal rates for all four conditions. This was interpreted as evidence for a parallel scanning process in Conditions 3 and 4, where the auditory item and visual item of each memory stimulus pair can be scanned simultaneously. Experiment II compared memory retrieval in a simultaneous condition, in which auditory and visual memory items were presented as pairs, with a sequential condition, in which mixed auditory-visual memory sets were presented one item at a time. RTs were shorter for the simultaneous condition. This was interpreted as evidence that parallel scanning may depend upon memory input parameters.
