21.
The McGurk effect is usually presented as an example of fast, automatic, multisensory integration. We report a series of experiments designed to directly assess these claims. We used a syllabic version of the speeded classification paradigm, whereby response latencies to the first (target) syllable of spoken word-like stimuli are slowed down when the second (irrelevant) syllable varies from trial to trial. This interference effect is interpreted as a failure of selective attention to filter out the irrelevant syllable. In Experiment 1 we reproduced the syllabic interference effect with bimodal stimuli containing auditory as well as visual lip movement information, thus confirming the generalizability of the phenomenon. In subsequent experiments we were able to produce (Experiment 2) and to eliminate (Experiment 3) syllabic interference by introducing 'illusory' (McGurk) audiovisual stimuli in the irrelevant syllable, suggesting that audiovisual integration occurs prior to attentional selection in this paradigm.
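As a rough illustration of how such an interference effect could be quantified, the sketch below compares response latencies to the target syllable between a block where the irrelevant syllable is fixed and one where it varies. The data, condition names, and effect size are illustrative assumptions, not the authors' results.

    # Minimal sketch (illustrative data): syllabic interference measured as
    # the RT cost of a varying vs. fixed irrelevant second syllable.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    rt_fixed = rng.normal(520, 60, 40)    # ms; irrelevant syllable constant
    rt_varied = rng.normal(560, 60, 40)   # ms; irrelevant syllable varies by trial

    interference = rt_varied.mean() - rt_fixed.mean()  # RT cost in ms
    t, p = stats.ttest_ind(rt_varied, rt_fixed)
    print(f"interference = {interference:.1f} ms, t = {t:.2f}, p = {p:.4f}")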
22.
To interact functionally with our environment, our perception must locate events in time, including discerning whether sensory events are simultaneous. The Temporal Binding Window (TBW; the time window within which two stimuli tend to be integrated into one event) has been shown to relate to individual differences in perception, including schizotypy, but the relationship with subjective estimates of duration is unclear. We compare individual TBWs with individual differences in the filled duration illusion, exploiting differences in perception between empty and filled durations (the latter typically being perceived as longer). Schizotypy has been related to both these measures and is included to explore a potential link between these tasks and enduring perceptual differences. Results suggest that individuals with a narrower TBW make longer estimates for empty durations and demonstrate less variability in both conditions. Exploratory analysis of schizotypy data suggests a relationship with the TBW but is inconclusive regarding time perception.
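One common way to estimate an individual's TBW (an assumption here; the abstract does not specify the authors' fitting procedure) is to fit a Gaussian to the proportion of "simultaneous" responses across stimulus onset asynchronies (SOAs) and take the width of the fitted curve. A minimal sketch with made-up data:

    # Minimal sketch (illustrative data): TBW estimated from a Gaussian fit
    # to synchrony judgments across SOAs (negative = auditory lead, in ms).
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(soa, amp, mu, sigma):
        return amp * np.exp(-(soa - mu) ** 2 / (2 * sigma ** 2))

    soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400])
    p_sync = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.85, 0.55, 0.25, 0.10])

    (amp, mu, sigma), _ = curve_fit(gaussian, soas, p_sync, p0=[1, 0, 150])
    fwhm = 2 * np.sqrt(2 * np.log(2)) * sigma  # one convention for TBW width
    print(f"PSS = {mu:.0f} ms, TBW (FWHM) = {fwhm:.0f} ms")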
23.
Multisensory integration, the binding of sensory information from different sensory modalities, may contribute to perceptual symptomatology in schizophrenia, including hallucinations and aberrant speech perception. Differences in multisensory integration and temporal processing, an important component of multisensory integration, are consistently found in schizophrenia. Evidence is emerging that these differences extend across the schizophrenia spectrum, including individuals in the general population with higher schizotypal traits. In the current study, we investigated the relationship between schizotypal traits and perceptual functioning, using audiovisual speech-in-noise, McGurk, and ternary synchrony judgment tasks. We measured schizotypal traits using the Schizotypal Personality Questionnaire (SPQ), hypothesizing that higher scores on the Unusual Perceptual Experiences and Odd Speech subscales would be associated with decreased multisensory integration, increased susceptibility to distracting auditory speech, and less precise temporal processing. Surprisingly, these measures were not associated with the predicted subscales, suggesting that these perceptual differences may not be present across the schizophrenia spectrum.
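A hypothetical sketch of the kind of correlational test implied here, relating an SPQ subscale score to a perceptual measure such as McGurk susceptibility. Variable names and data are invented, and note that the abstract reports no such association was found:

    # Minimal sketch (illustrative data): do SPQ subscale scores predict
    # a multisensory measure? The abstract reports a null result.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    spq_unusual_percept = rng.integers(0, 10, 60)   # SPQ subscale scores
    mcgurk_fusion_rate = rng.uniform(0.1, 0.9, 60)  # proportion fused percepts

    r, p = stats.pearsonr(spq_unusual_percept, mcgurk_fusion_rate)
    print(f"r = {r:.2f}, p = {p:.3f}")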
24.
Observers change their audio-visual timing judgements after exposure to asynchronous audiovisual signals. The mechanism underlying this temporal recalibration is currently debated. Three broad explanations have been suggested. According to the first, the time it takes for sensory signals to propagate through the brain has changed. The second explanation suggests that decisional criteria used to interpret signal timing have changed, but not time perception itself. A final possibility is that a population of neurones collectively encodes relative times, and that exposure to a repeated timing relationship alters the balance of responses in this population. Here, we simplified each of these explanations to its core features in order to produce three corresponding six-parameter models, which generate contrasting patterns of predictions about how simultaneity judgements should vary across four adaptation conditions: no adaptation, synchronous adaptation, and auditory leading/lagging adaptation. We tested model predictions by fitting data from all four conditions simultaneously, in order to assess which model/explanation best described the complete pattern of results. The latency-shift and criterion-change models were better able to explain the results for our sample as a whole. The population-code model did, however, account for improved performance following adaptation to a synchronous adapter, and best described the results of a subset of observers who reported the fewest instances of synchrony.
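To make the contrast between two of these accounts concrete, here is a toy observer model for simultaneity judgments (a sketch under simplified assumptions, not the authors' six-parameter models): perceived asynchrony is the SOA minus a latency term plus Gaussian noise, and "simultaneous" is reported when it falls inside a decision window. A latency shift translates the whole response curve; a criterion change widens the window instead.

    # Minimal sketch: toy simultaneity-judgment observer. pss = point of
    # subjective simultaneity; (lo, hi) = decision window; sigma = noise.
    import numpy as np
    from scipy.stats import norm

    def p_simultaneous(soa, pss, lo, hi, sigma):
        x = soa - pss  # perceived asynchrony before noise
        return norm.cdf((hi - x) / sigma) - norm.cdf((lo - x) / sigma)

    soas = np.linspace(-400, 400, 9)
    baseline = p_simultaneous(soas, pss=0, lo=-100, hi=100, sigma=60)
    latency_shift = p_simultaneous(soas, pss=50, lo=-100, hi=100, sigma=60)    # curve translates
    criterion_change = p_simultaneous(soas, pss=0, lo=-100, hi=150, sigma=60)  # window widens
    print(np.round(latency_shift - baseline, 2))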
25.
The present study examined the effect of visual feedback on the ability to recognise and consolidate pitch information. We trained two groups of nonmusicians to play a piano piece by ear: one group received uninterrupted audiovisual feedback, while the other could hear, but not see, their hands on the keyboard. Results indicate that subjects deprived of visual information showed a significantly poorer ability to recognise pitches from the musical piece they had learned. These results are noteworthy because pitch recognition would not intuitively seem to rely on visual feedback. In addition, we show that subjects with previous experience in computer touch-typing made fewer errors during training without visual feedback, but did not show improved pitch recognition ability post-training. Our results demonstrate how sensory redundancy increases the robustness of learning, and further encourage the use of audiovisual training procedures to facilitate the learning of new skills.
26.
Quarterly Journal of Experimental Psychology (2006), 2013, 66(1), 23-30
Audiovisual integration (AVI) has been demonstrated to play a major role in speech comprehension. Previous research suggests that AVI in speech comprehension tolerates a temporal window of audiovisual asynchrony. However, few studies have employed audiovisual presentation to investigate AVI in person recognition. Here, participants completed an audiovisual voice familiarity task in which the synchrony of the auditory and visual stimuli was manipulated, and in which the visual speaker's identity could either correspond or not correspond to the voice. Recognition of personally familiar voices systematically improved when corresponding visual speakers were presented near synchrony or with a slight auditory lag. Moreover, when faces of different familiarity were presented with a voice, recognition accuracy suffered only at near synchrony to slight auditory lag. These results provide the first evidence for a temporal window for AVI in person recognition, extending from approximately 100 ms auditory lead to 300 ms auditory lag.
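Expressed as a simple predicate (purely an illustration of the reported window; the function and constant names are hypothetical), the asymmetry of roughly 100 ms auditory lead to 300 ms auditory lag looks like this:

    # Minimal sketch: the asymmetric integration window reported above, as a
    # predicate over audio-visual onset differences (illustrative names).
    AUD_LEAD_LIMIT_MS = -100  # audio up to 100 ms before the face
    AUD_LAG_LIMIT_MS = 300    # audio up to 300 ms after the face

    def within_avi_window(audio_onset_ms, video_onset_ms):
        asynchrony = audio_onset_ms - video_onset_ms  # > 0 means auditory lag
        return AUD_LEAD_LIMIT_MS <= asynchrony <= AUD_LAG_LIMIT_MS

    print(within_avi_window(120.0, 0.0))   # 120 ms auditory lag -> True
    print(within_avi_window(-150.0, 0.0))  # 150 ms auditory lead -> False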