Similar articles
20 similar articles found
1.
Infant perception often deals with audiovisual speech input and a first step in processing this input is to perceive both visual and auditory information. The speech directed to infants has special characteristics and may enhance visual aspects of speech. The current study was designed to explore the impact of visual enhancement in infant-directed speech (IDS) on audiovisual mismatch detection in a naturalistic setting. Twenty infants participated in an experiment with a visual fixation task conducted in participants’ homes. Stimuli consisted of IDS and adult-directed speech (ADS) syllables with a plosive and the vowel /a:/, /i:/ or /u:/. These were either audiovisually congruent or incongruent. Infants looked longer at incongruent than congruent syllables and longer at IDS than ADS syllables, indicating that IDS and incongruent stimuli contain cues that can make audiovisual perception challenging and thereby attract infants’ gaze.

2.
To investigate the basis of crossmodal visual distractor congruency effects, we recorded event-related brain potentials (ERPs) while participants performed a tactile location-discrimination task. Participants made speeded tactile location-discrimination responses to tactile targets presented to the index fingers or thumbs while ignoring simultaneously presented task-irrelevant visual distractor stimuli at either the same (congruent) or a different (incongruent) location. Behavioural results were in line with previous studies, showing slowed response times and increased error rates on incongruent compared with congruent visual distractor trials. To clarify the effect of visual distractors on tactile processing, concurrently recorded ERPs were analyzed for poststimulus, preresponse, and postresponse modulations. An enhanced negativity was found in the time range of the N2 component on incongruent compared with congruent visual distractor trials prior to correct responses. In addition, postresponse ERPs showed the presence of error-related negativity components on incorrect-response trials and enhanced negativity for congruent-incorrect compared with incongruent-incorrect trials. This pattern of ERP results has previously been related to response conflict (Yeung, Botvinick, & Cohen, 2004). Importantly, there was no modulation of early somatosensory ERPs prior to the N2 time range that would have suggested a contribution of other perceptual or postperceptual processes to crossmodal congruency effects. Taken together, our results suggest that crossmodal visual distractor effects are largely due to response conflict.

3.
The question of how vision and audition interact in natural object identification is currently a matter of debate. We developed a large set of auditory and visual stimuli representing natural objects in order to facilitate research in the field of multisensory processing. Normative data were obtained for 270 brief environmental sounds and 320 visual object stimuli. Each stimulus was named, categorized, and rated with regard to familiarity and emotional valence by N = 56 participants (Study 1). This multimodal stimulus set was employed in two subsequent crossmodal priming experiments that used semantically congruent and incongruent stimulus pairs in an S1-S2 paradigm. Task-relevant targets were either auditory (Study 2) or visual stimuli (Study 3). The behavioral data of both experiments showed a crossmodal priming effect, with shorter reaction times for congruent as compared to incongruent stimulus pairs. The observed facilitation effect suggests that object identification in one modality is influenced by input from another modality. This result implies that congruent visual and auditory stimulus pairs were perceived as the same object, and it provides a first validation of the multimodal stimulus set.

4.
Spence C, Walton M. Acta Psychologica, 2005, 118(1-2): 47-70
We investigated the extent to which people can selectively ignore distracting vibrotactile information when performing a visual task. In Experiment 1, participants made speeded elevation discrimination responses (up vs. down) to a series of visual targets presented from one of two eccentricities on either side of central fixation, while simultaneously trying to ignore task-irrelevant vibrotactile distractors presented independently to the finger (up) vs. thumb (down) of either hand. Participants responded significantly more slowly, and somewhat less accurately, when the elevation of the vibrotactile distractor was incongruent with that of the visual target than when the two were presented from the same (i.e., congruent) elevation. This crossmodal congruency effect was significantly larger when the visual and tactile stimuli appeared on the same side of space than when they appeared on different sides, although the relative eccentricity of the two stimuli within the hemifield (i.e., same vs. different) had little effect on performance. In Experiment 2, participants who crossed their hands over the midline showed a very different pattern of crossmodal congruency effects from participants who adopted an uncrossed hands posture. Our results suggest that both the relative external location and the initial hemispheric projection of the target and distractor stimuli contribute jointly to determining the magnitude of the crossmodal congruency effect when participants have to respond to vision and ignore touch.
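The crossmodal congruency effect in studies like this one is typically quantified as the difference in mean reaction time between incongruent and congruent distractor trials. A minimal sketch of that computation, using hypothetical trial data (not values from the study):

```python
# Illustrative sketch, not code or data from the paper: the crossmodal
# congruency effect (CCE) as mean incongruent RT minus mean congruent RT.

def crossmodal_congruency_effect(trials):
    """Mean RT on incongruent trials minus mean RT on congruent trials.

    `trials` is a list of (condition, rt_ms) pairs, where condition is
    'congruent' or 'incongruent'. A positive CCE means the distractor
    at the incongruent elevation slowed responses.
    """
    cong = [rt for cond, rt in trials if cond == 'congruent']
    incong = [rt for cond, rt in trials if cond == 'incongruent']
    return sum(incong) / len(incong) - sum(cong) / len(cong)

# Hypothetical trial data: (condition, reaction time in ms)
trials = [
    ('congruent', 520), ('congruent', 540), ('congruent', 530),
    ('incongruent', 600), ('incongruent', 580), ('incongruent', 590),
]
print(crossmodal_congruency_effect(trials))  # prints 60.0
```

The same summary statistic applies to accuracy data by substituting error rates for RTs.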

5.
Across three experiments, participants made speeded elevation discrimination responses to vibrotactile targets presented to the thumb (held in a lower position) or the index finger (upper position) of either hand, while simultaneously trying to ignore visual distractors presented independently from either the same or a different elevation. Performance on the vibrotactile elevation discrimination task was slower and less accurate when the visual distractor was incongruent with the elevation of the vibrotactile target (e.g., a lower light during the presentation of an upper vibrotactile target to the index finger) than when they were congruent, showing that people cannot completely ignore vision when selectively attending to vibrotactile information. We investigated the attentional, temporal, and spatial modulation of these cross-modal congruency effects by manipulating the direction of endogenous tactile spatial attention, the stimulus onset asynchrony between target and distractor, and the spatial separation between the vibrotactile target, any visual distractors, and the participant’s two hands within and across hemifields. Our results provide new insights into the spatiotemporal modulation of crossmodal congruency effects and highlight the utility of this paradigm for investigating the contributions of visual, tactile, and proprioceptive inputs to the multisensory representation of peripersonal space.

6.
The ability to detect social signals represents a first step to enter our social world. Behavioral evidence has demonstrated that 6‐month‐old infants are able to orient their attention toward the position indicated by walking direction, showing faster orienting responses toward stimuli cued by the direction of motion than toward uncued stimuli. The present study investigated the neural mechanisms underpinning this attentional priming effect by using a spatial cueing paradigm and recording EEG (Geodesic System, 128 channels) from 6‐month‐old infants. Infants were presented with a central point‐light walker followed by a single peripheral target. The target appeared randomly at a position either congruent or incongruent with the walking direction of the cue. We examined infants' target‐locked event‐related potential (ERP) responses and used cortical source analysis to explore which brain regions gave rise to the ERP responses. The P1 component and saccade latencies toward the peripheral target were modulated by the congruency between the walking direction of the cue and the position of the target. Infants' saccade latencies were faster in response to targets appearing at congruent spatial locations. The P1 component was larger in response to congruent than to incongruent targets, and a similar congruency effect was found with cortical source analysis in the parahippocampal gyrus and the anterior fusiform gyrus. Overall, these findings suggest that a type of biological motion such as that of a vertebrate walking on its legs can trigger covert orienting of attention in 6‐month‐old infants, enabling enhancement of neural activity related to visual processing of potentially relevant information as well as facilitation of oculomotor responses to stimuli appearing at the attended location.

7.
In this study, an extended pacemaker-counter model was applied to crossmodal temporal discrimination. In three experiments, subjects discriminated between the durations of a constant standard stimulus and a variable comparison stimulus. In congruent trials, both stimuli were presented in the same sensory modality (i.e., both visual or both auditory), whereas in incongruent trials, each stimulus was presented in a different modality. The model accounts for the finding that temporal discrimination depends on the presentation order of the sensory modalities. Nevertheless, the model fails to explain why temporal discrimination was much better on congruent than on incongruent trials. The discussion considers ways of accommodating the model to this and other shortcomings.
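The pacemaker-counter framework the abstract refers to can be illustrated with a toy simulation: a Poisson pacemaker emits pulses at a modality-specific rate, a counter accumulates pulses over the stimulus duration, and discrimination compares the two accumulated counts. This is our sketch, not the extended model from the study; the rates below, and the assumption that the auditory pacemaker runs faster than the visual one, are hypothetical:

```python
import random

# Toy pacemaker-counter sketch (illustrative only, not the paper's model).
# Hypothetical modality-specific pacemaker rates; a faster auditory
# pacemaker makes equal durations accumulate more pulses when heard
# than when seen, one common way to model crossmodal order effects.
RATES_HZ = {'auditory': 200.0, 'visual': 150.0}

def accumulated_count(duration_s, modality, rng):
    """Number of pacemaker pulses counted during the stimulus.

    Simulates a Poisson process of rate RATES_HZ[modality] over
    `duration_s` seconds by summing exponential interarrival times.
    """
    lam = RATES_HZ[modality] * duration_s  # expected pulses in the interval
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(lam)  # next pulse time on a unit horizon
        if t > 1.0:
            return n
        n += 1

def judge_longer(d1, mod1, d2, mod2, seed=0):
    """Return 1 if the first stimulus is judged longer, else 2."""
    rng = random.Random(seed)
    c1 = accumulated_count(d1, mod1, rng)
    c2 = accumulated_count(d2, mod2, rng)
    return 1 if c1 > c2 else 2
```

Under these assumptions, incongruent trials compare counts driven by different pacemaker rates, which biases the judgment but does not by itself predict the poorer discrimination sensitivity the abstract reports.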

8.
The “pip-and-pop effect” refers to the facilitation of search for a visual target (a horizontal or vertical bar whose color changes frequently) among multiple visual distractors (tilted bars also changing color unpredictably) by the presentation of a spatially uninformative auditory cue synchronized with the color change of the visual target. In the present study, the visual stimuli in the search display changed brightness instead of color, and the crossmodal congruency between the pitch of the auditory cue and the brightness of the visual target was manipulated. When cue presence and cue congruency were randomly varied between trials (Experiment 1), both congruent cues (low-frequency tones synchronized with dark target states or high-frequency tones synchronized with bright target states) and incongruent cues (the reversed mapping) facilitated visual search performance equally, relative to a no-cue baseline condition. However, when cue congruency was blocked and the participants were informed about the pitch–brightness mapping in the cue-present blocks (Experiment 2), performance was significantly enhanced when the cue and target were crossmodally congruent as compared to when they were incongruent. These results therefore suggest that the crossmodal congruency between auditory pitch and visual brightness can influence performance in the pip-and-pop task by means of top-down facilitation.

9.
In this study, we addressed how the particular context of stimulus congruency influences audiovisual interactions. We combined an audiovisual congruency task with a proportion-of-congruency manipulation. In Experiment 1, we demonstrated that the perceived duration of a visual stimulus is modulated by the actual duration of a synchronously presented auditory stimulus. In the following experiments, we demonstrated that this crossmodal congruency effect is modulated by the proportion of congruent trials between (Exp. 2) and within (Exp. 4) blocks. In particular, the crossmodal congruency effect was reduced in the context with a high proportion of incongruent trials. This effect was attributed to changes in participants' control set as a function of the congruency context, with greater control applied in the context where the majority of the trials were incongruent. These data contribute to the ongoing debate concerning crossmodal interactions and attentional processes. In sum, context can provide a powerful cue for selective attention to modulate the interaction between stimuli from different sensory modalities.

10.
Multisensory integration can play a critical role in producing unified and reliable perceptual experience. When sensory information in one modality is degraded or ambiguous, information from other senses can crossmodally resolve perceptual ambiguities. Prior research suggests that auditory information can disambiguate the contents of visual awareness by facilitating perception of intermodally consistent stimuli. However, it is unclear whether these effects are truly due to crossmodal facilitation or are mediated by voluntary selective attention to audiovisually congruent stimuli. Here, we demonstrate that sounds can bias competition in binocular rivalry toward audiovisually congruent percepts, even when participants have no recognition of the congruency. When speech sounds were presented in synchrony with speech-like deformations of rivalling ellipses, ellipses with crossmodally congruent deformations were perceptually dominant over those with incongruent deformations. This effect was observed in participants who could not identify the crossmodal congruency in an open-ended interview (Experiment 1) or detect it in a simple 2AFC task (Experiment 2), suggesting that the effect was not due to voluntary selective attention or response bias. These results suggest that sound can automatically disambiguate the contents of visual awareness by facilitating perception of audiovisually congruent stimuli.

11.
It is surprising that there are inconsistent findings of transitive inference (TI) in young infants given that non‐linguistic species succeed on TI tests. To conclusively test for TI in infants, we developed a task within the social domain, with which infants are known to show sophistication. We familiarized 10‐ to 13‐month‐olds (M = 11.53 months) to a video of two dominance interactions between three puppets (bear > elephant; hippo > bear) consistent with a dominance hierarchy (hippo > bear > elephant; where ‘>’ denotes greater dominance). Infants then viewed interactions between the two puppets that had not interacted during familiarization. These interactions were either congruent (hippo > elephant) or incongruent (elephant > hippo) with the inferred hierarchy. Consistent with TI, infants looked longer to incongruent than congruent displays. Control conditions ruled out the possibility that infants’ expectations were based on stable behaviors specific to individual puppets rather than their inferred transitive dominance relations. We suggest that TI may be supported by phylogenetically ancient mechanisms of ordinal representation and visuospatial processing that come online early in human development.

12.

It has been suggested that judgments about the temporal–spatial order of successive tactile stimuli depend on the perceived direction of apparent motion between them. Here we manipulated tactile apparent-motion percepts by presenting a brief, task-irrelevant auditory stimulus temporally in-between pairs of tactile stimuli. The tactile stimuli were applied one to each hand, with varying stimulus onset asynchronies (SOAs). Participants reported the location of the first stimulus (temporal order judgments: TOJs) while adopting both crossed and uncrossed hand postures, so we could scrutinize skin-based, anatomical, and external reference frames. With crossed hands, the sound improved TOJ performance at short (≤300 ms) and at long (>300 ms) SOAs. When the hands were uncrossed, the sound induced a decrease in TOJ performance, but only at short SOAs. A second experiment confirmed that the auditory stimulus indeed modulated tactile apparent motion perception under these conditions. Perceived apparent motion directions were more ambiguous with crossed than with uncrossed hands, probably indicating competing spatial codes in the crossed posture. However, irrespective of posture, the additional sound tended to impair potentially anatomically coded motion direction discrimination at a short SOA of 80 ms, but it significantly enhanced externally coded apparent motion perception at a long SOA of 500 ms. Anatomically coded motion signals imply incorrect TOJ responses with crossed hands, but correct responses when the hands are uncrossed; externally coded motion signals always point toward the correct TOJ response. Thus, taken together, these results suggest that apparent-motion signals are likely taken into account when tactile temporal–spatial information is reconstructed.


13.
Infants as young as 2 months can integrate audio and visual aspects of speech articulation. A shift of attention from the eyes towards the mouth of talking faces occurs around 6 months of age in monolingual infants. However, it is unknown whether this pattern of attention during audiovisual speech processing is influenced by speech and language experience in infancy. The present study investigated this question by analysing audiovisual speech processing in three groups of 4‐ to 8‐month‐old infants who differed in their language experience: monolinguals, unimodal bilinguals (infants exposed to two or more spoken languages) and bimodal bilinguals (hearing infants with Deaf mothers). Eye‐tracking was used to study patterns of face scanning while infants were viewing faces articulating syllables with congruent, incongruent and silent auditory tracks. Monolinguals and unimodal bilinguals increased their attention to the mouth of talking faces between 4 and 8 months, while bimodal bilinguals did not show any age difference in their scanning patterns. Moreover, older (6.6 to 8 months), but not younger, monolinguals (4 to 6.5 months) showed increased visual attention to the mouth of faces articulating audiovisually incongruent rather than congruent syllables, indicating surprise or novelty. In contrast, no audiovisual congruency effect was found in unimodal or bimodal bilinguals. Results suggest that speech and language experience influences audiovisual integration in infancy. Specifically, reduced or more variable experience of audiovisual speech from the primary caregiver may lead to less sensitivity to the integration of audio and visual cues of speech articulation.

14.
Visual capture of touch: out-of-the-body experiences with rubber gloves
When the apparent visual location of a body part conflicts with its veridical location, vision can dominate proprioception and kinesthesia. In this article, we show that vision can capture tactile localization. Participants discriminated the location of vibrotactile stimuli (upper, at the index finger, vs. lower, at the thumb), while ignoring distractor lights that could independently be upper or lower. Such tactile discriminations were slowed when the distractor light was incongruent with the tactile target (e.g., an upper light during lower touch) rather than congruent, especially when the lights appeared near the stimulated hand. The hands were occluded under a table, with all distractor lights above the table. The effect of the distractor lights increased when rubber hands were placed on the table, 'holding' the distractor lights, but only when the rubber hands were spatially aligned with the participant's own hands. In this aligned situation, participants were more likely to report the illusion of feeling touch at the rubber hands. Such visual capture of touch appears cognitively impenetrable.

15.
This study used a spatial task-switching paradigm and manipulated the salience of visual and auditory stimuli to examine the influence of bottom-up attention on the visual dominance effect. The results showed that the salience of visual and auditory stimuli significantly affected the visual dominance effect. In Experiment 1, when the auditory stimulus was highly salient, the visual dominance effect was significantly weakened. In Experiment 2, when the auditory stimulus was highly salient and the visual stimulus had low salience, the visual dominance effect was further reduced but still present. The results support the biased-competition theory: in crossmodal audiovisual interaction, visual stimuli are more salient and therefore enjoy a processing advantage in multisensory integration.

16.
Previous research has shown that redundant information in faces and voices leads to faster emotional categorization compared to incongruent emotional information even when attending to only one modality. The aim of the present study was to test whether these crossmodal effects are predominantly due to a response conflict rather than interference at earlier, e.g. perceptual processing stages. In Experiment 1, participants had to categorize the valence and rate the intensity of happy, sad, angry and neutral unimodal or bimodal face–voice stimuli. They were asked to rate either the facial or vocal expression and ignore the emotion expressed in the other modality. Participants responded faster and more precisely to emotionally congruent compared to incongruent face–voice pairs in both the Attend Face and in the Attend Voice condition. Moreover, when attending to faces, emotionally congruent bimodal stimuli were more efficiently processed than unimodal visual stimuli. To study the role of a possible response conflict, Experiment 2 used a modified paradigm in which emotional and response conflicts were disentangled. Incongruency effects were significant even in the absence of response conflicts. The results suggest that emotional signals available through different sensory channels are automatically combined prior to response selection.

17.
In this study, pianists were tested for learned associations between actions (movements on the piano) and their perceivable sensory effects (piano tones). Actions were examined that required the playing of two-tone sequences (intervals) in a four-choice paradigm. In Experiment 1, the intervals to be played were denoted by visual note stimuli. Concurrently with these imperative stimuli, task-irrelevant auditory distractor intervals were presented (“potential” action effects, congruent or incongruent). In Experiment 2, imperative stimuli were coloured squares, in order to exclude possible influences of spatial relationships of notes, responses, and auditory stimuli. In both experiments responses in the incongruent conditions were slower than those in the congruent conditions. Also, heard intervals actually “induced” false responses. The reaction time effects were more pronounced in Experiment 2. In nonmusicians (Experiment 3), no evidence for interference could be observed. Thus, our results show that in expert pianists potential action effects are able to induce corresponding actions, which demonstrates the existence of acquired action-effect associations in pianists.

18.
Participants respond more quickly to two simultaneously presented target stimuli of two different modalities (redundant targets) than would be predicted from their reaction times to the unimodal targets. To examine the neural correlates of this redundant-target effect, event-related potentials (ERPs) were recorded to auditory, visual, and bimodal standard and target stimuli presented at two locations (left and right of central fixation). Bimodal stimuli were combinations of two standards, two targets, or a standard and a target, presented either from the same or from different locations. Responses generally were faster for bimodal stimuli than for unimodal stimuli and were faster for spatially congruent than for spatially incongruent bimodal events. ERPs to spatially congruent and spatially incongruent bimodal stimuli started to differ over the parietal cortex as early as 160 msec after stimulus onset. The present study suggests that hearing and seeing interact at sensory-processing stages by matching spatial information across modalities.
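The benchmark commonly used to decide whether such redundant-target gains exceed what separate unimodal processing allows is the independent-race prediction: if the faster of two independently racing modalities triggers the response, the redundant-target response-time CDF should equal F_A(t) + F_V(t) − F_A(t)·F_V(t). A hedged sketch of that prediction, with hypothetical unimodal RTs (not data from the study):

```python
# Illustrative sketch (ours, not from the paper): the independent-race
# prediction for redundant-target RTs. If observed redundant-target CDFs
# exceed even F_A(t) + F_V(t), the race account is typically rejected
# in favour of coactivation.

def ecdf(rts, t):
    """Empirical probability that a response occurred by time t."""
    return sum(rt <= t for rt in rts) / len(rts)

def race_prediction(rts_a, rts_v, t):
    """Independent-race CDF at time t from two unimodal RT samples.

    The redundant response is the faster of two independent races, so
    F_race(t) = F_A(t) + F_V(t) - F_A(t) * F_V(t).
    """
    fa, fv = ecdf(rts_a, t), ecdf(rts_v, t)
    return fa + fv - fa * fv

# Hypothetical unimodal RTs (ms)
auditory = [300, 320, 340, 360]
visual = [310, 330, 350, 370]
print(race_prediction(auditory, visual, 330))  # prints 0.75
```

Comparing the observed redundant-target ECDF against this curve across a grid of t values is the usual way the redundant-target speedup is evaluated.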

19.
The development of theory of mind (ToM) in infancy has been mainly documented through studies conducted on a single age group with a single task. Very few studies have examined ToM abilities other than false belief, and very few have used a within-subjects design. During 2 testing sessions, infants aged 14 and 18 months old were administered ToM tasks based on the violation-of-expectation paradigm which measured intention, true belief, desire, and false-belief understanding. Infants’ looking times at the congruent and incongruent test trials of each task were compared, and results revealed that both groups of infants looked significantly longer at the incongruent trial on the intention and true-belief tasks. In contrast, only 18-month-olds looked significantly longer at the incongruent trial of the desire task and neither age group looked significantly longer at the incongruent trial on the false-belief task. Additionally, intertask comparisons revealed a significant relation only between performance on the false-belief and intention tasks. These findings suggest that implicit intention and true-belief understanding emerge earlier than desire and false-belief understanding and that ToM constructs do not appear to be integrated, as is the case for explicit ToM.

20.
Temporal expectation is a process by which people use temporally structured sensory information to explicitly or implicitly predict the onset and/or the duration of future events. Because timing plays a critical role in crossmodal interactions, we investigated how temporal expectation influenced auditory–visual interaction, using an auditory–visual crossmodal congruity effect as a measure of crossmodal interaction. For auditory identification, an incongruent visual stimulus produced stronger interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. In contrast, for visual identification, an incongruent auditory stimulus produced weaker interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. The fact that temporal expectation made visual distractors more potent and visual targets less susceptible to auditory interference suggests that temporal expectation increases the perceptual weight of visual signals.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号