Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Research has demonstrated that young infants can detect a change in the tempo and the rhythm of an event when they experience the event bimodally (audiovisually), but not when they experience it unimodally (acoustically or visually). According to Bahrick and Lickliter (2000, 2002), intersensory redundancy available in bimodal, but not in unimodal, stimulation directs attention to the amodal properties of events in early development. Later in development, as infants become more experienced perceivers, attention becomes more flexible and can be directed toward amodal properties in unimodal and bimodal stimulation. The present study tested this developmental hypothesis by assessing the ability of older, more perceptually experienced infants to discriminate the tempo or rhythm of an event, using procedures identical to those in prior studies. The results indicated that older infants can detect a change in the rhythm and the tempo of an event following both bimodal (audiovisual) and unimodal (visual) stimulation. These results provide further support for the intersensory redundancy hypothesis and are consistent with a pattern of increasing specificity in perceptual development.

2.
Information presented redundantly and in temporal synchrony across sensory modalities (intersensory redundancy) selectively recruits attention and facilitates perceptual learning in human infants. This comparative study examined whether intersensory redundancy also facilitates perceptual learning prenatally. The authors assessed quail (Colinus virginianus) embryos' ability to learn a maternal call when it was (a) unimodal, (b) concurrent but asynchronous with patterned light, or (c) redundant and synchronous with patterned light. Chicks' preference for the familiar over a novel maternal call was assessed 24 hr following hatching. Chicks receiving redundant, synchronous stimulation as embryos learned the call 4 times faster than those who received unimodal exposure. Chicks who received asynchronous bimodal stimulation showed no evidence of learning. These results provide the first evidence that embryos are sensitive to redundant, bimodal information and that it can facilitate learning during the prenatal period.

3.
This research examined the developmental course of infants' ability to perceive affect in bimodal (audiovisual) and unimodal (auditory and visual) displays of a woman speaking. According to the intersensory redundancy hypothesis (L. E. Bahrick, R. Lickliter, & R. Flom, 2004), detection of amodal properties is facilitated in multimodal stimulation and attenuated in unimodal stimulation. Later in development, however, attention becomes more flexible, and amodal properties can be perceived in both multimodal and unimodal stimulation. The authors tested these predictions by assessing 3-, 4-, 5-, and 7-month-olds' discrimination of affect. Results demonstrated that in bimodal stimulation, discrimination of affect emerged by 4 months and remained stable across age. However, in unimodal stimulation, detection of affect emerged gradually, with sensitivity to auditory stimulation emerging at 5 months and visual stimulation at 7 months. Further, temporal synchrony between faces and voices was necessary for younger infants' discrimination of affect. Across development, infants first perceive affect in multimodal stimulation through detecting amodal properties, and later their perception of affect is extended to unimodal auditory and visual stimulation. Implications for social development, including joint attention and social referencing, are considered.

4.
We assessed whether exposure to amodal properties in bimodal stimulation (e.g. rhythm, rate, duration) could educate attention to amodal properties in subsequent unimodal stimulation during prenatal development. Bobwhite quail embryos were exposed to an individual bobwhite maternal call under several experimental and control conditions during the day prior to hatching. Experimental groups received redundant auditory and visual exposure to the temporal features of an individual maternal call followed by unimodal auditory exposure to the same call immediately or after a 2-hr or 4-hr delay. Control groups received (1) the same exposure but in the reverse sequence (unimodal --> redundant bimodal), (2) asynchronous bimodal --> unimodal, (3) only unimodal exposure, or (4) only bimodal exposure. All experimental groups showed a significant preference for the familiar maternal call over a novel maternal call when tested 2 days after hatching, whereas none of the control groups showed a significant preference for the familiar call. These results indicate that intersensory redundancy can direct attention to amodal properties in bimodal stimulation and educate attention to the same amodal properties in subsequent unimodal stimulation where no intersensory redundancy is available.

5.
Prior research has demonstrated intersensory facilitation for perception of amodal properties of events such as tempo and rhythm in early development, supporting predictions of the Intersensory Redundancy Hypothesis (IRH). Specifically, infants discriminate amodal properties in bimodal, redundant stimulation but not in unimodal, nonredundant stimulation in early development, whereas later in development infants can detect amodal properties in both redundant and nonredundant stimulation. The present study tested a new prediction of the IRH: that effects of intersensory redundancy on attention and perceptual processing are most apparent in tasks of high difficulty relative to the skills of the perceiver. We assessed whether by increasing task difficulty, older infants would revert to patterns of intersensory facilitation shown by younger infants. Results confirmed our prediction and demonstrated that in difficult tempo discrimination tasks, 5-month-olds perform like 3-month-olds, showing intersensory facilitation for tempo discrimination. In contrast, in tasks of low and moderate difficulty, 5-month-olds discriminate tempo changes in both redundant audiovisual and nonredundant unimodal visual stimulation. These findings indicate that intersensory facilitation is most apparent for tasks of relatively high difficulty and may therefore persist across the lifespan.

6.
Responses to unimodal and multimodal attributes of a compound auditory/visual stimulus were investigated in 4-, 6-, 8-, and 10-month-old infants. First, infants were habituated to a compound stimulus consisting of a visual stimulus that moved up and down on a video monitor and a sound that occurred each time the visual stimulus reversed direction at the bottom. Once each infant met a habituation criterion, a series of test trials was administered to assess responsiveness to the components of the compound stimulus. Response was defined as the total duration of visual fixation in each trial. In the two unimodal test trials, the rate at which the component was presented was changed while the rate of the other component remained the same, whereas in the bimodal test trial the rate of both components was changed simultaneously. Results indicated that infants at each age successfully discriminated the bimodal and the two unimodal changes and that regression to the mean did not account for the results. Results also showed that disruption of the temporal relationship that accompanied the change in rate in the two unimodal test trials was also discriminable, but rate changes appeared to play a greater role in responsiveness than did synchrony changes. Considered together with results from similar prior studies, the current results are consistent with the modality appropriateness hypothesis in showing that discrimination of temporal changes in the auditory and visual modalities is dependent on the specialization of the sensory modalities.

7.
Two experiments examined the effects of multimodal presentation and stimulus familiarity on auditory and visual processing. In Experiment 1, 10-month-olds were habituated to either an auditory stimulus, a visual stimulus, or an auditory-visual multimodal stimulus. Processing time was assessed during the habituation phase, and discrimination of auditory and visual stimuli was assessed during a subsequent testing phase. In Experiment 2, the familiarity of the auditory or visual stimulus was systematically manipulated by prefamiliarizing infants to either the auditory or visual stimulus prior to the experiment proper. With the exception of the prefamiliarized auditory condition in Experiment 2, infants in the multimodal conditions failed to increase looking when the visual component changed at test. This finding is noteworthy given that infants discriminated the same visual stimuli when presented unimodally, and there was no evidence that multimodal presentation attenuated auditory processing. Possible factors underlying these effects are discussed.

8.
Evidence that audition dominates vision in temporal processing has come from perceptual judgment tasks. This study shows that this auditory dominance extends to the largely subconscious processes involved in sensorimotor coordination. Participants tapped their finger in synchrony with auditory and visual sequences containing an event onset shift (EOS), expected to elicit an involuntary phase correction response (PCR), and also tried to detect the EOS. Sequences were presented in unimodal and bimodal conditions, including one in which auditory and visual EOSs of opposite sign coincided. Unimodal results showed greater variability of taps, smaller PCRs, and poorer EOS detection in vision than in audition. In bimodal conditions, variability of taps was similar to that for unimodal auditory sequences, and PCRs depended more on auditory than on visual information, even though attention was always focused on the visual sequences.

9.
Responses to unimodal and multimodal attributes of a compound auditory/visual stimulus were investigated in 4-, 6-, 8-, and 10-month-old infants. First, infants were habituated to a compound stimulus consisting of a visual stimulus that moved up and down on a video monitor and a sound that occurred each time the visual stimulus reversed direction at the bottom. Once each infant met a habituation criterion, a series of test trials was administered to assess responsiveness to the components of the compound stimulus. Response was defined as the total duration of visual fixation in each trial. In the two unimodal test trials, the rate at which the component was presented was changed while the rate of the other component remained the same, whereas in the bimodal test trial the rate of both components was changed simultaneously. Results indicated that infants at each age successfully discriminated the bimodal and the two unimodal changes and that regression to the mean did not account for the results. Results also showed that disruption of the temporal relationship that accompanied the change in rate in the two unimodal test trials was also discriminable, but rate changes appeared to play a greater role in responsiveness than did synchrony changes. Considered together with results from similar prior studies, the current results are consistent with the modality appropriateness hypothesis in showing that discrimination of temporal changes in the auditory and visual modalities is dependent on the specialization of the sensory modalities.

10.
When the senses deliver conflicting information, vision dominates spatial processing, and audition dominates temporal processing. We asked whether this sensory specialization results in cross-modal encoding of unisensory input into the task-appropriate modality. Specifically, we investigated whether visually portrayed temporal structure receives automatic, obligatory encoding in the auditory domain. In three experiments, observers judged whether the changes in two successive visual sequences followed the same or different rhythms. We assessed temporal representations by measuring the extent to which both task-irrelevant auditory information and task-irrelevant visual information interfered with rhythm discrimination. Incongruent auditory information significantly disrupted task performance, particularly when presented during encoding; by contrast, varying the nature of the rhythm-depicting visual changes had minimal impact on performance. Evidently, the perceptual system automatically and obligatorily abstracts temporal structure from its visual form and represents this structure using an auditory code, resulting in the experience of "hearing visual rhythms."

11.
Infants’ attention is captured by the redundancy of amodal stimulation in multimodal objects and events. Evidence from this study demonstrates that intersensory redundancy can facilitate discrimination of rhythm changes presented in the visual modality alone in visually impaired infants, suggesting that multisensory rehabilitation strategies could prove helpful in this population.

12.
This study examined the multisensory integration of visual and auditory motion information using a methodology designed to single out perceptual integration processes from post-perceptual influences. We assessed the threshold stimulus onset asynchrony (SOA) at which the relative directions (same vs. different) of simultaneously presented visual and auditory apparent motion streams could no longer be discriminated (Experiment 1). This threshold was higher than the upper threshold for direction discrimination (left vs. right) of each individual modality when presented in isolation (Experiment 2). The poorer performance observed in bimodal displays was interpreted as a consequence of automatic multisensory integration of motion information. Experiment 3 supported this interpretation by ruling out task differences as the explanation for the higher threshold in Experiment 1. Together these data provide empirical support for the view that multisensory integration of motion signals can occur at a perceptual level.

13.
The present experiment assessed intersensory differences in temporal judgments, specifically the finding that auditory stimuli are perceived as longer than physically equivalent visual stimuli. The results confirmed the intersensory difference: auditorially defined intervals were experienced as longer than visually defined intervals, and auditory boundaries were perceived as longer than visual ones. An interaction of boundary modality and interval modality was obtained, suggesting that auditorially defined intervals provided more temporal information about events occurring in close temporal proximity than visually defined intervals did. It was hypothesized that cognitive factors, specifically stimulus complexity, would affect the auditory and visual systems differentially. This hypothesis was not substantiated, although highly complex stimuli were experienced as longer than those of low complexity.

14.
Ninety-six infants of 3 1/2 months were tested in an infant-control habituation procedure to determine whether they could detect three types of audio-visual relations in the same events. The events portrayed two amodal invariant relations, temporal synchrony and temporal microstructure specifying the composition of the objects, and one modality-specific relation, that between the pitch of the sound and the color/shape of the objects. Subjects were habituated to two events accompanied by their natural, synchronous, and appropriate sounds and then received test trials in which the relation between the visual and the acoustic information was changed. Consistent with Gibson's increasing specificity hypothesis, it was expected that infants would differentiate amodal invariant relations prior to detecting arbitrary, modality-specific relations. Results were consistent with this prediction, demonstrating significant visual recovery to a change in temporal synchrony and temporal microstructure, but not to a change in the pitch-color/shape relations. Two subsequent discrimination studies demonstrated that infants' failure to detect the changes in pitch-color/shape relations could not be attributed to an inability to discriminate the pitch or the color/shape changes used in Experiment 1. Infants showed robust discrimination of the contrasts used.

15.
Temporal preparation often has been assumed to influence motor stages of information processing. Recent studies, however, challenge this notion and provide evidence for a facilitation of visual processing. The present study was designed to investigate whether perceptual processing in the auditory domain also benefits from temporal preparation. To this end, we employed a pitch discrimination task. In Experiment 1, discrimination performance was clearly improved when participants were temporally prepared. This finding was confirmed in Experiment 2, which ruled out possible influences of short-term memory. The results support the notion that temporal preparation enhances perceptual processing not only in the visual, but also in the auditory, modality.

16.
Temporal preparation often has been assumed to influence motor stages of information processing. Recent studies, however, challenge this notion and provide evidence for a facilitation of visual processing. The present study was designed to investigate whether perceptual processing in the auditory domain also benefits from temporal preparation. To this end, we employed a pitch discrimination task. In Experiment 1, discrimination performance was clearly improved when participants were temporally prepared. This finding was confirmed in Experiment 2, which ruled out possible influences of short-term memory. The results support the notion that temporal preparation enhances perceptual processing not only in the visual, but also in the auditory, modality.

17.
The nature of the evidence on the role played by early stimulation history in perceptual development related to an appreciation of intermodal attributes involving space and time is reviewed. In conjunction with this analysis, an examination was undertaken of the effect of early visual deprivation on the ability of dark- (DR) and light-reared (LR) rats to learn discriminations involving location of sounds or lights and to abstract the intersensory correspondence involved from the initial modality-specific training. Visually inexperienced DR rats were somewhat slower to acquire a discrimination involving the location of visual events under some stimulus/response arrangements. More importantly, such animals were not as effective as their visually experienced LR counterparts in demonstrating cross-modal transfer (CMT) to signals in a new modality. The present study also revealed that CMT involving location of signals was less salient than CMT of duration information in rats regardless of their rearing condition. Finally, findings are discussed more generally, providing contextual information that bears on issues related to parallel cognitive functions in rats and human neonates and on the role of early visual experience in the ontogeny of intersensory perceptual competence in mammals.

18.
Early evidence of social referencing was examined in 5½-month-old infants. Infants were habituated to 2 films of moving toys, one toy eliciting a woman's positive emotional expression and the other eliciting a negative expression under conditions of bimodal (audiovisual) or unimodal visual (silent) speech. It was predicted that intersensory redundancy provided by audiovisual (but not available in unimodal visual) events would enhance detection of the relation between emotional expressions and the corresponding toy. Consistent with predictions, only infants who received bimodal, audiovisual events detected a change in the affect-object relations, showing increased looking during a switch test in which the toy-affect pairing was reversed. Moreover, in a subsequent live preference test, they preferentially touched the 3-dimensional toy previously paired with the positive expression. These findings suggest social referencing emerges by 5½ months in the context of intersensory redundancy provided by dynamic multimodal stimulation and that even 5½-month-old infants demonstrate preferences for 3-dimensional objects on the basis of affective information depicted in videotaped events.

19.
The present study examined whether infant-directed (ID) speech facilitates intersensory matching of audio–visual fluent speech in 12-month-old infants. German-learning infants’ audio–visual matching ability of German and French fluent speech was assessed by using a variant of the intermodal matching procedure, with auditory and visual speech information presented sequentially. In Experiment 1, the sentences were spoken in an adult-directed (AD) manner. Results showed that 12-month-old infants did not exhibit a matching performance for the native, nor for the non-native language. However, Experiment 2 revealed that when ID speech stimuli were used, infants did perceive the relation between auditory and visual speech attributes, but only in response to their native language. Thus, the findings suggest that ID speech might have an influence on the intersensory perception of fluent speech and shed further light on multisensory perceptual narrowing.

20.
Presenting an auditory or tactile cue in temporal synchrony with a change in the color of a visual target can facilitate participants’ visual search performance. In the present study, we compared the magnitude of unimodal auditory, vibrotactile, and bimodal (i.e., multisensory) cuing benefits when the nonvisual cues were presented in temporal synchrony with the changing of the target’s color (Experiments 1 and 2). The target (a horizontal or vertical line segment) was presented among a number of distractors (tilted line segments) that also changed color at various times. In Experiments 3 and 4, the cues were also made spatially informative with regard to the location of the visual target. The unimodal and bimodal cues gave rise to an equivalent (significant) facilitation of participants’ visual search performance relative to a no-cue baseline condition. Making the unimodal auditory and vibrotactile cues spatially informative produced further performance improvements (on validly cued trials), as compared with cues that were spatially uninformative or otherwise spatially invalid. A final experiment was conducted in order to determine whether cue location (close to versus far from the visual display) would influence participants’ visual search performance. Auditory cues presented close to the visual search display were found to produce significantly better performance than cues presented over headphones. Taken together, these results have implications for the design of nonvisual and multisensory warning signals used in complex visual displays.


Copyright © 北京勤云科技发展有限公司  京ICP备09084417号