Similar Documents
20 similar documents found (search time: 578 ms)
1.
Four experiments were conducted in order to compare the effects of stimulus redundancy on temporal order judgments (TOJs) and reaction times (RTs). In Experiments 1 and 2, participants were presented in each trial with a tone and either a single visual stimulus or two redundant visual stimuli. They were asked to judge whether the tone or the visual display was presented first. Judgments of the relative onset times of the visual and the auditory stimuli were virtually unaffected by the presentation of redundant, rather than single, visual stimuli. Experiments 3 and 4 used simple RT tasks with the same stimuli, and responses were much faster to redundant than to single visual stimuli. It appears that the traditional speedup of RT associated with redundant visual stimuli arises after the stimulus detection processes to which TOJs are sensitive.
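A standard way to test whether such a redundant-target speedup exceeds what probability summation alone predicts is Miller's race-model inequality: at every time t, the redundant-condition RT CDF must not exceed the sum of the two single-stimulus CDFs. A minimal Python sketch with invented RT samples (all values hypothetical):

```python
from bisect import bisect_right

def ecdf(rts, t):
    """Empirical CDF: proportion of RTs less than or equal to t."""
    return bisect_right(sorted(rts), t) / len(rts)

def race_model_violation(rt_single_a, rt_single_b, rt_redundant, t):
    """Miller's race-model inequality at time t: if a simple race of
    independent channels explains the speedup, the redundant-target CDF
    must not exceed the (capped) sum of the single-target CDFs."""
    bound = min(ecdf(rt_single_a, t) + ecdf(rt_single_b, t), 1.0)
    return ecdf(rt_redundant, t) > bound

# Hypothetical RT samples in ms (invented for illustration)
rt_tone = [310, 320, 335, 350, 360]
rt_light = [300, 315, 330, 345, 355]
rt_both = [250, 260, 270, 280, 290]

print(race_model_violation(rt_tone, rt_light, rt_both, 280))
```

A violation (True) at some t in the fast tail of the distribution is usually read as evidence for coactivation rather than a simple race between channels.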

2.
People often move in synchrony with auditory rhythms (e.g., music), whereas synchronization of movement with purely visual rhythms is rare. In two experiments, this apparent attraction of movement to auditory rhythms was investigated by requiring participants to tap their index finger in synchrony with an isochronous auditory (tone) or visual (flashing light) target sequence while a distractor sequence was presented in the other modality at one of various phase relationships. The obtained asynchronies and their variability showed that auditory distractors strongly attracted participants' taps, whereas visual distractors had much weaker effects, if any. This asymmetry held regardless of the spatial congruence or relative salience of the stimuli in the two modalities. When different irregular timing patterns were imposed on target and distractor sequences, participants' taps tended to track the timing pattern of auditory distractor sequences when they were approximately in phase with visual target sequences, but not the reverse. These results confirm that rhythmic movement is more strongly attracted to auditory than to visual rhythms. To the extent that this is an innate proclivity, it may have been an important factor in the evolution of music.

3.
Prior research has established that performance in short-term memory tasks using auditory rhythmic stimuli is frequently superior to that in tasks using visual stimuli. In five experiments, the reasons for this were explored further. In a same-different task, pairs of brief rhythms were presented in which each rhythm was visual or auditory, resulting in two same-modality conditions and two cross-modality conditions. Three different rates of presentation were used. The results supported the temporal advantage of the auditory modality in short-term memory, which was quite robust at the quickest presentation rates. This advantage tended to decay as the presentation rate was slowed down, consistent with the view that, with time, the temporal patterns were being recoded into a more generic form.

4.
Dissociations between a motor response and the subject's verbal report have been reported from various experiments that investigated special experimental effects (e.g., metacontrast or induced motion). To examine whether similar dissociations can also be observed under standard experimental conditions, we compared reaction times (RT) and temporal order judgments (TOJ) to visual and auditory stimuli of three intensity levels. Data were collected from six subjects, each of whom served for nine sessions. The results showed a strong, highly significant modality dissociation: While RTs to auditory stimuli were shorter than RTs to visual stimuli, the TOJ data indicated longer processing times for auditory than for visual stimuli. This pattern was found over the whole range of intensities investigated. Light intensity had similar effects on RT and TOJ, while there was a marginally significant tendency of tone intensity to affect RT more strongly than TOJ. It is concluded that modality dissociation is an example of "direct parameter specification", where the pathway from stimulus to response in the simple RT experiment is (at least partially) separate from the pathway that leads to a conscious, reportable representation. Two variants of this notion and alternatives to it are discussed.
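The TOJ side of such RT/TOJ comparisons is typically summarized by the point of subjective simultaneity (PSS): the SOA at which "auditory first" and "visual first" responses are equally likely. A minimal sketch that linearly interpolates the 50% point of the psychometric function (all data invented):

```python
def pss_from_toj(soas, p_first):
    """Point of subjective simultaneity: the SOA at which the proportion
    of 'auditory first' responses crosses 0.5, found by linear
    interpolation between adjacent tested SOAs."""
    points = list(zip(soas, p_first))
    for (s0, p0), (s1, p1) in zip(points, points[1:]):
        if p0 <= 0.5 <= p1 or p1 <= 0.5 <= p0:
            if p1 == p0:
                return (s0 + s1) / 2
            return s0 + (0.5 - p0) * (s1 - s0) / (p1 - p0)
    raise ValueError("0.5 not crossed within the tested SOA range")

# Hypothetical data: negative SOA means the tone led the light (ms)
soas = [-90, -60, -30, 0, 30, 60, 90]
p_tone_first = [0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.98]
print(pss_from_toj(soas, p_tone_first))  # a nonzero PSS indicates a modality bias
```

In practice a cumulative Gaussian or logistic is fitted to the full psychometric function, but the interpolated 50% point illustrates the quantity being estimated.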

5.
People naturally dance to music, and research has shown that rhythmic auditory stimuli facilitate production of precisely timed body movements. If motor mechanisms are closely linked to auditory temporal processing, just as auditory temporal processing facilitates movement production, producing action might reciprocally enhance auditory temporal sensitivity. We tested this novel hypothesis with a standard temporal-bisection paradigm, in which the slope of the temporal-bisection function provides a measure of temporal sensitivity. The bisection slope for auditory time perception was steeper when participants initiated each auditory stimulus sequence via a keypress than when they passively heard each sequence, demonstrating that initiating action enhances auditory temporal sensitivity. This enhancement is specific to the auditory modality, because voluntarily initiating each sequence did not enhance visual temporal sensitivity. A control experiment ruled out the possibility that tactile sensation associated with a keypress increased auditory temporal sensitivity. Taken together, these results demonstrate a unique reciprocal relationship between auditory time perception and motor mechanisms. As auditory perception facilitates precisely timed movements, generating action enhances auditory temporal sensitivity.
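The bisection-slope measure described above can be approximated by an ordinary least-squares slope of the proportion of "long" responses against stimulus duration; a steeper slope means finer temporal sensitivity. A sketch comparing two hypothetical conditions (all data invented):

```python
def bisection_slope(durations, p_long):
    """Least-squares slope of p('long') against stimulus duration (per ms);
    a steeper slope indicates finer temporal sensitivity."""
    n = len(durations)
    mx = sum(durations) / n
    my = sum(p_long) / n
    num = sum((x - mx) * (y - my) for x, y in zip(durations, p_long))
    den = sum((x - mx) ** 2 for x in durations)
    return num / den

# Hypothetical bisection data (ms): self-initiated vs. passive listening
durs = [200, 300, 400, 500, 600]
active = [0.02, 0.15, 0.50, 0.85, 0.98]
passive = [0.10, 0.28, 0.50, 0.72, 0.90]
print(bisection_slope(durs, active) > bisection_slope(durs, passive))
```

Fitting a sigmoid and taking its slope at the bisection point is the more common analysis; the linear slope over the central range conveys the same ordering of conditions.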

6.
Königs K, Knöll J, Bremmer F. Perception, 2007, 36(10): 1507-1512
Previous studies have shown that the perceived location of visual stimuli briefly flashed during smooth pursuit, saccades, or optokinetic nystagmus (OKN) is not veridical. We investigated whether these mislocalisations can also be observed for brief auditory stimuli presented during OKN. Experiments were carried out in a lightproof sound-attenuated chamber. Participants performed eye movements elicited by visual stimuli. An auditory target (white noise) was presented for 5 ms. Our data clearly indicate that auditory targets are mislocalised during reflexive eye movements. OKN induces a shift of perceived location in the direction of the slow eye movement and is modulated in the temporal vicinity of the fast phase. The mislocalisation is stronger for look-nystagmus as compared to stare-nystagmus. The size and temporal pattern of the observed mislocalisation differ from those found for visual targets. This suggests that different neural mechanisms are at play to integrate oculomotor signals and information on the spatial location of visual as well as auditory stimuli.

7.
Our motor and perceptual representations of actions seem to be intimately linked, and the human mirror neuron system (MNS) has been proposed as the mediator. In two experiments, we presented biological or non-biological movement stimuli that were either congruent or incongruent to a required response prompted by a tone. When the tone occurred with the onset of the last movement in a series, i.e., it was perceived during the movement presentation, congruent biological stimuli resulted in faster reaction times than congruent non-biological stimuli. The opposite was observed for incongruent stimuli. When the tone was presented after visual movement stimulation, however, no such interaction was present. This implies that biological movement stimuli only affect motor behaviour during visual processing but not thereafter. These data suggest that the MNS is an "online" system; long-standing repetitive visual stimulation (Experiment 1) has no benefit in comparison to only one or two repetitions (Experiment 2).

8.
This study examined the effects of visual-verbal load (as measured by a visually presented reading-memory task with three levels) on a visual/auditory stimulus-response task. The three levels of load were defined as follows: "No Load" meant no other stimuli were presented concurrently; "Free Load" meant that a letter (A, B, C, or D) appeared at the same time as the visual or auditory stimulus; and "Force Load" was the same as "Free Load," but the participants were also instructed to count how many times the letter A appeared. The stimulus-response task also had three levels: "irrelevant," "compatible," and "incompatible" spatial conditions. These required different key-pressing responses. The visual stimulus was a red ball presented either to the left or to the right of the display screen, and the auditory stimulus was a tone delivered from a position similar to that of the visual stimulus. Participants also processed an irrelevant stimulus. The results indicated that participants perceived auditory stimuli earlier than visual stimuli and reacted faster under stimulus-response compatible conditions. These results held even under a high visual-verbal load. These findings suggest the following guidelines for systems used in driving: an auditory source, appropriately compatible signal and manual-response positions, and a visually simplified background.

9.
Visual temporal processing and multisensory integration (MSI) of sound and vision were examined in individuals with schizophrenia using a visual temporal order judgment (TOJ) task. Compared to a non-psychiatric control group, persons with schizophrenia were less sensitive judging the temporal order of two successively presented visual stimuli. However, their sensitivity to visual temporal order improved as in the control group when two accessory sounds were added (temporal ventriloquism). These findings indicate that individuals with schizophrenia have diminished sensitivity to visual temporal order, but no deficits in the integration of low-level auditory and visual information.

10.
Previous research has shown that irrelevant sounds can facilitate the perception of visual apparent motion. Here the effectiveness of a single sound to facilitate motion perception was investigated in three experiments. Observers were presented with two discrete lights temporally separated by stimulus onset asynchronies from 0 to 350 ms. After each trial, observers classified their impression of the stimuli using a categorisation system. A short sound presented temporally (and spatially) midway between the lights facilitated the impression of motion relative to baseline (lights without sound), whereas a sound presented either before the first or after the second light or simultaneously with the lights did not affect motion impression. The facilitation effect also occurred with sound presented far from the visual display, as well as with a continuous sound that started with the first light and terminated with the second light. No facilitation of visual motion perception occurred if the sound was part of a tone sequence that allowed for intramodal perceptual grouping of the auditory stimuli prior to the critical audiovisual stimuli. Taken together, the findings are consistent with a low-level audiovisual integration approach in which the perceptual system merges temporally proximate sound and light stimuli, thereby provoking the impression of a single multimodal moving object.

11.
Vatakis, A. and Spence, C. (in press) [Crossmodal binding: Evaluating the 'unity assumption' using audiovisual speech stimuli. Perception & Psychophysics] recently demonstrated that when two briefly presented speech signals (one auditory and the other visual) refer to the same audiovisual speech event, people find it harder to judge their temporal order than when they refer to different speech events. Vatakis and Spence argued that the 'unity assumption' facilitated crossmodal binding on the former (matching) trials by means of a process of temporal ventriloquism. In the present study, we investigated whether the 'unity assumption' would also affect the binding of non-speech stimuli (video clips of object action or musical notes). The auditory and visual stimuli were presented at a range of stimulus onset asynchronies (SOAs) using the method of constant stimuli. Participants made unspeeded temporal order judgments (TOJs) regarding which modality stream had been presented first. The auditory and visual musical and object action stimuli were either matched (e.g., the sight of a note being played on a piano together with the corresponding sound) or else mismatched (e.g., the sight of a note being played on a piano together with the sound of a guitar string being plucked). However, in contrast to the results of Vatakis and Spence's recent speech study, no significant difference in the accuracy of temporal discrimination performance for the matched versus mismatched video clips was observed. Reasons for this discrepancy are discussed.

12.
This study investigated audiovisual synchrony perception in a rhythmic context, where the sound was not consequent upon the observed movement. Participants judged synchrony between a bouncing point-light figure and an auditory rhythm in two experiments. Two questions were of interest: (1) whether the reference in the visual movement, with which the auditory beat should coincide, relies on a position or a velocity cue; (2) whether the figure form and motion profile affect synchrony perception. Experiment 1 required synchrony judgment with regard to the same (lowest) position of the movement in four visual conditions: two figure forms (human or non-human) combined with two motion profiles (human or ball trajectory). Whereas figure form did not affect synchrony perception, the point of subjective simultaneity differed between the two motions, suggesting that participants adopted the peak velocity in each downward trajectory as their visual reference. Experiment 2 further demonstrated that, when judgment was required with regard to the highest position, the maximal synchrony response was considerably low for ball motion, which lacked a peak velocity in the upward trajectory. The finding of peak velocity as a cue parallels results of visuomotor synchronization tasks employing biological stimuli, suggesting that synchrony judgment with rhythmic motions relies on the perceived visual beat.

13.
Xiao M, Wong M, Umali M, Pomplun M. Perception, 2007, 36(9): 1391-1395
Perceptual integration of audio-visual stimuli is fundamental to our everyday conscious experience. Eye-movement analysis may be a suitable tool for studying such integration, since eye movements respond to auditory as well as visual input. Previous studies have shown that additional auditory cues in visual-search tasks can guide eye movements more efficiently and reduce their latency. However, these auditory cues were task-relevant since they indicated the target position and onset time. Therefore, the observed effects may have been due to subjects using the cues as additional information to maximize their performance, without perceptually integrating them with the visual displays. Here, we combine a visual-tracking task with a continuous, task-irrelevant sound from a stationary source to demonstrate that audio-visual perceptual integration affects low-level oculomotor mechanisms. Auditory stimuli of constant, increasing, or decreasing pitch were presented. All sound categories induced more smooth-pursuit eye movement than silence, with the greatest effect occurring with stimuli of increasing pitch. A possible explanation is that integration of the visual scene with continuous sound creates the perception of continuous visual motion. Increasing pitch may amplify this effect through its common association with accelerating motion.

14.
This study investigated how spatial intervals between successive visual flashes are influenced by the temporal intervals between auditory pure tones presented concurrently with the flashes. Three successive visual flashes defined two spatial intervals with different extents as well as two equal temporal intervals. The onsets of the first and third tones were temporally aligned with those of the first and third flashes, while the onset of the second tone was temporally offset to that of the second visual flash, resulting in shorter or longer temporal intervals between pairs of tones. Observers judged which of the first or second spatial intervals between flashes was shorter. The results showed that the shorter temporal interval between tones caused underestimation of the spatial interval between flashes. On the other hand, stimuli without the first and third tones did not result in underestimation of spatial intervals between flashes. These results indicate an audiovisual tau effect, which is triggered by a constant velocity assumption applied to moving objects defined by more than one modality.

15.
It has been proposed that the perception of very short duration is governed by sensory mechanisms, whereas the perception of longer duration depends on cognitive capacities. Four duration discrimination tasks (modalities: visual, auditory; base duration: 100 ms, 1000 ms) were used to study the relation between time perception, age, sex, and cognitive abilities (alertness, visual and verbal working memory, general fluid reasoning) in 100 subjects aged between 21 and 84 years. Temporal acuity was higher (i.e., Weber fractions were lower) for longer stimuli and for the auditory modality. Age was related to the visual 100 ms condition only, with lower temporal acuity in older participants. Alertness was significantly related to auditory and visual Weber fractions for shorter stimuli only. Additionally, visual working memory was a significant predictor for shorter visual stimuli. These results indicate that alertness, but also working memory, are associated with temporal discrimination of very brief duration.
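Weber fractions like those reported above are obtained by dividing a discrimination threshold by the base duration; the threshold itself is often estimated from the last few reversals of an adaptive staircase. A sketch with invented reversal values (the 2-down/1-up rule and the number of reversals averaged are illustrative conventions, not taken from the study):

```python
from statistics import mean

def threshold_from_staircase(reversals, n_last=6):
    """Estimate the discrimination threshold as the mean of the last
    n_last staircase reversal values (a common convention)."""
    return mean(reversals[-n_last:])

def weber_fraction(threshold, base):
    """Weber fraction: just-discriminable increment relative to the base duration."""
    return threshold / base

# Hypothetical reversal values (ms) from a 2-down/1-up staircase, base = 100 ms
reversals = [40, 24, 30, 18, 22, 16, 20, 14, 18, 16]
thr = threshold_from_staircase(reversals)
print(weber_fraction(thr, 100))
```

Comparing Weber fractions rather than raw thresholds is what makes the 100 ms and 1000 ms conditions directly comparable: a constant fraction across base durations would indicate Weber's law holds.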

16.
This study investigated whether explicit beat induction in the auditory, visual, and audiovisual (bimodal) modalities aided the perception of weakly metrical auditory rhythms, and whether it reinforced attentional entrainment to the beat of these rhythms. The visual beat-inducer was a periodically bouncing point-light figure, which aimed to examine whether an observed rhythmic human movement could induce a beat that would influence auditory rhythm perception. In two tasks, participants listened to three repetitions of an auditory rhythm that were preceded and accompanied by (1) an auditory beat, (2) a bouncing point-light figure, (3) a combination of (1) and (2) synchronously, or (4) a combination of (1) and (2), with the figure moving in anti-phase to the auditory beat. Participants reproduced the auditory rhythm subsequently (Experiment 1), or detected a possible temporal change in the third repetition (Experiment 2). While an explicit beat did not improve rhythm reproduction, possibly because the rhythms became syncopated when a beat was imposed, bimodal beat induction yielded greater sensitivity to a temporal deviant in on-beat than in off-beat positions. Moreover, the beat phase of the figure movement determined where on-beat accents were perceived during bimodal induction. Results are discussed with regard to constrained beat induction in complex auditory rhythms, visual modulation of auditory beat perception, and possible mechanisms underlying the preferred visual beat consisting of rhythmic human motions.

17.
Human Ss matched an auditory and a visual stimulus for subjective magnitude. Then each stimulus was used as a cue in a reaction time task. On occasions when both stimuli were presented simultaneously, Ss’ responding was seen to be dominated by the visual stimulus. Of further interest was the finding that on some occasions of simultaneous light-tone presentation Ss were unaware that the tone had been presented. This apparent prepotency of the visual over the auditory stimulus was seen to persist across a variety of experimental conditions, which included giving Ss verbal instructions to respond to the tone when both stimuli were presented simultaneously.

18.
Although music and dance are often experienced simultaneously, it is unclear what modulates their perceptual integration. This study investigated how two factors related to music–dance correspondences influenced audiovisual binding of their rhythms: the metrical match between the music and dance, and the kinematic familiarity of the dance movement. Participants watched a point-light figure dancing synchronously to a triple-meter rhythm that they heard in parallel, whereby the dance communicated a triple (congruent) or a duple (incongruent) visual meter. The movement was either the participant’s own or that of another participant. Participants attended to both streams while detecting a temporal perturbation in the auditory beat. The results showed lower sensitivity to the auditory deviant when the visual dance was metrically congruent to the auditory rhythm and when the movement was the participant’s own. This indicated stronger audiovisual binding and a more coherent bimodal rhythm in these conditions, thus making a slight auditory deviant less noticeable. Moreover, binding in the metrically incongruent condition involving self-generated visual stimuli was correlated with self-recognition of the movement, suggesting that action simulation mediates the perceived coherence between one’s own movement and a mismatching auditory rhythm. Overall, the mechanisms of rhythm perception and action simulation could inform the perceived compatibility between music and dance, thus modulating the temporal integration of these audiovisual stimuli.
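Sensitivity differences in deviant-detection tasks like the one above are typically expressed as the signal-detection index d′, the difference between the z-transformed hit and false-alarm rates. A sketch using Python's standard library, with a log-linear correction so extreme rates do not produce infinite z-scores (all counts invented):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d' for deviant detection.
    Rates use a log-linear correction (add 0.5 to each count, 1 to each
    total) to avoid infinite z-scores at rates of 0 or 1."""
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(h) - z(f)

# Hypothetical counts: condition A detects the deviant better than condition B
print(d_prime(38, 2, 6, 34) > d_prime(28, 12, 10, 30))
```

Because d′ separates sensitivity from response bias, a lower d′ in the congruent or self-movement conditions reflects a genuinely less noticeable deviant rather than a mere shift in willingness to respond.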

19.
In order to perceive the world coherently, we need to integrate features of objects and events that are presented to our senses. Here we investigated the temporal limit of integration in unimodal visual and auditory as well as crossmodal auditory-visual conditions. Participants were presented with alternating visual and auditory stimuli and were asked to match them either within or between modalities. At alternation rates of about 4 Hz and higher, participants were no longer able to match visual and auditory stimuli across modalities correctly, while matching within either modality showed higher temporal limits. Manipulating different temporal stimulus characteristics (stimulus offsets and/or auditory-visual SOAs) did not change performance. Interestingly, the difference in temporal limits between crossmodal and unimodal conditions appears strikingly similar to temporal limit differences between unimodal conditions when additional features have to be integrated. We suggest that adding a modality across which sensory input is integrated has the same effect as adding an extra feature to be integrated within a single modality.

20.
When a speech sound in a sentence is replaced completely by an extraneous sound (such as a cough or tone), the listener restores the missing sound on the basis of both prior and subsequent context. This illusory effect, called phonemic restoration (PhR), causes the physically absent phoneme to seem as real as the speech sounds which are present. The extraneous sound seems to occur along with other phonemes without interfering with their clarity. But if a silent gap (rather than an extraneous sound) replaces the same phoneme, the interruption in the sentence is more readily localized in its true position and PhRs occur less frequently. Quantitative measures were taken both of the incidence of PhRs and of the direction and extent of temporal mislocalizations of interruptions for several related stimuli under a variety of experimental conditions. The results were related to other auditory illusions and temporal confusions reported in the literature, and suggestions were made concerning mechanisms normally employed for verbal organization.


Copyright © 北京勤云科技发展有限公司  京ICP备09084417号