Similar Documents
20 similar documents retrieved.
1.
For some stimuli, dynamic changes are crucial for identifying just what the stimuli are. For example, spoken words (or any auditory stimuli) require change over time to be recognized. Kallman and Cameron (1989) have proposed that this sort of dynamic change underlies the enhanced recency effect found for auditory stimuli, relative to visual stimuli. The results of three experiments replicate and extend Kallman and Cameron's finding that dynamic visual stimuli (that is, visual stimuli for which movement is necessary to identify them), relative to static visual stimuli, engender enhanced recency effects. In addition, an analysis based on individual differences is used to demonstrate that the processes underlying enhanced recency effects for auditory and dynamic visual stimuli are substantially similar. These results are discussed in the context of perceptual grouping processes.

2.
Using a spatial task-switching paradigm with the salience of the visual and auditory stimuli under experimental control, this study examined the influence of bottom-up attention on the visual dominance effect. The results showed that the salience of the visual and auditory stimuli significantly modulated the visual dominance effect. In Experiment 1, when the auditory stimuli were highly salient, the visual dominance effect was markedly weakened. In Experiment 2, when the auditory stimuli were highly salient and the visual stimuli were of low salience, the visual dominance effect was further weakened but still present. The results support biased competition theory: in crossmodal audiovisual interaction, visual stimuli are more salient and therefore enjoy a processing advantage during multisensory integration.

3.
Four experiments were conducted in order to compare the effects of stimulus redundancy on temporal order judgments (TOJs) and reaction times (RTs). In Experiments 1 and 2, participants were presented in each trial with a tone and either a single visual stimulus or two redundant visual stimuli. They were asked to judge whether the tone or the visual display was presented first. Judgments of the relative onset times of the visual and the auditory stimuli were virtually unaffected by the presentation of redundant, rather than single, visual stimuli. Experiments 3 and 4 used simple RT tasks with the same stimuli, and responses were much faster to redundant than to single visual stimuli. It appears that the traditional speedup of RT associated with redundant visual stimuli arises after the stimulus detection processes to which TOJs are sensitive.

4.
Inspection time, defined as the minimum duration for which two different stimuli must be presented if they are to be perceived as different, was measured for both auditory and visual stimuli. The minimum durations were determined by means of a two-alternative forced-choice task for 50 children whose average age was 12 years 2 months. The times were correlated with the children's verbal (Mill-Hill vocabulary) and nonverbal (Raven matrices) intelligence. The Kendall correlation coefficients were −.3188 and −.0929 for the auditory and visual inspection times with verbal intelligence, and −.2322 and −.2676 for those times with nonverbal intelligence. Auditory and visual inspection times were correlated .1721 with each other. These results do not support earlier claims that inspection time is closely related to conventional measures of intelligence.

5.
Selective attention to visual and auditory stimuli and reflection-impulsivity were studied in normal and learning-disabled 8- and 12-year-old boys. Multivariate analyses, followed by univariate and paired-comparison tests, indicated that the normal children increased in selective attention efficiency with age to both visual and auditory stimuli. Learning-disabled children increased in selective attention efficiency with age to auditory, but not to visual, stimuli. Both groups increased with age in reflection as measured by Kagan's Matching Familiar Figures Test (MFF). The 8-year-old learning-disabled children were more impulsive than the 8-year-old normals on MFF error scores, but not on MFF latency scores. No difference occurred between the 12-year-old learning-disabled and normal children on either MFF error or MFF latency scores. Correlations between the selective attention scores and MFF error and latency scores were not significant. This research was supported in part by BEH grant G007507227. The authors are indebted to Eleanor McCandless for her assistance in securing the learning-disabled subjects and to James McLeskey and Michael Popkin for their assistance in collecting and analyzing data.

6.
We have previously shown that aging deteriorates detection of spatial visual and auditory stimuli and prolongs reaction times measured during a virtual driving task, and that sleep deprivation affected the young more than the old. Here we determined the effects of age and sleep deprivation on ERPs elicited by spatial visual and auditory stimuli during virtual driving. Participants were 22 young (18–35 years) and 19 old (65–79 years) healthy males. Experiments were run in a normal daytime condition and after a night of sleep deprivation. Aging shortened the peak latencies of the early P1 and N1 but increased the P3 latency. Sleep deprivation slowed down and diminished the N1 peaks of the young. A general right-side preference was seen in latencies. Thus, the effects of aging could be seen in decision-making and working-memory-related processes (P3), whereas those of sleep deprivation could be found in alerting and orienting functions (N1) in the young.

7.
The relation between mental ability and auditory discrimination ability was examined by recording event-related potentials from 60 women during an auditory oddball task with backward masking. Across conditions that varied in intensity and in the interval between the target and masking stimuli, the higher ability (HA) group exhibited greater response accuracy, shorter response times, larger P3 amplitude, and shorter P3 latency to target stimuli than the lower ability (LA) group. When instructed to ignore the stimuli, the HA group exhibited shorter mismatch negativity latency to deviant tones than the LA group. The greater speed and accuracy of auditory discrimination for the HA group, observed here with multiple measures, is not a consequence of response strategy, test-taking ability, or attention deployment.

8.
Integration of simultaneous auditory and visual information about an event can enhance our ability to detect that event. This is particularly evident in the perception of speech, where the articulatory gestures of the speaker's lips and face can significantly improve the listener's detection and identification of the message, especially when that message is presented in a noisy background. Speech is a particularly important example of multisensory integration because of its behavioural relevance to humans and also because brain regions have been identified that appear to be specifically tuned for auditory speech and lip gestures. Previous research has suggested that speech stimuli may have an advantage over other types of auditory stimuli in terms of audio-visual integration. Here, we used a modified adaptive psychophysical staircase approach to compare the influence of congruent visual stimuli (brief movie clips) on the detection of noise-masked auditory speech and non-speech stimuli. We found that congruent visual stimuli significantly improved detection of an auditory stimulus relative to incongruent visual stimuli. This effect, however, was equally apparent for speech and non-speech stimuli. The findings suggest that speech stimuli are not specifically advantaged by audio-visual integration for detection at threshold when compared with other naturalistic sounds.

9.
In this paper, we show that human saccadic eye movements toward a visual target are generated with a reduced latency when this target is spatially and temporally aligned with an irrelevant auditory nontarget. This effect gradually disappears if the temporal and/or spatial alignment of the visual and auditory stimuli are changed. When subjects are able to accurately localize the auditory stimulus in two dimensions, the spatial dependence of the reduction in latency depends on the actual radial distance between the auditory and the visual stimulus. If, however, only the azimuth of the sound source can be determined by the subjects, the horizontal target separation determines the strength of the interaction. Neither saccade accuracy nor saccade kinematics were affected in these paradigms. We propose that, in addition to an aspecific warning signal, the reduction of saccadic latency is due to interactions that take place at a multimodal stage of saccade programming, where the perceived positions of visual and auditory stimuli are represented in a common frame of reference. This hypothesis is in agreement with our finding that the saccades often are initially directed to the average position of the visual and the auditory target, provided that their spatial separation is not too large. Striking similarities with electrophysiological findings on multisensory interactions in the deep layers of the midbrain superior colliculus are discussed.

10.
Memory (Hove, England), 2013, 21(3), 321–342
A late parietal positivity (P3) and behavioural measures were studied during performance of a two-item memory-scanning task. Stimuli were digits presented as memorised items in one modality (auditory or visual) while the following probe, also a digit, was presented in the same or the other modality. In a separate set of experiments, P3 and behaviour were similarly studied using only visual stimuli that were either lexical (digits) or non-lexical (novel fonts with the same contours as the digits) to which subjects assigned numerical values. Reaction times (RTs) and P3 latencies were prolonged to non-lexical compared to lexical stimuli. Although RTs were longer to auditory than to visual stimuli, P3 latencies to memorised items were prolonged in response to visually compared to auditorily presented memorised items, and were further prolonged when they preceded visual probes. P3 amplitudes were smaller to auditory than to visual stimuli, and were smaller for the second memorised item when lexical/non-lexical comparisons were involved. The most striking finding was scalp distribution variations indicating changes in relative contributions of brain structures involved in processing memorised items, according to the probes that followed. These findings are compatible, in general, with phonological memorisation, but they suggest that the process is modified by memorising the item in the same terms as the expected probe that follows.

11.
Male hooded and albino rats were exposed to a light flash followed at various temporal intervals by a startle-eliciting 117 dB (re 20 μN/m²) burst of white noise. The visual stimulus engendered startle response inhibition (maximally when the lead time was 64–250 msec) as well as startle response latency reduction (maximally when the lead time was 2–8 msec). The temporal functions for the effects of visual stimuli paralleled those previously reported for startle modification by acoustic events. Further study revealed that, given optimal lead times, inhibition is produced reliably by weaker visual stimuli (3 × 10⁻⁶ cd·sec/cm²) than latency reduction (3 × 10⁻⁴ cd·sec/cm²). This differential sensitivity to visual stimuli is also analogous to previously reported findings for events in the acoustic environment. It reveals that the neural mechanisms that mediate latency reduction and inhibition can be engaged by either acoustic or visual stimulation.

12.
Synchronization of finger taps with periodically flashing visual stimuli is known to be much more variable than synchronization with an auditory metronome. When one of these rhythms is the synchronization target and the other serves as a distracter at various temporal offsets, strong auditory dominance is observed. However, it has recently been shown that visuomotor synchronization improves substantially with moving stimuli such as a continuously bouncing ball. The present study pitted a bouncing ball against an auditory metronome in a target–distracter synchronization paradigm, with the participants being auditory experts (musicians) and visual experts (video gamers and ball players). Synchronization was still less variable with auditory than with visual target stimuli in both groups. For musicians, auditory stimuli tended to be more distracting than visual stimuli, whereas the opposite was the case for the visual experts. Overall, there was no main effect of distracter modality. Thus, a distracting spatiotemporal visual rhythm can be as effective as a distracting auditory rhythm in its capacity to perturb synchronous movement, but its effectiveness also depends on modality-specific expertise.

13.
Speech perception, especially in noise, may be maximized if the perceiver observes the naturally occurring visual-plus-auditory cues inherent in the production of spoken language. Evidence is conflicting, however, about which aspects of visual information mediate enhanced speech perception in noise. For this reason, we investigated the relative contributions of audibility and the type of visual cue in three experiments in young adults with normal hearing and vision. Relative to static visual cues, access to the talker's phonetic gestures in speech production, especially in noise, was associated with (a) faster response times and greater sensitivity for speech understanding in noise, and (b) shorter latencies and reduced amplitudes of auditory N1 event-related potentials. Dynamic chewing facial motion also decreased the N1 latency, but only meaningful linguistic motions reduced the N1 amplitude. The hypothesis that auditory–visual facilitation is specific to properties of natural, dynamic speech gestures was partially supported.

14.
To examine whether the auditory mismatch negativity (MMN) reflects automatic processing, this experiment refined an intermodal selective-attention paradigm in which visual and auditory stimuli were presented simultaneously, allowing better control of the unattended auditory condition. Auditory deviant stimuli elicited MMN under both attended and unattended auditory conditions. The mean amplitude of the deviance-related negativity at 140–180 ms did not differ significantly between the attended and unattended conditions, whereas the mean amplitude of the deviance-related negativity at 180–220 ms was larger in the attended than in the unattended condition. Under the unattended condition, the mean amplitude and peak latency of the MMN were unaffected by the difficulty of the visual task. These results provide further evidence for the view that the auditory MMN reflects automatic processing.

15.
Dissociations between a motor response and the subject's verbal report have been reported from various experiments that investigated special experimental effects (e.g., metacontrast or induced motion). To examine whether similar dissociations can also be observed under standard experimental conditions, we compared reaction times (RT) and temporal order judgments (TOJ) to visual and auditory stimuli of three intensity levels. Data were collected from six subjects, each of whom served for nine sessions. The results showed a strong, highly significant modality dissociation: while RTs to auditory stimuli were shorter than RTs to visual stimuli, the TOJ data indicated longer processing times for auditory than for visual stimuli. This pattern was found over the whole range of intensities investigated. Light intensity had similar effects on RT and TOJ, while there was a marginally significant tendency of tone intensity to affect RT more strongly than TOJ. It is concluded that modality dissociation is an example of "direct parameter specification", where the pathway from stimulus to response in the simple RT experiment is (at least partially) separate from the pathway that leads to a conscious, reportable representation. Two variants of this notion and alternatives to it are discussed.

16.
A two-stage model for visual-auditory interaction in saccadic latencies
In two experiments, saccadic response time (SRT) for eye movements toward visual target stimuli at different horizontal positions was measured under simultaneous or near-simultaneous presentation of an auditory nontarget (distractor). The horizontal position of the auditory signal was varied, using a virtual auditory environment setup. Mean SRT to a visual target increased with distance to the auditory nontarget and with delay of the onset of the auditory signal relative to the onset of the visual stimulus. A stochastic model is presented that distinguishes a peripheral processing stage with separate parallel activation by visual and auditory information from a central processing stage at which intersensory integration takes place. Two model versions differing with respect to the role of the auditory distractors are tested against the SRT data.

17.
Rhythmic auditory stimuli presented before a goal-directed movement have been found to improve temporal and spatial movement outcomes. However, little is known about the mechanisms mediating these benefits. The present experiment used three types of auditory stimuli to probe how improved scaling of movement parameters, temporal preparation, and an external focus of attention may contribute to changes in movement performance. Three types of auditory stimuli were presented for 1200 ms before movement initiation: three metronome beats (RAS), a tone that stayed the same (tone-same), and a tone that increased in pitch (tone-change). These, together with a no-sound control, were presented with and without visual feedback for a total of eight experimental conditions. The sound was presented before a visual go-signal, and participants were instructed to reach quickly and accurately to one of two targets randomly identified in left and right hemispace. Twenty-two young adults completed 24 trials per blocked condition in a counterbalanced order. Movements were captured with an Optotrak 3D Investigator, and a 4 (sound) × 2 (vision) repeated measures ANOVA was used to analyze dependent variables. All auditory conditions had shorter reaction times than no sound. The tone-same and tone-change conditions had shorter movement times and higher peak velocities, with no change in trajectory variability or endpoint error. Therefore, rhythmic and non-rhythmic auditory stimuli impacted movement performance differently. Based on the pattern of results, we propose that multiple mechanisms impact movement planning processes when rhythmic auditory stimuli are present.

18.
The question of how vision and audition interact in natural object identification is currently a matter of debate. We developed a large set of auditory and visual stimuli representing natural objects in order to facilitate research in the field of multisensory processing. Normative data were obtained for 270 brief environmental sounds and 320 visual object stimuli. Each stimulus was named, categorized, and rated with regard to familiarity and emotional valence by N=56 participants (Study 1). This multimodal stimulus set was employed in two subsequent crossmodal priming experiments that used semantically congruent and incongruent stimulus pairs in an S1-S2 paradigm. Task-relevant targets were either auditory (Study 2) or visual stimuli (Study 3). The behavioral data of both experiments showed a crossmodal priming effect, with shorter reaction times for congruent as compared to incongruent stimulus pairs. The observed facilitation effect suggests that object identification in one modality is influenced by input from another modality. This result implies that congruent visual and auditory stimulus pairs were perceived as the same object and provides a first validation of the multimodal stimulus set.

19.
The ability to recognize familiar individuals with different sensory modalities plays an important role in animals living in complex physical and social environments. Individual recognition of familiar individuals was studied in a female chimpanzee named Pan. In previous studies, Pan learned an auditory–visual intermodal matching task (AVIM) consisting of matching vocal samples with the facial pictures of corresponding vocalizers (humans and chimpanzees). The goal of this study was to test whether Pan was able to generalize her AVIM ability to new sets of voice and face stimuli, including those of three infant chimpanzees. Experiment 1 showed that Pan performed intermodal individual recognition of familiar adult chimpanzees and humans very well. However, individual recognition of infant chimpanzees was poorer relative to recognition of adults. A transfer test with new auditory samples (Experiment 2) confirmed the difficulty in recognizing infants. A remaining question was what kind of cues were crucial for the intermodal matching. We tested the effect of visual cues (Experiment 3) by introducing new photographs representing the same chimpanzees in different visual perspectives. Results showed that only the back view was difficult to recognize, suggesting that facial cues can be critical. We also tested the effect of auditory cues (Experiment 4) by shortening the length of auditory stimuli, and results showed that 200 ms vocal segments were the limit for correct recognition. Together, these data demonstrate that auditory–visual intermodal recognition in chimpanzees might be constrained by the degree of exposure to different modalities and limited to specific visual cues and thresholds of auditory cues.

20.
Three experiments are reported on the influence of different timing relations on the McGurk effect. In the first experiment, it is shown that strict temporal synchrony between auditory and visual speech stimuli is not required for the McGurk effect. Subjects were strongly influenced by the visual stimuli when the auditory stimuli lagged the visual stimuli by as much as 180 msec. In addition, a stronger McGurk effect was found when the visual and auditory vowels matched. In the second experiment, we paired auditory and visual speech stimuli produced under different speaking conditions (fast, normal, clear). The results showed that the manipulations in both the visual and auditory speaking conditions independently influenced perception. In addition, there was a small but reliable tendency for the better matched stimuli to elicit more McGurk responses than unmatched conditions. In the third experiment, we combined auditory and visual stimuli produced under different speaking conditions (fast, clear) and delayed the acoustics with respect to the visual stimuli. The subjects showed the same pattern of results as in the second experiment. Finally, the delay did not cause different patterns of results for the different audiovisual speaking style combinations. The results suggest that perceivers may be sensitive to the concordance of the time-varying aspects of speech but they do not require temporal coincidence of that information.
