Similar articles
 Found 20 similar articles (search time: 15 ms)
1.
Investigations of situations involving spatial discordance between auditory and visual data which can otherwise be attributed to a common origin have revealed two main phenomena: cross-modal bias and perceptual fusion (or ventriloquism). The focus of the present study is the relationship between these two. The question asked was whether bias occurred only with fusion, as is predicted by some accounts of reactions to discordance, among them those based on cue substitution. The approach consisted of having subjects, on each trial, both point to signals in one modality in the presence of conflicting signals in the other modality and produce same-different origin judgments. To avoid the confounding of immediate effects with cumulative adaptation, which was allowed in most previous studies, the direction and amplitude of discordance were varied randomly from trial to trial. Experiment 1, which was a pilot study, showed that both visual bias of auditory localization and auditory bias of visual localization can be observed under such conditions. Experiment 2, which addressed the main question, used a method which controls for the selection involved in separating fusion from no-fusion trials and showed that the attraction of auditory localization by conflicting visual inputs occurs even when fusion is not reported. This result is inconsistent with purely postperceptual views of cross-modal interactions. The question could not be answered for auditory bias of visual localization, which, although significant, was very small in Experiment 1 and fell below significance under the conditions of Experiment 2.

2.
The retention of discrete movements was examined under augmented and minimal feedback conditions. The augmented condition was presented for both the criterion and recall movements and consisted of providing visual, auditory, and heightened proprioceptive cues with each movement. Under minimal conditions, no visual, auditory, or heightened proprioceptive cues were provided. Absolute and constant error revealed that under augmented conditions recall accuracy was improved. The retention interval × feedback condition interaction failed to reach significance for both sources of error, indicating that there was no evidence of differential decay rates. Variable error appeared to be an informative index of forgetting. The results were interpreted to be in support of the view that a memory trace is imprinted with feedback from all modalities and that the amount of such feedback determines memory trace strength.

3.
Train drivers routinely perform visual search tasks to locate combinations of coloured signals controlling their progress, and are required to make discrete decisions on the basis of what they see. Two studies are reported which examine the performance of students under conditions that simulate critical aspects of United Kingdom train drivers' signal-response task. The most crucial cautionary signal, the single yellow signal used to alert drivers to a transition to potentially hazardous situations, was responded to more slowly than other signal types. A longer processing time was found whether (Study 2) or not (Study 1) the signal appearance was accompanied by the auditory warning signal train drivers encounter under actual driving conditions. The results are consistent with predictions from Treisman and Gelade's (1980) Feature Integration Theory, and the implications for signal sighting practice are discussed. Copyright © 2005 John Wiley & Sons, Ltd.

4.
Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than for the auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here, we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception, where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances.

5.
This study examined the effects of visual-verbal load (as measured by a visually presented reading-memory task with three levels) on a visual/auditory stimulus-response task. The three levels of load were defined as follows: "No Load" meant no other stimuli were presented concurrently; "Free Load" meant that a letter (A, B, C, or D) appeared at the same time as the visual or auditory stimulus; and "Force Load" was the same as "Free Load," but the participants were also instructed to count how many times the letter A appeared. The stimulus-response task also had three levels: "irrelevant," "compatible," and "incompatible" spatial conditions. These required different key-pressing responses. The visual stimulus was a red ball presented either to the left or to the right of the display screen, and the auditory stimulus was a tone delivered from a position similar to that of the visual stimulus. Participants also processed an irrelevant stimulus. The results indicated that participants perceived auditory stimuli earlier than visual stimuli and reacted faster under stimulus-response compatible conditions. These results held even under a high visual-verbal load. These findings suggest the following guidelines for systems used in driving: an auditory source, appropriately compatible signal and manual-response positions, and a visually simplified background.

6.
Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object’s instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18–39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio–visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio–visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
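The τ variable described in this abstract is the ratio of an object's instantaneous optical size to its rate of change. A minimal sketch of that computation follows; the function name and example values are illustrative only, not the authors' analysis code:

```python
def tau_from_optical_size(theta, dtheta_dt):
    """Estimate time to contact (TTC) from instantaneous optical size
    (theta, e.g. in degrees) and its rate of change (dtheta_dt, deg/s),
    per the tau hypothesis: TTC ~ theta / (d theta / dt)."""
    if dtheta_dt <= 0:
        # A non-increasing optical size means the object is not approaching.
        raise ValueError("object must be approaching (optical size increasing)")
    return theta / dtheta_dt

# Example: a vehicle subtending 2.0 deg, expanding at 0.5 deg/s,
# yields a tau-based TTC estimate of 4.0 s.
print(tau_from_optical_size(2.0, 0.5))  # → 4.0
```

The same ratio applies to the auditory case with sound intensity in place of optical size; the heuristic cues the abstract contrasts with τ (final optical size, final sound pressure level) would simply read off the last sample instead.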

7.
Action can affect visual perception if the action's expected sensory effects resemble a concurrent unstable or deviant event. To determine whether action can also change auditory perception, participants were required to play pairs of octave-ambiguous tones by pressing successive keys on a piano or computer keyboard and to judge whether each pitch interval was rising or falling. Both pianists and nonpianist musicians gave significantly more “rising” responses when the order of key presses was left-to-right than when it was right-to-left, in accord with the pitch mapping of the piano. However, the effect was much larger in pianists. Pianists showed a similarly large effect when they passively observed the experimenter pressing keys on a piano keyboard, as long as the keyboard faced the participant. The results suggest that acquired action–effect associations can affect auditory perceptual judgement.

8.
The numerosity of any set of discrete elements can be depicted by a genuinely abstract number representation, irrespective of whether they are presented in the visual or auditory modality. The accumulator model predicts that no cost should apply for comparing numerosities within and across modalities. However, in behavioral studies, some inconsistencies have been apparent in the performance of number comparisons among different modalities. In this study, we tested whether and how numerical comparisons of visual, auditory, and cross-modal presentations would differ under adequate control of stimulus presentation. We measured the Weber fractions and points of subjective equality of numerical discrimination in visual, auditory, and cross-modal conditions. The results demonstrated differences between the performances in visual and auditory conditions, such that numerical discrimination of an auditory sequence was more precise than that of a visual sequence. The performance of cross-modal trials lay between performance levels in the visual and auditory conditions. Moreover, the number of visual stimuli was overestimated as compared to that of auditory stimuli. Our findings imply that the process of approximate numerical representation is complex and involves multiple stages, including accumulation and decision processes.
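The Weber fraction and point of subjective equality (PSE) this abstract measures are standard psychophysical summaries of a psychometric function. A minimal sketch, assuming a cumulative-Gaussian choice model and hypothetical data (not the study's), where the PSE is the fitted mean and the Weber fraction is the fitted spread divided by the PSE:

```python
import math

def cum_gauss(x, mu, sigma):
    # Cumulative Gaussian: probability of responding "comparison more numerous".
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def fit_psychometric(levels, p_greater):
    """Crude grid-search fit of a cumulative Gaussian to choice proportions.
    Returns (PSE, Weber fraction) = (mu, sigma / mu)."""
    best = None
    for mu in (m / 100 for m in range(50, 201)):
        for sigma in (s / 100 for s in range(1, 101)):
            err = sum((cum_gauss(x, mu, sigma) - p) ** 2
                      for x, p in zip(levels, p_greater))
            if best is None or err < best[0]:
                best = (err, mu, sigma)
    _, mu, sigma = best
    return mu, sigma / mu

# Hypothetical comparison/standard numerosity ratios and the proportion of
# "comparison more numerous" responses at each ratio:
levels = [0.6, 0.8, 1.0, 1.25, 1.67]
p = [0.05, 0.25, 0.50, 0.75, 0.95]
pse, w = fit_psychometric(levels, p)
```

A PSE shifted away from 1.0 would reflect the kind of cross-modal over- or underestimation the abstract reports, while a smaller Weber fraction corresponds to the more precise auditory discrimination.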

9.
Three experiments are reported on the influence of different timing relations on the McGurk effect. In the first experiment, it is shown that strict temporal synchrony between auditory and visual speech stimuli is not required for the McGurk effect. Subjects were strongly influenced by the visual stimuli when the auditory stimuli lagged the visual stimuli by as much as 180 msec. In addition, a stronger McGurk effect was found when the visual and auditory vowels matched. In the second experiment, we paired auditory and visual speech stimuli produced under different speaking conditions (fast, normal, clear). The results showed that the manipulations in both the visual and auditory speaking conditions independently influenced perception. In addition, there was a small but reliable tendency for the better matched stimuli to elicit more McGurk responses than unmatched conditions. In the third experiment, we combined auditory and visual stimuli produced under different speaking conditions (fast, clear) and delayed the acoustics with respect to the visual stimuli. The subjects showed the same pattern of results as in the second experiment. Finally, the delay did not cause different patterns of results for the different audiovisual speaking style combinations. The results suggest that perceivers may be sensitive to the concordance of the time-varying aspects of speech but they do not require temporal coincidence of that information.

10.
Recent research suggests an auditory temporal deficit as a possible contributing factor to poor phonemic awareness skills. This study investigated the relationship between auditory temporal processing of nonspeech sounds and phonological awareness ability in children with a reading disability, aged 8-12 years, using Tallal's tone-order judgement task. Normal performance on the tone-order task was established for 36 normal readers. Forty-two children with developmental reading disability were then subdivided by their performance on the tone-order task. Average and poor tone-order subgroups were then compared on their ability to process speech sounds and visual symbols, and on phonological awareness and reading. The presence of a tone-order deficit did not relate to performance on the order processing of speech sounds, to poorer phonological awareness or to more severe reading difficulties. In particular, there was no evidence of a group by interstimulus interval interaction, as previously described in the literature, and thus little support for a general auditory temporal processing difficulty as an underlying problem in poor readers. In this study, deficient order judgement on a nonverbal auditory temporal order task (tone task) did not underlie phonological awareness or reading difficulties.

11.
Perceptual learning was used to study potential transfer effects in a duration discrimination task. Subjects were trained to discriminate between two empty temporal intervals marked with auditory beeps, using a two-alternative forced-choice paradigm. The major goal was to examine whether perceptual learning would generalize to empty intervals that have the same duration but are marked by visual flashes. The experiment also included longer intervals marked with auditory beeps and filled auditory intervals of the same duration as the trained interval, in order to examine whether perceptual learning would generalize to these conditions within the same sensory modality. In contrast to previous findings showing a transfer from the haptic to the auditory modality, the present results do not indicate a transfer from the auditory to the visual modality; but they do show transfers within the auditory modality.

12.
A two-stage model for visual-auditory interaction in saccadic latencies
In two experiments, saccadic response time (SRT) for eye movements toward visual target stimuli at different horizontal positions was measured under simultaneous or near-simultaneous presentation of an auditory nontarget (distractor). The horizontal position of the auditory signal was varied, using a virtual auditory environment setup. Mean SRT to a visual target increased with distance to the auditory nontarget and with delay of the onset of the auditory signal relative to the onset of the visual stimulus. A stochastic model is presented that distinguishes a peripheral processing stage with separate parallel activation by visual and auditory information from a central processing stage at which intersensory integration takes place. Two model versions differing with respect to the role of the auditory distractors are tested against the SRT data.
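A two-stage account of this kind can be illustrated with a toy simulation. All parameters and the linear distance/delay effect below are hypothetical, chosen only to reproduce the qualitative pattern reported (mean SRT rising with audio-visual distance and auditory onset delay), not the authors' fitted model:

```python
import random

def simulate_srt(av_distance_deg, soa_ms, n=10000, seed=1):
    """Toy two-stage saccadic RT model.
    Stage 1: stochastic peripheral processing of the visual target
             (exponentially distributed duration).
    Stage 2: central intersensory integration, whose duration grows with
             the spatial distance to the auditory nontarget and with the
             auditory onset delay (SOA). Returns the mean simulated SRT."""
    rng = random.Random(seed)
    base_peripheral = 120.0   # ms, mean of the exponential stage-1 duration
    base_central = 60.0       # ms, central stage at zero distance and delay
    total = 0.0
    for _ in range(n):
        stage1 = rng.expovariate(1.0 / base_peripheral)
        stage2 = base_central + 0.8 * av_distance_deg + 0.3 * soa_ms
        total += stage1 + stage2
    return total / n

# Mean SRT should grow with distance and with auditory delay:
near = simulate_srt(av_distance_deg=0, soa_ms=0)
far = simulate_srt(av_distance_deg=40, soa_ms=50)
```

The two model versions the abstract mentions would differ in how the auditory distractor enters stage 2 (e.g., facilitation at small distances versus pure interference), which this sketch does not attempt to distinguish.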

13.
The purpose of the study was to determine the effect of visual and auditory warning signals on visual reaction time under varying levels of subject alertness. An experimental group of 30 subjects was tested with an auditory or visual warning signal; foreperiods lasted 3, 2, and 4 sec. Reaction time was shorter as alertness improved and when an auditory warning signal was used. Comparable measures in a control group showed that visual reaction time was shorter when an auditory warning signal was used.

14.
Findings from three experiments support the conclusion that auditory primes facilitate the processing of related targets. In Experiments 1 and 2, we employed a crossmodal Stroop color identification task with auditory color words (as primes) and visual color patches (as targets). Responses were faster for congruent priming, in comparison to neutral or incongruent priming. This effect also emerged for different levels of time compression of the auditory primes (to 30 % and 10 % of the original length; i.e., 120 and 40 ms) and turned out to be even more pronounced under high-perceptual-load conditions (Exps. 1 and 2). In Experiment 3, target-present or -absent decisions for brief target displays had to be made, thereby ruling out response-priming processes as a cause of the congruency effects. Nevertheless, target detection (d') was increased by congruent primes (30 % compression) in comparison to incongruent or neutral primes. Our results suggest semantic object-based auditory–visual interactions, which rapidly increase the denoted target object’s salience. This would apply, in particular, to complex visual scenes.
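The detection measure d' used in Experiment 3 is standard signal-detection sensitivity: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch of computing it from raw counts, with a log-linear correction against extreme rates and made-up counts for illustration (not the study's data):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction (add 0.5 to each count, 1 to each total)
    keeps rates away from 0 and 1, where z is undefined."""
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)

# Hypothetical counts: a congruent-prime condition with better detection
# than an incongruent one, as in the pattern the abstract reports.
congruent = d_prime(hits=45, misses=5, false_alarms=8, correct_rejections=42)
incongruent = d_prime(hits=35, misses=15, false_alarms=12, correct_rejections=38)
```

Because d' separates sensitivity from response bias, a congruency effect on d' (rather than only on response speed) is what licenses the abstract's claim that primes change target salience, not merely response tendencies.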

15.
Using both recognition and recall responses, confusion and intrusion errors were obtained for briefly exposed 11-letter strings. The patterns of errors were sharply dependent upon experimental variables. In Experiment I Ss made auditory and visual intrusions with recall, but neither with recognition. In Experiment II increasing exposure time and eliminating a poststimulus cue primarily increased auditory confusions. This suggests that auditory and visual confusions reflect strategy-contingent recoding rather than modality-specific encoding.

16.
The ability of black and brown lemurs (Eulemur macaco and Eulemur fulvus) to make inferences about hidden food was tested using the same paradigm as in Call’s (J Comp Psychol 118:232–241, 2004) cup task experiment. When provided with either visual or auditory information about the content of two boxes (one empty, one baited), lemurs performed better in the auditory condition than in the visual one. When provided with visual or auditory information only about the empty box, one subject out of four was above chance in the auditory condition, implying inferential reasoning. No subject was successful in the visual condition. This study reveals that (1) lemurs are capable of inferential reasoning by exclusion and (2) lemurs make better use of auditory than visual information. The results are compared with the performances recorded in apes and monkeys under the same paradigm.

17.
Attention development is a critical foundation for cognitive abilities. This study examines the relationship between phasic aspects of alertness and disengagement in infants, using the overlap paradigm. Research shows that visual disengagement in the overlap condition is modulated by auditory cues in 6-year-olds. Our participants were aged 6 months (N = 20), 12 months (N = 27), and 24 months (N = 14). Phasic alertness during overlap and no-overlap tasks was manipulated using a spatially nondirective warning signal shortly before onset of the peripheral target. Responses in the overlap condition were slower and fewer than in the no-overlap condition. The signal showed a tendency to reduce latencies in both overlap and no-overlap conditions. While our hypothesis that the warning signal might be more effective in younger infants was not supported, we confirmed the association reported in previous studies between temperamental soothability and disengagement latencies in infancy.

18.
The relationship between eye movements and visual imagery has almost exclusively been studied by treating eye movements as the dependent variable while an imagery task is being performed. In the present experiment three eye-movement treatment conditions were manipulated, within Ss, as the independent variable in order to study their effects on the free recall of nouns which Ss had to store by means of imagery. The imagery-evoking capacity (I) of nouns was varied over three levels within lists of Dutch nouns (Low, Medium and High I). Ss were instructed to generate a visual image to each separate noun under the following treatment conditions: (a) while they looked over and scanned their image as if they were looking at the real object; (b) while they received concurrent visual stimulation from a checkerboard pattern; (c) while they fixated on a target. Reliable, but minor effects of treatment conditions on the recall scores were found. The results were discussed in terms of possible theories about the nature of the relationship between eye movements and visual imagery.

19.
Two experiments were performed under visual-only and visual-auditory discrepancy conditions (dubs) to assess observers’ abilities to read speech information on a face. In the first experiment, identification and multiple choice testing were used. In addition, the relation between visual and auditory phonetic information was manipulated and related to perceptual bias. In the second experiment, the “compellingness” of the visual-auditory discrepancy as a single speech event was manipulated. Subjects also rated the confidence they had that their perception of the lipped word was accurate. Results indicated that competing visual information exerted little effect on auditory speech recognition, but visual speech recognition was substantially interfered with when discrepant auditory information was present. The extent of auditory bias was found to be related to the abilities of observers to read speech under nondiscrepancy conditions, the magnitude of the visual-auditory discrepancy, and the compellingness of the visual-auditory discrepancy as a single event. Auditory bias during speech was found to be a moderately compelling conscious experience, and not simply a case of confused responding or guessing. Results were discussed in terms of current models of perceptual dominance and related to results from modality discordance during space perception.

20.
The study is designed to investigate response inhibition in children with conduct disorder and borderline intellectual functioning. To this end, children are compared to a normal peer control group using the Alertness test. The test has two conditions. In one condition, children are instructed to push a response button after a visual "go" signal is presented on the screen. In a second condition the "go" signal is preceded by an auditory signal, telling the child that a target stimulus will occur soon. Compared to the control group, the group carrying the dual diagnosis made many preliminary responses (responses before the presentation of the "go" signal), especially in the condition with an auditory signal. This impulsive response style was controlled for attention deficit/hyperactivity disorder characteristics of the children.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号