Similar Literature
20 similar documents were retrieved.
1.
Warning signals enhance psychomotor performance by optimising preparation for the arrival of an event. Recent evidence, however, suggests that a warning signal can also disrupt attentional preparation by interfering with the preparation process. It was hypothesised that a warning signal consisting of a change to the task-relevant items (array onset) may be more effective than a traditional warning signal consisting of the arrival, or removal, of a bar-cue that is independent of the task array. In three experiments array onset was a more effective warning signal than the bar-cue because reaction times were significantly faster without an increase in errors. In addition, an auditory warning signal resulted in faster reaction times than the bar-cue but in performance equivalent to that with an onset warning signal. Thus, the superiority of an auditory warning signal reported by Davis and Green was not found when the interference of the visual warning signal with preparation was reduced. These results suggest that a visual warning signal consisting of a change to the stimulus array is more effective than an event independent of the stimulus array.

2.
Choice reaction latencies were measured at three different a priori probabilities for two stimulus alternatives. Unlike the results of some other studies, the mean latency of a given response was nearly the same whether the response was correct or incorrect. The discriminable stimuli were a 1000- or a 1700-Hz tone presented at 70 dB SPL. Latencies and standard deviations, based on about 17,000 observations, are reported for three observers. The data are compared with predictions of the optimum sequential model of Wald and Stone and two modifications of that random-walk model, one proposed by Link and Heath and the other proposed by Laming. Fast-guess analyses were also carried out. The three-parameter version of either the sequential or the modified random-walk model provided reasonably accurate predictions of the mean data for each observer. The parameters estimated by the fast-guess analysis were unrealistic. There are three obvious differences between this experiment and most previous choice reaction-time experiments. First is stimulus modality: we used an auditory signal, whereas most of the previous studies used a visual signal. Second, the observers practiced more in this experiment than in most previous experiments. Finally, there was a random foreperiod with a heavy penalty for anticipations. One or more of these factors is the probable reason for the discrepancy between our results and those of previous studies.
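As a point of reference for the model class compared in this abstract, the sketch below simulates a simple symmetric random walk to two response thresholds, the mechanism shared by the Wald/Stone sequential model and the Link-Heath and Laming variants. All parameter values are illustrative assumptions, not the authors' fitted estimates; the point is only that an unbiased walk of this kind predicts nearly equal mean latencies for correct and incorrect responses, the pattern reported above.

```python
# Minimal random-walk choice RT sketch (illustrative parameters, not fitted values).
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(drift=0.1, threshold=20.0, step_sd=1.0, t0=150.0, dt=1.0):
    """Accumulate noisy evidence until an upper or lower threshold is crossed.

    Returns (correct, RT in ms); with drift > 0 the upper threshold is the
    correct response. t0 is a fixed non-decision time, dt the duration of one step.
    """
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold:
        evidence += rng.normal(drift, step_sd)
        t += dt
    return evidence > 0, t0 + t

trials = [simulate_trial() for _ in range(5000)]
correct_rt = [rt for ok, rt in trials if ok]
error_rt = [rt for ok, rt in trials if not ok]

# A symmetric, unbiased walk predicts similar correct and error latencies.
print(f"P(correct) = {len(correct_rt) / len(trials):.3f}")
print(f"mean correct RT = {np.mean(correct_rt):.1f} ms, "
      f"mean error RT = {np.mean(error_rt):.1f} ms")
```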

3.
Errors from a serial response task involving single-finger responses to alphabetic stimuli are analysed and discussed in relation to findings which have been reported from tasks with more compatible stimulus-response relationships. Errors are divided into three distinguishable subsets and in each case found to have longer latencies than correct responses. Those which result from mirroring the required response about the centre of the hand are found to resist elimination during practice and their frequency seems to depend on the type of code used. In all cases error correction times are faster than the times to make a correct response but mirror errors and errors involving a finger adjacent to the correct response are corrected faster than other errors. The findings are discussed in relation to the theory of choice reaction time and error correction.

4.
Understanding and modeling the influence of mobile phone use on pedestrian behaviour is important for several safety and performance evaluations. Mobile phone use affects pedestrian perception of the surrounding traffic environment and reduces situation awareness. This study investigates the effect of distraction due to mobile phone use (i.e., visual and auditory) on pedestrian reaction time to the pedestrian signal. Traffic video data were collected from four crosswalks in Canada and China. A multilevel mixed-effects accelerated failure time (AFT) approach is used to model pedestrian reaction times, with random intercepts capturing the cluster-specific (country-level) heterogeneity. Potential reaction time influencing factors were investigated, including pedestrian demographic attributes, distraction characteristics, and environment-related parameters. Results show that pedestrian reaction times were longer in Canada than in China under both the non-distraction and distraction conditions. The auditory and visual distractions increase pedestrian reaction time by 67% and 50% on average, respectively. Pedestrian reactions were slower at road-segment crosswalks compared to intersection crosswalks, at longer distraction durations, and for males aged over 40 compared to other pedestrians. Moreover, pedestrian reactions were faster at higher traffic awareness levels.
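For readers unfamiliar with the AFT formulation, the sketch below illustrates how a lognormal accelerated failure time model expresses distraction effects as multiplicative "time ratios" of the form exp(beta), which is how percentage increases such as the reported 67% and 50% arise. All coefficients and the country intercepts are made-up values chosen only to mirror those magnitudes; they are not the study's estimates.

```python
# Illustrative lognormal AFT specification:
#   log(T) = b0 + b_aud*auditory + b_vis*visual + u_country + sigma*eps
# so exp(b) multiplies reaction time. Values below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 5000

auditory = rng.integers(0, 2, n)                   # 1 = auditory distraction (talking)
visual = rng.integers(0, 2, n) * (1 - auditory)    # 1 = visual distraction (texting)
country = rng.integers(0, 2, n)                    # 0 = China, 1 = Canada (cluster)

b0, b_aud, b_vis, sigma = np.log(1.2), np.log(1.67), np.log(1.50), 0.4
u_country = np.array([0.0, 0.15])                  # assumed cluster-specific intercepts

log_t = (b0 + b_aud * auditory + b_vis * visual
         + u_country[country] + sigma * rng.normal(size=n))
reaction_time = np.exp(log_t)                      # seconds

base = reaction_time[(auditory == 0) & (visual == 0)].mean()
print(f"baseline mean RT     : {base:.2f} s")
print(f"auditory distraction : {reaction_time[auditory == 1].mean():.2f} s")
print(f"visual distraction   : {reaction_time[visual == 1].mean():.2f} s")
```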

5.
Data from a sustained monitoring experiment involving auditory, visual and combined audio-visual signal recognition were used to assess the predictive validity of five models of bisensory information processing. Satisfactory predictions of the dual-mode performance levels were made only by two models, neither of which assumes that the auditory and visual systems operate independently, and correlations which attest to this nonindependence are presented. One of these models explicitly assumes that the two systems are associated so that their judgments tend to coincide; the other assumes that the visual system “alerts” the auditory system to the presence of a signal. Both models accurately predict the levels of d’ and β in the dual-mode condition, and the “alerting” one also accounts for the observed reduction in response latencies.
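To make the d' and β indices concrete, here is a minimal equal-variance Gaussian signal-detection computation of both quantities from hit and false-alarm rates. The rates below are hypothetical placeholders, not the study's data; the sketch only shows how the dual-mode indices the models must predict are derived.

```python
# Equal-variance Gaussian SDT: d' = z(H) - z(F); beta = likelihood ratio at criterion.
from scipy.stats import norm

def dprime_beta(hit_rate, fa_rate):
    """Return (d', beta) under the equal-variance Gaussian model."""
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_f
    beta = norm.pdf(z_h) / norm.pdf(z_f)   # equivalently exp((z_f**2 - z_h**2) / 2)
    return d_prime, beta

# Hypothetical hit/false-alarm rates for the three monitoring conditions.
for label, h, f in [("auditory alone", 0.80, 0.15),
                    ("visual alone",   0.75, 0.20),
                    ("audio-visual",   0.90, 0.12)]:
    d, b = dprime_beta(h, f)
    print(f"{label:15s} d' = {d:.2f}, beta = {b:.2f}")
```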

6.
The authors consider six models of underlying process in the weapon identification task: the first two are response-time extensions of signal detection models; the last four, of the process dissociation model. Predictions for accuracy data, correct response latencies, and false response latencies are used to discriminate between models. In the present study, racial bias in responses and correct response latency was replicated. New findings were that the direction of bias was reversed in error latency and that errors were faster than correct responses. These findings rule out four models, in particular, the idea that race biases early perception and interpretation of targets. Implications for reducing errors in the weapon identification task and possibilities of discriminating between the remaining two models are discussed.
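As background for the process dissociation models mentioned above, the sketch below applies the process dissociation equations as they are commonly written for the weapon identification task: on stereotype-congruent trials P("gun") = C + (1 - C)A, and on incongruent trials P("gun") = (1 - C)A, where C is the controlled and A the automatic component. The input probabilities are hypothetical, not data from this study.

```python
# Standard process dissociation estimates (controlled C, automatic A).
def process_dissociation(p_gun_congruent, p_gun_incongruent):
    """Solve C + (1 - C)*A = p_congruent and (1 - C)*A = p_incongruent for (C, A)."""
    control = p_gun_congruent - p_gun_incongruent
    automatic = p_gun_incongruent / (1.0 - control)
    return control, automatic

# Hypothetical response probabilities, for illustration only.
C, A = process_dissociation(p_gun_congruent=0.85, p_gun_incongruent=0.25)
print(f"controlled estimate C = {C:.2f}, automatic estimate A = {A:.2f}")
```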

7.
Congruent information conveyed over different sensory modalities often facilitates a variety of cognitive processes, including speech perception (Sumby & Pollack, 1954). Since auditory processing is substantially faster than visual processing, auditory-visual integration can occur over a surprisingly wide temporal window (Stein, 1998). We investigated the processing architecture mediating the integration of acoustic digit names with corresponding symbolic visual forms. The digits "1" or "2" were presented in auditory, visual, or bimodal format at several stimulus onset asynchronies (SOAs; 0, 75, 150, and 225 msec). The reaction times (RTs) for echoing unimodal auditory stimuli were approximately 100 msec faster than the RTs for naming their visual forms. Correspondingly, bimodal facilitation violated race model predictions, but only at SOA values greater than 75 msec. These results indicate that the acoustic and visual information are pooled prior to verbal response programming. However, full expression of this bimodal summation is dependent on the central coincidence of the visual and auditory inputs. These results are considered in the context of studies demonstrating multimodal activation of regions involved in speech production.
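The race model test invoked here is typically Miller's inequality, which bounds the redundant-signals distribution by the sum of the unimodal distributions: P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t) for every t. The sketch below checks that bound on simulated placeholder RTs (not the study's data), simply to show how a violation is detected.

```python
# Check Miller's race model inequality on simulated RT samples (milliseconds).
import numpy as np

rng = np.random.default_rng(2)
rt_a = rng.normal(420, 60, 400)   # auditory-only latencies
rt_v = rng.normal(520, 60, 400)   # visual-only latencies
# Bimodal RTs: minimum of the two channels plus an assumed extra facilitation.
rt_av = np.minimum(rng.normal(420, 60, 400), rng.normal(520, 60, 400)) - 30

def ecdf(sample, t):
    """Empirical CDF of `sample` evaluated at each value in array `t`."""
    return np.mean(sample[:, None] <= t, axis=0)

t_grid = np.linspace(250, 700, 46)
violation = ecdf(rt_av, t_grid) - (ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid))
print("race model violated at some t:", bool(np.any(violation > 0)))
print("largest violation:", float(violation.max()))
```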

8.
Speech perception, especially in noise, may be maximized if the perceiver observes the naturally occurring visual-plus-auditory cues inherent in the production of spoken language. Evidence is conflicting, however, about which aspects of visual information mediate enhanced speech perception in noise. For this reason, we investigated the relative contributions of audibility and the type of visual cue in three experiments in young adults with normal hearing and vision. Relative to static visual cues, access to the talker's phonetic gestures in speech production, especially in noise, was associated with (a) faster response times and greater sensitivity for speech understanding in noise, and (b) shorter latencies and reduced amplitudes of auditory N1 event-related potentials. Dynamic chewing facial motion also decreased the N1 latency, but only meaningful linguistic motions reduced the N1 amplitude. The hypothesis that auditory-visual facilitation is specific to properties of natural, dynamic speech gestures was partially supported.

9.
Previous studies have shown that adults respond faster and more reliably to bimodal compared to unimodal localization cues. The current study investigated for the first time the development of audiovisual (A-V) integration in spatial localization behavior in infants between 1 and 10 months of age. We observed infants' head and eye movements in response to auditory, visual, or both kinds of stimuli presented either 25 degrees or 45 degrees to the right or left of midline. Infants under 8 months of age intermittently showed response latencies significantly faster toward audiovisual targets than toward either auditory or visual targets alone. They did so, however, without exhibiting a reliable violation of the Race Model, suggesting that probability summation alone could explain the faster bimodal response. In contrast, infants between 8 and 10 months of age exhibited bimodal response latencies significantly faster than unimodal latencies for both eccentricity conditions, and their latencies violated the Race Model at 25 degrees eccentricity. In addition to this main finding, we found age-dependent eccentricity and modality effects on response latencies. Together, these findings suggest that audiovisual integration emerges late in the first year of life and are consistent with neurophysiological findings from multisensory sites in the superior colliculus of infant monkeys showing that multisensory enhancement of responsiveness is not present at birth but emerges later in life.

10.
This study examined the effects of visual-verbal load (as measured by a visually presented reading-memory task with three levels) on a visual/auditory stimulus-response task. The three levels of load were defined as follows: "No Load" meant no other stimuli were presented concurrently; "Free Load" meant that a letter (A, B, C, or D) appeared at the same time as the visual or auditory stimulus; and "Force Load" was the same as "Free Load," but the participants were also instructed to count how many times the letter A appeared. The stimulus-response task also had three levels: "irrelevant," "compatible," and "incompatible" spatial conditions. These required different key-pressing responses. The visual stimulus was a red ball presented either to the left or to the right of the display screen, and the auditory stimulus was a tone delivered from a position similar to that of the visual stimulus. Participants also processed an irrelevant stimulus. The results indicated that participants perceived auditory stimuli earlier than visual stimuli and reacted faster under stimulus-response compatible conditions. These results held even under a high visual-verbal load. These findings suggest the following guidelines for systems used in driving: an auditory source, appropriately compatible signal and manual-response positions, and a visually simplified background.

11.
Responses are typically faster and more accurate when both auditory and visual modalities are stimulated than when only one is. This bimodal advantage is generally attributed to a speeding of responding on bimodal trials, relative to unimodal trials. It remains possible that this effect might be due to a performance decrement on unimodal ones. To investigate this, two levels of auditory and visual signal intensities were combined in a double-factorial paradigm. Responses to the onset of the imperative signal were measured under go/no-go conditions. Mean reaction times to the four types of bimodal stimuli exhibited a superadditive interaction. This is evidence for the parallel self-terminating processing of the two signal components. Violations of the race model inequality also occurred, and measures of processing capacity showed that efficiency was greater on the bimodal than on the unimodal trials. These data are discussed in terms of a possible underlying neural substrate.
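In this literature, "measures of processing capacity" usually refers to the Townsend-Nozawa capacity coefficient, C(t) = H_AV(t) / (H_A(t) + H_V(t)), where H(t) = -log S(t) is the integrated hazard of the RT survivor function S(t); C(t) > 1 indicates greater efficiency (supercapacity) on bimodal trials. The sketch below, run on simulated placeholder RTs rather than the experiment's data, shows how the coefficient is computed.

```python
# Capacity coefficient C(t) from integrated hazards of RT survivor functions.
import numpy as np

rng = np.random.default_rng(3)
rt_a = rng.gamma(shape=9, scale=40, size=500)    # auditory-only RTs (ms)
rt_v = rng.gamma(shape=9, scale=45, size=500)    # visual-only RTs (ms)
rt_av = rng.gamma(shape=9, scale=30, size=500)   # bimodal RTs (ms), assumed faster

def integrated_hazard(sample, t):
    """H(t) = -log S(t), with S(t) the empirical survivor function at each t."""
    survivor = np.mean(sample[:, None] > t, axis=0)
    return -np.log(np.clip(survivor, 1e-6, 1.0))

t_grid = np.linspace(250, 450, 5)
capacity = integrated_hazard(rt_av, t_grid) / (
    integrated_hazard(rt_a, t_grid) + integrated_hazard(rt_v, t_grid)
)
for t, c in zip(t_grid, capacity):
    print(f"t = {t:5.0f} ms  C(t) = {c:.2f}")   # C(t) > 1 -> supercapacity
```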

12.
An experiment carried out to determine the relation between auditory and visual reaction times suggested that when the general level of response is slow, visual RTs are faster than auditory, and that the reverse is the case when the level of response is fast. Thus most normal subjects have an auditory RT faster than visual, and most schizophrenics the reverse. However, the difference between auditory and visual RTs does not appear to depend directly on schizophrenic pathology except insofar as this is a factor in the general slowness of reaction time.

13.
This study investigated speed of processing (SOP) among college-level adult dyslexic and normal readers in nonlinguistic and sublexical linguistic auditory and visual oddball tasks, and a nonlinguistic cross-modal choice reaction task. Behavioral and electrophysiological (ERP) measures were obtained. The results revealed that, in both groups, reaction times (RT) were longer and the latencies of the P2 and P3 components occurred later in the visual as compared to the auditory oddball tasks. RT and ERP latencies were longest in the cross-modal task. RT and ERP latencies were delayed among dyslexic as compared to normal readers across tasks. On the oddball tasks, group differences in brain activity were observed only when responding to low-probability targets. These differences were largest for the P3 component, and most pronounced in the case of phonemes. The gap between ERP latencies in the visual versus the auditory modalities for each component was larger among dyslexic as compared to normal readers, and was particularly evident at the linguistic level. A hypothesis is proposed that suggests an amodal, basic SOP deficit among dyslexic readers. The slower cross-modal SOP is attributed to slower information processing in general and to a disproportionate "asynchrony" between SOP in the visual versus the auditory system. It is suggested that excessive asynchrony in the SOP of the two systems may be one of the underlying causes of dyslexics' impaired reading skills.

14.
Human visual reaction times were fractionated into component latencies measuring visual reception time, opto-motor integration time, central motor outflow time, and peripheral motor time on the basis of evoked cortical activity recorded from the intact scalp and the occurrence of the response electromyogram. Normative data are presented for a right-foot dorsiflexion task studied on 18 male subjects, together with an analysis of inter- and intra-subject variability in response timing. Faster reactors were found to display briefer opto-motor integration times and motor times, while an individual's faster responses were characterized by shorter motor outflow times and motor times. These results are interpreted in terms of varying physiological mechanisms.

15.
Visual dominance in the pigeon
In Experiment 1, three pigeons were trained to obtain grain by depressing one foot treadle in the presence of a 746-Hertz tone stimulus and by depressing a second foot treadle in the presence of a red light stimulus. Intertrial stimuli included white light and the absence of tone. The latencies to respond on auditory element trials were as fast as, or faster than, those on visual element trials, but pigeons always responded on the visual treadle when presented with a compound stimulus composed of the auditory and visual elements. In Experiment 2, pigeons were trained on the auditory-visual discrimination task using as trial stimuli increases in the intensity of auditory or visual intertrial stimuli. Again, pigeons showed visual dominance on subsequent compound stimulus test trials. In Experiment 3, on compound test trials, the onset of the visual stimulus was delayed relative to the onset of the auditory stimulus. Visual treadle responses generally occurred with delay intervals of less than 500 milliseconds, and auditory treadle responses generally occurred with delay intervals of greater than 500 milliseconds. The results are discussed in terms of Posner, Nissen, and Klein's (1976) theory of visual dominance in humans.

16.
This study investigated whether "asynchrony" in speed of processing (SOP) between the visual-orthographic and auditory-phonological modalities contributes to word recognition deficits among adult dyslexics. Male university students with a history of diagnosed dyslexia were compared to age-matched normal readers on a variety of experimental measures while event-related potentials and reaction time data were collected. Measures were designed to evaluate auditory and visual processing for non-linguistic (tones and shapes) and linguistic (phonemes and graphemes) low-level stimuli as well as higher-level orthographic and phonological processing (in a lexical decision task). Data indicated that adult dyslexic readers had significantly slower reaction times and longer P300 latencies than control readers in most of the experimental tasks and delayed P200 latencies for the lexical decision task. Moreover, adult dyslexics revealed a systematic SOP gap in P300 latency between the auditory/phonological and visual/orthographic processing measures. Our data support and extend previous work that found SOP asynchrony to be an underlying factor of childhood dyslexia. The present data suggest, however, that among adult dyslexics the between-modalities asynchrony occurs at later processing stages than in children.

17.
The present study investigates the effects of word category (nouns versus verbs) and their subcategories on naming latencies in German, with a focus on the influence of lexical parameters on naming performance. The experimental material met linguistic construction criteria and was carefully matched for age of spontaneous production, frequency, and name agreement. Additional lexical parameters (objective age-of-acquisition, word length, visual complexity, imageability) were obtained. The results demonstrated a clear effect of word category on naming latencies. This effect was supported by two different observations. First, there was evidence for category and subcategory effects in naming: nouns were named faster than verbs, and intransitive verbs were named faster than transitive verbs. Second, while objective age-of-acquisition (naming age) turned out to be an important predictor of reaction times for both word categories, naming latencies for nouns and verbs were affected differentially by other lexical parameters. The results are discussed with respect to current controversies on the noun-verb asymmetry.

18.
Manual reaction times to visual, auditory, and tactile stimuli presented simultaneously, or with a delay, were measured to test for multisensory interaction effects in a simple detection task with redundant signals. Responses to trimodal stimulus combinations were faster than those to bimodal combinations, which in turn were faster than reactions to unimodal stimuli. Response enhancement increased with decreasing auditory and tactile stimulus intensity and was a U-shaped function of stimulus onset asynchrony. Distribution inequality tests indicated that the multisensory interaction effects were larger than predicted by separate activation models, including the difference between bimodal and trimodal response facilitation. The results are discussed with respect to previous findings in a focused attention task and are compared with multisensory integration rules observed in bimodal and trimodal superior colliculus neurons in the cat and monkey.

19.
Semantic and perceptual size decision times for pictorial and verbal material were analyzed in the context of a unitary memory model and several dual memory models. Experiment 1 involved a same-different categorical judgment task. The results showed that picture-picture response latencies were 185 msec faster than the corresponding word-word latencies, and word-picture and picture-word latencies equaled the mean of these two extremes. Similarity of subcategory for “same” judgments led to faster decision latency for all presentation conditions. Additionally, a linear relationship was found between picture-picture and word-word latencies for individual item pairs. Experiment 2 involved a comparison of pictures and words across a categorical judgment task and a size judgment task. Pictures produced faster decision latencies in both tasks, and the latency difference between pictures and words was comparable across tasks. These data fit the predictions of a unitary memory model. Several variants of a dual memory model are rejected, and those which fit the data require assumptions about storage and/or transfer time values which result in a functional regression to the unitary memory model.

20.
The reaction times (RTs) of 12 subjects were recorded in a design where a visual or auditory warning signal preceded an auditory RT signal by one of four short foreperiods (500, 750, 1000, or 1250 ms long), which occurred in a random sequence. For the 16 trials at each foreperiod, with each modality of warning signal, the average of the 2-s-long EEG samples following the warning signal was computed so that the record showed the scalp-recorded (vertex to left mastoid) evoked potentials (EPs) to both warning and RT signals, and also the contingent negative variation or expectancy wave occurring during the foreperiod.

Differences between RTs with different foreperiods were not reflected in negatively correlated differences in the amplitude of the RT signal EPs, taking the major positive-going deflection between peaks N1 and P2 at mean latencies of 126 and 231 msec after the RT signal. Furthermore, RT signal EPs preceded by a warning signal were highly attenuated in amplitude relative to control EPs which were not preceded by a warning signal, whether or not an RT response was required. This was despite the fact that alerted RTs were slightly faster than non-alerted RTs, so these findings contradict previous findings associating augmented EPs with responding versus not responding and with speeded RTs.

However, it was also found that RT signal EP amplitudes were greater with the more effective modality of warning signal than with the less effective one, which was consistent with previous findings. The divergence from previous findings when comparing EPs preceded by a warning with those having no prior warning is tentatively accounted for in terms of persisting physiological refractoriness following the warning signal EP.
