Similar documents
20 similar documents found (search time: 0 ms)
1.
Previous studies have found little if any correlation between dichotic and tachistoscopic language laterality task performance asymmetries. Problems with these studies are that quite dissimilar auditory and visual tasks were often used, and that the reliability of the asymmetry measures was generally unknown or, when known, relatively poor. We assessed the cross-modal correlation for two tasks, the Bilateral Object Naming Latency Task (BONLT) and the Dichotic Object Naming Latency Task (DONLT). These tasks are highly similar and have demonstrated high reliabilities. A significant, though rather small, cross-modal correlation was found (r = +.28). When cross-modal correlations were computed separately for FS- and FS+ subjects, no correlation was found for FS+ subjects (r = +.02), but the correlation for FS- subjects was highly significant (r = +.54, p < .004). This led us to reexamine previously collected data (P. L. Van Eys and W. F. McKeever, 1988, Brain and Cognition, 4, 413-429) from a study that had administered two highly reliable language laterality tasks (the BONLT and the Dichotic Consonant Vowel Task) but had not assessed their cross-modal correlation. Again, a significant cross-modal correlation was found for FS- but not for FS+ subjects. The results are consistent with the hypothesis of H. Hecaen, M. De Agnostini, and A. Monzon-Montes (1981, Brain and Language, 12, 261-284) that one effect of FS+ is to induce greater heterogeneity in the localization of different language processes.
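The FS-/FS+ contrast above (r = +.54 vs. r = +.02) is the kind of comparison usually made with Fisher's r-to-z transformation. A minimal sketch of that computation follows; the group sizes are hypothetical, since the abstract does not report per-group n.

```python
import math

def fisher_z(r):
    """Fisher r-to-z transform, variance-stabilizing for Pearson correlations."""
    return math.atanh(r)

def compare_correlations(r1, n1, r2, n2):
    """Two-sample z test for the difference between two independent correlations."""
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (fisher_z(r1) - fisher_z(r2)) / se
    # two-tailed p from the standard normal CDF
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# hypothetical group sizes of 30 each; illustrative only
z, p = compare_correlations(0.54, 30, 0.02, 30)
```

With these made-up sample sizes the difference between the two correlations would itself be significant; the real conclusion of course depends on the study's actual group sizes.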

2.
Responses are typically faster and more accurate when both auditory and visual modalities are stimulated than when only one is. This bimodal advantage is generally attributed to a speeding of responding on bimodal trials relative to unimodal trials, but it could in principle instead reflect a performance decrement on unimodal trials. To investigate this, two levels of auditory and visual signal intensity were combined in a double-factorial paradigm. Responses to the onset of the imperative signal were measured under go/no-go conditions. Mean reaction times to the four types of bimodal stimuli exhibited a superadditive interaction, which is evidence for parallel self-terminating processing of the two signal components. Violations of the race model inequality also occurred, and measures of processing capacity showed that efficiency was greater on bimodal than on unimodal trials. These data are discussed in terms of a possible underlying neural substrate.
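The race model inequality mentioned here (due to Miller) states that under a race between independent unimodal channels, the bimodal RT distribution must satisfy F_AV(t) <= F_A(t) + F_V(t) at every time t; observed violations argue for coactivation rather than a race. A minimal sketch of how such violations can be checked, using invented reaction times rather than the study's data:

```python
import bisect

def ecdf(sample, t):
    """Empirical CDF: proportion of observations <= t."""
    s = sorted(sample)
    return bisect.bisect_right(s, t) / len(s)

def race_model_violations(rt_av, rt_a, rt_v, times):
    """Return the time points at which Miller's race model inequality
    F_AV(t) <= F_A(t) + F_V(t) is violated."""
    return [t for t in times
            if ecdf(rt_av, t) > ecdf(rt_a, t) + ecdf(rt_v, t)]

# toy reaction times in ms (illustrative only)
rt_a  = [300, 320, 340, 360, 380]
rt_v  = [310, 330, 350, 370, 390]
rt_av = [240, 250, 260, 300, 340]
violations = race_model_violations(rt_av, rt_a, rt_v, [250, 300, 350])
```

Because the bound sums two CDFs, it is only informative at fast times, where F_A(t) + F_V(t) is still below 1; that is where the fastest bimodal responses can falsify the race account.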

3.
Cross-modal effects on visual and auditory object perception

4.
Three priming experiments were conducted to determine how information about the self from different sensory modalities/cognitive domains affects self-face recognition. Exposure to your own body odor, seeing your own name, and hearing your own name all facilitated self-face recognition in a reaction time task. No comparable cross-modal facilitation was found for stimuli from familiar or novel individuals. The finding of a left-hand advantage for self-face recognition was replicated when no primes were presented. These data, along with other recent results, suggest that the brain processes and represents information about the self in highly integrated ways.

5.
Three experiments examined sequential effects in choice reaction time tasks. On each trial, a right/left positional judgment was made to either a pure tone or a luminance increment in a visual array of box elements. In the first two experiments, a preparatory signal was presented before each imperative signal to indicate the relevant stimulus modality. At a short stimulus onset asynchrony (SOA) between the preparatory and imperative signals (60 msec), subjects were quicker to repeat the same response than to change it when presented with successive tones, although no such repetition effect occurred on visual target trials. Subjects were impaired whenever the stimulus modality changed across successive trials, regardless of the modality of the target. At a longer SOA (500 msec), these sequential effects were abolished; subjects were presumably able to prepare for the relevant modality on the basis of the preparatory signal. When the preparatory signals were omitted in a final experiment, the modality-switching costs were still evident, but inhibition of return now occurred on both the auditory and the visual target trials: subjects were impaired in responding when the target reappeared at its immediately previous location. The repetition effect and modality-switching effects therefore dissociate. The data reveal clear differences between orienting attention to a particular spatial locale and focusing attention on a particular sensory modality.
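The modality-switching cost reported above is, operationally, just the mean RT difference between trials where the target modality switched and trials where it repeated. A minimal sketch of that computation, with invented trial data rather than the study's:

```python
from statistics import mean

def switch_cost(trials):
    """Mean RT cost (ms) of a modality switch relative to a modality repeat.
    `trials` is a list of (modality, rt_ms) tuples in presentation order."""
    repeat, switch = [], []
    for (prev_mod, _), (mod, rt) in zip(trials, trials[1:]):
        # classify each trial (from the second onward) by its predecessor
        (repeat if mod == prev_mod else switch).append(rt)
    return mean(switch) - mean(repeat)

# toy sequence: 'A' = auditory target, 'V' = visual target (illustrative data)
trials = [('A', 400), ('A', 380), ('V', 460), ('V', 420), ('A', 470)]
cost = switch_cost(trials)  # positive value = switching modalities is costly
```

Note that the first trial of a block contributes no observation, since it has no predecessor to repeat or switch from.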

6.
The present study describes a possible method by which potentially meaningless responses to questionnaires can be easily identified. Given an inadvertent mistake in the design of a questionnaire packet, we found that 10% of respondents provided invalid responses to items.

7.
8.
9.
Effects of sleep deprivation on auditory and visual memory tasks
Probe recognition tasks have shown effects of sleep deprivation following a full night of sleep loss. The current study investigated shorter durations of deprivation by testing 11 subjects for accuracy and response time every 2 hr from 10 p.m. through 8 a.m. We replicated Elkin and Murray's auditory single-probe recognition task using number triplets and added two visual tasks with number and shape triplets. Each series of six stimuli was followed by a probe, presented after 2.5 sec (short delay) or 20 sec (long delay). Accuracy showed a significant decrease at the long delay beginning after 4 a.m. for the two visual tasks. Response times were significantly slower for the visual shapes task at the short delay. Visual tasks, especially shapes, may be more prone to disruption by sleep deprivation, given the visual information load and the briefness of iconic memory.

10.
Research has shown age-related declines in the cognitive ability to inhibit irrelevant information. Thirty-six younger adults (mean age = 22 years) and 36 older adults (mean age = 74 years) performed two versions of an emotional Stroop task. In one, they made lexical decisions to emotion words spoken in one of several tones of voice; latencies were longer for test words spoken in an incongruent tone of voice, but only for older adults. In the other, words were displayed on a computer screen in a colored font and participants named the font color as quickly as possible; latencies were longer for test words high in arousal, but only for older adults. Results are discussed in terms of inhibitory cognitive processes, attention, and theories of emotional development.

11.
12.
Similarities have been observed in the localization of the final position of moving visual and moving auditory stimuli: in both modalities, perceived endpoints are judged to lie farther along the direction of motion, likely reflecting extrapolation of the trajectory mediated by predictive mechanisms at higher cognitive levels. However, direct comparisons of the magnitude of this displacement between visual and auditory tasks using the same experimental setup are rare. The purpose of the present free-field study was therefore to investigate the influence of the spatial location of motion offset, stimulus velocity, and motion direction on the localization of the final positions of moving auditory stimuli (Experiments 1 and 2) and moving visual stimuli (Experiment 3). To assess whether auditory performance is affected by the dynamically changing binaural cues used for localizing moving auditory stimuli (interaural time differences for low-frequency sounds and interaural intensity differences for high-frequency sounds), two distinct noise bands were employed in Experiments 1 and 2. In all three experiments, less precise encoding of spatial coordinates in paralateral space resulted in larger forward displacements, but this effect was swamped by the underestimation of target eccentricity in the extreme periphery. Furthermore, our results revealed clear differences between the visual and auditory tasks: displacements in the visual task depended on velocity and the spatial location of the final position, whereas the auditory tasks showed an additional influence of motion direction. Together, these findings indicate that modality-specific processing of motion parameters affects the extrapolation of the trajectory.

13.
14.
Prominent roles for general attention resources are posited in many models of working memory, but how these resources can be allocated differs between models or is not sufficiently specified. We varied the payoffs for correct responses in two temporally overlapping recognition tasks, a visual array comparison task and a tone sequence comparison task. In the critical conditions, an increase in reward for one task corresponded to a decrease in reward for the concurrent task, while memory load remained constant. Our results show patterns of interference consistent with a trade-off between the tasks, suggesting that a shared resource can be flexibly divided rather than only fully allotted to one task or the other. Our findings support a role for a domain-general resource in models of working memory and further suggest that this resource is flexibly divisible.

15.
16.
Carlyon RP, Plack CJ, Fantini DA, Cusack R (2003). Perception, 32(11), 1393-1402
Carlyon et al (2001, Journal of Experimental Psychology: Human Perception and Performance, 27, 115-127) reported that the buildup of auditory streaming is reduced when attention is diverted to a competing auditory stimulus. Here, we demonstrate that a reduction in streaming can also be obtained by attention to a visual task or by the requirement to count backwards in threes. In all conditions participants heard a 13 s sequence of tones and, during the first 10 s, saw a sequence of visual stimuli containing three, four, or five targets. The tone sequence consisted of twenty repeating triplets in an ABA - ABA ... order, where A and B represent tones of two different frequencies. In each sequence, three, four, or five tones were amplitude modulated. During the first 10 s of the sequence, participants either counted the number of visual targets, counted the number of (modulated) auditory targets, or counted backwards in threes from a specified number. They then made an auditory-streaming judgment about the last 3 s of the tone sequence: whether one or two streams were heard. The results showed more streaming when participants counted the auditory targets (and hence attended to the tones throughout) than in either the visual or the counting-backwards condition.

17.
The tripartite model of working memory (WM) postulates that a dedicated subsystem, the visuo-spatial sketchpad (VSSP), processes non-verbal content. On the basis of behavioral and neurophysiological findings, the VSSP was later subdivided into visual object and visual spatial processing, the former representing an object's appearance and the latter spatial information. This distinction is well supported. A challenge to the model, however, is how spatial information from non-visual sensory modalities, for example audition, is processed. Only a few studies so far have directly compared visual and auditory spatial WM. They suggest that the distinction between two processing domains, one for object and one for spatial information, also holds for auditory WM, but that only some of the processes are modality specific. We propose that processing in the object domain (an item's appearance) is modality specific, whereas spatial WM, as well as object-location binding, relies on modality-general processes.

18.
Observers were required to detect double jumps of a diffuse light spot jumping in a circular pattern and more intense noise pulses in a pulse train. Seven groups performed at different combinations of stimulus frequency and signal frequency, including higher signal-frequency/stimulus-frequency ratios and lower stimulus frequencies. Stimulus frequency was a more potent determiner of performance than signal frequency, and performance was not invariant within a given signal-frequency/stimulus-frequency ratio. Correlations among the dependent measures were also examined. Results are discussed with reference to various theories of vigilance behavior.

19.
Investigation of the effect of a word recognition task on concurrent nonverbal tasks showed (a) that auditory verbal messages affected visual tracking performance but not the detection of brief light flashes in the visual periphery, and (b) that impairment of both tracking and light detection was greater when the verbal messages were visual rather than auditory. With a kinaesthetic tracking task, errors increased significantly during auditory messages and were even greater during visual messages. There was no interaction between the modality of tracking-error feedback (auditory or visual) and the modality of the verbal message, nor was the decrement from visual messages reduced by changing the presentation format. It is suggested that the different temporal characteristics of visual and auditory information affect the attentional demands of verbal messages.

20.
The cross-modal correlations between auditory and visual language lateralization were examined as a function of the subject variables of handedness, sex, familial sinistrality, and handwriting posture. Left-handers showed a significantly greater correlation between visual and auditory language-processing asymmetries than right-handers, contradicting previous reports.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号