Similar Documents
1.
This event-related potential study investigated (i) to what extent incongruence between attention-directing cue and cued target modality affects attentional control processes that bias the system in advance to favor a particular stimulus modality and (ii) to what extent top-down attentional control mechanisms are generalized for the type of information that is to be attended. To this end, both visual and auditory word cues were used to instruct participants to direct attention to a specific visual (color) or auditory (pitch) stimulus feature of a forthcoming multisensory target stimulus. Effects of cue congruency were observed within 200 ms post-cue over frontal scalp regions and related to processes involved in shifting attention from the cue modality to the modality of the task-relevant target feature. Both directing visual attention and directing auditory attention were associated with dorsal posterior positivity, followed by sustained fronto-central negativity. However, this fronto-central negativity appeared to have an earlier onset and was more pronounced when the visual modality was cued. Together the present results suggest that the mechanisms involved in deploying attention are to some extent determined by the modality (visual, auditory) in which attention operates, and in addition, that some of these mechanisms can also be affected by cue congruency.

2.
In the present study we investigated cross-modal orienting in vision and hearing by using a cueing task with four different horizontal locations. Our main interest concerned cue–target distance effects, which might further our insight into the characteristics of cross-modal spatial attention mechanisms. A very consistent pattern was observed for both the unimodal (cue and target were both visual or auditory) and the cross-modal conditions (cue and target from different modalities). RTs for valid trials were faster than for invalid trials, and, most interestingly, there was a distance effect: RTs increased with greater cue–target distance. This applied to detection of visual targets and to localisation of both visual and auditory targets. The time interval between cue and target was also varied. Surprisingly, there was no indication of inhibition of return even with the longest cue–target intervals. In order to assess the role of endogenous (strategic) factors in exogenous spatial attention, we increased the cue validity from 25% to 80% in two additional experiments. This appeared to have little influence on the cueing pattern in either the detection or the localisation task. Currently, it is assumed that spatial attention is organised in multiple strongly linked modality-specific systems. The foregoing results are discussed with respect to this supposed organisation.

3.
The second of two targets is often missed when presented shortly after the first target, a phenomenon referred to as the attentional blink (AB). Whereas the AB is a robust phenomenon within sensory modalities, the evidence for cross-modal ABs is rather mixed. Here, we test the possibility that the absence of an auditory-visual AB for visual letter recognition when streams of tones are used is due to the efficient use of echoic memory, allowing for the postponement of auditory processing. However, forcing participants to immediately process the auditory target, either by presenting interfering sounds during retrieval or by making the first target directly relevant for a speeded response to the second target, did not result in a return of a cross-modal AB. The findings argue against echoic memory as an explanation for efficient cross-modal processing. Instead, we hypothesized that a cross-modal AB may be observed when the different modalities use common representations, such as semantic representations. In support of this, a deficit for visual letter recognition returned when the auditory task required a distinction between spoken digits and letters.

4.
Stimuli presented with targets in a detection task are later recognised more accurately than those presented with distractors, an unusual effect labelled the attentional boost effect (ABE). This effect may reflect an enhancement triggered by target detection, an inhibition triggered by distractor rejection, or some combination of both. To test these possibilities, the present study adopted a baseline similar to that of Swallow and Jiang (2014b; Attention, Perception, & Psychophysics, 76(5), 1298–1307); the goal was to separate target-induced enhancements from distractor-induced inhibition. An R/K procedure was applied to further explore the kind of memory that might be affected by target detection or distractor rejection. The results show that the memory advantage for target-paired words was robust relative to that of baseline words; this advantage was mainly observed in R responses. More importantly, a memory reduction was also observed for distractor-paired words relative to baseline words, though this reduction was only observed in R responses. These data led us to conclude that the ABE was triggered by both processes: target-induced enhancement and distractor-induced inhibition. Moreover, both processes were more likely to affect recollection-based recognition.

5.
Whether information perceived without awareness can affect overt performance, and whether such effects can cross sensory modalities, remains a matter of debate. Whereas influence of unconscious visual information on auditory perception has been documented, the reverse influence has not been reported. In addition, previous reports of unconscious cross-modal priming relied on procedures in which contamination of conscious processes could not be ruled out. We present the first report of unconscious cross-modal priming when the unaware prime is auditory and the test stimulus is visual. We used the process-dissociation procedure [Debner, J. A., & Jacoby, L. L. (1994). Unconscious perception: Attention, awareness and control. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 304-317] which allowed us to assess the separate contributions of conscious and unconscious perception of a degraded prime (either seen or heard) to performance on a visual fragment-completion task. Unconscious cross-modal priming (auditory prime, visual fragment) was significant and of a magnitude similar to that of unconscious within-modality priming (visual prime, visual fragment). We conclude that cross-modal integration, at least between visual and auditory information, is more symmetrical than previously shown, and does not require conscious mediation.
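The process-dissociation estimates that this method yields follow a standard logic. As a rough sketch (the parameterization below follows the usual Jacoby-style equations and is an assumption, not a quotation from the paper), the conscious (C) and unconscious (U) contributions to fragment completion are recovered from inclusion and exclusion conditions:

    P(\text{studied completion} \mid \text{inclusion}) = C + U(1 - C)
    P(\text{studied completion} \mid \text{exclusion}) = U(1 - C)
    \Rightarrow\ C = P_{\text{inclusion}} - P_{\text{exclusion}}, \qquad U = \frac{P_{\text{exclusion}}}{1 - C}

On this logic, a reliably above-zero U estimate for auditory primes paired with visual fragments is what supports the claim of unconscious cross-modal priming.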

6.
Spatial attention and audiovisual interactions in apparent motion
In this study, the authors combined the cross-modal dynamic capture task (involving the horizontal apparent movement of visual and auditory stimuli) with spatial cuing in the vertical dimension to investigate the role of spatial attention in cross-modal interactions during motion perception. Spatial attention was manipulated endogenously, either by means of a blocked design or by predictive peripheral cues, and exogenously by means of nonpredictive peripheral cues. The results of 3 experiments demonstrate a reduction in the magnitude of the cross-modal dynamic capture effect on cued trials compared with uncued trials. The introduction of neutral cues (Experiments 4 and 5) confirmed the existence of both attentional costs and benefits. This attention-related reduction in cross-modal dynamic capture was larger when a peripheral cue was used compared with when attention was oriented in a purely endogenous manner. In sum, the results suggest that spatial attention reduces illusory binding by facilitating the segregation of unimodal signals, thereby modulating audiovisual interactions in information processing. Thus, the effect of spatial attention occurs prior to or at the same time as cross-modal interactions involving motion information.

7.
Two experiments were conducted to examine whether abrupt onsets are capable of reflexively capturing attention when they occur outside the current focus of spatial attention, as would be expected if exogenous orienting operates in a truly automatic fashion. The authors established a highly focused attentional state by means of the central presentation of a stream of visual or auditory characters, which participants sometimes had to monitor. No intramodal reflexive cuing effects were observed in either audition or vision when participants performed either an exogenous visual or auditory orthogonal cuing task together with the central focused attention task. These results suggest that reflexive unimodal orienting is not truly automatic. The fact that cuing effects were eliminated under both unimodal and cross-modal conditions is consistent with the view that auditory and visual reflexive spatial orienting are controlled by a common underlying neural substrate.

8.
To investigate the effect of semantic congruity on audiovisual target responses, participants detected a semantic concept that was embedded in a series of rapidly presented stimuli. The target concept appeared as a picture, an environmental sound, or both, and in bimodal trials, the audiovisual events were either consistent or inconsistent in their representation of a semantic concept. The results showed faster detection latencies to bimodal than to unimodal targets and a higher rate of missed targets when visual distractors were presented together with auditory targets, in comparison to auditory targets presented alone. The findings of Experiment 2 showed a cross-modal asymmetry, such that visual distractors were found to interfere with the accuracy of auditory target detection, but auditory distractors had no effect on either the speed or the accuracy of visual target detection. The biased-competition theory of attention (Desimone & Duncan, 1995, Annual Review of Neuroscience, 18; Duncan, Humphreys, & Ward, 1997, Current Opinion in Neurobiology, 7, 255–261) was used to explain the findings because, when the saliency of the visual stimuli was reduced by the addition of a noise filter in Experiment 4, visual interference with auditory target detection was diminished. Additionally, the results showed faster and more accurate target detection when semantic concepts were represented in a visual rather than an auditory format.

9.
This paper presents three studies which examine the susceptibility of sentence comprehension to intrusion by extra-sentential probe words in two on-line dual-task techniques commonly used to study sentence processing: the cross-modal lexical priming paradigm and the unimodal all-visual lexical priming paradigm. It provides both a general review and a direct empirical examination of the effects of task demands in the on-line study of sentence comprehension. In all three studies, sentential materials were presented to participants together with a target probe word which constituted either a better or a worse continuation of the sentence at the point at which it was presented. Materials were identical for all three studies. The manner of presentation of the sentence materials was, however, manipulated; presentation was either visual, auditory (normal rate) or auditory (slow rate). The results demonstrate that a technique in which a visual target probe interrupts ongoing sentence processing (such as occurs in unimodal visual presentation and in very slow auditory sentence presentation) encourages the integration of the probe word into the on-going sentence. Thus, when using such ‘sentence interrupting’ techniques, additional care to equate probes is necessary. Importantly, however, the results provide strong evidence that standard fluent cross-modality sentence investigation methods are immune to such external probe-word intrusions into ongoing sentence processing and thus accurately reflect underlying comprehension processes.

10.
Older adults have difficulty when executive control must be brought on line to coordinate ongoing behavior. To assess age-related alterations in executive processing, task-switching performance and event-related potential (ERP) activity were compared in young and older adults on switch, post-switch, pre-switch, and no-switch trials, ordered in demand for executive processes from greatest to least. In stimulus-locked averages for young adults, only switch trials elicited fronto-central P3 components, indicative of task-set attentional reallocation, whereas in older adults, three of the four trial types evinced frontal potentials. In response-locked averages, the amplitude of a medial frontal negativity (MFN), a component reflecting conflict monitoring and detection, increased as a function of executive demands in the ERPs of the young but not those of the older adults. These data suggest altered executive processing in older adults resulting in persistent recruitment of prefrontal processes for conditions that do not require them in the young.

11.
Little is known about the time course of processes supporting episodic cued recall. To examine these processes, we recorded event-related scalp electrical potentials during episodic cued recall following paired-associate learning of unimodal object-picture pairs and crossmodal object-picture and sound pairs. Successful cued recall of unimodal associates was characterized by markedly early scalp potential differences over frontal areas, while cued recall of both unimodal and crossmodal associates was reflected by subsequent differences recorded over frontal and parietal areas. Notably, unimodal cued recall success divergences over frontal areas were apparent in a time window generally assumed to reflect the operation of familiarity but not recollection processes, raising the possibility that retrieval success effects in that temporal window may reflect additional mnemonic processes beyond familiarity. Furthermore, parietal scalp potential recall success differences, which did not distinguish between crossmodal and unimodal tasks, seemingly support attentional or buffer accounts of posterior parietal mnemonic function but appear to constrain signal accumulation, expectation, or representational accounts.

13.
In this study, we examined whether integration of visual and auditory information about emotions requires limited attentional resources. Subjects judged whether a voice expressed happiness or fear, while trying to ignore a concurrently presented static facial expression. As an additional task, the subjects had to add two numbers together rapidly (Experiment 1), count the occurrences of a target digit in a rapid serial visual presentation (Experiment 2), or judge the pitch of a tone as high or low (Experiment 3). The visible face had an impact on judgments of the emotion of the heard voice in all the experiments. This cross-modal effect was independent of whether or not the subjects performed a demanding additional task. This suggests that integration of visual and auditory information about emotions may be a mandatory process, unconstrained by attentional resources.

14.
The ability to process simultaneously presented auditory and visual information is a necessary component underlying many cognitive tasks. While this ability is often taken for granted, there is evidence that under many conditions auditory input attenuates processing of corresponding visual input. The current study investigated infants' processing of visual input under unimodal and cross-modal conditions. Results of the three reported experiments indicate that different auditory input had different effects on infants' processing of visual information. In particular, unfamiliar auditory input slowed down visual processing, whereas more familiar auditory input did not. These results elucidate mechanisms underlying auditory overshadowing in the course of cross-modal processing and have implications for a variety of cognitive tasks that depend on cross-modal processing.

15.
Subjects judged the elevation (up vs. down, regardless of laterality) of peripheral auditory or visual targets, following uninformative cues on either side with an intermediate elevation. Judgments were better for targets in either modality when preceded by an uninformative auditory cue on the side of the target. Experiment 2 ruled out nonattentional accounts for these spatial cuing effects. Experiment 3 found that visual cues affected elevation judgments for visual but not auditory targets. Experiment 4 confirmed that the effect on visual targets was attentional. In Experiment 5, visual cues produced spatial cuing when targets were always auditory, but saccades toward the cue may have been responsible. No such visual-to-auditory cuing effects were found in Experiment 6 when saccades were prevented, though they were present when eye movements were not monitored. These results suggest a one-way cross-modal dependence in exogenous covert orienting whereby audition influences vision, but not vice versa. Possible reasons for this asymmetry are discussed in terms of the representation of space within the brain.

16.
One theory of visual awareness proposes that electrophysiological activity related to awareness occurs in primary visual areas approximately 200 ms after stimulus onset (visual awareness negativity: VAN) and in fronto-parietal areas about 300 ms after stimulus onset (late positivity: LP). Although similar processes might be involved in auditory awareness, only sparse evidence exists for this idea. In the present study, we recorded electrophysiological activity while subjects listened to tones that were presented at their own awareness threshold. The difference in electrophysiological activity elicited by tones that subjects reported being aware of versus unaware of showed an early negativity about 200 ms and a late positivity about 300 ms after stimulus onset. These results closely match those found in vision and provide convincing evidence for an early negativity (auditory awareness negativity: AAN), as well as an LP. These findings suggest that theories of visual awareness are also applicable to auditory awareness.

17.
Overriding auditory attentional capture
Attentional capture by color singletons during shape search can be eliminated when the target is not a feature singleton (Bacon & Egeth, 1994). This suggests that a "singleton detection" search strategy must be adopted for attentional capture to occur. Here we find similar effects on auditory attentional capture. Irrelevant high-intensity singletons interfered with an auditory search task when the target itself was also a feature singleton. However, singleton interference was eliminated when the target was not a singleton (i.e., when nontargets were made heterogeneous, or when more than one target sound was presented). These results suggest that auditory attentional capture depends on the observer's attentional set, as does visual attentional capture. The suggestion that hearing might act as an early warning system that would always be tuned to unexpected unique stimuli must therefore be modified to accommodate these strategy-dependent capture effects.

18.
The negativity bias is the perceptual phenomenon whereby an emotionally negative stimulus is processed faster than a positive or neutral stimulus. We used the attentional blink paradigm to investigate whether attentional resources are required to obtain the negativity bias. Positive, negative or neutral words were used as a preceding target (T1) and/or a subsequent target (T2). Experiment 1 showed that the negativity bias occurred: the attentional blink was reduced by a negative T2, but not by a positive or neutral T2. Experiment 2 indicated that a negative T1 grabbed attentional resources, interfering with the identification of a neutral T2. Experiment 3 demonstrated that the report of a negative T2 deteriorated when T1 was also negative. We conclude that attentional resources were required for the occurrence of the negativity bias.

19.
Multisensory neurons in the deep superior colliculus (SC) show response enhancement to cross-modal stimuli that coincide in time and space. However, multisensory SC neurons respond to unimodal input as well. It is thus legitimate to ask why not all deep SC neurons are multisensory or, at least, develop multisensory behavior during an organism's maturation. The novel answer given here derives from a signal detection theory perspective. A Bayes' ratio model of multisensory enhancement is suggested. It holds that deep SC neurons operate under the Bayes' ratio rule, which guarantees optimal performance; that is, it maximizes the probability of target detection while minimizing the false alarm rate. It is shown that optimal performance of multisensory neurons vis-à-vis cross-modal stimuli implies, at the same time, that modality-specific neurons will outperform multisensory neurons in processing unimodal targets. Thus, only the existence of both multisensory and modality-specific neurons allows optimal performance when targets of one or several modalities may occur.
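As a sketch of the kind of decision rule the abstract appeals to (the notation is an assumption here; the paper's specific likelihoods and priors are not reproduced), the Bayes' ratio rule compares the likelihood of the sensory evidence x under the target and no-target hypotheses and reports a target whenever the ratio exceeds the criterion:

    \Lambda(x) = \frac{P(x \mid \text{target})}{P(x \mid \text{no target})} > \beta = \frac{P(\text{no target})}{P(\text{target})}

If the auditory and visual channels are assumed conditionally independent given the stimulus, the cross-modal ratio factorizes as \Lambda(x_A, x_V) = \Lambda_A(x_A)\,\Lambda_V(x_V), which offers one intuition for why a neuron implementing this rule benefits from spatiotemporally coincident bimodal input, while a modality-specific ratio remains the less noisy statistic for unimodal targets.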

20.
Recently, performance magic has become a source of insight into the processes underlying awareness. Magicians have highlighted a set of variables that can create moments of visual attentional suppression, which they call "off-beats." One of these variables is akin to the phenomenon psychologists know as attentional entrainment. The current experiments, inspired by performance magic, explore the extent to which entrainment can occur across sensory modalities. Across two experiments using a difficult dot probe detection task, we find that the mere presence of an auditory rhythm can bias when visual attention is deployed, speeding responses to stimuli appearing in phase with the rhythm. However, the extent of this cross-modal influence is moderated by factors such as the speed of the entrainers and whether their frequency is increasing or decreasing. In Experiment 1, entrainment occurred for rhythms presented at 0.67 Hz, but not at 1.5 Hz. In Experiment 2, entrainment only occurred for rhythms that were slowing from 1.5 Hz to 0.67 Hz, not for those speeding up. The results of these experiments challenge current models of temporal attention.
