Similar articles
20 similar articles found (search time: 78 ms)
1.
This event-related potential study investigated (i) to what extent incongruence between attention-directing cue and cued target modality affects attentional control processes that bias the system in advance to favor a particular stimulus modality and (ii) to what extent top-down attentional control mechanisms are generalized for the type of information that is to be attended. To this end, both visual and auditory word cues were used to instruct participants to direct attention to a specific visual (color) or auditory (pitch) stimulus feature of a forthcoming multisensory target stimulus. Effects of cue congruency were observed within 200 ms post-cue over frontal scalp regions and related to processes involved in shifting attention from the cue modality to the modality of the task-relevant target feature. Both directing visual attention and directing auditory attention were associated with dorsal posterior positivity, followed by sustained fronto-central negativity. However, this fronto-central negativity appeared to have an earlier onset and was more pronounced when the visual modality was cued. Together, the present results suggest that the mechanisms involved in deploying attention are to some extent determined by the modality (visual, auditory) in which attention operates, and in addition, that some of these mechanisms can also be affected by cue congruency.

2.
A perception of coherent motion can be obtained in an otherwise ambiguous or illusory visual display by directing one's attention to a feature and tracking it. We demonstrate an analogous auditory effect in two separate sets of experiments. The temporal dynamics associated with the attention-dependent auditory motion closely matched those previously reported for attention-based visual motion. Since attention-based motion mechanisms appear to exist in both modalities, we also tested for multimodal (audiovisual) attention-based motion, using stimuli composed of interleaved visual and auditory cues. Although subjects were able to track a trajectory using cues from both modalities, no one spontaneously perceived "multimodal motion" across both visual and auditory cues. Rather, they reported motion perception only within each modality, thereby revealing a spatiotemporal limit on putative cross-modal motion integration. Together, results from these experiments demonstrate the existence of attention-based motion in audition, extending current theories of attention-based mechanisms from visual to auditory systems.

3.
Selective attention requires the ability to focus on relevant information and to ignore irrelevant information. The ability to inhibit irrelevant information has been proposed to be the main source of age-related cognitive change (e.g., Hasher & Zacks, 1988). Although age-related distraction by irrelevant information has been extensively demonstrated in the visual modality, studies involving auditory and cross-modal paradigms have revealed a mixed pattern of results. A comparative evaluation of these paradigms according to sensory modality suggests a twofold trend: Age-related distraction is more likely (a) in unimodal than in cross-modal paradigms and (b) when irrelevant information is presented in the visual modality, rather than in the auditory modality. This distinct pattern of age-related changes in selective attention may be linked to the reliance of the visual and auditory modalities on different filtering mechanisms. Distractors presented through the auditory modality can be filtered at both central and peripheral neurocognitive levels. In contrast, distractors presented through the visual modality are primarily suppressed at more central levels of processing, which may be more vulnerable to aging. We propose the hypothesis that age-related distractibility is modality dependent, a notion that might need to be incorporated in current theories of cognitive aging. Ultimately, this might lead to a more accurate account for the mixed pattern of impaired and preserved selective attention found in advancing age.

4.
赵晨, 张侃, 杨华海. 《心理学报》 (Acta Psychologica Sinica), 2001, 34(3): 28-33
This study used the spatial-cueing paradigm to examine the relationship between endogenous and exogenous selective attention across the visual and auditory modalities. The results showed that (1) at longer SOAs (at least 500 ms), an auditory central cue can guide endogenous spatial selective attention, while an abrupt peripheral onset also automatically attracts part of the attentional resources; and (2) auditory and visual selective attention are separate processing channels, but the two are interconnected.

5.
Two experiments using event-related potentials (ERPs) examined the extent to which early traumatic experiences affect children's ability to regulate voluntary and involuntary attention to threat. The authors presented physically abused and nonabused comparison children with conflicting auditory and visual emotion cues, posed by children's mothers or a stranger, to examine the effect of emotion, modality, and poser familiarity on attention regulation. Relative to controls, abused children overattended to task-relevant visual and auditory anger cues. They also attended more to task-irrelevant auditory anger cues. Furthermore, the degree of attention allocated to threat statistically mediated the relationship between physical abuse and child-reported anxiety. These findings indicate that extreme emotional experiences may promote vulnerability for anxiety by influencing the development of attention regulation abilities.

6.
Spatially informative auditory and vibrotactile (cross-modal) cues can facilitate attention but little is known about how similar cues influence visual spatial working memory (WM) across the adult lifespan. We investigated the effects of cues (spatially informative or alerting pre-cues vs. no cues), cue modality (auditory vs. vibrotactile vs. visual), memory array size (four vs. six items), and maintenance delay (900 vs. 1800 ms) on visual spatial location WM recognition accuracy in younger adults (YA) and older adults (OA). We observed a significant interaction between spatially informative pre-cue type, array size, and delay. OA and YA benefitted equally from spatially informative pre-cues, suggesting that attentional orienting prior to WM encoding, regardless of cue modality, is preserved with age. Contrary to predictions, alerting pre-cues generally impaired performance in both age groups, suggesting that maintaining a vigilant state of arousal by facilitating the alerting attention system does not help visual spatial location WM.
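Recognition accuracy in change-detection designs like the one above is often summarized as a working-memory capacity estimate. A minimal sketch of Cowan's K, using hypothetical hit and false-alarm rates (illustrative values, not data from the study above):

```python
def cowan_k(hit_rate, false_alarm_rate, set_size):
    """Cowan's K estimate of visual working memory capacity from a
    change-detection / recognition task: K = N * (H - FA)."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical rates for array sizes of four and six items.
print(cowan_k(0.85, 0.15, 4))  # ~2.8 items
print(cowan_k(0.70, 0.20, 6))  # ~3.0 items
```

The formula assumes a whole-display recognition test; other test formats use slightly different corrections.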

7.
The goal of this study was to evaluate the claim that young children display preferences for auditory stimuli over visual stimuli. This study was motivated by concerns that the visual stimuli employed in prior studies were considerably more complex and less distinctive than the competing auditory stimuli, resulting in an illusory preference for auditory cues. Across three experiments, preschool-age children and adults were trained to use paired audio-visual cues to predict the location of a target. At test, the cues were switched so that auditory cues indicated one location and visual cues indicated the opposite location. In contrast to prior studies, preschool-age children did not exhibit auditory dominance. Instead, children and adults flexibly shifted their preferences as a function of the degree of contrast within each modality, with high contrast leading to greater use.

8.
Interactions Between Exogenous Auditory and Visual Spatial Attention
Six experiments were carried out to investigate the issue of cross-modality between exogenous auditory and visual spatial attention employing Posner's cueing paradigm in detection, localization, and discrimination tasks. Results indicated cueing in detection tasks with visual or auditory cues and visual targets but not with auditory targets (Experiment 1). In the localization tasks, cueing was found with both visual and auditory targets. Inhibition of return was apparent only in the within-modality conditions (Experiment 2). This suggests that it is important whether the attention system is activated directly (within a modality) or indirectly (between modalities). Increasing the cue validity from 50% to 80% influenced performance only in the localization task (Experiment 4). These findings are interpreted as indicative of modality-specific but interacting attention mechanisms. The results of Experiments 5 and 6 (up/down discrimination tasks) also show cross-modal cueing but not with visual cues and auditory targets. Furthermore, there was no inhibition of return in any condition. This suggests that some cueing effects might be task dependent.
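Cueing and inhibition-of-return effects of the kind reported above are conventionally quantified as the RT difference between invalid- and valid-cue trials. A minimal sketch with hypothetical trial data (not the data from these experiments):

```python
def cueing_effect(trials):
    """Mean RT on invalid-cue trials minus mean RT on valid-cue trials.

    Positive values indicate facilitation at the cued location; negative
    values at long cue-target intervals indicate inhibition of return.
    `trials` is a list of (cue_valid, rt_ms) pairs.
    """
    valid = [rt for ok, rt in trials if ok]
    invalid = [rt for ok, rt in trials if not ok]
    return sum(invalid) / len(invalid) - sum(valid) / len(valid)

# Hypothetical detection RTs (ms), illustrative values only.
short_soa = [(True, 310), (True, 320), (False, 345), (False, 355)]
long_soa = [(True, 360), (True, 370), (False, 340), (False, 350)]

print(cueing_effect(short_soa))  # 35.0  -> facilitation at the cued side
print(cueing_effect(long_soa))   # -20.0 -> inhibition of return
```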

9.
李毕琴, 张明. 《心理科学进展》 (Advances in Psychological Science), 2012, 20(11): 1749-1754
Two experiments examined attentional capture by singleton stimuli from different sensory modalities and different spatial locations in an audiovisual search paradigm. Experiment 1 used a localization task to examine the cross-modal effect, and its asymmetry, in stimulus-driven attentional capture. The results showed that when the target was auditory, only auditory singletons presented on the same side captured attention, whereas visual singletons on either the same or the opposite side did not. Experiment 2 used a detection task to examine the same cross-modal effect and its asymmetry; here, auditory singletons on both the same and the opposite side captured attention, but visual singletons did not. The findings indicate that attentional capture by singletons in the audiovisual search paradigm is asymmetric and that a supramodal pool of attentional resources exists.

10.
This study tested the hypothesis that even the simplest cognitive tasks require the storage of information in working memory (WM), distorting any information that was previously stored in WM. Experiment 1 tested this hypothesis by requiring observers to perform a simple letter discrimination task while they were holding a single orientation in WM. We predicted that performing the task on the interposed letter stimulus would cause the orientation memory to become less precise and more categorical compared to when the letter was absent or when it was present but could be ignored. This prediction was confirmed. Experiment 2 tested the modality specificity of this effect by replacing the visual letter discrimination task with an auditory pitch discrimination task. Unlike the interposed visual stimulus, the interposed auditory stimulus produced little or no disruption of WM, consistent with the use of modality‐specific representations. Thus, performing a simple visual discrimination task, but not a simple auditory discrimination task, distorts information about a single feature being maintained in visual WM. We suggest that the interposed task eliminates information stored within the focus of attention, leaving behind a WM representation outside the focus of attention that is relatively imprecise and categorical.
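Precision of a remembered orientation in tasks like this is commonly summarized as the circular standard deviation of report errors. A minimal sketch with hypothetical error distributions (the 180° wrap and the sample values are illustrative assumptions, not the study's data):

```python
import math

def circular_sd_deg(errors_deg, period=180.0):
    """Circular standard deviation (degrees) of orientation report
    errors; orientations wrap at `period` (180 deg for bars/gratings)."""
    scale = 2 * math.pi / period
    c = sum(math.cos(e * scale) for e in errors_deg) / len(errors_deg)
    s = sum(math.sin(e * scale) for e in errors_deg) / len(errors_deg)
    r = math.hypot(c, s)  # mean resultant length, 0..1
    return math.sqrt(-2 * math.log(r)) / scale

# Hypothetical report errors (deg): a precise memory vs. one degraded
# by an interposed task.
precise = [-3, 1, 2, -1, 0]
degraded = [-25, 30, 10, -15, 5]
print(circular_sd_deg(precise) < circular_sd_deg(degraded))  # True
```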

11.
A driving simulator was used to examine the effects on driving performance of auditory cues in an in-vehicle information search task. Drivers' distraction by the search tasks was measured on a peripheral detection task. The difficulty of the search task was systematically varied to test the distraction caused by a quantified visual load. Fifty-eight participants completed the task. Performance on both search tasks and peripheral detection tasks was measured by mean response time and percent error. Analyses indicated that in-vehicle information search performance can be severely degraded when a target is located within a group of diverse distractors. Inclusion of an auditory cue in the visual search increased the mean response time as a result of a change in modality from auditory to visual. Inclusion of such an auditory cue seemed to influence distraction as measured by performance on the peripheral detection task; accuracy was lower when auditory cues were provided, and responses were slower when no auditory cues were provided. Distraction by the auditory cue varied according to the difficulty of the search task.

12.
Nonhuman primates appear to capitalize more effectively on visual cues than corresponding auditory versions. For example, studies of inferential reasoning have shown that monkeys and apes readily respond to seeing that food is present (“positive” cuing) or absent (“negative” cuing). Performance is markedly less effective with auditory cues, with many subjects failing to use this input. Extending recent work, we tested eight captive tufted capuchins (Cebus apella) in locating food using positive and negative cues in visual and auditory domains. The monkeys chose between two opaque cups to receive food contained in one of them. Cup contents were either shown or shaken, providing location cues from both cups, positive cues only from the baited cup, or negative cues from the empty cup. As in previous work, subjects readily used both positive and negative visual cues to secure reward. However, auditory outcomes were both similar to and different from those of earlier studies. Specifically, all subjects came to exploit positive auditory cues, but none responded to negative versions. The animals were also clearly different in visual versus auditory performance. Results indicate that a significant proportion of capuchins may be able to use positive auditory cues, with experience and learning likely playing a critical role. These findings raise the possibility that experience may be significant in visually based performance in this task as well, and highlight that coming to grips with evident differences between visual versus auditory processing may be important for understanding primate cognition more generally.

13.
There is now convincing evidence that an involuntary shift of spatial attention to a stimulus in one modality can affect the processing of stimuli in other modalities, but inconsistent findings across different paradigms have led to controversy. Such inconsistencies have important implications for theories of cross-modal attention. The authors investigated why orienting attention to a visual event sometimes influences responses to subsequent sounds and why it sometimes fails to do so. They examined visual-cue-on-auditory-target effects in two paradigms, implicit spatial discrimination (ISD) and orthogonal cuing (OC), that have yielded conflicting findings in the past. Consistent with previous research, visual cues facilitated responses to same-side auditory targets in the ISD paradigm but not in the OC paradigm. Furthermore, in the ISD paradigm, visual cues facilitated responses to auditory targets only when the targets were presented directly at the cued location, not when they appeared above or below the cued location. This pattern of results confirms recent claims that visual cues fail to influence responses to auditory targets in the OC paradigm because the targets fall outside the focus of attention.

14.
In a previous study, Ward (1994) reported that spatially uninformative visual cues orient auditory attention but that spatially uninformative auditory cues fail to orient visual attention. This cross-modal asymmetry is consistent with other intersensory perceptual phenomena that are dominated by the visual modality (e.g., ventriloquism). However, Spence and Driver (1997) found exactly the opposite asymmetry under different experimental conditions and with a different task. In spite of the several differences between the two studies, Spence and Driver (see also Driver & Spence, 1998) argued that Ward's findings might have arisen from response-priming effects, and that the cross-modal asymmetry they themselves reported, in which auditory cues affect responses to visual targets but not vice versa, is in fact the correct result. The present study investigated cross-modal interactions in stimulus-driven spatial attention orienting under Ward's complex cue environment conditions using an experimental procedure that eliminates response-priming artifacts. The results demonstrate that the cross-modal asymmetry reported by Ward (1994) does occur when the cue environment is complex. We argue that strategic effects in cross-modal stimulus-driven orienting of attention are responsible for the opposite asymmetries found by Ward and by Spence and Driver (1997).

15.
Age-related deficits in selective attention have often been demonstrated in the visual modality and, to a lesser extent, in the auditory modality. In contrast, a mounting body of evidence has suggested that cross-modal selective attention is intact in aging, especially in visual tasks that require ignoring the auditory modality. Our goal in this study was to investigate age-related differences in the ability to ignore cross-modal auditory and visual distraction and to assess the role of cognitive control demands thereby. In a set of two experiments, 30 young (mean age = 23.3 years) and 30 older adults (mean age = 67.7 years) performed a visual and an auditory n-back task (0 ≤ n ≤ 2), with and without cross-modal distraction. The results show an asymmetry in cross-modal distraction as a function of sensory modality and age: Whereas auditory distraction did not disrupt performance on the visual task in either age group, visual distraction disrupted performance on the auditory task in both age groups. Most important, however, visual distraction was disproportionately larger in older adults. These results suggest that age-related distraction is modality dependent, such that suppression of cross-modal auditory distraction is preserved and suppression of cross-modal visual distraction is impaired in aging.

16.
Emotions can be recognized whether conveyed by facial expressions, linguistic cues (semantics), or prosody (voice tone). However, few studies have empirically documented the extent to which multi-modal emotion perception differs from uni-modal emotion perception. Here, we tested whether emotion recognition is more accurate for multi-modal stimuli by presenting stimuli with different combinations of facial, semantic, and prosodic cues. Participants judged the emotion conveyed by short utterances in six channel conditions. Results indicated that emotion recognition is significantly better in response to multi-modal versus uni-modal stimuli. When stimuli contained only one emotional channel, recognition tended to be higher in the visual modality (i.e., facial expressions, semantic information conveyed by text) than in the auditory modality (prosody), although this pattern was not uniform across emotion categories. The advantage for multi-modal recognition may reflect the automatic integration of congruent emotional information across channels which enhances the accessibility of emotion-related knowledge in memory.

17.
Can a space-perception conflict be solved with three sense modalities?
Bedford FL. Perception, 2007, 36(4): 508-515
A cross-modal conflict over location was resolved in an unexpected way. When vision and proprioception provide conflicting information, which modality should dominate is ambiguous. A visual-proprioceptive conflict was created with a prism and, to logically disambiguate the problem, auditory information was added that either agreed with vision (group 1), agreed with proprioception (group 2), or was absent (group 3). Although little research has addressed the interaction of three modalities, I predicted that error should be attributed to the modality in the minority. Instead, the opposite was found: adaptation consisted of a large change in arm proprioception and a small change affecting vision in group 2, and the reverse in group 1. Group 1 was not different from group 3. The findings suggest adaptation to separate two-way conflicts, possibly influenced by the direction of attention, rather than a direct solution to a three-way modality problem.

18.
Our environment is richly structured, with objects producing correlated information within and across sensory modalities. A prominent challenge faced by our perceptual system is to learn such regularities. Here, we examined statistical learning and addressed learners’ ability to track transitional probabilities between elements in the auditory and visual modalities. Specifically, we investigated whether cross-modal information affects statistical learning within a single modality. Participants were familiarized with a statistically structured modality (e.g., either audition or vision) accompanied by different types of cues in a second modality (e.g., vision or audition). The results revealed that statistical learning within either modality is affected by cross-modal information, with learning being enhanced or reduced according to the type of cue provided in the second modality.
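Transitional probabilities of the kind these learners track can be computed directly from a sequence. A minimal sketch, with a made-up two-"word" stream rather than the study's actual stimuli:

```python
from collections import Counter

def transitional_probabilities(sequence):
    """P(next == b | current == a) for every adjacent pair (a, b)."""
    pair_counts = Counter(zip(sequence, sequence[1:]))
    first_counts = Counter(sequence[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# A stream built from two "words" (AB and CD): within-word transitions
# are fully predictive, between-word transitions are not.
stream = list("ABCDABABCDCDAB")
tp = transitional_probabilities(stream)
print(tp[("A", "B")])  # 1.0 (within-word)
print(tp[("B", "C")])  # 2/3 (between-word)
```

Statistical-learning experiments typically contrast exactly these two kinds of transitions when testing whether participants segmented the stream.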

19.
Two experiments investigated the effect of test modality (visual or auditory) on source memory and event-related potentials (ERPs). Test modality influenced source monitoring such that source memory was better when the source and test modalities were congruent. Test modality had less of an influence when alternative information (i.e., cognitive operations) could be used to inform source judgments in Experiment 2. Test modality also affected ERP activity. Variation in parietal ERPs suggested that this activity reflects activation of sensory information, which can be attenuated when the sensory information is misleading. Changes in frontal ERPs support the hypothesis that frontal systems are used to evaluate source-specifying information present in the memory trace.

20.
It is well known that stimuli grab attention to their location, but do they also grab attention to their sensory modality? The modality shift effect (MSE), the observation that responding to a stimulus leads to reaction time benefits for subsequent stimuli in the same modality, suggests that this may be the case. If noninformative cue stimuli, which do not require a response, also lead to benefits for their modality, this would suggest that the effect is automatic. We investigated the time-course of the visuotactile MSE and the difference between the effects of cues and targets. In Experiment 1, when visual and tactile tasks and stimulus locations were matched, uninformative cues did not lead to reaction time benefits for targets in the same modality. However, the modality of the previous target led to a significant MSE. Only stimuli that require a response, therefore, appear to lead to reaction time benefits for their modality. In Experiment 2, increasing attention to the cue stimuli attenuated the effect of the previous target, but the cues still did not lead to an MSE. In Experiment 3, an MSE was demonstrated between successive targets, and this effect decreased with increasing intertrial intervals. Overall, these studies demonstrate how cue- and target-induced effects interact and suggest that modalities do not automatically capture attention as locations do; rather, the MSE is more similar to other task repetition effects.
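The MSE itself is simply the RT cost of a modality switch relative to a modality repeat across successive targets. A minimal sketch over a hypothetical visuotactile target sequence (illustrative values only, not the study's data):

```python
def modality_shift_effect(trials):
    """RT cost of a modality switch: mean RT on switch trials minus mean
    RT on repeat trials, classifying each trial by the previous target's
    modality. `trials` is a chronological list of (modality, rt_ms)."""
    repeat, switch = [], []
    for (prev_mod, _), (mod, rt) in zip(trials, trials[1:]):
        (repeat if mod == prev_mod else switch).append(rt)
    return sum(switch) / len(switch) - sum(repeat) / len(repeat)

# Hypothetical visuotactile target sequence.
targets = [("visual", 400), ("visual", 380), ("tactile", 430),
           ("tactile", 390), ("visual", 425)]
print(modality_shift_effect(targets))  # 42.5 ms switch cost
```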


Copyright©北京勤云科技发展有限公司  京ICP备09084417号