Similar Literature
20 similar records found.
1.
Two experiments investigated adult age differences in the explicit (knowledge-based) and implicit (repetition priming) components of top-down attentional guidance during discrimination of a target singleton. Experiment 1 demonstrated an additional contribution of explicit top-down attention, relative to the implicit effect of repetition priming, which was similar in magnitude for younger and older adults. Experiment 2 examined repetition priming of target activation and distractor inhibition independently. The additional contribution of explicit top-down attention, relative to the repetition priming of distractor inhibition, was greater for older adults than for younger adults. The results suggest that some forms of top-down attentional control are preserved as a function of adult age and may operate in a compensatory manner.

2.
How much attention is needed to produce implicit learning? Previous studies have found inconsistent results, with some implicit learning tasks requiring virtually no attention while others rely on attention. In this study we examine the degree of attentional dependency in implicit learning of repeated visual search contexts. Observers searched for a target among distractors that were either highly similar to the target or dissimilar to it. We found that the magnitude of contextual cueing from repetition was comparable for the two types of distractors, even though attention dwelled much longer on distractors highly similar to the target. We suggest that, beyond a minimal amount, further increases in attentional dwell time do not contribute significantly to implicit learning of a repeated search context.
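Contextual cueing is conventionally quantified as the response-time advantage for repeated ("old") layouts over novel ("new") layouts. A minimal Python sketch of that computation follows; the response times are hypothetical, chosen only to mimic the comparable-cueing pattern the abstract reports, and none of the numbers come from the study.

```python
from statistics import mean

def contextual_cueing(rt_new, rt_old):
    """Cueing magnitude in ms: positive means repeated (old) layouts were faster."""
    return mean(rt_new) - mean(rt_old)

# Hypothetical mean RTs (ms) across learning epochs for each distractor type.
similar = contextual_cueing(rt_new=[980, 965, 950], rt_old=[920, 900, 885])
dissimilar = contextual_cueing(rt_new=[760, 750, 745], rt_old=[700, 688, 680])
print(f"cueing, target-similar distractors:    {similar:.1f} ms")
print(f"cueing, target-dissimilar distractors: {dissimilar:.1f} ms")
```

Comparable cueing magnitudes across the two distractor types, despite very different dwell times, is the pattern that motivates the abstract's conclusion.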

3.
Understanding the relative role of top-down and bottom-up guidance is crucial for models of visual search. Previous studies have addressed the role of top-down and bottom-up processes in search for a conjunction of features but with inconsistent results. Here, the author used an attentional capture method to address the role of top-down and bottom-up processes in conjunction search. The role of bottom-up processing was assayed by inclusion of an irrelevant-size singleton in a search for a conjunction of color and orientation. One object was uniquely larger on each trial, with chance probability of coinciding with the target; thus, the irrelevant feature of size was not predictive of the target's location. Participants searched more efficiently for the target when it was also the size singleton, and they searched less efficiently for the target when a nontarget was the size singleton. Although a conjunction target cannot be detected on the basis of bottom-up processing alone, participants used search strategies that relied significantly on bottom-up guidance in finding the target, resulting in interference from the irrelevant-size singleton.

4.
How much attention is needed to produce implicit learning? Previous studies have found inconsistent results, with some implicit learning tasks requiring virtually no attention while others rely on attention. In this study we examine the degree of attentional dependency in implicit learning of repeated visual search contexts. Observers searched for a target among distractors that were either highly similar to the target or dissimilar to it. We found that the magnitude of contextual cueing from repetition was comparable for the two types of distractors, even though attention dwelled much longer on distractors highly similar to the target. We suggest that, beyond a minimal amount, further increases in attentional dwell time do not contribute significantly to implicit learning of a repeated search context.

5.
In five experiments, we examined whether the number of items can guide visual focal attention. Observers searched for the target area with the largest (or smallest) number of dots (squares in Experiment 4 and "checkerboards" in Experiment 5) among distractor areas with a smaller (or larger) number of dots. Results of Experiments 1 and 2 show that search efficiency is determined by target-to-distractor dot ratios. In searches where target items contained more dots than distractor items, ratios over 1.5:1 yielded efficient search. Searches in which target items contained fewer dots than distractor items were harder; here, ratios needed to be lower than 1:2 to yield efficient search. When the areas of the dots and of the squares containing them were fixed, as they were in Experiments 1 and 2, dot density and total dot area increased as dot number increased. Experiment 3 removed the density and area cues by allowing dot size and total dot area to vary. This produced a marked decline in search performance: efficient search now required ratios above 3:1 or below 1:3. Using more realistic, isoluminant stimuli, Experiments 4 and 5 show that guidance by numerosity is fragile. As is found with other features that guide focal attention (e.g., color, orientation, size), the numerosity differences able to guide attention via bottom-up signals are much coarser than the differences that can be detected in attended stimuli.
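The reported cutoffs lend themselves to a small worked example. The sketch below classifies expected search efficiency from the target-to-distractor dot ratio, using the thresholds quoted above (1.5:1 and 1:2 with density/area cues intact; 3:1 and 1:3 with those cues removed); the function and the example displays are illustrative, not the authors' analysis.

```python
def search_efficiency(target_dots: int, distractor_dots: int,
                      density_area_cues: bool = True) -> str:
    """Classify expected search efficiency from the target:distractor dot ratio.

    With dot density and total dot area covarying with number (Experiments 1-2),
    ratios above 1.5:1 or below 1:2 yield efficient search; with those cues
    removed (Experiment 3), the ratio must exceed 3:1 or fall below 1:3.
    """
    ratio = target_dots / distractor_dots
    high, low = (1.5, 1 / 2) if density_area_cues else (3.0, 1 / 3)
    return "efficient" if ratio > high or ratio < low else "inefficient"

print(search_efficiency(16, 10))                           # 1.6:1 -> efficient
print(search_efficiency(16, 10, density_area_cues=False))  # 1.6:1 -> inefficient
print(search_efficiency(4, 16, density_area_cues=False))   # 1:4   -> efficient
```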

6.
Using a visual search paradigm, we investigated how a top-down goal modified attentional bias for threatening facial expressions. In two experiments, participants searched for a facial expression either based on stimulus characteristics or a top-down goal. In Experiment 1, participants searched for a discrepant facial expression in a homogenous crowd of faces. Consistent with previous research, we obtained a shallower response time (RT) slope when the target face was angry than when it was happy. In Experiment 2, participants searched for a specific type of facial expression (allowing a top-down goal). When the display included a target, we found a shallower RT slope for the angry than for the happy face search. However, when an angry or happy face was present in the display in opposition to the task goal, we obtained equivalent RT slopes, suggesting that the mere presence of an angry face in opposition to the task goal did not support the well-known angry face superiority effect. Furthermore, RT distribution analyses supported the special status of an angry face only when it was combined with the top-down goal. On the basis of these results, we suggest that a threatening facial expression may guide attention as a high-priority stimulus in the absence of a specific goal; however, in the presence of a specific goal, the efficiency of facial expression search is dependent on the combined influence of a top-down goal and the stimulus characteristics.

7.
Two types of mechanisms have dominated theoretical accounts of efficient visual search. The first are bottom-up processes related to the characteristics of retinotopic feature maps. The second are top-down mechanisms related to feature selection. To expose the potential involvement of other mechanisms, we introduce a new search paradigm in which a target is defined only in a context-dependent manner, by multiple conjunctions of feature dimensions. Because targets in a multiconjunction task cannot be distinguished from distractors by either bottom-up or top-down guidance, current theories of visual search predict inefficient search. While inefficient search does occur for multiple conjunctions of orientation with color or luminance, we find efficient search for multiple conjunctions of luminance/size, luminance/shape, and luminance/topology. We also show that repeated presentations of either targets or a set of distractors result in much faster performance, and that bottom-up feature extraction and top-down selection cannot account for efficient search on their own. In light of this, we discuss the possible role of perceptual organization in visual search. Furthermore, multiconjunction search could provide a new method for investigating perceptual grouping in visual search.

8.
Humans conduct visual search more efficiently when the same display is presented for a second time, showing learning of repeated spatial contexts. In this study, we investigate spatial context learning in two tasks: visual search and change detection. In both tasks, we ask whether subjects learn to associate the target with the entire spatial layout of a repeated display (configural learning) or with individual distractor locations (nonconfigural learning). We show that nonconfigural learning results from visual search tasks, but not from change detection tasks. Furthermore, a spatial layout acquired in visual search tasks does not enhance change detection on the same display, whereas a spatial layout acquired in change detection tasks moderately enhances visual search. We suggest that although spatial context learning occurs in multiple tasks, the content of learning is, in part, task specific.

9.
Four eyetracking experiments examined whether semantic and visual-shape representations are routinely retrieved from printed word displays and used during language-mediated visual search. Participants listened to sentences containing target words that were similar semantically or in shape to concepts invoked by concurrently displayed printed words. In Experiment 1, the displays contained semantic and shape competitors of the targets along with two unrelated words. As targets were heard, there were significant shifts in eye gaze toward semantic but not toward shape competitors. In Experiments 2–4, semantic competitors were replaced with unrelated words, semantically richer sentences were presented to encourage visual imagery, or participants rated the shape similarity of the stimuli before doing the eyetracking task. In all cases, there were no immediate shifts in eye gaze to shape competitors, even though, in response to the Experiment 1 spoken materials, participants looked to these competitors when they were presented as pictures (Huettig & McQueen, 2007). There was a late shape-competitor bias (more than 2,500 ms after target onset) in all experiments. These data show that shape information is not used in online search of printed word displays (whereas it is used with picture displays). The nature of the visual environment appears to induce implicit biases toward particular modes of processing during language-mediated visual search.

10.
What role does the initial glimpse of a scene play in subsequent eye movement guidance? In 4 experiments, a brief scene preview was followed by object search through the scene via a small moving window that was tied to fixation position. Experiment 1 demonstrated that the scene preview resulted in more efficient eye movements compared with a control preview. Experiments 2 and 3 showed that this scene preview benefit was not due to the conceptual category of the scene or identification of the target object in the preview. Experiment 4 demonstrated that the scene preview benefit was unaffected by changing the size of the scene from preview to search. Taken together, the results suggest that an abstract (size invariant) visual representation is generated in an initial scene glimpse and that this representation can be retained in memory and used to guide subsequent eye movements.

11.
Younger (19-27 years of age) and older (60-82 years of age) adults performed a letter search task in which a color singleton was either noninformative (baseline condition) or highly informative (guided condition) regarding target location. In the guided condition, both age groups exhibited a substantial decrease in response time (RT) to singleton targets, relative to the baseline condition, as well as an increase in RT to nonsingleton targets. The authors conclude that under conditions that equate the physical structure of individual displays, top-down attentional guidance can be at least as effective for older adults as for younger adults.

12.
孙琪, 任衍具 《心理科学》 (Psychological Science), 2014, 37(2), 265-271
Using object search in real-world scene images as the experimental task, we manipulated scene context and target template and used eye tracking to divide the search process into an initiation phase, a scanning phase, and a verification phase, in order to examine how scene context and target template influence the visual search process. The results showed that scene context and target template operate in different ways and at different points in time: the two factors interactively influenced search accuracy and response time; only scene context affected the duration of the initiation phase, after which the two factors interactively influenced the durations of the scanning and verification phases as well as the principal eye movement measures. On this basis, the authors propose a model of the interaction between scene context and target template in visual search.
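A minimal sketch of the eye-movement decomposition described above: initiation (search onset to the first saccade), scanning (first saccade to the first fixation on the target), and verification (first target fixation to the manual response). The field names and timestamps are hypothetical, not from the study's data.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    first_saccade_ms: float     # latency of the first saccade after search onset
    first_target_fix_ms: float  # time of the first fixation on the target
    response_ms: float          # time of the manual response

def decompose(trial: Trial) -> dict:
    """Split total response time into the three search phases."""
    return {
        "initiation": trial.first_saccade_ms,
        "scanning": trial.first_target_fix_ms - trial.first_saccade_ms,
        "verification": trial.response_ms - trial.first_target_fix_ms,
    }

print(decompose(Trial(first_saccade_ms=180, first_target_fix_ms=640, response_ms=900)))
# {'initiation': 180, 'scanning': 460, 'verification': 260}
```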

13.
Hodsoll and Humphreys (2001) assessed the relative contributions of stimulus-driven and user-driven knowledge in linearly and nonlinearly separable searches. However, the target feature used to determine linear separability in their task (i.e., target size) was required to locate the target. In the present work, we investigated the contributions of stimulus-driven and user-driven knowledge when a linearly or nonlinearly separable feature is available but not required for target identification. We asked observers to complete a series of standard color × orientation conjunction searches in which target size was either linearly or nonlinearly separable from the size of the distractors. When guidance by color × orientation and guidance by size information are both available, observers rely on whichever information results in the best search efficiency. This is the case irrespective of whether we provide target foreknowledge by blocking stimulus conditions, suggesting that feature information is used in both a stimulus-driven and a user-driven fashion.

14.
Nabeta, T., Ono, F., & Kawahara, J. (2003). Perception, 32(11), 1351-1358.
Under incidental learning conditions, spatial layouts can be acquired implicitly and facilitate visual search (the contextual-cueing effect). We examined whether the contextual-cueing effect is specific to the visual modality or transfers to the haptic modality. Participants performed 320 (Experiment 1) or 192 (Experiment 2) visual search trials based on a typical contextual-cueing paradigm, followed by haptic search trials in which half of the trials used layouts from the previous visual search trials. The visual contextual-cueing effect was obtained in the learning phase. More importantly, the effect transferred from visual to haptic search: haptic search was facilitated more when the spatial layout was the same as in the previous visual search trials than when it differed. This suggests that a common spatial memory allocates focused attention in both the visual and haptic modalities.

15.
In four experiments we explored whether participants would be able to use probabilistic prompts to simplify perceptually demanding visual search in a task we call the retrieval guidance paradigm. On each trial a memory prompt appeared prior to (and during) the search task, and the diagnosticity of the prompt(s) was manipulated to provide complete, partial, or non-diagnostic information regarding the target's color on each trial (Experiments 1–3). In Experiment 1 we found that more diagnostic prompts were associated with faster visual search performance. However, similar visual search behavior was observed in Experiment 2 when the diagnosticity of the prompts was eliminated, suggesting that participants in Experiment 1 were merely relying on base rate information to guide search and were not utilizing the prompts. In Experiment 3 participants were informed of the relationship between the prompts and the color of the target, and this was associated with faster search performance relative to Experiment 1, suggesting that these participants were using the prompts to guide search. Additionally, in Experiment 3 a knowledge test was implemented, and performance on this test was associated with qualitative differences in search behavior: participants who were able to name the color(s) most associated with the prompts were faster to find the target than participants who were unable to do so. However, in Experiments 1–3 the diagnosticity of the memory prompt was manipulated via base rate information, making it possible that participants were merely relying on base rate information to inform search in Experiment 3. In Experiment 4 we manipulated the diagnosticity of the prompts without manipulating base rate information and found a pattern of results similar to Experiment 3. Together, the results emphasize the importance of base rate and diagnosticity information in visual search behavior. In the General Discussion we explore how a recent computational model of hypothesis generation (HyGene; Thomas, Dougherty, Sprenger, & Harbison, 2008), linking attention with long-term and working memory, accounts for the present results and provides a useful framework for cued-recall visual search.
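The confound that the final experiment untangles can be made concrete with Bayes' rule: a prompt is diagnostic only to the extent that the posterior over target colors, given the prompt, departs from the colors' base rates. A minimal sketch follows; the base rates and likelihoods are hypothetical, not the experiments' actual proportions, and this is not the HyGene model itself.

```python
def posterior(base_rates: dict, likelihood: dict) -> dict:
    """P(color | prompt) from color base rates and P(prompt | color)."""
    unnorm = {c: base_rates[c] * likelihood[c] for c in base_rates}
    z = sum(unnorm.values())
    return {c: round(v / z, 3) for c, v in unnorm.items()}

base_rates = {"red": 0.7, "blue": 0.3}  # hypothetical target-color base rates

# Fully diagnostic prompt: only ever shown when the target is red.
print(posterior(base_rates, {"red": 1.0, "blue": 0.0}))  # {'red': 1.0, 'blue': 0.0}

# Non-diagnostic prompt: equally likely under either color, so the posterior
# simply reproduces the base rates -- guidance here reflects base rates alone.
print(posterior(base_rates, {"red": 0.5, "blue": 0.5}))  # {'red': 0.7, 'blue': 0.3}
```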

16.
Playing action videogames is known to improve visual spatial attention and related skills. Here, we showed that playing action videogames also improves classic visual search, as well as the ability to locate targets in a dual search that mimics certain aspects of an action videogame. In Experiment 1A, first-person shooter (FPS) videogame players were faster than nonplayers in both feature search and conjunction search, and in Experiment 1B, they were faster and more accurate in a peripheral search and identification task while simultaneously performing a central search. In Experiment 2, we showed that 10 h of play could improve the performance of nonplayers on each of these tasks. Three different genres of videogames were used for training: two action games and a 3-D puzzle game. Participants who played an action game (either an FPS or a driving game) achieved greater gains on all search tasks than did those who trained using the puzzle game. Feature searches were faster after playing an action videogame, suggesting that players developed a better target template to guide search in a top-down manner. The results of the dual search suggest that, in addition to enhancing the ability to divide attention, playing an action game improves the top-down guidance of attention to possible target locations. The results have practical implications for the development of training tools to improve perceptual and cognitive skills.

17.
Recent literature suggests that observers can use advance knowledge of the target feature to guide their search but fail to do so whenever the target is reliably a singleton. Instead, they engage in singleton-detection mode—that is, they search for the most salient object. In the present study, we aimed to test the notion of a default salience-based search mode. Using several measures, we compared search for a known target when it is always a singleton (fixed-singleton search) relative to when it is incidentally a singleton (multiple-target search). We examined the relative contributions of strategic factors (knowledge that the target is a singleton) and intertrial repetition effects (singleton priming, or the advantage of responding to a singleton target if the target on the previous trial had also been a singleton). In two experiments, singleton priming eliminated all the differences in performance between fixed-singleton and multiple-target search, suggesting that search for a known singleton may be feature based rather than salience based.

18.
The two-stage model of amodal completion, or TSM (Sekuler & Palmer, 1992), and the ambiguity theory (Rauschenberger, Peterson, Mosca, & Bruno, 2004) provide conflicting accounts of amodal completion in 2-D images. TSM claims that an initial mosaic (2-D) representation gives way to a later, amodally completed (3-D) representation, and that the 2-D representation is accessible only prior to formation of the 3-D representation. The ambiguity theory, on the other hand, claims that the 2-D and 3-D representations develop in parallel and that preference for one of the coexisting representations over the other may be subject to the influence of spatiotemporal context provided by other elements in the visual display. Our experiments support the claim that, once formed, both representations coexist, with spatiotemporal context potentially determining which representation is perceived.

19.
Previous studies have shown that the efficiency of visual search does not improve when participants search through the same unchanging display for hundreds of trials (repeated search), even though the participants have a clear memory of the search display. In this article, we ask two important questions. First, why do participants not use memory to help search the repeated display? Second, can context be introduced so that participants are able to guide their attention to the relevant repeated items? Experiments 1-4 show that participants choose not to use a memory strategy because, under these conditions, repeated memory search is actually less efficient than repeated visual search, even though the latter task is in itself relatively inefficient. However, when the visual search task is given context, so that only a subset of the items are ever pertinent, participants can learn to restrict their attention to the relevant stimuli (Experiments 5 and 6).

20.
Subjects were given a simplified proofreading task in which they were instructed to circle every occurrence of a target letter in a prose passage or in a scrambled prose passage. It was found that the presence of a prose context enhanced the subjects' ability to find a target letter when it was in a content word, but impaired their ability to find it when it was in a function word. This interaction sheds light on a number of conflicting reports in the literature.
