Similar Documents
20 similar documents found
1.
Previous research has found a viewpoint-dependence effect in object recognition; does such an effect also exist in real-world object search? Using an object-array search task and eye-tracking, this study examined the viewpoint-dependence effect in real-object search and its source. The results showed that: (1) search performance was better with picture cues than with name cues; (2) with name cues, search performance was better for canonical than for novel viewpoints — compared with novel viewpoints, target objects presented from canonical viewpoints received fewer and shorter fixations during the scanning and verification phases, demonstrating a viewpoint-dependence effect; (3) with picture cues, search performance was independent of viewpoint. Thus, in name-cued object search, attentional guidance is more efficient and target verification faster under canonical than under novel viewpoints, with viewpoint dependence present in both the scanning and verification phases, supporting the dual-function theory of target templates.

2.
Counter-terrorism strategies rely on the assumption that it is possible to increase threat detection by providing explicit verbal instructions to orient people's attention to dangerous objects and hostile behaviours in their environment. Nevertheless, whether verbal cues can be used to enhance threat detection performance under laboratory conditions is currently unclear. In Experiment 1, student participants were required to detect a picture of a dangerous or neutral object embedded within a visual search display on the basis of an emotional strategy ‘is it dangerous?’ or a semantic strategy ‘is it an object?’. The results showed a threat superiority effect that was enhanced by the emotional visual search strategy. In Experiment 2, whilst trainee police officers displayed a greater threat superiority effect than student controls, both groups benefitted more from performing the task under the emotional than under the semantic visual search strategy. Manipulating situational threat levels (high vs. low) in the experimental instructions had no effect on visual search performance. The current findings provide new support for the language-as-context hypothesis. They are also consistent with a dual-processing account of threat detection involving a verbally mediated route in working memory and the deployment of a visual template developed as a function of training.

3.
Eye movements were recorded during the display of two images of a real-world scene that were inspected to determine whether they were the same or not (a comparative visual search task). In the displays where the pictures were different, one object had been changed, and this object was sometimes taken from another scene and was incongruent with the gist. The experiment established that incongruous objects attract eye fixations earlier than the congruous counterparts, but that this effect is not apparent until the picture has been displayed for several seconds. By controlling the visual saliency of the objects the experiment eliminates the possibility that the incongruency effect is dependent upon the conspicuity of the changed objects. A model of scene perception is suggested whereby attention is unnecessary for the partial recognition of an object that delivers sufficient information about its visual characteristics for the viewer to know that the object is improbable in that particular scene, and in which full identification requires foveal inspection.

4.
Implicit contextual cueing refers to a top-down mechanism in which visual search is facilitated by learned contextual features. In the current study we aimed to investigate the mechanism underlying implicit contextual learning using object information as a contextual cue. Therefore, we measured eye movements during an object-based contextual cueing task. We demonstrated that visual search is facilitated by repeated object information and that this reduction in response times is associated with shorter fixation durations. This indicates that by memorizing associations between objects in our environment we can recognize objects faster, thereby facilitating visual search.

5.
Auditory and visual processes demonstrably enhance each other based on spatial and temporal coincidence. Our recent results on visual search have shown that auditory signals also enhance visual salience of specific objects based on multimodal experience. For example, we tend to see an object (e.g., a cat) and simultaneously hear its characteristic sound (e.g., “meow”), to name an object when we see it, and to vocalize a word when we read it, but we do not tend to see a word (e.g., cat) and simultaneously hear the characteristic sound (e.g., “meow”) of the named object. If auditory–visual enhancements occur based on this pattern of experiential associations, playing a characteristic sound (e.g., “meow”) should facilitate visual search for the corresponding object (e.g., an image of a cat), hearing a name should facilitate visual search for both the corresponding object and corresponding word, but playing a characteristic sound should not facilitate visual search for the name of the corresponding object. Our present and prior results together confirmed these experiential association predictions. We also recently showed that the underlying object-based auditory–visual interactions occur rapidly (within 220 ms) and guide initial saccades towards target objects. If object-based auditory–visual enhancements are automatic and persistent, an interesting application would be to use characteristic sounds to facilitate visual search when targets are rare, such as during baggage screening. Our participants searched for a gun among other objects when a gun was presented on only 10% of the trials. The search time was speeded when a gun sound was played on every trial (primarily on gun-absent trials); importantly, playing gun sounds facilitated both gun-present and gun-absent responses, suggesting that object-based auditory–visual enhancements persistently increase the detectability of guns rather than simply biasing gun-present responses. 
Thus, object-based auditory–visual interactions that derive from experiential associations rapidly and persistently increase visual salience of corresponding objects.

6.
The efficiency of feature-based subitization and counting
The enumeration of small numbers of objects (approximately 4) proceeds rapidly, accurately, and with little effort via a process termed subitization. Four experiments examined whether it was possible to subitize the number of features rather than objects present in a display. Overall, the findings showed that when features are presented randomly and are uncorrelated with object numerosity, efficient enumeration is not possible. This suggests that the visual system does not have parallel access to multiple feature maps and that subitization processes operate exclusively on representations coding the locations of objects. The data are discussed with respect to theories of visual enumeration and search.

7.
黎昂  杨锦绵  朱磊 《心理科学》2021,44(6):1282-1289
Using an eye-tracking paradigm, this study investigated the attention-allocation characteristics and mechanism of the localized attentional interference (LAI) effect in two experiments. Experiment 1 used a time-limited search paradigm in which the targets disappeared after 70 ms; an LAI effect appeared in both behavioral and eye-movement measures: when the two targets were close together, reaction times increased, accuracy decreased, total fixation duration lengthened, fixation counts increased, and saccade velocity rose. Experiment 2 extended target presentation to 1500 ms to encourage search, which produced a stronger LAI effect than Experiment 1 and a different pattern of saccade velocity, indicating that additional search time does not alleviate LAI. We therefore infer that the competition underlying LAI occurs at the decision stage rather than during target search and identification.

8.
赵欣  袁杰  徐依宁  傅世敏 《心理科学进展》2014,22(11):1708-1722
Visual selective attention is a central topic in cognitive psychology. Attentional selection can be based not only on space but also on objects. The main paradigms for studying object-based attention (OBA) are the two-rectangle cueing paradigm and the flanker paradigm, and the principal accounts of its mechanism are the sensory enhancement, attentional prioritization, and attentional shifting theories. Factors modulating the OBA effect include properties of the stimulus itself (e.g., presentation duration), other perceptual processes, and experience. The notion of a visual object covers not only objects defined by Gestalt principles of perceptual organization, but also objects processed without awareness, objects after change, and top-down defined objects.

9.
Space-based accounts of visual attention assume that we select a limited spatial region independent of the number of objects it contains. In contrast, object-based accounts suggest that we select objects independent of their location. We investigated the boundary conditions on the selection modes of attention in a series of tachistoscopic visual search tasks, where the nature of capacity limitations on search was examined. Observers had to search for a horizontally oriented target ellipse among differently oriented distractor ellipses. Across four experiments, we orthogonally manipulated target-distractor (TD) similarity and distractor-distractor (DD) similarity. Each experiment consisted of a two-way design: Firstly, with a central cue, we indicated the spatial extent of the relevant search area. Secondly, we varied the number and spatial proximity of items in the display. Performance could be accounted for in terms of capacity-limited object-based attention, assuming also that the spatial proximity of items enhances performance when there is high DD-similarity (and grouping). In addition, the cueing effect interacted with spatial proximity when DD-similarity was high, suggesting that grouping was influenced by attention. We propose that any capacity limits on visual search are due to object-based attention, and that the formation of perceptual objects and object groups is also subject to attentional modulation.

10.
任衍具  孙琪 《心理学报》2014,46(11):1613-1627
Using a dual-task paradigm that combined a visuospatial working memory task with a real-world scene search task, and using eye tracking to divide the search process into initiation, scanning, and verification phases, this study examined how visuospatial working memory load affects search performance in real-world scenes, together with the moderating roles of whether the search target changed across trials, the specificity of the target template, and the visual clutter of the scene image. The results showed that visuospatial working memory load reduced scene search performance: under both visual and spatial load the scanning phase lasted longer and contained more fixations, and under spatial load the verification phase also lasted longer; the effect of load on the search process depended on the specificity of the target template. Spatial load reduced search efficiency, and this effect depended on the visual clutter of the scene, whereas object load did not. Thus, visual and spatial working memory loads affect real-world scene search differently: spatial load influences the search process for longer than object load, both effects are moderated by target-template specificity, and only spatial load reduces search efficiency, moderated by the visual clutter of the scene.

11.
Probing distractor inhibition in visual search: inhibition of return
The role of inhibition of return (IOR) in serial visual search was reinvestigated using R. Klein's (1988) paradigm of a search task followed by a probe-detection task. Probes were presented at either the location of a potentially inhibited search distractor or an empty location. No evidence of IOR was obtained when the search objects were removed after the search-task response. But when the search objects remained on, a pattern of effects similar to Klein's results emerged. However, when just the search-critical object parts were removed or when participants received immediate error feedback to prevent rechecking of the search objects, IOR effects were observed only when probes appeared equally likely at search array and empty locations. These results support the operation of object-based IOR in serial visual search, with IOR demonstrable only when rechecking is prevented (facilitating task switching) and monitoring for probes is not biased toward search objects.

12.
When observers search for a target object, they incidentally learn the identities and locations of "background" objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays. Despite these findings, visual search has been successfully modeled using architectures that maintain no history of attentional deployments; they are amnesic (e.g., Guided Search Theory). In the current study, we asked two questions: (1) Under what conditions does such incidental learning occur? (2) What does viewing behavior reveal about the efficiency of attentional deployments over time? In two experiments, we tracked eye movements during repeated visual search, and we tested incidental memory for repeated nontarget objects. Across conditions, the consistency of search sets and spatial layouts were manipulated to assess their respective contributions to learning. Using viewing behavior, we contrasted three potential accounts for faster searching with experience. The results indicate that learning does not result in faster object identification or greater search efficiency. Instead, familiar search arrays appear to allow faster resolution of search decisions, whether targets are present or absent.

13.
The processing of visual information was investigated in the context of two visual search tasks. The first was a forced-choice task in which one of two alternative letters appeared in a visual display of one to five letters. The second task included trials on which neither of the two alternatives was present in the display. Search rates were estimated from the slopes of best linear fits to response latencies plotted as a function of the number of items in the visual display. These rates were found to be much slower than those estimated in yes-no search tasks. This result was interpreted as indicating that the processes underlying visual search in yes-no and forced-choice tasks are not the same.

14.
In two experiments, we demonstrated that brain electrical activity elicited by nonattended visual stimuli shows an asymmetric effect similar to the one found in responses observed in the performance of visual search tasks. The automatic detection of violated sequential regularities was investigated by measuring the visual mismatch negativity (vMMN) component of event-related brain potentials (ERPs). In Experiment 1, within a sequence of stimulus displays with O characters, infrequently presented Q characters elicited an earlier vMMN than did infrequent O characters within a sequence of Q characters. In Experiment 2, similar asymmetric results emerged if only 16% of the characters were different within an infrequent display. In both experiments, these stimuli were irrelevant; during the stimulus sequences, participants performed a demanding videogame. We suggest that the underlying match/mismatch and decision processes are similar in the vMMN and in the attention-related visual search paradigm, at least in the case of the stimuli in the present experiments.

15.
Two experiments examined how well the long-term visual memories of objects that are encountered multiple times during visual search are updated. Participants searched for a target two or four times (e.g., white cat) among distractors that shared the target's colour, category, or were unrelated while their eye movements were recorded. Following the search, a surprise visual memory test was given. With additional object presentations, only target memory reliably improved; distractor memory was unaffected by the number of object presentations. Regression analyses using the eye movement variables as predictors indicated that number of object presentations predicted target memory with no additional impact of other viewing measures. In contrast, distractor memory was best predicted by the viewing pattern on the distractor objects. Finally, Experiment 2 showed that target memory was influenced by number of target object presentations, not number of searches for the target. Each of these experiments demonstrates visual memory differences between target and distractor objects and may provide insight into representational differences in visual memory.

16.
The aim of the present research was to study the processes involved in knowledge emergence. In a short-term priming paradigm, participants had to categorize pictures of objects as either "kitchen objects" or "do-it-yourself tools". The primes and targets represented objects belonging to either the same semantic category or different categories (object category similarity), and their use involved gestures that were either similar or very different (gesture similarity). The condition with an SOA of 100 ms revealed additive effects of motor similarity and object category similarity, whereas another condition with an SOA of 300 ms showed an interaction between motor and category similarity. These results were interpreted in terms of the activation and integration processes involved in the emergence of mental representations.

17.
Search for a category target in clutter
Bravo MJ  Farid H 《Perception》2004,33(6):643-652
An airport security worker searching a suitcase for a weapon is engaging in an especially difficult search task: the target is not well-specified, it is not salient, and it is not predicted by its context. Under these conditions, search may proceed item-by-item. In the experiment reported here we tested whether the items for this form of search are whole familiar objects. Our displays were composed of color photographs of ordinary objects that were either uniform in color and texture (simple), or had two or more parts with different colors or textures (compound). The observer's task was to detect the presence of a target belonging to a broad category (food). We found that when the objects were presented in a sparse array, search times to find the target were similar for displays composed of simple and compound objects. But when the same objects were presented as dense clutter, search functions were steeper for displays composed of compound objects. We attribute this difference to the difficulty of segmenting compound objects in clutter: compared with simple objects, compound objects are less likely to be organized into a single object by bottom-up grouping processes. Our results indicate that while search rates in a sparse display may be determined by the number of objects, search rates in clutter are also affected by the number of object parts.

18.
Visual search is modulated by action intentions
The influence of action intentions on visual selection processes was investigated in a visual search paradigm. A predefined target object with a certain orientation and color was presented among distractors, and subjects had to either look and point at the target or look at and grasp the target. Target selection processes prior to the first saccadic eye movement were modulated by the different action intentions. Specifically, fewer saccades to objects with the wrong orientation were made in the grasping condition than in the pointing condition, whereas the number of saccades to an object with the wrong color was the same in the two conditions. Saccadic latencies were similar under the different task conditions, so the results cannot be explained by a speed-accuracy trade-off. The results suggest that a specific action intention, such as grasping, can enhance visual processing of action-relevant features, such as orientation. Together, the findings support the view that visual attention can be best understood as a selection-for-action mechanism.

19.
Previous research has observed that the size of age differences in short-term memory (STM) depends on the type of material to be remembered, but has not identified the mechanism underlying this pattern. The current study focused on visual STM and examined the contribution of information load, as estimated by the rate of visual search, to STM for two types of stimuli – meaningful and abstract objects. Results demonstrated higher information load and lower STM for abstract objects. Age differences were greater for abstract than meaningful objects in visual search, but not in STM. Nevertheless, older adults demonstrated a decreased capacity in visual STM for meaningful objects. Furthermore, in support of Salthouse's processing speed theory, controlling for search rates eliminated all differences in STM related to object type and age. The overall pattern of findings suggests that STM for visual objects is dependent upon processing rate, regardless of age or object type.

20.
Information held in working memory (WM) can guide attention during visual search. The authors of recent studies have interpreted the effect of holding verbal labels in WM as guidance of visual attention by semantic information. In a series of experiments, we tested how attention is influenced by visual features versus category-level information about complex objects held in WM. Participants either memorized an object’s image or its category. While holding this information in memory, they searched for a target in a four-object search display. On exact-match trials, the memorized item reappeared as a distractor in the search display. On category-match trials, another exemplar of the memorized item appeared as a distractor. On neutral trials, none of the distractors were related to the memorized object. We found attentional guidance in visual search on both exact-match and category-match trials in Experiment 1, in which the exemplars were visually similar. When we controlled for visual similarity among the exemplars by using four possible exemplars (Exp. 2) or by using two exemplars rated as being visually dissimilar (Exp. 3), we found attentional guidance only on exact-match trials when participants memorized the object’s image. The same pattern of results held when the target was invariant (Exps. 2 and 3) and when the target was defined semantically and varied in visual features (Exp. 4). The findings of these experiments suggest that attentional guidance by WM requires active visual information.
