Similar Documents
20 similar documents found.
1.
Search targets are typically remembered much better than other objects even when they are viewed for less time. However, targets have two advantages that other objects in search displays do not have: They are identified categorically before the search, and finding them represents the goal of the search task. The current research investigated the contributions of both of these types of information to the long-term visual memory representations of search targets. Participants completed either a predefined search or a unique-object search in which targets were not defined with specific categorical labels before searching. Subsequent memory results indicated that search target memory was better than distractor memory even following ambiguously defined searches and when the distractors were viewed significantly longer. Superior target memory appears to result from a qualitatively different representation from those of distractor objects, indicating that decision processes influence visual memory.

2.
Affective stimuli capture attention, whether their affective value stems from emotional content or a history of reward. The uniqueness of such stimuli within their experimental contexts might imbue them with an enhanced categorical distinctiveness that accounts for their impact on attention. Indeed, in emotion-induced blindness, categorically distinctive neutral pictures disrupt target perception, albeit to a lesser degree than do emotional pictures. Here, we manipulated the categorical distinctiveness of distractors in an emotion-induced blindness task. Participants searched within RSVP streams for a target that followed an emotional or a neutral distractor picture. In a categorically homogenous condition, all non-distractor items were exemplars from a uniform category, thus enhancing the distractor's categorical distinctiveness. In a categorically heterogeneous condition, each non-distractor item represented a distinct category. Neutral distractors disrupted target perception only in the homogenous condition, but emotional distractors did so regardless of their categorical distinctiveness.

3.
We conducted three experiments to investigate how opportunities to view objects together in time influence memory for location. Children and adults learned the locations of 20 objects marked by dots on the floor of an open, square box. During learning, participants viewed the objects either simultaneously or in isolation. At test, participants replaced the objects without the aid of the dots. Experiment 1 showed that when the box was divided into quadrants and the objects in each quadrant were categorically related, 7-, 9-, and 11-year-olds and adults in the simultaneous viewing condition exhibited categorical bias, but only 11-year-olds and adults in the isolated viewing condition exhibited categorical bias. Experiment 2 showed that when the objects were categorically related but no boundaries were present, 11-year-olds and adults in the simultaneous viewing condition exhibited categorical bias, but only adults showed bias in the isolated viewing condition. Experiment 3 revealed that adults exhibited bias in both simultaneous and isolated viewing conditions when boundaries were present but the objects were not related. These findings suggest that opportunities to see objects together in time interact with cues available for grouping objects to help children form spatial groups.

4.
Abecassis, Sera, Yonas, and Schwade (2001) showed that young children represent shapes more metrically, and perhaps more holistically, than do older children and adults. How does a child transition from representing objects and events as undifferentiated wholes to representing them explicitly in terms of their attributes? According to RBC (Recognition‐by‐Components theory; Biederman, 1987), objects are represented as collections of categorical geometric parts ("geons") in particular categorical spatial relations. We propose that the transition from holistic to more categorical visual shape processing is a function of the development of geon‐like representations via a process of progressive intersection discovery. We present an account of this transition in terms of DORA (Doumas, Hummel, & Sandhofer, 2008), a model of the discovery of relational concepts. We demonstrate that DORA can learn representations of single geons by comparing objects composed of multiple geons. In addition, as DORA is learning it follows the same performance trajectory as children, originally generalizing shape more metrically/holistically and eventually generalizing categorically.

5.
Spatial relation information can be encoded in two different ways: categorically, which is abstract, and coordinately, which is metric. Although categorical and coordinate spatial relation processing is commonly conceived as relying on spatial representations and spatial cognitive processes, some suggest that representations and cognitive processes involved in categorical spatial relation processing can be verbal as well as spatial. We assessed the extent to which categorical and coordinate spatial relation processing engages verbal and spatial representations and processes using a dual-task paradigm. Participants performed the classical dot-bar paradigm and simultaneously performed either a spatial tapping task, or an articulatory suppression task. When participants were requested to tap blocks in a given pattern (spatial tapping), their performance decreased in both the categorical and coordinate tasks compared to a control condition without interference. In contrast, articulatory suppression did not affect performance in either spatial relation task. A follow-up experiment indicated that this outcome could not be attributed to different levels of difficulty of the two interference tasks. These results provide strong evidence that both coordinate and categorical spatial relation processing relies mainly on spatial mechanisms. These findings have implications for theories on why categorical and coordinate spatial relations processing are lateralised in the brain.
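The categorical/coordinate distinction used in the dot-bar paradigm can be made concrete with a minimal sketch (hypothetical code, not the paradigm software used in the study): the categorical judgment abstracts over exact position, while the coordinate judgment depends on a metric threshold.

```python
# Hypothetical sketch of the two judgment types in a dot-bar display.
# Stimulus: a horizontal bar at y = 0 and a dot at vertical offset dot_y.

def categorical_judgment(dot_y: float) -> str:
    """Abstract relation: is the dot above or below the bar?"""
    return "above" if dot_y > 0 else "below"

def coordinate_judgment(dot_y: float, threshold: float = 2.0) -> str:
    """Metric relation: is the dot within a fixed distance of the bar?"""
    return "near" if abs(dot_y) <= threshold else "far"

print(categorical_judgment(1.5))   # above
print(coordinate_judgment(1.5))    # near
print(categorical_judgment(-3.0))  # below
print(coordinate_judgment(-3.0))   # far
```

The same offset yields different answers under the two encodings, which is what lets the dual-task manipulation probe them separately.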

6.
The purpose of the present investigation was to determine whether the positions of objects in a scene are coded relative to one another categorically (i.e., above, below, or side of; Experiment 1) and to determine whether spatial position in scene perception is coded preattentively or only under focused attention (Experiment 2). In Experiment 1, participants viewed alternating versions of a scene in which one of the objects in the scene changed its categorical relationship to the closest object in the scene, changed only its metric relationship to the closest object in a scene, or appeared and disappeared. Participants were faster at detecting changes that disrupted categorical relations than at detecting changes that disrupted only metric relations. In Experiment 2, this categorical advantage still occurred even when participants were cued to the location of the change. These results suggest that categorical spatial relations are being coded in scene perception and that attention is required in order to encode spatial relations.

7.
During visual search, the selection of target objects is guided by stored representations of target‐defining features (attentional templates). It is commonly believed that such templates are maintained in visual working memory (WM), but empirical evidence for this assumption remains inconclusive. Here, we tested whether retaining non‐spatial object features (shapes) in WM interferes with attentional target selection processes in a concurrent search task that required spatial templates for target locations. Participants memorized one shape (low WM load) or four shapes (high WM load) in a sample display during a retention period. On some trials, they matched them to a subsequent memory test display. On other trials, a search display including two lateral bars in the upper or lower visual field was presented instead, and participants reported the orientation of target bars that were defined by their location (e.g., upper left or lower right). To assess the efficiency of attentional control under low and high WM load, EEG was recorded and the N2pc was measured as a marker of attentional target selection. Target N2pc components were strongly delayed when concurrent WM load was high, indicating that holding multiple object shapes in WM competes with the simultaneous retention of spatial attentional templates for target locations. These observations provide new electrophysiological evidence that such templates are maintained in WM, and also challenge suggestions that spatial and non‐spatial contents are represented in separate independent visual WM stores.

8.
Peripheral vision outside the focus of attention may rely on summary statistics. We used a gaze-contingent paradigm to directly test this assumption by asking whether search performance differed between targets and statistically-matched visualizations of the same targets. Four-object search displays included one statistically-matched object that was replaced by an unaltered version of the object during the first eye movement. Targets were designated by previews, which were never altered. Two types of statistically-matched objects were tested: One that maintained global shape and one that did not. Differences in guidance were found between targets and statistically-matched objects when shape was not preserved, suggesting that they were not informationally equivalent. Responses were also slower after target fixation when shape was not preserved, suggesting an extrafoveal processing of the target that again used shape information. We conclude that summary statistics must include some global shape information to approximate the peripheral information used during search.

9.
In visual search tasks, subjects look for a target among a variable number of distractor items. If the target is defined by a conjunction of two different features (e.g., color × orientation), efficient search is possible when parallel processing of information about color and about orientation is used to "guide" the deployment of attention to the target. Another type of conjunction search has targets defined by two instances of one type of feature (e.g., a conjunction of two colors). In this case, search is inefficient when the target is an item defined by parts of two different colors but much more efficient if the target can be described as a whole item of one color with a part of another color (Wolfe, Friedman-Hill, & Bilsky, 1994). In this paper, we show that the same distinction holds for size. "Part-whole" size × size conjunction searches are efficient; "part-part" searches are not (Experiments 1–3). In contrast, all orientation × orientation searches are inefficient (Experiments 4–6). This difference between preattentive processing of color and size, on the one hand, and orientation, on the other, may reflect structural relationships between features in real-world objects.

10.
We present a computational model of grasping of non-fixated (extrafoveal) target objects which is implemented on a robot setup, consisting of a robot arm with cameras and gripper. This model is based on the premotor theory of attention (Rizzolatti et al., 1994) which states that spatial attention is a consequence of the preparation of goal-directed, spatially coded movements (especially saccadic eye movements). In our model, we add the hypothesis that saccade planning is accompanied by the prediction of the retinal images after the saccade. The foveal region of these predicted images can be used to determine the orientation and shape of objects at the target location of the attention shift. This information is necessary for precise grasping. Our model consists of a saccade controller for target fixation, a visual forward model for the prediction of retinal images, and an arm controller which generates arm postures for grasping. We compare the precision of the robotic model in different task conditions, among them grasping (1) towards fixated target objects using the actual retinal images, (2) towards non-fixated target objects using visual prediction, and (3) towards non-fixated target objects without visual prediction. The first and second setting result in good grasping performance, while the third setting causes considerable errors of the gripper orientation, demonstrating that visual prediction might be an important component of eye–hand coordination. Finally, based on the present study we argue that the use of robots is a valuable research methodology within psychology.
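The three-component architecture described in this abstract (saccade controller, visual forward model, arm controller) can be sketched as a simple pipeline; all class and method names below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the saccade-controller / forward-model / arm-controller
# pipeline for grasping a non-fixated object via visual prediction
# (condition 2 in the study). Names are hypothetical.

class SaccadeController:
    def plan_saccade(self, target_pos):
        # Spatially coded eye-movement plan toward the target
        # (premotor theory of attention).
        return {"eye_target": target_pos}

class VisualForwardModel:
    def predict_retinal_image(self, current_image, saccade_plan):
        # Predict the post-saccadic retinal image; its foveal region
        # carries the target's orientation and shape.
        return {"foveal_patch": f"patch@{saccade_plan['eye_target']}"}

class ArmController:
    def grasp_posture(self, foveal_patch):
        # Derive gripper orientation/shape parameters from the
        # (predicted) foveal content.
        return {"gripper": foveal_patch}

def grasp_nonfixated(target_pos, current_image):
    """Grasp a non-fixated object using the predicted, not actual, fovea."""
    plan = SaccadeController().plan_saccade(target_pos)
    predicted = VisualForwardModel().predict_retinal_image(current_image, plan)
    return ArmController().grasp_posture(predicted["foveal_patch"])
```

The point of the sketch is the data flow: without the forward model's predicted foveal patch, the arm controller has only peripheral input, which is what produces the gripper-orientation errors in condition 3.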

11.
Two experiments examined the impact of task-set on people's use of the visual and semantic features of words during visual search. Participants' eye movements were recorded while the distractor words were manipulated. In both experiments, the target word was either given literally (literal task) or defined by a semantic clue (categorical task). According to Kiefer and Martens, participants should preferentially use either the visual or semantic features of words depending on their relevance for the task. This assumption was partially supported. As expected, orthographic neighbours of the target word attracted participants' attention more and took longer to reject, once fixated, during the literal task. Conversely, semantic associates of the target word took longer to reject during the categorical task. However, they did not attract participants' attention more than in the literal task. This unexpected finding is discussed in relation to the processing of words in the peripheral visual field.

12.
Treisman and Gelade's (1980) feature-integration model claims that the search for separate ("primitive") stimulus features is parallel, but that the conjunctions of those features require serial scan. Recently, evidence has accumulated that parallel processing is not limited to these "primitive" stimulus features, but that combinations of features can also produce parallel search. In the experiments reported here, the processing of feature conjunctions was studied when the stimulus features of a combination were at different spatial scales. The patterns in the search array were composed of three cross-shaped or T-shaped (local) elements, which formed an oblique bar (the global pattern) 45 deg or 135 deg in orientation. When the target and distractors differed from each other at one spatial scale only (either in the bar orientation or in the shape of the local elements), target detection was independent of the number of distractors, i.e., the search was parallel. In the conjunction task, in which the target and distractors were defined as the combinations of the bar orientation and the element shape, i.e., both spatial scales were relevant, the detection of the target required slow serial scrutiny of the search array. It is possible that the conjunction search could not be performed in parallel because switches between the two scales (or spatial frequency channels) are linked to attention and the task required the use of both scales in order to find the target.
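The parallel/serial distinction invoked here (and in several other abstracts in this list) is diagnosed from the slope of reaction time over set size: a slope near 0 ms/item indicates parallel search, a steep slope indicates serial scanning. A toy simulation (illustrative numbers, not data from the study) shows the two signatures.

```python
# Toy linear model of mean reaction time (RT) vs. display set size,
# the standard diagnostic for parallel vs. serial visual search.
# Base RT and slopes are illustrative values.

def mean_rt(set_size: int, base_ms: float = 400.0,
            slope_ms_per_item: float = 0.0) -> float:
    """Mean RT grows linearly with set size at slope_ms_per_item."""
    return base_ms + slope_ms_per_item * set_size

set_sizes = (4, 8, 16)
parallel = [mean_rt(n, slope_ms_per_item=1.0) for n in set_sizes]
serial = [mean_rt(n, slope_ms_per_item=30.0) for n in set_sizes]
print(parallel)  # [404.0, 408.0, 416.0] -- nearly flat: parallel search
print(serial)    # [520.0, 640.0, 880.0] -- steep: serial scanning
```

In the experiments above, "target detection was independent of the number of distractors" corresponds to the flat function, and "slow serial scrutiny" to the steep one.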

13.
Participants performed two object-matching tasks for novel, non-nameable objects consisting of geons. For each original stimulus, two transformations were applied to create comparison stimuli. In the categorical transformation, a geon connected to geon A was moved to geon B. In the coordinate transformation, a geon connected to geon A was moved to a different position on geon A. The Categorical task consisted of the original and the categorically transformed objects. The Coordinate task consisted of the original and the coordinately transformed objects. The original object was presented to the central visual field, followed by a comparison object presented to the right or left visual half-fields (RVF and LVF). The results showed an RVF advantage for the Categorical task and an LVF advantage for the Coordinate task. The possibility that categorical and coordinate spatial processing subsystems would be basic computational elements for between- and within-category object recognition was discussed.

14.
Unlike in laboratory visual search tasks—wherein participants are typically presented with a pictorial representation of the item they are asked to seek out—in real-world searches, the observer rarely has veridical knowledge of the visual features that define their target. During categorical search, observers look for any instance of a categorically defined target (e.g., helping a family member look for their mobile phone). In these circumstances, people may not have information about noncritical features (e.g., the phone’s color), and must instead create a broad mental representation using the features that define (or are typical of) the category of objects they are seeking out (e.g., modern phones are typically rectangular and thin). In the current investigation (Experiment 1), using a categorical visual search task, we add to the body of evidence suggesting that categorical templates are effective enough to conduct efficient visual searches. When color information was available (Experiment 1a), attentional guidance, attention restriction, and object identification were enhanced when participants looked for categories with consistent features (e.g., ambulances) relative to categories with more variable features (e.g., sedans). When color information was removed (Experiment 1b), attention benefits disappeared, but object recognition was still better for feature-consistent target categories. In Experiment 2, we empirically validated the relative homogeneity of our societally important vehicle stimuli. Taken together, our results are in line with a category-consistent view of categorical target templates (Yu, Maxfield, & Zelinsky in, Psychological Science, 2016. doi: 10.1177/0956797616640237), and suggest that when features of a category are consistent and predictable, searchers can create mental representations that allow for the efficient guidance and restriction of attention as well as swift object identification.

15.
Several studies have shown that targets defined on the basis of the spatial relations between objects yield highly inefficient visual search performance (e.g., Logan, 1994; Palmer, 1994), suggesting that the apprehension of spatial relations may require the selective allocation of attention within the scene. In the present study, we tested the hypothesis that depth relations might be different in this regard and might support efficient visual search. This hypothesis was based, in part, on the fact that many perceptual organization processes that are believed to occur early and in parallel, such as figure-ground segregation and perceptual completion, seem to depend on the assignment of depth relations. Despite this, however, using increasingly salient cues to depth (Experiments 2–4) and including a separate test of the sufficiency of the most salient depth cue used (Experiment 5), no evidence was found to indicate that search for a target defined by depth relations is any different than search for a target defined by other types of spatial relations, with regard to efficiency of search. These findings are discussed within the context of the larger literature on early processing of three-dimensional characteristics of visual scenes.

17.
Olivers and van der Helm (1998) showed that symmetry-defined visual search (for both symmetry and asymmetry) requires selective spatial attention. We hypothesize that an attentional set for the orientation of a symmetry axis also is involved in symmetry-defined visual search. We conducted three symmetry-defined visual search experiments with manipulations of the axis of symmetry orientations, and performance was better when the axis orientations within the search array were uniform, rather than a mixture of two orientations, and the attentional set for the axis orientation could be kept. In addition, search performance when the target was defined by the presence of symmetry was equivalent to that when the target was defined by a difference of symmetry axis orientation. These results suggest that attentional set for axis orientation plays a fundamental role in symmetry-defined visual search.

18.
Quinn PC, Perception, 2004, 33(8): 897-906
Four experiments were conducted to examine whether visual-orientation information is perceived categorically. In experiments 1 and 3, adult participants sorted oriented line stimuli into broad oblique and narrow vertical or horizontal categories. Experiments 2 and 4 showed that categorical discrimination of orientation occurred only near the vertical-oblique boundary. The data indicate that there is categorical perception near vertical and more continuous perception near horizontal. The results are relevant to the debate over whether categorical perception is derived from perceptual structure, verbal coding, or within-task learning. In addition, the asymmetrical perception of orientation around vertical and horizontal is consistent with the possibility that there may be differences in the functional significance of orientation near the two main axes.

19.
Two object-naming experiments explored the influence of extrafoveal preview information and flanker object context on transsaccadic object identification. Both the presence of an extrafoveal preview of the target object and the contextual constraint provided by extrafoveal flanker objects were found to influence the speed of object identification, but the latter effect occurred only when an extrafoveal preview of the target object was not presented prior to fixation. The context effect was found to be due to facilitation from related flankers rather than inhibition from unrelated flankers. No evidence was obtained for the hypothesis that constraining context can increase the usefulness of an extrafoveal preview of a to-be-fixated object.

20.
The Hatfield Polytechnic, Hatfield, Herts AL10 9AB, England. The experiment utilized a serial choice reaction time (RT) paradigm in which only one alphanumeric stimulus was presented per trial, and the target set consisted of a single identified item. The categorical relationship between the target and nontarget items was varied as a property of blocks of trials. Target and nontarget RTs were shorter when the specified target item (e.g., the number 6) was categorically distinct from the nontargets (e.g., letters) than when it was from the same category (e.g., digits). The processing of catch-trial stimuli (items from the alternate category to the nontargets) and homographic category-ambiguous items was inhibited only in the former, between-category, condition. The results are contrasted with those obtained in visual search tasks. They suggest that a "locational-cue" explanation of alphanumeric category effects is inadequate.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号