Similar Articles
20 similar articles found (search time: 31 ms)
1.
In two experiments, participants were trained to recognize a playground scene from four vantage points and were subsequently asked to recognize the playground from a novel perspective between the four learned viewing perspectives, as well as from the trained perspectives. In both experiments, people recognized the novel view more efficiently than the views they had recently used to learn the scene. Additionally, in Experiment 2, participants who viewed a novel stimulus on their very first test trial correctly recognized it more quickly (and also tended to recognize it more accurately) than did participants whose first test trial was a familiar view of the scene. These findings call into question the idea that scenes are recognized by comparing them with single previous experiences, and support a growing body of literature on the existence of psychological mechanisms that combine spatial information from multiple views of a scene.

2.
VIEWPOINT DEPENDENCE IN SCENE RECOGNITION   (Cited by: 9; self-citations: 0; citations by others: 9)
Abstract— Two experiments investigated the viewpoint dependence of spatial memories. In Experiment 1, participants learned the locations of objects on a desktop from a single perspective and then took part in a recognition test; test scenes included familiar and novel views of the layout. Recognition latency was a linear function of the angular distance between a test view and the study view. In Experiment 2, participants studied a layout from a single view and then learned to recognize the layout from three additional training views. A final recognition test showed that the study view and the training views were represented in memory, and that latency was a linear function of the angular distance to the nearest study or training view. These results indicate that interobject spatial relations are encoded in a viewpoint-dependent manner, and that recognition of novel views requires normalization to the most similar representation in memory. These findings parallel recent results in visual object recognition.
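The latency pattern described in this abstract can be summarized in a single expression. The LaTeX sketch below is our illustrative formalization, not an equation taken from the paper; the symbols RT_0, k, and theta_i are assumptions introduced here.

% Illustrative formalization (not from the paper): recognition latency grows
% linearly with the angular distance to the nearest stored view.
\[
  RT(\theta) \approx RT_0 + k \cdot \min_{i} \lvert \theta - \theta_i \rvert
\]
% RT_0     : baseline latency for recognizing a stored view
% k        : assumed normalization cost per degree of rotation
% \theta_i : angular positions of the study and training views held in memory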

3.
When we recognize an object, do we automatically know how big it is in the world? We employed a Stroop-like paradigm, in which two familiar objects were presented at different visual sizes on the screen. Observers were faster to indicate which was bigger or smaller on the screen when the real-world size of the objects was congruent with the visual size than when it was incongruent, demonstrating a familiar-size Stroop effect. Critically, the real-world size of the objects was irrelevant for the task. This Stroop effect was also present when only one item was present at a congruent or incongruent visual size on the display. In contrast, no Stroop effect was observed for participants who simply learned a rule to categorize novel objects as big or small. These results show that people access the familiar size of objects without the intention of doing so, demonstrating that real-world size is an automatic property of object representation.

4.
We examined whether view combination mechanisms shown to underlie object and scene recognition can integrate visual information across views that have little or no three-dimensional information at either the object or scene level. In three experiments, people learned four “views” of a two-dimensional visual array derived from a three-dimensional scene. In Experiments 1 and 2, the stimuli were arrays of colored rectangles that preserved the relative sizes, distances, and angles among objects in the original scene, as well as the original occlusion relations. Participants recognized a novel central view more efficiently than any of the trained views, which in turn were recognized more efficiently than equidistant novel views. Experiment 2 eliminated presentation frequency as an explanation for this effect. Experiment 3 used colored dots that preserved only identity and relative location information, which resulted in a weaker effect, though still one that was inconsistent with both part-based and normalization accounts of recognition. We argue that, for recognition processes to function so effectively with such minimalist stimuli, view combination must be a very general and fundamental mechanism, potentially enabling both visual recognition and categorization.

5.
Research on contextual cueing has demonstrated that with simple arrays of letters and shapes, search for a target increases in efficiency as associations between a search target and its surrounding visual context are learned. We investigated whether the visual context afforded by repeated exposure to real-world scenes can also guide attention when the relationship between the scene and a target position is arbitrary. Observers searched for and identified a target letter embedded in photographs of real-world scenes. Although search time within novel scenes was consistent across trials, search time within repeated scenes decreased across repetitions. Unlike previous demonstrations of contextual cueing, however, memory for scene-target covariation was explicit. In subsequent memory tests, observers recognized repeated contexts more often than those that were presented once and displayed superior recall of target position within the repeated scenes. In addition, repetition of inverted scenes, which made the scenes more difficult to identify, produced a markedly reduced rate of learning, suggesting that semantic information concerning object and scene identity is used to guide attention.

6.
Two experiments examine a novel method of assessing face familiarity that does not require explicit identification of presented faces. Earlier research (Clutterbuck & Johnston, 2002; Young, Hay, McWeeny, Flude, & Ellis, 1985) has shown that different views of the same face can be matched more quickly for familiar than for unfamiliar faces. This study examines whether exposure to previously novel faces allows the speed with which they can be matched to increase, thus providing a means of assessing how faces become familiar. In Experiment 1, participants viewed two sets of unfamiliar faces, presented either for many short intervals or for a few long intervals. At test, previously familiar (famous) faces were matched more quickly than novel faces or learned faces. In addition, learned faces seen on many brief occasions were matched more quickly than novel faces or faces seen on fewer, longer occasions. However, this was observed only when participants performed “different” decision matches. In Experiment 2, the similarity between face pairs was controlled more strictly. Once again, matches were performed more quickly on familiar faces than on unfamiliar or learned items. However, matches made to learned faces were significantly faster than those made to completely novel faces. This was now observed for both same and different match decisions. The use of this matching task as a means of tracking how unfamiliar faces become familiar is discussed.

7.
The extent to which nonhumans recognize the correspondence between static pictures and the objects they represent remains an interesting and controversial issue. Pictures displayed on computers are used extensively for research on behavioral and neural mechanisms of cognition in birds, yet attempts to show that birds recognize the objects seen in pictures have produced mixed and inconclusive results. We trained pigeons to discriminate between two identically colored but differently shaped three-dimensional objects seen directly or as pictures, and we found clear bidirectional transfer of the learned object discrimination. Transfer from objects to pictures occurred even when pigeons were trained with 12 views and only novel views of the objects were presented in transfer. This study provides the strongest evidence yet that pigeons can recognize the correspondence between objects and pictures.

8.
In cluttered scenes, some object boundaries may not be marked by image cues. In such cases, the boundaries must be defined top-down as a result of object recognition. Here we ask if observers can retain the boundaries of several recognized objects in order to segment an unfamiliar object. We generated scenes consisting of neatly stacked objects, and the objects themselves consisted of neatly stacked coloured blocks. Because the blocks were stacked the same way within and across objects, there were no visual cues indicating which blocks belonged to which objects. Observers were trained to recognize several objects and we tested whether they could segment a novel object when it was surrounded by these familiar, studied objects. The observer's task was to count the number of blocks comprising the target object. We found that observers were able to accurately count the target blocks when the target was surrounded by up to four familiar objects. These results indicate that observers can use the boundaries of recognized objects in order to accurately segment, top-down, a novel object.

9.
When faces are learned from rotating view sequences, novel views may be recognized by matching them with an integrated representation of the sequence or with individual views. An integrated-representation process should benefit from short view durations, and thus from the inclusion of views in a short temporal window, allowing the distribution of attention over the entire sequence. A view-matching process should benefit from long view durations, allowing the attention to focus on each view. In a sequential comparison task, we tested the recognition of learned and novel interpolated and extrapolated views after learning faces from rapid and slow sequences (240 ms or 960 ms for each view). We found a superiority of rapid over slow sequences, in favour of the integrated-representation hypothesis. In addition, the recognition pattern for the different viewpoints in the sequence depended on the absence or presence of extrapolated views, showing a bias of the distribution of attention.

10.
Covert face recognition in neurologically intact participants   (Cited by: 1; self-citations: 0; citations by others: 1)
Prosopagnosic patients may maintain some ability to recognize familiar faces, although they remain unaware of this ability. This phenomenon – called covert face recognition – was investigated in neurologically intact participants, using priming techniques. Participants were quicker to indicate that a target-name was familiar when the preceding prime-face belonged to the same person compared with an unrelated familiar person. This was observed both when prime-faces could be recognized overtly and when they were presented too briefly to be recognized overtly (Exps. 1 and 2). Thus, covert face recognition was observed in neurologically intact participants. In Exp. 3, participants were quicker to recognize a familiar face when that person's face had been seen previously, but only when it had been recognized overtly on the first encounter. These results are interpreted within the framework of an interactive activation model of face recognition. Received: 21 September 1998 / Accepted: 2 March 1999

11.
Two experiments dissociated the roles of intrinsic orientation of a shape and participants’ study viewpoint in shape recognition. In Experiment 1, participants learned shapes with a rectangular background that was oriented differently from their viewpoint, and then recognized target shapes, which were created by splitting study shapes along different intrinsic axes, at different views. Results showed that recognition was quicker when the study shapes were split along the axis parallel to the orientation of the rectangular background than when they were split along the axis parallel to participants’ viewpoint. In Experiment 2, participants learned shapes without the rectangular background. The results showed that recognition was quicker when the study shape was split along the axis parallel to participants’ viewpoint. In both experiments, recognition was quicker at the study view than at a novel view. An intrinsic model of object representation and recognition was proposed to explain these findings.

12.
In three experiments, we explored how pigeons use edges, corresponding to orientation and depth discontinuities, in visual recognition tasks. In experiment 1, we compared the pigeon's ability to recognize line drawings of four different geons when trained with shaded images. The birds were trained with either a single view or five different views of each object. Because the five training views had markedly different appearances and locations of shaded surfaces, reflectance edges, etc., the pigeons might have been expected to rely more on the orientation and depth discontinuities that were preserved over rotation and in the line drawings. In neither condition, however, was there any transfer from the rendered images to the outline drawings. In experiment 2, some pigeons were trained with line drawings and shaded images of the same objects associated with the same response (consistent condition), whereas other pigeons were trained with a line drawing and a shaded image of two different objects associated with the same response (inconsistent condition). If the pigeons perceived any correspondence between the stimulus types, then birds in the consistent condition should have learned the discrimination more quickly than birds in the inconsistent condition. But there was no difference in performance between birds in the consistent and inconsistent conditions. In experiment 3, we explored pigeons' processing of edges by comparing their discrimination of shaded images or line drawings of four objects. Once trained, the pigeons were tested with planar rotations of those objects. The pigeons exhibited different patterns of generalization depending on whether they were trained with line drawings or shaded images. The results of these three experiments suggest that pigeons may place greater importance on surface features indicating materials, such as food or water. Such substances do not have definite boundaries cued by edges, which are thought to be central to human recognition.

13.
Current theories of object recognition in human vision make different predictions about whether the recognition of complex, multipart objects should be influenced by shape information about surface depth orientation and curvature derived from stereo disparity. We examined this issue in five experiments using a recognition memory paradigm in which observers (N = 134) memorized and then discriminated sets of 3D novel objects at trained and untrained viewpoints under either mono or stereo viewing conditions. In order to explore the conditions under which stereo-defined shape information contributes to object recognition, we systematically varied the difficulty of view generalization by increasing the angular disparity between trained and untrained views. In one series of experiments, objects were presented from either previously trained views or untrained views rotated (15°, 30°, or 60°) along the same plane. In separate experiments we examined whether view generalization effects interacted with the vertical or horizontal plane of object rotation across 40° viewpoint changes. The results showed robust viewpoint-dependent performance costs: Observers were more efficient in recognizing learned objects from trained than from untrained views, and recognition was worse for extrapolated than for interpolated untrained views. We also found that performance was enhanced by stereo viewing, but only at larger angular disparities between trained and untrained views. These findings show that object recognition is not based solely on 2D image information but that it can be facilitated by shape information derived from stereo disparity.

14.
In this study, we examined the effect of within-category diversity on people's ability to learn perceptual categories, their inclination to generalize categories to novel items, and their ability to distinguish new items from old. After learning to distinguish a control category from an experimental category that was either clustered or diverse, participants performed a test of category generalization or old-new recognition. Diversity made learning more difficult, increased generalization to novel items outside the range of training items, and made it difficult to distinguish such novel items from familiar ones. Regression analyses using the generalized context model suggested that the results could be explained in terms of similarities between old and new items combined with a rescaling of the similarity space that varied according to the diversity of the training items. Participants who learned the diverse category were less sensitive to psychological distance than were the participants who learned a more clustered category.
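For context, the core similarity rule of the generalized context model mentioned above is sketched below in LaTeX. This is a standard textbook statement of the model; the mapping of its sensitivity parameter onto this particular study is our reading of the abstract rather than a quotation from the paper.

% Standard GCM similarity rule (illustrative sketch, not taken from the paper):
\[
  \eta_{ij} = \exp(-c\, d_{ij}), \qquad
  S(i) = \sum_{j \in \text{stored exemplars}} \eta_{ij}
\]
% d_{ij} : psychological distance between test item i and stored exemplar j
% c      : sensitivity parameter; a smaller c makes similarity fall off more slowly
%          with distance, one way to capture the "rescaling of the similarity space"
%          and the reduced sensitivity reported for the diverse-category group
% S(i)   : summed similarity, which drives generalization and old-new recognition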

15.
Human spatial encoding of three-dimensional navigable space was studied using a virtual environment simulation. This allowed subjects to become familiar with a realistic scene by making simulated rotational and translational movements during training. Subsequent tests determined whether subjects could generalize their recognition ability by identifying novel-perspective views and topographic floor plans of the scene. Results from picture recognition tests showed that familiar direction views were most easily recognized, although significant generalization to novel views was observed. Topographic floor plans were also easily identified. In further experiments, novel-view performance diminished when active training was replaced by passive viewing of static images of the scene. However, the ability to make self-initiated movements, as opposed to watching dynamic movie sequences, had no effect on performance. These results suggest that representation of navigable space is view dependent and highlight the importance of spatial-temporal continuity during learning.

16.
17.
Traditional conceptions of spoken language assume that speech recognition and talker identification are computed separately. Neuropsychological and neuroimaging studies imply some separation between the two faculties, but recent perceptual studies suggest better talker recognition in familiar languages than in unfamiliar languages. A familiar-language benefit in talker recognition potentially implies strong ties between the two domains. However, little is known about the nature of this language familiarity effect. The current study investigated the relationship between speech and talker processing by assessing bilingual and monolingual listeners’ ability to learn voices as a function of language familiarity and age of acquisition. Two effects emerged. First, bilinguals learned to recognize talkers in their first language (Korean) more rapidly than they learned to recognize talkers in their second language (English), while English-speaking participants showed the opposite pattern (learning English talkers faster than Korean talkers). Second, bilinguals’ learning rate for talkers in their second language (English) correlated with age of English acquisition. Taken together, these results suggest that language background materially affects talker encoding, implying a tight relationship between speech and talker representations.

18.
Faces learned from multiple viewpoints are recognized better with left than right three-quarter views. This left-view superiority could be explained by perceptual experience, facial asymmetry, or hemispheric specialization. In the present study, we investigated whether left-view sequences are also more effective in recognizing same and novel views of a face. In a sequential matching task, a view sequence showing a face rotating around a left (−30°) or a right (+30°) angle, with an amplitude of 30°, was followed by a static test view with the same viewpoint as the sequence (−30° or +30°) or with a novel one (0°, +30°, or −30°). We found a superiority of left-view sequences independently of the test viewpoint, but no superiority of left over right test views. These results do not seem compatible with the perceptual experience hypothesis, which predicts superiority only for left-side test views (−30°). Also, a facial asymmetry judgement task showed no correlation between the asymmetry of individual faces and the left-view sequence superiority. A superiority of left-view sequences for novel as well as same test views argues in favour of an explanation by hemispheric specialization, because of the possible role of the right hemisphere in extracting facial identity information.

19.
One critical aspect of learning is the ability to apply learned knowledge to new situations. This ability to transfer is often limited, and its development is not well understood. The current research investigated the development of transfer between 8 and 16 months of age. In Experiment 1, 8- and 16-month-olds (who were established to have a preference for the beginning of a visual sequence) were trained to attend to the end of a sequence. They were then tested on novel visual sequences. Results indicated transfer of learning, with both groups changing baseline preferences as a result of training. In Experiment 2, participants were trained to attend to the end of a visual sequence and were then tested on an auditory sequence. Unlike Experiment 1, only older participants exhibited transfer of learning by changing baseline preferences. These findings suggest that the generalization of learning becomes broader with development, with transfer across modalities developing later than transfer within a modality.

20.
The ability to recognize the same image projected to different retinal locations is critical for visual object recognition in natural contexts. According to many theories, the translation invariance for objects extends only to trained retinal locations, so that a familiar object projected to a nontrained location should not be identified. In another approach, invariance is achieved “online,” such that learning to identify an object in one location immediately affords generalization to other locations. We trained participants to name novel objects at one retinal location using eyetracking technology and then tested their ability to name the same images presented at novel retinal locations. Across three experiments, we found robust generalization. These findings provide a strong constraint for theories of vision.

