Similar Articles
 20 similar articles found.
1.
2.
This study contrasted the role of surfaces and volumetric shape primitives in three-dimensional object recognition. Observers (N = 50) matched subsets of closed contour fragments, surfaces, or volumetric parts to whole novel objects during a whole–part matching task. Three factors were further manipulated: part viewpoint (either same or different between component parts and whole objects), surface occlusion (comparison parts contained either visible surfaces only, or a surface that was fully or partially occluded in the whole object), and target–distractor similarity. Similarity was varied in terms of systematic variation in nonaccidental (NAP) or metric (MP) properties of individual parts. Analysis of sensitivity (d′) showed a whole–part matching advantage for surface-based parts and volumes over closed contour fragments, but no benefit for volumetric parts over surfaces. We also found a performance cost in matching volumetric parts to wholes when the volumes showed surfaces that were occluded in the whole object. The same pattern was found for both same and different viewpoints, and regardless of target–distractor similarity. These findings challenge models in which recognition is mediated by volumetric part-based shape representations. Instead, we argue that the results are consistent with a surface-based model of high-level shape representation for recognition.
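The sensitivity measure d′ used in this abstract comes from signal detection theory: it is the difference between the z-transformed hit rate and false-alarm rate. The sketch below only illustrates that textbook formula, not the authors' analysis code; the trial counts and the log-linear correction for extreme rates are assumptions made for the example.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to every cell) is applied so that rates
    of exactly 0 or 1 do not produce infinite z-scores; this correction is an
    assumption for illustration, not necessarily the one used in the study.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical example: 42 hits on 50 target trials, 8 false alarms on 50 distractor trials.
print(round(d_prime(hits=42, misses=8, false_alarms=8, correct_rejections=42), 2))
```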

3.
Priming visual face-processing mechanisms: electrophysiological evidence (cited 4 times: 0 self, 4 by others)
Accumulated evidence from electrophysiology and neuroimaging suggests that face perception involves extrastriate visual mechanisms specialized in processing physiognomic features and building a perceptual representation that is categorically distinct and can be identified by face-recognition units. In the present experiment, we recorded event-related brain potentials in order to explore possible contextual influences on the activity of this perceptual mechanism. Subjects were first exposed to pairs of small shapes, which did not elicit any face-specific brain activity. The same stimuli, however, elicited face-specific brain activity after subjects saw them embedded in schematic faces, which probably primed the subjects to interpret the shapes as schematic eyes. No face-specific activity was observed when objects rather than faces were used to form the context. We conclude that the activity of face-specific extrastriate perceptual mechanisms can be modulated by contextual constraints that determine the significance of the visual input.

4.
The impact of age of acquisition (AoA) on object recognition was explored in three experiments measuring visual duration threshold (VDT) for the identification of pictures labelled with early and late acquired names. Participants viewed briefly displayed images preceded and followed by a pattern mask. The minimum display duration required for correct identification was shorter for pictures labelled with early names than for those labelled with late names. In Experiments 2 and 3 we explored the effects of two forms of visual degradation on VDT for pictures with early and late acquired names. Both degradation by superimposed visual elements and degradation by contrast reduction extended VDT, but only the former interacted with AoA. We conclude that both AoA and degradation by superimposed visual elements affect the efficiency of visual object recognition, but only degradation by contrast and not AoA affects the efficiency of earlier pre-recognition processes.
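A visual duration threshold of this kind is simply the shortest masked exposure at which a picture is named correctly. The sketch below shows one plausible ascending procedure for estimating it; the callback name, step size, and starting duration are invented for illustration, and the original experiments may well have used a different (e.g. staircase) procedure.

```python
def visual_duration_threshold(present_and_check, start_ms=17, step_ms=17, max_ms=500):
    """Estimate a VDT by lengthening the masked exposure until naming succeeds.

    `present_and_check(duration_ms)` is a hypothetical callback that displays
    the masked picture for `duration_ms` milliseconds and returns True if the
    participant identifies it correctly.
    """
    duration = start_ms
    while duration <= max_ms:
        if present_and_check(duration):
            return duration      # shortest exposure that yielded a correct response
        duration += step_ms      # otherwise lengthen the exposure and try again
    return None                  # never identified within the allowed range
```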

5.
Rats' exploration of stimulus P (e.g., a domestic object) is reduced following either its direct exposure or its indirect exposure and is taken to indicate recognition memory. Procedures for demonstrating indirect object recognition involve an initial presentation of object P with stimulus X (and of an object Q with stimulus Y). On test, stimulus X is presented with objects P and Q and rats' exploration of Q exceeds their exploration of P. One interpretation here is that the presentation of stimulus X on test associatively activates the memory of object P, which diminishes exploration of P relative to Q. It is possible, instead, that performance is simply the result of a novel pattern of stimulation generated by the unfamiliar combination of X and Q. The authors modified this procedure to reduce the likelihood of such a process. Their procedure involved first the presentation of PX and QY before the presentation of stimulus X alone. During the test that followed, objects P and Q were presented but stimulus X was removed. The authors found that exploration of Q remained greater than that of P despite these modifications and discuss some theoretical implications of indirect, associative processes in recognition memory.

6.
7.
The role of sensory-motor representations in object recognition was investigated in experiments involving AD, a patient with mild visual agnosia who was impaired in the recognition of visually presented living as compared to non-living entities. AD named visually presented items for which sensory-motor information was available significantly more reliably than items for which such information was not available; this was true when all items were non-living. Naming of objects from their associated sound was normal. These data suggest that both information about object form computed in the ventral visual system as well as sensory-motor information specifying the manner of manipulation contribute to object recognition.

8.
Spatial representations in the visual system were probed in 4 experiments involving A. H., a woman with a developmental deficit in localizing visual stimuli. Previous research (M. McCloskey et al., 1995) has shown that A. H.'s localization errors take the form of reflections across a central vertical or horizontal axis (e.g., a stimulus 30 degrees to her left localized to a position 30 degrees to her right). The present experiments demonstrate that A. H.'s errors vary systematically as a function of where her attention is focused, independent of how her eyes, head, or body are oriented, or what potential reference points are present in the visual field. These results suggest that the normal visual system constructs attention-referenced spatial representations, in which the focus of attention defines the origin of a spatial coordinate system. A more general implication is that some of the brain's spatial representations take the form of coordinate systems.

9.
We report a series of experiments designed to demonstrate that the presentation of a sound can facilitate the identification of a concomitantly presented visual target letter in the backward masking paradigm. Two visual letters, serving as the target and its mask, were presented successively at various interstimulus intervals (ISIs). The results demonstrate that the crossmodal facilitation of participants' visual identification performance elicited by the presentation of a simultaneous sound occurs over a very narrow range of ISIs. This critical time-window lies just beyond the interval needed for participants to differentiate the target and mask as constituting two distinct perceptual events (Experiment 1) and can be dissociated from any facilitation elicited by making the visual target physically brighter (Experiment 2). When the sound is presented at the same time as the mask, a facilitatory, rather than an inhibitory effect on visual target identification performance is still observed (Experiment 3). We further demonstrate that the crossmodal facilitation of the visual target by the sound depends on the establishment of a reliable temporally coincident relationship between the two stimuli (Experiment 4); however, by contrast, spatial coincidence is not necessary (Experiment 5). We suggest that when visual and auditory stimuli are always presented synchronously, a better-consolidated object representation is likely to be constructed (than that resulting from unimodal visual stimulation).

10.
A fundamental question of memory is whether the representations of different items are stored in localist/discrete or superimposed/overlapping manners. Neural evidence suggests that neocortical areas underlying visual object identification utilize superimposed representations that undergo continual adjustments, but there has been little corroborating behavioral evidence. We hypothesize that the representation of an object is strengthened, after it is identified, via small representational changes; this strengthening is responsible for repetition priming for that object, but it should also be responsible for antipriming of other objects that have representations superimposed with that of the primed object. Functional evidence for antipriming is reported in young adults, amnesic patients, and matched control participants, and neurocomputational models. The findings from patients dismiss explicit memory explanations, and the models fit the behavioral performance exceptionally well. Putative purposes of priming and comparisons with other theories are discussed. Priming and antipriming may reflect ongoing adjustments of superimposed representations in neocortex.
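The priming/antipriming logic can be illustrated with a toy model of superimposed storage: two objects share one weight vector, and nudging that vector toward the just-identified object raises its own match score (priming) while lowering the score of the object whose representation overlaps with it (antipriming). This is a deliberately minimal sketch, not the authors' neurocomputational model; the feature vectors, learning rate, and similarity measure are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two previously learned objects whose feature vectors partially overlap.
obj_a = rng.normal(size=100)
obj_b = 0.6 * obj_a + 0.8 * rng.normal(size=100)

# Superimposed storage: a single shared weight vector holds both objects.
weights = (obj_a + obj_b) / 2.0

def match(pattern, w):
    """Crude 'identification strength': cosine similarity with the shared weights."""
    return float(pattern @ w) / (np.linalg.norm(pattern) * np.linalg.norm(w))

before_a, before_b = match(obj_a, weights), match(obj_b, weights)

# Identifying object A slightly retunes the shared weights toward A's representation.
weights += 0.2 * (obj_a - weights)

after_a, after_b = match(obj_a, weights), match(obj_b, weights)
print(f"object A: {before_a:.3f} -> {after_a:.3f}  (repetition priming: match improves)")
print(f"object B: {before_b:.3f} -> {after_b:.3f}  (antipriming: match for the overlapping object degrades)")
```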

11.
We report evidence from visual search that people can develop robust representations for highly overlearned faces. When observers searched for their own face versus the face of an unfamiliar observer, search slopes and intercepts revealed consistently faster processing of self than stranger. These processing advantages persisted even after hundreds of presentations of the unfamiliar face and even for atypical profile and upside-down views. Observers not only showed rapid asymptotic recognition of their own face as the target, but could reject their own face more quickly as the distractor. These findings suggest that robust representations for a highly overlearned face may (a) mediate rapid asymptotic visual processing, (b) require extensive experience to develop, (c) contain abstract or view-invariant information, (d) facilitate a variety of processes such as target recognition and distractor rejection, and (e) demand fewer attentional resources.
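Search slopes and intercepts of the kind reported here are conventionally obtained by regressing mean correct response time on display set size. The sketch below uses invented RT values purely to show the computation; it is not the authors' data or analysis code.

```python
import numpy as np

# Hypothetical mean correct RTs (ms) at each display set size.
set_sizes   = np.array([2, 4, 6, 8])
rt_self     = np.array([540, 560, 585, 600])   # searching for one's own face
rt_stranger = np.array([580, 640, 700, 755])   # searching for a stranger's face

for label, rts in (("self", rt_self), ("stranger", rt_stranger)):
    slope, intercept = np.polyfit(set_sizes, rts, deg=1)   # RT = slope * set size + intercept
    print(f"{label:8s} slope = {slope:5.1f} ms/item, intercept = {intercept:6.1f} ms")
```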

12.
Harris IM, Dux PE. Cognition, 2005, 95(1): 73-93.
The question of whether object recognition is orientation-invariant or orientation-dependent was investigated using a repetition blindness (RB) paradigm. In RB, the second occurrence of a repeated stimulus is less likely to be reported, compared to the occurrence of a different stimulus, if it occurs within a short time of the first presentation. This failure is usually interpreted as a difficulty in assigning two separate episodic tokens to the same visual type. Thus, RB can provide useful information about which representations are treated as the same by the visual system. Two experiments tested whether RB occurs for repeated objects that were either in identical orientations, or differed by 30, 60, 90, or 180 degrees. Significant RB was found for all orientation differences, consistent with the existence of orientation-invariant object representations. However, under some circumstances, RB was reduced or even eliminated when the repeated object was rotated by 180 degrees, suggesting easier individuation of the repeated objects in this case. A third experiment confirmed that the upside-down orientation is processed more easily than other rotated orientations. The results indicate that, although object identity can be determined independently of orientation, orientation plays an important role in establishing distinct episodic representations of a repeated object, thus enabling one to report them as separate events.

13.
Prosodic phonological representations early in visual word recognition (cited 2 times: 0 self, 2 by others)
Two experiments examined the nature of the phonological representations used during visual word recognition. We tested whether a minimality constraint (R. Frost, 1998) limits the complexity of early representations to a simple string of phonemes. Alternatively, readers might activate elaborated representations that include prosodic syllable information before lexical access. In a modified lexical decision task (Experiment 1), words were preceded by parafoveal previews that were congruent with a target's initial syllable as well as previews that contained 1 letter more or less than the initial syllable. Lexical decision times were faster in the syllable congruent conditions than in the incongruent conditions. In Experiment 2, we recorded brain electrical potentials (electroencephalograms) during single word reading in a masked priming paradigm. The event-related potential waveform elicited in the syllable congruent condition was more positive 250-350 ms posttarget compared with the waveform elicited in the syllable incongruent condition. In combination, these experiments demonstrate that readers process prosodic syllable information early in visual word recognition in English. They offer further evidence that skilled readers routinely activate elaborated, speechlike phonological representations during silent reading.

14.
Semantic interference from visual object recognition on visual imagery (cited 1 time: 0 self, 1 by others)
A new technique for examining the interaction between visual object recognition and visual imagery is reported. The "image-picture interference" paradigm requires participants to generate and make a response to a mental image of a previously memorized object, while ignoring a simultaneously presented picture distractor. Responses in 2 imagery tasks (making left-right higher spatial judgments and making taller-wider judgments) were longer when the simultaneous picture distractor was categorically related to the target, relative to unrelated and neutral target-distractor combinations. In contrast, performance was not influenced in this way when the distractor was a related word, when a semantic categorization decision was made to the target, or when distractor and target were visually but not categorically related to one another. The authors discuss these findings in terms of the semantic representations shared by visual object recognition and visual imagery that mediate performance.

15.
Under numerous circumstances, humans recognize visual objects in their environment with remarkable response times and accuracy. Existing artificial visual object recognition systems have not yet surpassed human vision, especially in its universality of application. We argue that modeling the recognition process in an exclusively feedforward manner hinders those systems' performance. To bridge the performance gap between them and human vision, we present a brief review of neuroscientific data, which suggests that recognition can be improved by considering an agent's internal influences (from cognitive systems that peripherally interact with visual-perceptual processes). We then propose a model for visual object recognition which uses information from these systems, such as affect, to generate expectations that prime the object recognition system, thus reducing its execution times. An implementation of the model is then described. Finally, we present and discuss an experiment and its results.
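The abstract does not spell out the proposed architecture, but the core idea of expectation-driven priming can be sketched as a top-down prior that narrows the set of candidate categories before the (expensive) recognizer is applied, which is what reduces execution time. All names, the prior, and the top-k cutoff below are hypothetical illustrations, not the authors' implementation.

```python
def recognize(image, candidates, score_fn):
    """Baseline: exhaustive feedforward recognition over every candidate category."""
    return max(candidates, key=lambda c: score_fn(image, c))

def recognize_with_expectation(image, candidates, score_fn, prior, top_k=3):
    """Primed recognition: an internally generated expectation (`prior`, a dict of
    category -> expected relevance) selects the top-k most expected categories, so
    far fewer score_fn evaluations are needed than in the exhaustive baseline."""
    expected = sorted(candidates, key=lambda c: prior.get(c, 0.0), reverse=True)[:top_k]
    return max(expected, key=lambda c: score_fn(image, c))
```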

16.
Previous studies have reported that longer stimulus presentation decreases the magnitude of priming. In the present study, we used meaningless kaleidoscope images, which were reported to minimize conceptual processing, to investigate the mechanism of the phenomenon. We assessed the impact of stimulus duration on perceptual priming (Experiment 1) and implicit recognition memory (Experiment 2). Both the magnitude of priming and the accuracy of implicit recognition were lower with the longer stimulus presentation (350 ms) compared with the shorter presentation (250 ms). This coincidence of temporal dynamics between priming and implicit recognition suggests similar underlying memory mechanisms. In both cases, the decrease of performance with longer presentation can be explained by either changes in perceptual processes or interference from explicit memory retrieval.

17.
Viewpoint dependence in visual and haptic object recognition (cited 5 times: 0 self, 5 by others)
On the whole, people recognize objects best when they see the objects from a familiar view and worse when they see the objects from views that were previously occluded from sight. Unexpectedly, we found haptic object recognition to be viewpoint-specific as well, even though hand movements were unrestricted. This viewpoint dependence was due to the hands preferring the back "view" of the objects. Furthermore, when the sensory modalities (visual vs. haptic) differed between learning an object and recognizing it, recognition performance was best when the objects were rotated back-to-front between learning and recognition. Our data indicate that the visual system recognizes the front view of objects best, whereas the hand recognizes objects best from the back.

18.
19.
Craddock M, Martinovic J, Lawson R. Perception, 2011, 40(10): 1154-1163.
In aperture viewing, the field of view is restricted such that only a small part of an image is visible, enforcing serial exploration of different regions of an object in order to successfully recognise it. Previous studies have used either active control or passive observation of the viewing aperture, but have not contrasted the two modes. Active viewing has previously been shown to confer an advantage in visual object recognition. We displayed objects through a small moveable aperture and tested whether people's ability to identify the images as familiar or novel objects was influenced by how the window location was controlled. Participants recognised objects faster when they actively controlled the window using their finger on a touch-screen, as opposed to passively observing the moving window. There was no difference between passively viewing again one's own window movement as generated in a previous block of trials versus viewing window movements that had been generated by other participants. These results contrast with those from comparable studies of haptic object recognition, which have found a benefit for passive over active stimulus exploration, but accord with findings of an advantage of active viewing in visual object recognition.

20.