Similar Articles
20 similar articles retrieved.
1.
G. W. Humphreys & E. M. Forde, The Behavioral and Brain Sciences, 2001, 24(3): 453-476; discussion 476-509
Category-specific impairments of object recognition and naming are among the most intriguing disorders in neuropsychology, affecting the retrieval of knowledge about either living or nonliving things. They can give us insight into the nature of our representations of objects: Have we evolved different neural systems for recognizing different categories of object? What kinds of knowledge are important for recognizing particular objects? How does visual similarity within a category influence object recognition and representation? What is the nature of our semantic knowledge about different objects? We review the evidence on category-specific impairments, arguing that deficits even for one class of object (e.g., living things) cannot be accounted for in terms of a single information processing disorder across all patients; problems arise at contrasting loci in different patients. The same apparent pattern of impairment can be produced by damage to different loci. According to a new processing framework for object recognition and naming, the hierarchical interactive theory (HIT), we have a hierarchy of highly interactive stored representations. HIT explains the variety of patients in terms of (1) lesions at different levels of processing and (2) different forms of stored knowledge used both for particular tasks and for particular categories of object.

2.
How good are we at recognizing objects by touch? Intuition may suggest that the haptic system is a poor recognition device, and previous research with nonsense shapes and tangible-graphics displays supports this opinion. We argue that the recognition capabilities of touch are best assessed with three-dimensional, familiar objects. The present study provides a baseline measure of recognition under those circumstances, and it indicates that haptic object recognition can be both rapid and accurate.

3.
How do observers recognize objects after spatial transformations? Recent neurocomputational models have proposed that object recognition is based on coordinate transformations that align memory and stimulus representations. If the recognition of a misoriented object is achieved by adjusting a coordinate system (or reference frame), then recognition should be facilitated when the object is preceded by a different object in the same orientation. In the two experiments reported here, two objects were presented in brief masked displays in close temporal contiguity; the objects were in either congruent or incongruent picture-plane orientations. Results showed that naming accuracy was higher for congruent than for incongruent orientations. The congruency effect was independent of superordinate category membership (Experiment 1) and was found for objects with different main axes of elongation (Experiment 2). The results indicate congruency effects for common familiar objects even when they have dissimilar shapes. These findings are compatible with models in which object recognition is achieved by an adjustment of a perceptual coordinate system.
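The coordinate-transformation account described in this abstract can be made concrete with a short, hypothetical sketch. The rotation function and the linear adjustment-cost rule below are illustrative assumptions, not the model tested in the experiments: recognizing a misoriented object is treated as realigning a reference frame to the stimulus orientation, so a prime shown in a congruent orientation leaves little residual adjustment.

```python
import numpy as np

def rotate(points: np.ndarray, theta_deg: float) -> np.ndarray:
    """Rotate 2-D points (N x 2) in the picture plane by theta degrees."""
    t = np.radians(theta_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return points @ R.T

def adjustment_cost(frame_deg: float, stimulus_deg: float) -> float:
    """Hypothetical cost of realigning the reference frame, assumed to grow
    with the smallest angular distance between frame and stimulus."""
    diff = abs(frame_deg - stimulus_deg) % 360.0
    return min(diff, 360.0 - diff)

# Toy object outline and a stimulus shown rotated 120 deg in the picture plane.
outline = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 2.0]])
stimulus = rotate(outline, 120.0)

# A prime at 120 deg leaves the frame already aligned with the stimulus,
# so the congruent trial needs no further adjustment; an upright frame does.
print(adjustment_cost(120.0, 120.0))  # 0.0   -> congruent prime, fast
print(adjustment_cost(0.0, 120.0))    # 120.0 -> incongruent prime, slow
```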

4.
How long does it take for a newborn to recognize an object? Adults can recognize objects rapidly, but measuring object recognition speed in newborns has not previously been possible. Here we introduce an automated controlled-rearing method for measuring the speed of newborn object recognition in controlled visual worlds. We raised newborn chicks (Gallus gallus) in strictly controlled environments that contained no objects other than a single virtual object, and then measured the speed at which the chicks could recognize that object from familiar and novel viewpoints. The chicks were able to recognize the object rapidly, at presentation rates of 125 ms per image. Further, recognition speed was equally fast whether the object was presented from familiar viewpoints or novel viewpoints (30° and 60° azimuth rotations). Thus, newborn chicks can recognize objects across novel viewpoints within a fraction of a second. These results demonstrate that newborns are capable of both rapid and invariant object recognition at the onset of vision.

5.
In an attempt to reconcile results of previous studies, several theorists have suggested that object recognition performance should range from viewpoint invariant to highly viewpoint dependent depending on how easy it is to differentiate the objects in a given recognition situation. The present study assessed recognition across depth rotations of a single general class of novel objects in three contexts that varied in difficulty. In an initial experiment, recognition in the context involving the most discriminable object differences was viewpoint invariant, but recognition in the least discriminable context and recognition in the intermediate context were equally viewpoint dependent. In a second experiment, utilizing gray-scale versions of the same stimuli, almost identical viewpoint-cost functions were obtained in all three contexts. These results suggest that differences in the geometry of stimulus objects, rather than task difficulty, lie at the heart of previously discrepant findings.

6.
Acta Psychologica, 2013, 143(1): 40-51
The fidelity of visual working memory was assessed for faces and non-face objects. In two experiments, four levels of memory load (1, 2, 3, or 4 items) were combined with four perceptual distances between probe and study items, with maximum item confusability occurring for the minimum memory load. Under these conditions, recognition memory for multiple faces exceeded that of a single face. This result was primarily due to the higher false alarm rates for faces than non-face objects, even though the two classes of stimuli had been matched for perceptual discriminability. Control experiments revealed that this counterintuitive result emerged only for old–new recognition choices based on near-threshold image differences. For non-face objects, instead, recognition performance decreased with increasing memory load. It is speculated that the low memorial discriminability of the transient properties of a face may serve the purpose of enhancing recognition at the individual-exemplar level.
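The link this abstract draws between higher false-alarm rates and lower memorial discriminability can be illustrated with a standard signal-detection calculation. The hit and false-alarm rates below are invented for illustration and are not the paper's data:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Signal-detection sensitivity: z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Invented rates: with equal hit rates, a higher false-alarm rate for faces
# yields lower memorial discriminability even if the stimuli themselves are
# matched for perceptual discriminability.
print(round(d_prime(0.80, 0.35), 2))  # faces      -> ~1.23
print(round(d_prime(0.80, 0.15), 2))  # non-faces  -> ~1.88
```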

7.
Five experiments demonstrated that adults can identify certain novel views of 3-dimensional model objects on the basis of knowledge of a single perspective. Geometrically irregular contour (wire) and surface (clay) objects and geometrically regular surface (pipe) objects were accurately recognized when rotated 180 degrees about the vertical (y) axis. However, recognition accuracy was poor for all types of objects when rotated around the y-axis by 90 degrees. Likewise, more subtle rotations in depth (i.e., 30 degrees and 60 degrees) induced decreases in recognition of both contour and surface objects. These results suggest that accurate recognition of objects rotated in depth by 180 degrees may be achieved through use of information in objects' 2-dimensional bounding contours, the shapes of which remain invariant over flips in depth. Consistent with this interpretation, a final study showed that even slight rotations away from 180 degrees cause precipitous drops in recognition accuracy.

8.
FROM BLOBS TO BOUNDARY EDGES:

9.
We investigated whether the relative position of objects and the body would influence haptic recognition. People felt objects on the right or left side of their body midline, using their right hand. Their head was turned towards or away from the object, and they could not see their hands or the object. People were better at naming 2-D raised line drawings and 3-D small-scale models of objects and also real, everyday objects when they looked towards them. However, this head-towards benefit was reliable only when their right hand crossed their body midline to feel objects on their left side. Thus, haptic object recognition was influenced by people's head position, although vision of their hand and the object was blocked. This benefit of turning the head towards the object being explored suggests that proprioceptive and haptic inputs are remapped into an external coordinate system and that this remapping is harder when the body is in an unusual position (with the hand crossing the body midline and the head turned away from the hand). The results indicate that haptic processes align sensory inputs from the hand and head even though either hand-centered or object-centered coordinate systems should suffice for haptic object recognition.

10.
The ability to distinguish people from things sheds light on an important theoretical question: how is the development of social cognition related to the development of physical cognition? According to Piaget (1954), cognition is unitary and the processes used in dealing with the physical world are the same as those employed in the social world. This statement should be questioned. Although people and objects share certain fundamental properties (size, shape, etc.), only people can communicate, act independently and have feelings and intentions. Thus, people seem much more complex to deal with than things. If all cognitive development derives from the growth of a unitary system, then knowledge about animate objects should lag behind that of inanimate objects. The present paper explores this idea by examining what infants know about the attributes that distinguish people from things. It is concluded that the onset of this distinction begins early in life. Even 2-month-old infants treat people and objects differently when confounding variables of the stimuli are controlled. Rather than lagging behind, the infants' understanding of people appears precocious. The infants' recognition of the crucial distinction between the two classes suggests that a conceptual system is beginning to be formed soon after birth. This conceptual system appears different for social and non-social objects and serves as a foundation from which infants might come to understand the distinctive properties of animate and inanimate objects.

11.
E. C. Leek, Perception, 1998, 27(7): 803-816
How does the visual system recognise stimuli presented at different orientations? According to the multiple-views hypothesis, misoriented objects are matched to one of several orientation-specific representations of the same objects stored in long-term memory. Much of the evidence for this hypothesis comes from the observation of group mean orientation effects in recognition memory tasks showing that the time taken to identify objects increases as a function of the angular distance between the orientation of the stimulus and its nearest familiar orientation. The aim in this paper is to examine the validity of this interpretation of group mean orientation effects. In particular, it is argued that analyses based on group performance averages that appear consistent with the multiple-views hypothesis may, under certain circumstances, obscure a different theoretically relevant underlying pattern of results. This problem is examined by using hypothetical data and through the detailed analysis of the results from an experiment based on a recognition memory task used in several previous studies. Although a pattern of results that is consistent with the multiple-views hypothesis was observed in both the group mean performance and the underlying data, it is argued that the potential limitations of analyses based solely on group performance averages must be considered in future studies that use orientation effects to make inferences about the kinds of shape representations that mediate visual recognition.
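The methodological point of this abstract can be illustrated with a small sketch using invented data (the response-time profiles below are hypothetical, not the paper's results): averaging heterogeneous individual orientation functions can produce a group mean that rises with angular distance even though no single observer shows a graded orientation effect.

```python
import numpy as np

# Angular distance (degrees) from the nearest familiar orientation.
orientations = np.arange(0, 181, 30)

# Invented individual response-time profiles (ms): each observer shows a
# single abrupt step at a different orientation rather than a graded increase.
observers = np.array([
    600 + 200 * (orientations > 30),
    600 + 200 * (orientations > 90),
    600 + 200 * (orientations > 150),
])

group_mean = observers.mean(axis=0)
for angle, rt in zip(orientations.tolist(), group_mean.tolist()):
    print(f"{angle:3d} deg: {rt:6.1f} ms")
# The group mean increases with angular distance in several small increments,
# mimicking a graded orientation effect that none of the individuals shows.
```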

12.
Numerous studies have shown that musicians outperform nonmusicians on a variety of tasks. Here we provide the first evidence that musicians have superior auditory recognition memory for both musical and nonmusical stimuli, compared to nonmusicians. However, this advantage did not generalize to the visual domain. Previously, we showed that auditory recognition memory is inferior to visual recognition memory. Would this be true even for trained musicians? We compared auditory and visual memory in musicians and nonmusicians using familiar music, spoken English, and visual objects. For both groups, memory for the auditory stimuli was inferior to memory for the visual objects. Thus, although considerable musical training is associated with better musical and nonmusical auditory memory, it does not increase the ability to remember sounds to the levels found with visual stimuli. This suggests a fundamental capacity difference between auditory and visual recognition memory, with a persistent advantage for the visual domain.

13.
Under numerous circumstances, humans recognize visual objects in their environment with remarkable response times and accuracy. Existing artificial visual object recognition systems have not yet surpassed human vision, especially in its universality of application. We argue that modeling the recognition process in an exclusively feedforward manner hinders those systems' performance. To bridge the performance gap between them and human vision, we present a brief review of neuroscientific data, which suggests that recognition can be improved by considering an agent's internal influences (from cognitive systems that peripherally interact with visual-perceptual processes). Then, we propose a model for visual object recognition which uses these systems' information, such as affection, for generating expectation to prime the object recognition system, thus reducing its execution times. Later, an implementation of the model is described. Finally, we present and discuss an experiment and its results.
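The general idea of expectation-based priming can be sketched in a few lines; the function, the ranking rule, and the toy matcher below are hypothetical placeholders for illustration only and are not the model proposed in the paper. Context-derived expectations rank candidate object labels so that the recognizer tests the most expected candidates first, which reduces the number of comparisons needed.

```python
def primed_recognize(image, candidates, expectation, matches):
    """Return the first matching label and the number of candidates tested."""
    ranked = sorted(candidates, key=lambda label: expectation.get(label, 0.0),
                    reverse=True)
    for tested, label in enumerate(ranked, start=1):
        if matches(image, label):
            return label, tested
    return None, len(ranked)

# Toy usage: in a "kitchen" context the expected labels are tried first,
# so the target is found after fewer comparisons than with an unordered scan.
expectation = {"cup": 0.9, "plate": 0.7, "car": 0.1}
label, tested = primed_recognize("img", ["car", "plate", "cup"], expectation,
                                 matches=lambda img, lab: lab == "cup")
print(label, tested)  # -> cup 1
```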

14.
The ability to recognize single letters, an important step in reading, is traditionally assumed to depend only on visual processes. However, like many of the objects surrounding us, letters are learned through a matching between a visual configuration and movements. We review arguments suggesting that the characteristics of writing movements impact the visual recognition of letters, at both the behavioral and neural levels. This impact might be especially strong when the orientation of letters has to be processed.

15.
Observers typically detect changes to central objects more readily than changes to marginal objects, but they sometimes miss changes to central, attended objects as well. However, even if observers do not report such changes, they may be able to recognize the changed object. In three experiments we explored change detection and recognition memory for several types of changes to central objects in motion pictures. Observers who failed to detect a change still performed at above chance levels on a recognition task in almost all conditions. In addition, observers who detected the change were no more accurate in their recognition than those who did not detect the change. Despite large differences in the detectability of changes across conditions, those observers who missed the change did not vary in their ability to recognize the changing object.

16.
17.
Studies of object-based attention have demonstrated poorer performance in dividing attention between two objects in a scene than in focusing attention on a single object. However, objects often are composed of several parts, and parts are central to theories of object recognition. Are parts also important for visual attention? That is, can attention be limited in the number of parts processed simultaneously? We addressed this question in four experiments. In Experiments 1 and 2, participants reported two attributes that appeared on the same part or on different parts of a single multipart object. Participants were more accurate in reporting attributes on the same part than attributes on different parts. This part-based effect was not influenced by the spatial distance between the parts, ruling out a simple spatial attention interpretation of our results. A control study revealed an effect of spatial distance, demonstrating that our spatial manipulation was sufficient to produce shifts of spatial attention. The absence of a distance effect in Experiments 1 and 2 therefore suggests that part-based attention may not rely entirely on simple shifts of spatial attention. Finally, in Experiment 4 we found evidence for part-based attention, using stimuli controlled for the distance between the parts of an object. The results of these experiments indicate that visual attention can selectively process the parts of an object. We discuss the relationship between parts and objects and the locus of part-based attentional selection.

18.
Three experiments investigated the effects of naming pictures of objects during study on the subsequent recognition of physically identical, name-match, and new objects. Prior naming improved correct classification of all three item types at recognition. For line drawings and for photographs of functionally distinct objects, prior naming reduced the tendency to confuse identical and same-name alternatives. In Experiment 2, prior naming eliminated the right visual field/left hemisphere advantage for speeded recognition of name-match pictures, suggesting that prior naming reduces the likelihood that pictures are named at recognition. The implications of these results for dual-encoding (Paivio, 1971) and sensory-semantic (Nelson, Reed, & McEvoy, 1977) models of picture and word processing are discussed. The results suggest that the semantic representations of objects that are perceptually distinct but share a common name are not identical, and that the effect of naming such objects is to ensure that a distinct semantic representation becomes a part of the resulting memory code.

19.
20.
Prominent theories of action recognition suggest that during the recognition of actions the physical pattern of the action is associated with only one action interpretation (e.g., a person waving his arm is recognized as waving). In contrast to this view, studies examining the visual categorization of objects show that objects are recognized in multiple ways (e.g., a VW Beetle can be recognized as a car or a beetle) and that categorization performance is based on the visual and motor movement similarity between objects. Here, we studied whether we find evidence for multiple levels of categorization for social interactions (physical interactions with another person, e.g., handshakes). To do so, we compared visual categorization of objects and social interactions (Experiments 1 and 2) in a grouping task and assessed the usefulness of motor and visual cues (Experiments 3, 4, and 5) for object and social interaction categorization. Additionally, we measured recognition performance associated with recognizing objects and social interactions at different categorization levels (Experiment 6). We found that basic level object categories were associated with a clear recognition advantage compared to subordinate recognition, but basic level social interaction categories provided only a small recognition advantage. Moreover, basic level object categories were more strongly associated with similar visual and motor cues than basic level social interaction categories. The results suggest that cognitive categories underlying the recognition of objects and social interactions are associated with different performances. These results are in line with the idea that the same action can be associated with several action interpretations (e.g., a person waving his arm can be recognized as waving or greeting).
