Similar Documents
20 similar documents found.
1.
Pigeons and humans were trained to discriminate between pictures of three-dimensional objects that differed in global shape. Each pair of objects was shown at two orientations that differed by a depth rotation of 90° during training. Pictures of the objects at novel depth rotations were then tested for recognition. The novel test rotations were 30°, 45°, and 90° from the nearest trained orientation and were either interpolated between the trained orientations or extrapolated outside of the training range. For both pigeons and humans, recognition accuracy and/or speed decreased as a function of distance from the nearest trained orientation. However, humans, but not pigeons, were more accurate in recognizing novel interpolated views than novel extrapolated views. The results suggest that pigeons’ recognition was based on independent generalization from each training view, whereas humans showed view-combination processes that resulted in a benefit for novel views interpolated between the training views.
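The interpolation/extrapolation distinction in this design can be made concrete with a short sketch. Assuming two trained depth orientations 90° apart, a novel test view counts as interpolated if it lies between them and extrapolated otherwise, and its angular distance from the nearest trained view can be computed; the orientation values below are illustrative, not the actual stimuli.

```python
# Illustrative sketch (hypothetical values, not the study's stimuli or analysis):
# with two trained depth orientations 90° apart, classify a novel test view as
# interpolated (between the trained views) or extrapolated (outside them) and
# compute its angular distance from the nearest trained view.

def angular_distance(a, b):
    """Smallest unsigned difference between two orientations, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def classify_test_view(test, trained=(0, 90)):
    lo, hi = sorted(trained)
    nearest = min(angular_distance(test, t) for t in trained)
    kind = "interpolated" if lo < test < hi else "extrapolated"
    return kind, nearest

for view in (30, 45, 120, 135):
    kind, dist = classify_test_view(view)
    print(f"test view {view:3d}°: {kind}, {dist}° from nearest trained view")
```

For trained views at 0° and 90°, a 45° test view is interpolated and 45° from either trained view, whereas a 135° test view is extrapolated yet equally far (45°) from its nearest trained view, which is what allows interpolation and extrapolation to be compared at matched angular distances.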

2.
Current theories of object recognition in human vision make different predictions about whether the recognition of complex, multipart objects should be influenced by shape information about surface depth orientation and curvature derived from stereo disparity. We examined this issue in five experiments using a recognition memory paradigm in which observers (N = 134) memorized and then discriminated sets of 3D novel objects at trained and untrained viewpoints under either mono or stereo viewing conditions. In order to explore the conditions under which stereo-defined shape information contributes to object recognition we systematically varied the difficulty of view generalization by increasing the angular disparity between trained and untrained views. In one series of experiments, objects were presented from either previously trained views or untrained views rotated (15°, 30°, or 60°) along the same plane. In separate experiments we examined whether view generalization effects interacted with the vertical or horizontal plane of object rotation across 40° viewpoint changes. The results showed robust viewpoint-dependent performance costs: Observers were more efficient in recognizing learned objects from trained than from untrained views, and recognition was worse for extrapolated than for interpolated untrained views. We also found that performance was enhanced by stereo viewing but only at larger angular disparities between trained and untrained views. These findings show that object recognition is not based solely on 2D image information but that it can be facilitated by shape information derived from stereo disparity.

3.
Echolocating bottlenose dolphins (Tursiops truncatus) discriminate between objects on the basis of the echoes reflected by the objects. However, it is not clear which echo features are important for object discrimination. To gain insight into the salient features, the authors had a dolphin perform a match-to-sample task and then presented human listeners with echoes from the same objects used in the dolphin's task. In two experiments, human listeners performed as well as or better than the dolphin at discriminating objects, and they reported the salient acoustic cues. The error patterns of the humans and the dolphin were compared to determine which acoustic features were likely to have been used by the dolphin. The results indicate that the dolphin did not appear to use overall echo amplitude, but that it attended to the pattern of changes in the echoes across different object orientations. Human listeners can quickly identify salient combinations of echo features that permit object discrimination, which can be used to generate hypotheses that can then be tested with dolphins as subjects.

4.
Achieving visual object constancy across plane rotation and depth rotation.
R. Lawson, Acta Psychologica, 1999, 102(2-3), 221-245
Visual object constancy is the ability to recognise an object from its image despite variation in the image when the object is viewed from different angles. I describe research which probes the human visual system's ability to achieve object constancy across plane rotation and depth rotation. I focus on the ecologically important case of recognising familiar objects, although the recognition of novel objects is also discussed. Cognitive neuropsychological studies of patients with specific deficits in achieving object constancy are reviewed, in addition to studies which test neurally intact subjects. In certain cases, the recognition of invariant features allows objects to be recognised irrespective of the view depicted, particularly if small, distinctive sets of objects are presented repeatedly. In contrast, in most situations, recognition is sensitive to both the in-plane and the in-depth view from which an object is depicted. This result suggests that multiple, view-specific, stored representations of familiar objects are accessed in everyday, entry-level visual recognition, or that transformations such as mental rotation or interpolation are used to transform between retinal images of objects and view-specific, stored representations.

5.
Priming effects on the object possibility task, in which participants decide whether line drawings could or could not be possible three-dimensional objects, may be supported by the same processes and representations used in recognizing and identifying objects. Three experiments manipulating objects’ picture-plane orientation provided limited support for this hypothesis. Like old/new recognition performance, possibility priming declined as study-test orientation differences increased from 0° to 60°. However, while significant possibility priming was not observed for larger orientation differences, recognition performance continued to decline following 60°–180° orientation shifts. These results suggest that possibility priming and old/new recognition may rely on common viewpoint-specific representations but that access to these representations in the possibility test occurs only when study and test views are sufficiently similar (i.e., rotated less than 60°).

6.
In two experiments, we attempted to replicate Shallo and Rock's (1988) finding that 5- and 6-year-old children exhibit size constancy for a distant object when tested with comparison objects that are matched for visual angle. Experiment 1 (N = 80) included four age groups: 5-, 6-, and 9-year-olds and adults. Participants viewed one standard object from 61 m and indicated which of nine nearby comparison objects matched the standard object in size. The comparison objects subtended equal visual angles in one condition and different visual angles in another. In both conditions, the 5- and 6-year-old children underestimated the size of the standard object, whereas the 9-year-old children and adults made nearly accurate size estimates. In Experiment 2 (N = 32), we replicated the finding that 6-year-old children underestimate size when tested with comparison objects that subtend equal visual angles. Our results conflict with those of Shallo and Rock and support earlier findings that young children do not exhibit size constancy for distant objects.

7.
Two experiments were conducted to investigate whether locomotion to a novel test view would eliminate viewpoint costs in visual object processing. Participants performed a sequential matching task for object identity or object handedness, using novel 3-D objects displayed in a head-mounted display. To change the test view of the object, the orientation of the object in 3-D space and the test position of the observer were manipulated independently. Participants were more accurate when the test view was the same as the learned view than when the views were different, regardless of whether the view change was 50° or 90°. With 50° rotations, participants were more accurate at novel test views caused by their own locomotion (object stationary) than at those caused by object rotation (observer stationary), but this difference disappeared when the view change was 90°. These results indicate that facilitation of spatial updating during locomotion occurs within a limited range of viewpoints, but that such facilitation does not eliminate viewpoint costs in visual object processing.

8.
How does the human visual system determine the depth orientation of familiar objects? We examined reaction times and errors in the detection of 15° differences in the depth orientations of two simultaneously presented familiar objects, which were either the same object (Experiment 1) or different objects (Experiment 2). Detection of orientation differences was best for 0° (front) and 180° (back), while 45° and 135° yielded poorer results and 90° (side) gave intermediate results, suggesting that the visual system is tuned for front, side and back orientations. We further found that these advantages are due to orientation-specific features such as horizontal linear contours and symmetry, since the 90° advantage was absent for objects with curvilinear contours, and asymmetric objects diminished the 0° and 180° advantages. We conclude that the efficiency of visually determining object orientation is highly orientation-dependent, and that object orientation may be perceived with reference to front-back axes.

9.
Although geometric reorientation has been extensively studied in numerous species, most research has been conducted in enclosed environments and has focused on use of the geometric property of relative wall length. The current studies investigated how angular information is used by adult humans and pigeons to orient and find a goal in enclosures or arrays that did not provide relative wall length information. In enclosed conditions, the angles formed a diamond shape connected by walls, whereas in array conditions, free-standing angles defined the diamond shape. Adult humans and pigeons were trained to locate two geometrically equivalent corners, either the 60° or 120° angles. Blue feature panels were located in the goal corners so that participants could use either the features or the local angular information to orient. Subsequent tests in manipulated environments isolated the individual cues from training or placed them in conflict with one another. In both enclosed and array environments, humans and pigeons were able to orient when either the angles or the features from training were removed. On conflict tests, female, but not male, adult humans weighted features more heavily than angular geometry. For pigeons, angles were weighted more heavily than features for birds that were trained to go to acute corners, but no difference in weighting was seen for birds trained to go to obtuse corners. These conflict test results were not affected by environment type. A subsequent test with pigeons ruled out an interpretation based on exclusive use of a principal axis rather than angle. Overall, the results indicate that, for both adult humans and pigeons, angular amplitude is a salient orientation cue in both enclosures and arrays of free-standing angles.

10.
Men and women learned to discriminate between two angles of different sizes presented to them as objects within a real-world task. During Experiment 1, participants in group 50 were trained to choose a 50° angle and participants in group 75 were trained to choose a 75° angle. During testing, both groups were given a choice between their training angle and one of a set of test angles that was either smaller or larger than the training angle. Results showed a generalized pattern of responding, with group 50 showing increased responding to test angles smaller than 50° and group 75 showing increased responding to test angles larger than 75°. Further analysis of the response patterns revealed that participants in group 50 showed evidence of absolute learning, whereas participants in group 75 showed evidence of relational learning. During Experiment 2, a third group of participants (group 25), trained to choose a smaller angle (25°), was included in addition to group 50 and group 75. Participants were trained with three angles present and tested with just two, one being their training angle and the other one of a set of novel test angles. As in Experiment 1, group 75 showed evidence of relational learning. Group 50, for which no relational rule could be applied during training, showed an absolute learning pattern with no response shift to test angles smaller or larger than their training angle. Group 25 showed evidence of absolute responding that was more pronounced than that found for the smallest training angle during Experiment 1. These findings suggest differential learning of geometric angles based on amplitude, with smaller angles perceived as more distinct and thus more resistant to broad generalization than larger angles. The implication is that a given geometric property may be subject to different learning processes depending on its specific magnitude.
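The absolute versus relational contrast reported above can be illustrated with a minimal sketch of the two predicted choice rules: an absolute learner keeps choosing the trained angle itself, whereas a relational learner reapplies the training relation (assumed here to be "choose the larger angle", as for group 75) and therefore shifts toward larger novel angles. The angle values are illustrative only.

```python
# Illustrative sketch (hypothetical values): predicted choices of an absolute
# learner versus a relational learner when the trained angle is paired with
# novel test angles, assuming the training relation "choose the larger angle".

TRAINED = 75
TEST_ANGLES = [55, 65, 85, 95]  # each paired with the trained angle at test

def absolute_choice(trained, test):
    # An absolute learner responds to the trained value itself and keeps
    # choosing the trained angle whenever it is present.
    return trained

def relational_choice(trained, test):
    # A relational learner reapplies the training relation, here "larger",
    # and therefore shifts toward larger novel angles.
    return max(trained, test)

for test in TEST_ANGLES:
    print(f"{TRAINED}° vs {test}°: absolute -> {absolute_choice(TRAINED, test)}°, "
          f"relational -> {relational_choice(TRAINED, test)}°")
```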

11.
Object recognition is a complex adaptive process that can be impaired in children with neurodevelopmental disabilities. Recently, we found a significant effect of time on the development of unimodal and crossmodal recognition skills for common objects in typical children, and this was a starting point for the study of visuo-haptic object recognition skills in impaired populations. In this study, we investigated unimodal visual information, unimodal haptic information and visuo-haptic information transfer in 30 children, from 4.0 to 10.11 years of age, with bilateral periventricular leukomalacia (PVL) and bilateral cerebral palsy. Results were compared with those of 116 controls. Participants were tested using a clinical protocol, adopted in the previous study, involving visual exploration of black-and-white photographs of common objects, haptic exploration of real objects and visuo-haptic transfer of these two types of information. Results show that in the PVL group, as in controls, there is an age-dependent development of object recognition abilities in the visual, haptic and visuo-haptic modalities, even if PVL children perform worse in all three conditions in comparison with the typical group. Furthermore, PVL children have a specific deficit in both visual and haptic information processing that improves with age, probably thanks to everyday experience, but the visual modality shows better and more rapid maturation, remaining more salient than the haptic one. However, multisensory processes partially facilitate recognition of common objects also in PVL children, and this finding could be useful for planning early intervention in children with brain lesions.

12.
When humans grasp objects, the grasps foreshadow the intended object manipulation. It has been suggested that grasps are selected that lead to medial arm postures, which facilitate movement speed and precision, during critical phases of the object manipulation. Experiment 1 tested whether grasp selections lead to medial postures during rotations of a dial. Participants twisted their arms considerably before grasping the dial, even when the upcoming dial rotation was minimal (5°). Participants neither assumed a medial posture at any point during a short rotation, nor did they assume any of the postures involved in short rotations in the opposite direction. Thus, grasp selections did not necessarily lead to specific postures at any point of the object manipulation. Experiment 2 examined the effect of various grasps on the speed of dial rotations. A medial initial grasp resulted in the fastest dial rotations for most rotation angles. Spontaneously selected grasps were more excursed than necessary to maximize dial rotation speed. This apparent overshoot might be explained by participants' sensitivity to the variability of their grasps and is in line with the assumption that grasps facilitate control over the grasped object.

13.
The present study investigated whether memory for a room-sized spatial layout learned through auditory localization of sounds exhibits orientation dependence similar to that observed for spatial memory acquired from stationary viewing of the environment. Participants learned spatial layouts by viewing objects or localizing sounds and then performed judgments of relative direction among remembered locations. The results showed that direction judgments following auditory learning were performed most accurately at a particular orientation in the same way as were those following visual learning, indicating that auditorily encoded spatial memory is orientation dependent. In combination with previous findings that spatial memories derived from haptic and proprioceptive experiences are also orientation dependent, the present finding suggests that orientation dependence is a general functional property of human spatial memory independent of learning modality.
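As an illustration of the judgments-of-relative-direction test mentioned above, the correct response for "imagine standing at A, facing B; point to C" can be computed from the remembered 2-D layout as the signed angle between the imagined heading and the direction to the target. The object names and coordinates below are hypothetical, not taken from the study.

```python
import math

# Illustrative sketch (hypothetical layout): the correct response for a
# judgment of relative direction ("imagine standing at A facing B; point to C")
# is the signed angle between the imagined heading and the direction to C.

LAYOUT = {"lamp": (0.0, 0.0), "chair": (0.0, 2.0), "radio": (1.5, 1.0)}

def pointing_angle(standing_at, facing, target, layout=LAYOUT):
    ax, ay = layout[standing_at]
    fx, fy = layout[facing]
    tx, ty = layout[target]
    heading = math.atan2(fy - ay, fx - ax)   # imagined facing direction
    bearing = math.atan2(ty - ay, tx - ax)   # direction from A to the target
    diff = math.degrees(bearing - heading)
    return (diff + 180) % 360 - 180          # wrap to a signed angle in degrees

# About -56.3°, i.e., the radio lies to the observer's right.
print(pointing_angle("lamp", "chair", "radio"))
```

A participant's pointing response can then be scored as the absolute difference between the produced angle and this layout-implied angle, which is how orientation-dependent accuracy differences are typically quantified.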

14.
Fish move in a three-dimensional environment in which it is important to discriminate between stimuli varying in colour, size, and shape. It is also advantageous to be able to recognize the same structures or individuals when presented from different angles, such as back to front or front to side. This study assessed visual discrimination abilities for rotated three-dimensional objects in eight individuals of Pseudotropheus sp. using various plastic animal models. All models were displayed in two-choice experiments. After successful training, fish were presented in a range of transfer tests with objects rotated in the same plane and in space by 45° and 90° to the side or to the front. In one experiment, models were additionally rotated by 180°, i.e., shown back to front. Fish showed quick associative learning and, with only one exception, successfully solved and finished all experimental tasks. These results provide the first evidence for form constancy in this species and in fish in general. Furthermore, Pseudotropheus seemed able to categorize stimuli; a range of turtle and frog models were recognized independently of colour and minor shape variations. Form constancy and categorization abilities may be important for behaviours such as foraging and recognition of predators and conspecifics, as well as for orienting within habitats or territories.

15.
We investigated the role of global (body) and local (parts) motion in the recognition of unfamiliar objects. Participants were trained to categorise moving objects and were then tested on their recognition of static images of these targets using a priming paradigm. Each static target shape was primed by a moving object that had the same body and parts motion as the learned target; the same body but different parts motion; a different body but the same parts motion; or was non-moving. Only the same body motion, not the same parts motion, facilitated shape recognition (Experiment 1), even when either motion was diagnostic of object identity (Experiment 2). When the parts motion was more closely related to the object's body motion, it did facilitate recognition of the static target (Experiment 3). Our results suggest that global and local motions are independently accessed during object recognition and have important implications for how objects are represented in memory.

16.
Little is known about the ability of human observers to process objects in the far periphery of their visual field, and nothing about how this ability evolves after central vision loss. We investigated implicit and explicit recognition at two large visual eccentricities. Pictures of objects were centred at 30° or 50° eccentricity. Implicit recognition was tested through a priming paradigm. Participants (normally sighted observers and people with 10–20 years of central vision loss) categorized pictures as animal/transport both in a study phase (Block 1) and in a test phase (Block 2). In the explicit recognition task, participants decided for each picture presented in Block 2 whether it had been displayed in Block 1 (“yes”/“no”). Both visual (identical) and conceptual/lexical (same-name) priming occurred at 30° and at 50°. Explicit recognition was observed only at 30°. In people with central vision loss, testing was performed only at 50° eccentricity. The pattern of results was similar to that of normally sighted observers, but global performance was lower. The results suggest that vision at large eccentricity is mainly based on nonconscious coarse representations. Moreover, after 10–20 years of central vision loss, no evidence was found for an increased ability to use peripheral information in object recognition.

17.
The purpose of the present investigation was to determine whether the orientation between an object's parts is coded categorically for object recognition and physical discrimination. Three experiments used line drawings of novel objects in which the relative orientation of object parts varied in steps of 30°. Participants performed either an object recognition task, in which they had to determine whether two objects were composed of the same set of parts, or a physical discrimination task, in which they had to determine whether two objects were physically identical. For object recognition, participants found it more difficult to compare the 0° and 30° versions and the 90° and 60° versions of an object than to compare the 30° and 60° versions, but only at an extended interstimulus interval (ISI). Categorical coding was also found in the physical discrimination task. These results suggest that relative orientation is coded categorically for both object recognition and physical discrimination, although metric information appears to be coded as well, especially at brief ISIs.

18.
How long does it take for a newborn to recognize an object? Adults can recognize objects rapidly, but measuring object recognition speed in newborns has not previously been possible. Here we introduce an automated controlled-rearing method for measuring the speed of newborn object recognition in controlled visual worlds. We raised newborn chicks (Gallus gallus) in strictly controlled environments that contained no objects other than a single virtual object, and then measured the speed at which the chicks could recognize that object from familiar and novel viewpoints. The chicks were able to recognize the object rapidly, at presentation rates of 125 ms per image. Further, recognition speed was equally fast whether the object was presented from familiar viewpoints or novel viewpoints (30° and 60° azimuth rotations). Thus, newborn chicks can recognize objects across novel viewpoints within a fraction of a second. These results demonstrate that newborns are capable of both rapid and invariant object recognition at the onset of vision.

19.
Processing multiple complex features to create cohesive representations of objects is an essential aspect of both the visual and auditory systems. It is currently unclear whether these processes are entirely modality specific or whether there are amodal processes that contribute to complex object processing in both vision and audition. We investigated this using a dual-stream target detection task in which two concurrent streams of novel visual or auditory stimuli were presented. We manipulated the degree to which each stream taxed the processing of conjunctions of complex features. In two experiments, we found that concurrent visual tasks that both taxed conjunctive processing strongly interfered with each other, but that concurrent auditory and visual tasks that both taxed conjunctive processing did not. These results suggest that the resources for processing conjunctions of complex features within vision and audition are modality specific.

20.
Similar to certain bats and dolphins, some blind humans can use sound echoes to perceive their silent surroundings. By producing an auditory signal (e.g., a tongue click) and listening to the returning echoes, these individuals can obtain information about their environment, such as the size, distance, and density of objects. Past research has also hinted at the possibility that blind individuals may be able to use echolocation to gather information about 2-D surface shape, with definite results pending. Thus, here we investigated people’s ability to use echolocation to identify the 2-D shape (contour) of objects. We also investigated the role played by head movements—that is, exploratory movements of the head while echolocating—because anecdotal evidence suggests that head movements might be beneficial for shape identification. To this end, we compared the performance of six expert echolocators to that of ten blind nonecholocators and ten blindfolded sighted controls in a shape identification task, with and without head movements. We found that the expert echolocators could use echoes to determine the shapes of the objects with exceptional accuracy when they were allowed to make head movements, but that their performance dropped to chance level when they had to remain still. Neither blind nor blindfolded sighted controls performed above chance, regardless of head movements. Our results show not only that experts can use echolocation to successfully identify 2-D shape, but also that head movements made while echolocating are necessary for the correct identification of 2-D shape.
