Similar Documents
20 similar documents found (search time: 31 ms)
1.
Two experiments investigated whether locomotion to a novel test view would eliminate viewpoint costs in visual object processing. Participants performed a sequential matching task for object identity or object handedness, using novel 3-D objects displayed in a head-mounted display. To change the test view of the object, the orientation of the object in 3-D space and the test position of the observer were manipulated independently. Participants were more accurate when the test view was the same as the learned view than when the views differed, regardless of whether the view change was 50° or 90°. With 50° rotations, participants were more accurate at novel test views produced by their own locomotion (object stationary) than at those produced by object rotation (observer stationary), but this difference disappeared when the view change was 90°. These results indicate that the facilitation of spatial updating during locomotion occurs within a limited range of viewpoints, and that such facilitation does not eliminate viewpoint costs in visual object processing.

2.
Current theories of object recognition in human vision make different predictions about whether the recognition of complex, multipart objects should be influenced by shape information about surface depth orientation and curvature derived from stereo disparity. We examined this issue in five experiments using a recognition memory paradigm in which observers (N = 134) memorized and then discriminated sets of 3D novel objects at trained and untrained viewpoints under either mono or stereo viewing conditions. In order to explore the conditions under which stereo-defined shape information contributes to object recognition we systematically varied the difficulty of view generalization by increasing the angular disparity between trained and untrained views. In one series of experiments, objects were presented from either previously trained views or untrained views rotated (15°, 30°, or 60°) along the same plane. In separate experiments we examined whether view generalization effects interacted with the vertical or horizontal plane of object rotation across 40° viewpoint changes. The results showed robust viewpoint-dependent performance costs: Observers were more efficient in recognizing learned objects from trained than from untrained views, and recognition was worse for extrapolated than for interpolated untrained views. We also found that performance was enhanced by stereo viewing but only at larger angular disparities between trained and untrained views. These findings show that object recognition is not based solely on 2D image information but that it can be facilitated by shape information derived from stereo disparity.

3.
Faces learned from multiple viewpoints are recognized better with left than right three-quarter views. This left-view superiority could be explained by perceptual experience, facial asymmetry, or hemispheric specialization. In the present study, we investigated whether left-view sequences are also more effective in recognizing same and novel views of a face. In a sequential matching task, a view sequence showing a face rotating around a left (−30°) or a right (+30°) angle, with an amplitude of 30°, was followed by a static test view with the same viewpoint as the sequence (−30° or +30°) or with a novel one (0°, +30°, or −30°). We found a superiority of left-view sequences independently of the test viewpoint, but no superiority of left over right test views. These results do not seem compatible with the perceptual experience hypothesis, which predicts superiority only for left-side test views (−30°). Also, a facial asymmetry judgement task showed no correlation between the asymmetry of individual faces and the left-view sequence superiority. A superiority of left-view sequences for novel as well as same test views argues in favour of an explanation by hemispheric specialization, because of the possible role of the right hemisphere in extracting facial identity information.

4.
Two important and related developments in children between 18 and 24 months of age are the rapid expansion of object name vocabularies and the emergence of an ability to recognize objects from sparse representations of their geometric shapes. In the same period, children also begin to show a preference for planar views (i.e., views of objects held perpendicular to the line of sight) of objects they manually explore. Are children's emerging view preferences somehow related to contemporary changes in object name vocabulary and object perception? Children aged 18 to 24 months explored richly detailed toy objects while wearing a head camera that recorded their object views. Both children's vocabulary size and their success in recognizing sparse three-dimensional representations of the geometric shapes of objects were significantly related to their spontaneous choice of planar views of those objects during exploration. The results suggest important interdependencies among developmental changes in perception, action, word learning, and categorization in very young children.

5.
6.
We examined whether the orientation of the face influences speech perception in face-to-face communication. Participants identified auditory syllables, visible syllables, and bimodal syllables presented in an expanded factorial design. The syllables were /ba/, /va/, /ða/, or /da/. The auditory syllables were taken from natural speech, whereas the visible syllables were produced by computer animation of a realistic talking face. The animated face was presented either as viewed in normal upright orientation or in inverted orientation (180° frontal rotation). The central intent of the study was to determine whether an inverted view of the face would change the nature of processing bimodal speech or simply influence the information available in visible speech. The results with both the upright and inverted face views were adequately described by the fuzzy logical model of perception (FLMP). The observed differences in the FLMP's parameter values corresponding to the visual information indicate that inverting the view of the face influences the amount of visible information but does not change the nature of the information processing in bimodal speech perception.
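For reference, the two-alternative case of the FLMP combines the auditory and visual sources of support multiplicatively; the following is a generic sketch of Massaro's model, where the symbols a_i and v_j are illustrative and not necessarily the paper's exact parameterization:

```latex
P(\text{/da/} \mid A_i, V_j) = \frac{a_i \, v_j}{a_i \, v_j + (1 - a_i)(1 - v_j)}
```

Here a_i and v_j denote the degrees to which the auditory and visual sources support /da/. On this account, inverting the face is assumed to change the estimated v_j values (the visual information) while leaving the multiplicative integration rule itself unchanged.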

7.
Previous research findings regarding the relative difficulty of the different positions occupied by the other observer in a perspectives task are contradictory; this may be due to a lack of control of the object arrangement and differences in masking from each viewpoint. In the present experiment, an object array was employed in which the views from the experimental positions were objectively of equal difficulty. Children aged 6, 8, and 10 years found the 90° and 270° views equally difficult to represent, but both of these views were easier than the 180° view.

8.
Priming effects on the object possibility task, in which participants decide whether line drawings could or could not be possible three-dimensional objects, may be supported by the same processes and representations used in recognizing and identifying objects. Three experiments manipulating objects’ picture-plane orientation provided limited support for this hypothesis. Like old/new recognition performance, possibility priming declined as study-test orientation differences increased from 0° to 60°. However, while significant possibility priming was not observed for larger orientation differences, recognition performance continued to decline following 60°–180° orientation shifts. These results suggest that possibility priming and old/new recognition may rely on common viewpoint-specific representations but that access to these representations in the possibility test occurs only when study and test views are sufficiently similar (i.e., rotated less than 60°).

9.
Pigeons and humans were trained to discriminate between pictures of three-dimensional objects that differed in global shape. Each pair of objects was shown at two orientations that differed by a depth rotation of 90° during training. Pictures of the objects at novel depth rotations were then tested for recognition. The novel test rotations were 30°, 45°, and 90° from the nearest trained orientation and were either interpolated between the trained orientations or extrapolated outside of the training range. For both pigeons and humans, recognition accuracy and/or speed decreased as a function of distance from the nearest trained orientation. However, humans, but not pigeons, were more accurate in recognizing novel interpolated views than novel extrapolated views. The results suggest that pigeons’ recognition was based on independent generalization from each training view, whereas humans showed view-combination processes that resulted in a benefit for novel views interpolated between the training views.

10.
Stephan BC, Caine D. Perception, 2007, 36(2), 189-198
In recognising a face, the visual system shows a remarkable ability to overcome changes in viewpoint. However, the mechanisms involved in solving this complex computational problem, particularly in terms of information processing, have not been clearly defined. Considerable evidence indicates that face recognition involves both featural and configural processing. In this study we examined the contribution of featural information across viewpoint change. Participants were familiarised with unknown faces and were later tested for recognition in complete or part-face format, across changes in view. A striking effect of viewpoint was found, with reduced recognition of profile views compared with the three-quarter and frontal views. However, a complete-face over part-face advantage independent of transformation was demonstrated across all views. A hierarchy of feature salience was also demonstrated. Findings are discussed in terms of the problem of object constancy as it applies to faces.

11.
Subjects decided whether an object drawing matched the entry-level name that immediately preceded it in a name-object sequence. When objects in the stimulus set were visually similar with respect to global shape and configuration of parts, response time increased linearly from 0° to 120° for both match and mismatch trials. Similar effects of orientation were found on match trials when objects in the stimulus set were visually dissimilar. No effects of orientation were observed when name and drawing did not match in the visually dissimilar condition. The results are consistent with the view that, in a variety of viewing situations, the initial identification of an object at the entry level is accomplished by viewpoint-dependent mechanisms.

12.
In an earlier report (Harman, Humphrey, & Goodale, 1999), we demonstrated that observers who actively rotated three-dimensional novel objects on a computer screen later showed faster visual recognition of these objects than did observers who had passively viewed exactly the same sequence of images of these virtual objects. In Experiment 1 of the present study we showed that compared to passive viewing, active exploration of three-dimensional object structure led to faster performance on a "mental rotation" task involving the studied objects. In addition, we examined how much time observers concentrated on particular views during active exploration. As we found in the previous report, they spent most of their time looking at the "side" and "front" views ("plan" views) of the objects, rather than the three-quarter or intermediate views. This strong preference for the plan views of an object led us to examine the possibility in Experiment 2 that restricting the studied views in active exploration to either the plan views or the intermediate views would result in differential learning. We found that recognition of objects was faster after active exploration limited to plan views than after active exploration of intermediate views. Taken together, these experiments demonstrate (1) that active exploration facilitates learning of the three-dimensional structure of objects, and (2) that the superior performance following active exploration may be a direct result of the opportunity to spend more time on plan views of the object.

13.
Two experiments are reported that examined the act of prehension when subjects were asked to grasp with their thumb and index finger pads an elongated object resting horizontally on a surface and placed at different orientations with respect to the subject. In Experiment 1, the pad opposition preferences were determined for the six angles of orientation examined. For angles of 90° (object parallel to frontal plane) or less, no rotation of the wrist (pronation) was used; for angles 110° or greater, pronation was systematically employed to reorient the finger opposition space. Only one angle, 100°, produced any evidence of ambiguity in how to grasp the object: Approximately 60% of these grasps involved pronation and 40% did not.

Using the foregoing grasp preference data, in Experiment 2 we examined the kinematics of the wrist and elbow trajectories during prehension movements directed at an object in different orientations. Movement time, time to peak acceleration, velocity, and deceleration were measured. No kinematic differences were observed when the object orientation either required (110°) or did not require (80°) a pronation. By contrast, if the orientation was changed at the onset of the movement, such that an unpredicted pronation had to be introduced to achieve the grasp, kinematics were affected: Movement time was increased, and the time devoted to deceleration was lengthened.

These data are interpreted as evidence that when natural prehension occurs, pronation can be included in the motor plan without affecting the movement kinematics. When constraints are imposed on the movement execution as a consequence of a perturbation, however, the introduction of a pronation component requires kinematic rearrangement.

14.
Object knowledge refers to the understanding that all objects share certain properties. Various components of object knowledge (e.g., object occlusion, object causality) have been examined in human infants to determine its developmental origins. Viewpoint invariance, the understanding that an object viewed from different viewpoints is still the same object, is one area of object knowledge that has received less attention. To this end, infants' capacity for viewpoint-invariant perception of multi-part objects was investigated. Three-month-old infants were tested for generalization to an object displayed on a mobile that differed only in orientation (i.e., viewpoint) from a training object. Infants were given experience with a wide range of object views (Experiment 1) or a more restricted range during training (Experiment 2). The results showed that infants generalized between a horizontal and vertical viewpoint (Experiment 1) that they could clearly discriminate between in other contexts (i.e., with restricted view experience, Experiment 2). Overall, the outcome shows that training experience with multiple viewpoints plays an important role in infants' ability to develop a general percept of an object's 3D structure and promotes viewpoint-invariant perception of multi-part objects; in contrast, restricting training experience impedes viewpoint-invariant recognition of multi-part objects.

15.
To determine if layout affects boundary extension (BE; false memory beyond view boundaries; Intraub & Richardson, 1989), 12 single-object scenes were photographed from three vantage points: central (0°), shifted rightward (45°), and shifted leftward (45°). Size and position of the main objects were held constant. Pictures were presented for 15 s each and were repeated at test with their boundaries: (a) displaced inward or outward (Experiment 1: N = 120), or (b) identical to the stimulus views (Experiment 2: N = 72). When participants adjusted test boundaries to match memory, BE always occurred but tended to be smaller for the 45° views. We propose that this reflects the fact that more of the 3-D scene is visible in the 45° views. This suggests that scene representation reflects the 3-D world conveyed by the global characteristics of layout, rather than the 2-D distance between the main object and the boundaries of a picture.

16.
How does the human visual system determine the depth-orientation of familiar objects? We examined reaction times and errors in the detection of 15° differences in the depth orientations of two simultaneously presented familiar objects, which were the same objects (Experiment 1) or different objects (Experiment 2). Detection of orientation differences was best for 0° (front) and 180° (back), while 45° and 135° yielded poorer results, and 90° (side) showed intermediate results, suggesting that the visual system is tuned for front, side, and back orientations. We further found that those advantages are due to orientation-specific features such as horizontal linear contours and symmetry, since the 90° advantage was absent for objects with curvilinear contours, and asymmetric objects diminished the 0° and 180° advantages. We conclude that the efficiency of visually determining object orientation is highly orientation-dependent, and that object orientation may be perceived with reference to front-back axes.

17.
Orientation processing is essential for segmenting contour from the background, which allows perception of the shape and stability of objects. However, little is known about how monkeys determine the degree and direction of orientation. In this study, to determine the reference axis for orientation perception in monkeys, post-discrimination generalization tests were conducted following discrimination training between the 67.5° and 112.5° orientations and between the 22.5° and 157.5° orientations. After discrimination training between the 67.5° and 112.5° orientations, the slope of the generalization gradient around the S+ orientation was broad, while the slope was steep after discrimination training between the 22.5° and 157.5° orientations. Comparing the shapes of the gradients indicated that the subjective distance between the 67.5° and 112.5° orientations was small, while the subjective distance between the 22.5° and 157.5° orientations was large. In other words, the monkeys recognized that the former and the latter distances were 45° and 135° across the vertical axis, rather than 135° and 45° across the horizontal axis, respectively. These findings indicate that the monkeys determined the degree and direction of the tilt using the vertical reference.

18.
Shape recognition can be achieved through vision or touch, raising the issue of how this information is shared across modalities. Here we provide a short review of previous findings on cross-modal object recognition and we provide new empirical data on multisensory recognition of actively explored objects. It was previously shown that, similar to vision, haptic recognition of objects fixed in space is orientation specific and that cross-modal object recognition performance was relatively efficient when these views of the objects were matched across the sensory modalities (Newell, Ernst, Tjan, & Bülthoff, 2001). For actively explored (i.e., spatially unconstrained) objects, we now found a cost in cross-modal relative to within-modal recognition performance. At first, this may seem to be in contrast to findings by Newell et al. (2001). However, a detailed video analysis of the visual and haptic exploration behaviour during learning and recognition revealed that one view of the objects was predominantly explored relative to all others. Thus, active visual and haptic exploration is not balanced across object views. The cost in recognition performance across modalities for actively explored objects could be attributed to the fact that the predominantly learned object view was not appropriately matched between learning and recognition test in the cross-modal conditions. Thus, it seems that participants naturally adopt an exploration strategy during visual and haptic object learning that involves constraining the orientation of the objects. Although this strategy ensures good within-modal performance, it is not optimal for achieving the best recognition performance across modalities.

19.
When there are equally strong claimants for a scarce good, lotteries are often argued to be a fair method of allocation. This paper reproduces four of the views on the fairness of lotteries that have been presented in the literature: the distributive view; the preference view; the actual consent view; and the expressive view. It argues that these four views cannot offer plausible explanations for the fairness of lotteries. The distributive view is argued to be inadequate because, even though receiving expectations to a good is of value to the participants, this value cannot plausibly make a contribution to the satisfaction of a participant’s claim. Both the preference and actual consent views are argued to be implausible because they lead to accepting procedures as legitimate that fail to correspond with what a claim is. Finally, it is contended that the expressive view identifies a value that is relevant to respecting equal claimants, but that cannot plausibly be related to a procedure’s fairness. The paper concludes by maintaining that an equal treatment view can accept all the valid insights from these four views without needing to accept their untenable implications.

20.
The specificity of face recognition is usually related to the hypothesis that it is based mainly on global processing of the whole face. However, this does not mean that more analytic processes, like the local processing of some features, could not play a role in the recognition process in some circumstances. An experiment was conducted with 18 normal subjects in order to evaluate the role of analytic processes in the recognition of faces viewed from different angles. Makeup was applied to the eyes and lips to enhance local processing (local cue). We found a positive effect of the makeup cue, which could be due to better recognition when the shapes of the eyes and lips are highlighted. Also, we found better recognition with left-sided views compared to right-sided views. Finally, the makeup effect was significant in left three-quarter views but not in right three-quarter views. If this effect is due to analytic processing of some facial features, it could be related to a left hemisphere operation.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司), 京ICP备09084417号