Similar articles
20 similar articles retrieved (search time: 15 ms)
1.
Certain simple visual displays consisting of moving 2-D geometric shapes can give rise to percepts with high-level properties such as causality and animacy. This article reviews recent research on such phenomena, which began with the classic work of Michotte and of Heider and Simmel. The importance of such phenomena stems in part from the fact that these interpretations seem to be largely perceptual in nature - to be fairly fast, automatic, irresistible and highly stimulus driven - despite the fact that they involve impressions typically associated with higher-level cognitive processing. This research suggests that just as the visual system works to recover the physical structure of the world by inferring properties such as 3-D shape, so too does it work to recover the causal and social structure of the world by inferring properties such as causality and animacy.

2.
Light is the origin of vision. The pattern of shading reflected from object surfaces is one of several optical features that provide fundamental information about shape and surface orientation. To understand how surface and object shading is processed by birds, six pigeons were tested with differentially illuminated convex and concave curved surfaces in five experiments using a go/no-go procedure. We found that pigeons rapidly learned this type of visual discrimination independent of lighting direction, surface coloration and camera perspective. Subsequent experiments varying the pattern of the lighting on these surfaces through changes in camera perspective, surface height, contrast, material specularity, surface shape, light motion, and perspective movement were consistent with the hypothesis that the pigeons were perceiving these illuminated surfaces as three-dimensional surfaces containing curved shapes. The results suggest that the use of relative shading for objects in a visual scene creates highly salient features for shape processing in birds.

3.
Soranzo A, Agostini T. Perception, 2006, 35(2): 185-192
The relation between perceptual belongingness and lightness perception has historically been studied in the contrast domain (Benary, 1924, Psychologische Forschung, 5, 131-142). However, scientists have shown that two equal grey patches may differ in lightness when belonging to different reflecting surfaces. We extend this investigation to the constancy domain. In a CRT simulation of a bipartite field of illumination, we manipulated the arrangement of twelve patches: six squares and six diamonds. Patches of the same shape could be placed: (i) all within the same illumination field; or (ii) forming a row across the illumination fields. Furthermore, we manipulated proximity between the innermost patches and the illumination edge. The patches could be (i) touching (forming an X-junction); or (ii) not touching (not forming an X-junction). Observers were asked to perform a lightness match between two additional patches, one illuminated and the other in shadow. We found better lightness constancy when the patches of the same shape formed a row across the fields, with no effect of X-junctions. Since lightness constancy is improved by strengthening the belongingness across the illumination fields, we conclude that belongingness might help the visual system to aggregate the differently illuminated surfaces, and facilitate the scission process.

4.
Investigators have proposed that qualitative shapes are the primitive information of spatial vision: They preserve an approximately one-to-one mapping between surfaces, images, and perception. Given their importance, we examined how the visual system recovers these primitives from sparse disparity fields that do not provide sufficient information for their recovery. We hypothesized that the visual system interpolates sparse disparities with planes, resulting in a patchwork approximation of the implicitly defined shapes. We presented observers with stereo displays simulating planar or smooth curved surfaces having different curvatures. The observers' task was to detect whether dots deviated from these surfaces or to discriminate planar from curved or planar from scrambled surfaces. Consistent with our hypothesis, increasing curvature had detrimental effects on observers' performance (Experiments 1-3). Importantly, this patchwork approximation leads to the recovery of the proposed shape primitives, since observers were more accurate at discriminating planar-from-curved than planar-from-scrambled surfaces with matched disparity range (Experiment 4).
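As a rough illustration of the interpolation hypothesis described in this abstract (a minimal sketch under our own assumptions, not the authors' stimulus or analysis code), a single planar patch can be fit to sparse disparity samples by least squares; the size of the residuals then indexes how poorly one plane approximates a curved surface.

```python
import numpy as np

def fit_disparity_plane(x, y, d):
    """Least-squares fit of a plane d = a*x + b*y + c to sparse disparity samples.

    x, y : image coordinates of the dots (1-D arrays)
    d    : disparities measured at those dots (1-D array)
    Returns the plane coefficients (a, b, c) and the per-dot residuals.
    """
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, d, rcond=None)
    residuals = d - A @ coeffs
    return coeffs, residuals

# Illustrative example: dots sampled from a curved (parabolic) surface are
# fit poorly by a single plane, in the spirit of the curvature cost the
# abstract reports for Experiments 1-3.
rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50)
d_curved = 0.5 * (x**2 + y**2)
_, res = fit_disparity_plane(x, y, d_curved)
print("RMS residual of a single planar patch:", np.sqrt(np.mean(res**2)))
```

On this reading, higher surface curvature leaves larger residuals per planar patch, which is consistent with the reported drop in performance as curvature increases.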

5.
We review the organization of the neural networks that underlie haptic object processing and compare that organization with the visual system. Haptic object processing is separated into at least two neural pathways, one for geometric properties or shape, and one for material properties, including texture. Like vision, haptic processing pathways are organized into a hierarchy of processing stages, with different stages represented by different brain areas. In addition, the haptic pathway for shape processing may be further subdivided into different streams for action and perception. These streams may be analogous to the action and perception streams of the visual system and represent two points of neural convergence for vision and haptics.

6.
Stiles-Davis proposes that the infants in our experiments (Hofsten & Spelke, 1985) did not reach for perceived objects in order to manipulate them, but rather touched perceived surfaces in order to explore their boundaries. Her commentary raises questions about infants' perception of the boundaries, the unity, and the manipulability of objects. More deeply, it raises the question of what an object is for an infant. We consider each of these questions in turn, in light of our own findings and those of other studies of object-directed reaching, object perception, and the object concept. We suggest that young infants organize the visual world into entities that are bounded, unitary, and manipulable and that infants endow those entities with the core properties of physical objects.

7.
One of the main theoretical challenges of vision science is to explain how the visual system interpolates missing structure. Two forms of visual completion have been distinguished on the basis of the phenomenological states that they induce. Modal completion refers to the formation of visible surfaces and/or contours in image regions where these properties are not specified locally. Amodal completion refers to the perceived unity of objects that are partially obscured by occluding surfaces. Although these two forms of completion elicit very different phenomenological states, it has been argued that a common mechanism underlies modal and amodal boundary and surface interpolation (the "identity hypothesis"; Kellman & Shipley, 1991; Kellman, 2001). Here, we provide new data, demonstrations, and theoretical principles that challenge this view. We show that modal boundary and surface completion processes exhibit a strong dependence on the prevailing luminance relationships of a scene, whereas amodal completion processes do not. We also demonstrate that the shape of interpolated contours can change when a figure undergoes a transition from a modal to an amodal appearance, in direct contrast to the identity hypothesis. We argue that these and previous results demonstrate that modal and amodal completion do not result from a common interpolation mechanism.

8.
The superiority of ground surfaces over ceiling surfaces in determining the representation of the visual world, demonstrated in several studies of visual perception and visual search, has been attributed to a preference for top-away projections resulting from ecological constraints. Recent research on binocular rivalry indicates that ecological constraints affect predominance relations. The present study considered whether there is a difference in predominance between ground and ceiling surfaces. In Experiment 1, we examined whether a ground surface would dominate a ceiling surface when one surface was presented to each eye. In Experiment 2, we used an eye-swapping paradigm to determine whether a ground surface would come to dominance faster than a ceiling surface when presented to the suppressed eye. The eye-swapping paradigm was used again in Experiment 3, but the ground and ceiling planes were replaced with frontal planes with similar variations in texture density. The results of these experiments indicate that ground surfaces are predominant over ceiling surfaces, with this predominance affecting both the dominance and suppression phases of binocular rivalry. This superiority of ground planes is independent of image properties such as the increase or decrease in texture density from the lower half to the upper half of the images.

9.
10.
Cancellation tests have been widely used in clinical practice and in research to evaluate visuospatial attention, visual scanning patterns, and neglect problems. The aim of the present work is to present a visualized interface for the visuospatial attentional assessment system that can be employed to monitor and analyze attention performance and the search strategies used during visuospatial processing of target cancellation. We introduce a pattern identification mechanism for visual search patterns and report our findings from examining the visual search performance and patterns. We also present a comparison of results across various cancellation tests and age groups. The present study demonstrates that our system can obtain more processing data about spatiotemporal features of visual search than can conventional tests.
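As a rough sketch of the kind of spatiotemporal search features such a system could log and summarize (the data format and metrics below are illustrative assumptions, not the authors' interface), one can derive path length and timing statistics from an ordered list of cancellation events:

```python
import math

def scanpath_metrics(cancellations):
    """Summarize a cancellation log.

    cancellations : list of (x, y, t) tuples, one per cancelled target,
                    in the order the participant marked them (x, y in pixels,
                    t in seconds). This format is a hypothetical assumption.
    Returns total path length, mean inter-cancellation interval, and mean speed.
    """
    if len(cancellations) < 2:
        return {"path_length": 0.0, "mean_interval": 0.0, "mean_speed": 0.0}
    path_length, intervals = 0.0, []
    for (x0, y0, t0), (x1, y1, t1) in zip(cancellations, cancellations[1:]):
        path_length += math.hypot(x1 - x0, y1 - y0)   # distance between marks
        intervals.append(t1 - t0)                     # time between marks
    total_time = sum(intervals)
    return {
        "path_length": path_length,
        "mean_interval": total_time / len(intervals),
        "mean_speed": path_length / total_time if total_time else 0.0,
    }

# Example log with three cancellations.
print(scanpath_metrics([(10, 20, 0.0), (110, 20, 1.5), (110, 120, 3.5)]))
```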

11.
What is the task of educational theory or philosophy if it is not merely conceived as specification of philosophical doctrines in the realm of education? In my view it is the particular task of educational-philosophical theory to work critically on the historically developed cultural constructs that shape our (educational) experience. Thus, the activity that educational theorists are to perform is the critical reflection of the “limits of our world” by drawing on philosophical references and theories. In this text I describe this activity drawing from my own research practice with a particular focus on its relation to what is called thinking.

12.
Singh M, Anderson BL. Perception, 2002, 31(5): 531-552
In constructing the percept of transparency, the visual system must decompose the light intensity at each image location into two components: one for the partially transmissive surface, the other for the underlying surface seen through it. Theories of perceptual transparency have typically assumed that this decomposition is defined quantitatively in terms of the inverse of some physical model (typically, Metelli's 'episcotister model'). In previous work, we demonstrated that the visual system uses Michelson contrast as a critical image variable in assigning transmittance to transparent surfaces, not luminance differences as predicted by Metelli's model [F. Metelli, 1974, Scientific American, 230(4), 90-98]. In this paper, we study the contribution of another variable in determining perceived transmittance, namely, the image blur introduced by the light-scattering properties of translucent surfaces and materials. Experiment 1 demonstrates that increasing the degree of blur in the region of transparency leads to a lowering in perceived transmittance, even if Michelson contrast remains constant in this region. Experiment 2 tests how this addition of blur affects apparent contrast in the absence of perceived transparency. The results demonstrate that, although introducing blur leads to a lowering in apparent contrast, the magnitude of this decrease is relatively small, and not sufficient to explain the decrease in perceived transmittance observed in experiment 1. The visual system thus takes the presence of blur in the region of transparency as an additional image cue in assigning transmittance to partially transmissive surfaces.
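For reference, the two image variables contrasted in this abstract have standard textbook forms (the notation below is generic, not taken from the paper): Michelson contrast, and Metelli's episcotister relation linking the reflectances seen through a transparent layer to its transmittance.

```latex
% Michelson contrast of a luminance pattern:
C = \frac{L_{\max} - L_{\min}}{L_{\max} + L_{\min}}

% Metelli's episcotister model: a bipartite background with reflectances a, b,
% viewed through a layer of transmittance \alpha and reflectance t, yields
p = \alpha a + (1 - \alpha)\, t, \qquad q = \alpha b + (1 - \alpha)\, t
\quad\Rightarrow\quad \alpha = \frac{p - q}{a - b}
```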

13.
When we move, the visual world moves toward us. That is, self-motion normally produces visual signals (flow) that tell us about our own motion. But these signals are distorted by our motion: Visual flow actually appears slower while we are moving than it does when we are stationary and our surroundings move past us. Although for many years these kinds of distortions have been interpreted as a suppression of flow to promote the perception of a stable world, current research has shown that these shifts in perceived visual speed may have an important function in measuring our own self-motion. Specifically, by slowing down the apparent rate of visual flow during self-motion, our visual system is able to perceive differences between actual and expected flow more precisely. This is useful in the control of action.

14.
Shape constancy refers to the tendency for the perceived shape of an object to remain unchanged even under changed viewing and illumination conditions. We investigated, in two experiments, whether shape constancy would hold for images of 3-D solid objects defined by shading only, whose renderings differed in terms of surface material type (bi-directional reflectance distribution functions), light field, light direction, shape, and specularity. Observers were presented with the image of a sphere or an ellipsoid and required to set perceived orientation and cross-section profile on designated points of the image. Results showed that shape judgments varied with all the aforementioned variables except specularity. Shape estimates were more precise with specular than asperity scattering surfaces, collimated than hemispherical diffuse lighting conditions, lower than higher elevations, spherical than ellipsoidal shapes, but not different between surfaces having differing specularity. These results suggest that shape judgments are made largely on the basis of the overall intensity distribution of shading, and that the portions of intensity distribution that are due to nonstructural variables such as surface material type or light field are not excluded in the process of shape estimation, as if being due to structural components. It is concluded that little constancy is expected in the perception of shape from shading.
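As background for why the overall intensity distribution of shading constrains perceived shape, the generic Lambertian shading model commonly assumed in shape-from-shading work is shown below (a textbook form, not the specific rendering model used in these experiments):

```latex
% Image intensity at a surface point with unit normal n(x,y), lit from
% direction l by a collimated source, for a Lambertian (matte) surface:
I(x, y) = \rho \, \max\!\bigl(0,\; \mathbf{n}(x, y) \cdot \mathbf{l}\bigr)
% \rho is the surface albedo; specular and asperity-scattering terms add
% material-dependent components on top of this diffuse term, which is why
% material type and light field can bleed into shape estimates.
```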

15.
Spatial variations of visual-auditory fusion areas
Godfroy M, Roumes C, Dauchy P. Perception, 2003, 32(10): 1233-1245
The tolerance to spatial disparity between two synchronous visual and auditory components of a bimodal stimulus has been investigated in order to assess their respective contributions to perceptual fusion. The visual and auditory systems each have specific information-processing mechanisms, and provide different cues for scene perception, with the respective dominance of space for vision and of time for hearing. A broadband noise burst and a spot of light, 500 ms in duration, have been simultaneously presented to participants who had to judge whether these cues referred to a single spatial event. We examined the influence of (i) the range and the direction of spatial disparity between the visual and auditory components of a stimulation and (ii) the eccentricity of the bimodal stimulus in the observer's perceptual field. Size and shape properties of visual-auditory fusion areas have been determined in two dimensions. The greater the eccentricity within the perceptual field, the greater the dimension of these areas; however, this increase in size also depends on whether the direction of the disparity is vertical or horizontal. Furthermore, the relative location of visual and auditory signals significantly modifies the perception of unity in the vertical plane. The shape of the fusion areas, their variation in the field, and the perceptual result associated with the relative location of the visual and auditory components of the stimulus, together point to a strong contribution of audition to visual-auditory fusion. The spatial ambiguity of the localisation capabilities of the auditory system may play a more essential role than accurate visual resolution in determining fusion.

16.
Rogers B, Brecher K. Perception, 2007, 36(9): 1275-1289
Helmholtz's famous pincushioned chessboard figure has been used to make the point that straight lines in the world are not always perceived as straight and, conversely, that curved lines in the world can sometimes be seen as straight. However, there is little agreement as to the cause of these perceptual errors. Some authors have attributed the errors to the shape of the retina, or the amount of cortex devoted to the processing of images falling on different parts of the retina, while others have taken the effects to indicate that visual space itself is curved. Helmholtz himself claimed that the 'uncurved lines on the visual globe' corresponded to 'direction circles' defined as those arcs described by the line of fixation when the eye moves according to Listing's law. Careful re-reading of Helmholtz together with some additional observations leads us to the conclusion that two other factors are also involved in the effect: (i) a lack of information about the distance of peripherally viewed objects and (ii) the preference of the visual system for seeing the pincushion squares as similar in size.

17.
18.
Kaski D. Perception, 2002, 31(6): 717-731
Vision is the most highly developed sense in man and represents the doorway through which most of our knowledge of the external world arises. Visual imagery can be defined as the representation of perceptual information in the absence of visual input. Visual imagery has been shown to complement vision in this acquisition of knowledge: it is used in memory retrieval, problem solving, and the recognition of properties of objects. The processes underlying visual imagery have been assimilated to those of the visual system and are believed to share a neural substrate. However, results from studies in congenitally and cortically blind subjects have opposed this hypothesis. Here I review the currently available evidence.

19.
The visual system historically has been defined as consisting of at least two broad subsystems subserving object and spatial vision. These visual processing streams have been organized both structurally as two distinct pathways in the brain, and functionally for the types of tasks that they mediate. The classic definition by Ungerleider and Mishkin labeled a ventral "what" stream to process object information and a dorsal "where" stream to process spatial information. More recently, Goodale and Milner redefined the two visual systems with a focus on the different ways in which visual information is transformed for different goals. They relabeled the dorsal stream as a "how" system for transforming visual information using an egocentric frame of reference in preparation for direct action. This paper reviews recent research from psychophysics, neurophysiology, neuropsychology and neuroimaging to define the roles of the ventral and dorsal visual processing streams. We discuss a possible solution that allows for both "where" and "how" systems that are functionally and structurally organized within the posterior parietal lobe.

20.
A crucial question in cognitive science is how linguistic and visual information are integrated. Previous research has shown that eye movements to objects in the visual environment are locked to linguistic input. More surprisingly, listeners fixate on now-empty regions that had previously been occupied by relevant objects. This 'looking at nothing' phenomenon has been linked to the claim that the visual system constructs sparse representations of the external world and relies on saccades and fixations to extract information in a just-in-time manner. Our model provides a different explanation: based on recent work in visual cognition and memory, it assumes that the visual system creates and stores detailed internal memory representations, and that looking at nothing facilitates retrieval of those representations.
