Similar Articles
20 similar articles found (search time: 46 ms)
1.
The correspondence problem is a classic issue in vision and cognition. Frequent perceptual disruptions, such as saccades and brief occlusion, create gaps in perceptual input. How does the visual system establish correspondence between objects visible before and after the disruption? Current theories hold that object correspondence is established solely on the basis of an object’s spatiotemporal properties and that an object’s surface feature properties (such as color or shape) are not consulted in correspondence operations. In five experiments, we tested the relative contributions of spatiotemporal and surface feature properties to establishing object correspondence across brief occlusion. Correspondence operations were strongly influenced both by the consistency of an object’s spatiotemporal properties across occlusion and by the consistency of an object’s surface feature properties across occlusion. These data argue against the claim that spatiotemporal cues dominate the computation of object correspondence. Instead, the visual system consults multiple sources of relevant information to establish continuity across perceptual disruption.

2.
The loss of peripheral vision impairs spatial learning and navigation. However, the mechanisms underlying these impairments remain poorly understood. One advantage of having peripheral vision is that objects in an environment are easily detected and readily foveated via eye movements. The present study examined this potential benefit of peripheral vision by investigating whether competent performance in spatial learning requires effective eye movements. In Experiment 1, participants learned room-sized spatial layouts with or without restriction on direct eye movements to objects. Eye movements were restricted by having participants view the objects through small apertures in front of their eyes. Results showed that impeding effective eye movements made subsequent retrieval of spatial memory slower and less accurate. The small apertures also occluded much of the environmental surroundings, but the importance of this kind of occlusion was ruled out in Experiment 2 by showing that participants exhibited intact learning of the same spatial layouts when luminescent objects were viewed in an otherwise dark room. Together, these findings suggest that one of the roles of peripheral vision in spatial learning is to guide eye movements, highlighting the importance of spatial information derived from eye movements for learning environmental layouts.

3.
Vision is based on spatial correspondences between physically different structures: in environment, retina, brain, and perception. An examination of the correspondence between environmental surfaces and their retinal images showed that this consists of 2-dimensional 2nd-order differential structure (effectively 4th-order) associated with local surface shape, suggesting that this might be a primitive form of spatial information. Next, experiments on hyperacuities for detecting relative motion and binocular disparity among separated image features showed that spatial positions are visually specified by the surrounding optical pattern rather than by retinal coordinates, minimally affected by random image perturbations produced by 3-D object motions. Retinal image space, therefore, involves 4th-order differential structure. This primitive spatial structure constitutes information about local surface shape.

4.
To what extent are newborn brains designed to operate over natural visual input? To address this question, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) show enhanced learning of natural visual sequences at the onset of vision. We took the same set of images and grouped them into either natural sequences (i.e., sequences showing different viewpoints of the same real-world object) or unnatural sequences (i.e., sequences showing different images of different real-world objects). When raised in virtual worlds containing natural sequences, newborn chicks developed the ability to recognize familiar images of objects. Conversely, when raised in virtual worlds containing unnatural sequences, newborn chicks’ object recognition abilities were severely impaired. In fact, the majority of the chicks raised with the unnatural sequences failed to recognize familiar images of objects despite acquiring over 100 h of visual experience with those images. Thus, newborn chicks show enhanced learning of natural visual sequences at the onset of vision. These results indicate that newborn brains are designed to operate over natural visual input.

5.
Intermixing central, directional arrow targets with the peripheral targets typically used in the Posnerian spatial cueing paradigm offers a useful diagnostic for ascertaining the relative contributions of output and input processes to oculomotor inhibition of return (IOR). Here, we use this diagnostic to determine whether object-based oculomotor IOR comprises output and/or input processes. One of two placeholder objects in peripheral vision was cued, then both objects rotated smoothly either 90 or 180 degrees around the circumference of an imaginary circle. After this movement, a saccade was made to the location marked by a peripheral onset target or indicated by the central arrow. In our first three experiments, whereas there was evidence for IOR when measured by central arrow or peripheral onset targets at cued locations, there was little trace of IOR at the cued object. We thereafter precisely replicated the seminal experiment for object-based oculomotor IOR (Abrams, R. A., & Dobkin, R. S. (1994). Inhibition of return: Effects of attentional cuing on eye movement latencies. Journal of Experimental Psychology: Human Perception and Performance, 20(3), 467–477; Experiment 4) but again found little evidence of an object-based IOR effect. Finally, we ran a paradigm with only peripheral targets and with motion and stationary trials randomly intermixed. Here we again showed IOR at the cued location but not at the cued object. Together, the findings suggest that object-based representation of oculomotor IOR is much more tenuous than implied by the literature.

6.
Journal of Applied Logic, 2015, 13(3): 239–258
This paper provides a semantics for input/output logic based on formal concept analysis. The central result shows that an input/output logic axiomatised by a relation R is the same as the logic induced by deriving pairs from the concept lattice generated by R using a ∧- and ∨-classical Scott consequence relation. This correspondence offers powerful analytical techniques for classifying, visualising and analysing input/output relations, revealing implicit hierarchical structure and/or natural clusterings and dependencies. All formal developments are illustrated by a worked example towards the end.
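The lattice construction at the heart of this result can be made concrete with a small sketch. The following Python fragment (our illustration, not the paper's formalism; all function names are ours) enumerates the formal concepts of a tiny input/output relation R by closing subsets of inputs:

```python
from itertools import combinations

# Illustrative only: the paper's construction is more general (Scott
# consequence relations over the lattice); here we just enumerate the
# formal concepts (extent, intent) of a small relation R.

def intent(xs, R, outputs):
    """Outputs related to every input in xs."""
    return {y for y in outputs if all((x, y) in R for x in xs)}

def extent(ys, R, inputs):
    """Inputs related to every output in ys."""
    return {x for x in inputs if all((x, y) in R for y in ys)}

def formal_concepts(R):
    inputs = {x for x, _ in R}
    outputs = {y for _, y in R}
    concepts = set()
    # Close every subset of inputs; exponential, fine for tiny examples.
    for k in range(len(inputs) + 1):
        for xs in combinations(sorted(inputs), k):
            i = intent(set(xs), R, outputs)
            e = extent(i, R, inputs)
            concepts.add((frozenset(e), frozenset(i)))
    return concepts
```

For R = {(1, 'a'), (1, 'b'), (2, 'b')} this yields exactly two concepts, ({1}, {'a','b'}) and ({1,2}, {'b'}), exposing the hierarchical structure the abstract mentions.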

7.
We examined Goslin, Dixon, Fischer, Cangelosi, and Ellis’s (Psychological Science 23:152–157, 2012) claim that the object-based correspondence effect (i.e., faster keypress responses when the orientation of an object’s graspable part corresponds with the response location than when it does not) is the result of object-based attention (vision–action binding). In Experiment 1, participants determined the category of a centrally located object (kitchen utensil vs. tool), as in Goslin et al.’s study. The handle orientation (left vs. right) did or did not correspond with the response location (left vs. right). We found no correspondence effect on the response times (RTs) for either category. The effect was also not evident in the P1 and N1 components of the event-related potentials, which are thought to reflect the allocation of early visual attention. This finding was replicated in Experiment 2 for centrally located objects, even when the object was presented 45 times (33 more times than in Exp. 1). Critically, the correspondence effects on RTs, P1s, and N1s emerged only when the object was presented peripherally, so that the object handle was clearly located to the left or right of fixation. Experiment 3 provided further evidence that the effect was observed only for the base-centered objects, in which the handle was clearly positioned to the left or right of center. These findings contradict those of Goslin et al. and provide no evidence that an intended grasping action modulates visual attention. Instead, the findings support the spatial-coding account of the object-based correspondence effect.

8.
Multiresolution (or pyramid) approaches to computer vision provide the capability of rapidly detecting and extracting global structures (features, regions, patterns, etc.) from an image. The human visual system also is able to spontaneously (or preattentively) perceive various types of global structure in visual input; this process is sometimes called perceptual organization. This paper describes a set of pyramid-based algorithms that can detect and extract these types of structure; included are algorithms for inferring three-dimensional information from images and for processing time sequences of images. If implemented in parallel on cellular pyramid hardware, these algorithms require processing times on the order of the logarithm of the image diameter.
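The log-diameter time bound follows from halving the image at each pyramid level. A minimal sketch (our illustration, not the paper's algorithms; assumes a square, power-of-two image) in Python:

```python
import numpy as np

# Build an image pyramid: each level halves the resolution by
# averaging 2x2 blocks, so an NxN image yields log2(N)+1 levels.
# With one processor per cell at each level, a bottom-up pass
# therefore takes time on the order of log(image diameter).

def build_pyramid(img):
    levels = [img]
    while img.shape[0] > 1:
        n = img.shape[0] // 2
        # Average disjoint 2x2 blocks to form the next coarser level.
        img = img.reshape(n, 2, n, 2).mean(axis=(1, 3))
        levels.append(img)
    return levels
```

The apex of the pyramid is a single cell holding the global mean, the simplest example of a "global structure" extracted in logarithmic depth.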

9.
We present a computational model of grasping of non-fixated (extrafoveal) target objects which is implemented on a robot setup, consisting of a robot arm with cameras and gripper. This model is based on the premotor theory of attention (Rizzolatti et al., 1994) which states that spatial attention is a consequence of the preparation of goal-directed, spatially coded movements (especially saccadic eye movements). In our model, we add the hypothesis that saccade planning is accompanied by the prediction of the retinal images after the saccade. The foveal region of these predicted images can be used to determine the orientation and shape of objects at the target location of the attention shift. This information is necessary for precise grasping. Our model consists of a saccade controller for target fixation, a visual forward model for the prediction of retinal images, and an arm controller which generates arm postures for grasping. We compare the precision of the robotic model in different task conditions, among them grasping (1) towards fixated target objects using the actual retinal images, (2) towards non-fixated target objects using visual prediction, and (3) towards non-fixated target objects without visual prediction. The first and second setting result in good grasping performance, while the third setting causes considerable errors of the gripper orientation, demonstrating that visual prediction might be an important component of eye–hand coordination. Finally, based on the present study we argue that the use of robots is a valuable research methodology within psychology.
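The visual-forward-model component can be caricatured in a few lines. Under the strong simplifying assumption that a saccade merely translates the retinal image (the paper's model is richer), the predicted post-saccadic image is a shifted copy, from which a predicted foveal patch can be read out; all names here are ours:

```python
import numpy as np

def predict_postsaccadic(image, saccade):
    """Toy visual forward model: a saccade of (dy, dx) shifts the
    retinal image by (-dy, -dx), bringing the target to the fovea."""
    dy, dx = saccade
    return np.roll(image, shift=(-dy, -dx), axis=(0, 1))

def foveal_patch(image, radius=1):
    """Read out the central (foveal) region of a retinal image."""
    cy, cx = image.shape[0] // 2, image.shape[1] // 2
    return image[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]
```

In this cartoon, planning a saccade to a peripheral object and inspecting `foveal_patch(predict_postsaccadic(...))` stands in for the model's use of predicted foveal images to recover object orientation and shape before fixation.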

10.
Neuropsychological studies suggest the existence of lateralized networks that represent categorical and coordinate types of spatial information. In addition, studies with neural networks have shown that they encode more effectively categorical spatial judgments or coordinate spatial judgments, if their input is based, respectively, on units with relatively small, nonoverlapping receptive fields, as opposed to units with relatively large, overlapping receptive fields. These findings leave open the question of whether interactive processes between spatial detectors and types of spatial relations can be modulated by spatial attention. We hypothesized that spreading the attention window to encompass an area that includes two objects promotes coordinate spatial relations, based on coarse coding by large, overlapping receptive fields. In contrast, narrowing attention to encompass an area that includes only one of the objects benefits categorical spatial relations, by effectively parsing space. By use of a cueing procedure, the spatial attention window was manipulated to select regions of differing areas. As predicted, when the attention window was large, coordinate spatial transformations were noticed faster than categorical transformations; in contrast, when the attention window was relatively smaller, categorical spatial transformations were noticed faster than coordinate transformations. Another novel finding was that coordinate changes were noticed faster when cueing an area that included both objects as well as the empty space between them than when simultaneously cueing both areas including the objects while leaving the gap between them uncued.

11.
When judging in stereoscopic vision whether an object is lying in front of or behind the point of momentary fixation, the visual system extracts depth information by using retinal disparity; in this case it computes one angular difference between retinal images (simple positional disparity). But if the task is to discriminate two or more objects in their depth (relative to the point of fixation) and the relative distances between them, two or more such angular differences have to be determined (relative positional disparity). An investigation was carried out to determine whether depth extraction is more complex for relative distances than for object positions and therefore demands a longer processing time. For this purpose stimuli with simple and relative positional disparity were foveally and parafoveally presented (each followed by a masking stimulus). It was shown that the duration threshold for the detection of stimuli with relative disparity was about 2.5 times larger than that for stimuli with simple disparity (Exp. 1). This difference could not be attributed to differences in stimulus configuration between simple and relative disparity (Exp. 2). The results are discussed in terms of serial, hierarchically structured disparity processing.

12.
Spatial ventriloquism refers to the phenomenon that a visual stimulus such as a flash can attract the perceived location of a spatially discordant but temporally synchronous sound. An analogous example of mutual attraction between audition and vision has been found in the temporal domain, where temporal aspects of a visual event, such as its onset, frequency, or duration, can be biased by a slightly asynchronous sound. In this review, we examine various manifestations of spatial and temporal attraction between the senses (both direct effects and aftereffects), and we discuss important constraints on the occurrence of these effects. Factors that potentially modulate ventriloquism—such as attention, synesthetic correspondence, and other cognitive factors—are described. We trace theories and models of spatial and temporal ventriloquism, from the traditional unity assumption and modality appropriateness hypothesis to more recent Bayesian and neural network approaches. Finally, we summarize recent evidence probing the underlying neural mechanisms of spatial and temporal ventriloquism.
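The Bayesian accounts mentioned here typically build on maximum-likelihood cue integration, which is easy to state concretely. A minimal sketch (our illustration, assuming independent Gaussian likelihoods): the fused location estimate weights each modality by its inverse variance, so the spatially more reliable visual cue captures the sound, the classic ventriloquist pattern.

```python
# Maximum-likelihood fusion of a visual and an auditory location cue.
# With Gaussian likelihoods, the optimal estimate is an inverse-
# variance-weighted average, and the fused variance is smaller than
# either cue's variance alone.

def fuse(x_vis, var_vis, x_aud, var_aud):
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_aud)
    x_hat = w_vis * x_vis + (1 - w_vis) * x_aud
    var_hat = 1 / (1 / var_vis + 1 / var_aud)
    return x_hat, var_hat
```

For example, a flash at 0° with variance 1 and a sound at 10° with variance 4 fuse to a percept near 2°, pulled strongly toward the visual location.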

13.
In order to find objects or places in the world, multiple sources of information, such as visual input, auditory input and asking for directions, can help you. These different sources of information can be converged into a spatial image, which represents configurational characteristics of the world. This paper discusses the findings on the nature of spatial images and the role of spatial language in generating these spatial images in both blind and sighted individuals. Congenitally blind individuals have never experienced visual input, yet they are able to perform several tasks traditionally associated with spatial imagery, such as mental scanning, mental pathway completions and mental clock time comparison, though perhaps not always in the same manner as sighted individuals. Therefore, they offer invaluable insights into the exact nature of spatial images. We will argue that spatial imagery exceeds the input from different input modalities to form an abstract mental representation while maintaining connections with the input modalities. This suggests that the nature of spatial images is supramodal, which can explain functionally equivalent results from verbal and perceptual inputs for spatial situations and subtle to moderate behavioral differences between the blind and sighted.

14.
Objective: Previous research has shown that people consume less food in the dark compared to normal vision conditions. While this effect is commonly attributed to increased attention to internal cues, it could also be caused by increased difficulty of maneuvering in a dark setting. This study investigated this potential alternative explanation.

Design: A 2 (dark versus normal vision setting) × 2 (highlighted versus non-highlighted utensils) between-subjects design was employed.

Main outcome measures: Perceived difficulty of maneuvering and consumption of yoghurt were assessed as main outcome measures.

Results: Participants consumed marginally less in dark compared to normal vision conditions, and experienced higher difficulty of maneuvering. Importantly, both effects were qualified by a significant interaction with highlighting, which increased consumption and reduced perceived difficulty compared to no highlights. Difficulty of maneuvering did not mediate the interactive effect of vision and highlighting on consumption.

Conclusion: Difficulty of maneuvering should be considered when investigating eating behaviour under dark conditions. In line with an embodied cognition account, results also reveal the necessity of visual information for interaction with objects in the environment and imply that detail-deprived object information may be sufficient for activation of the motor system.

15.
Three experiments were performed to examine the role that central and peripheral vision play in the perception of the direction of translational self-motion, or heading, from optical flow. When the focus of radial outflow was in central vision, heading accuracy was slightly higher with central circular displays (10°–25° diameter) than with peripheral annular displays (40° diameter), indicating that central vision is somewhat more sensitive to this information. Performance dropped rapidly as the eccentricity of the focus of outflow increased, indicating that the periphery does not accurately extract radial flow patterns. Together with recent research on vection and postural adjustments, these results contradict the peripheral dominance hypothesis that peripheral vision is specialized for perception of self-motion. We propose a functional sensitivity hypothesis: that self-motion is perceived on the basis of optical information rather than the retinal locus of stimulation, but that central and peripheral vision are differentially sensitive to the information characteristic of each retinal region.
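The "focus of radial outflow" these displays manipulate can be recovered from flow vectors by simple least squares, which may help make the stimulus concrete. A sketch (our illustration, not the authors' method; assumes pure observer translation, so each flow vector points radially away from the focus of expansion):

```python
import numpy as np

# Each flow vector d at image point p constrains the focus of
# expansion (FoE) to lie on the line through p with direction d:
#   d_y * foe_x - d_x * foe_y = d_y * p_x - d_x * p_y
# Stacking one such equation per point gives a linear system whose
# least-squares solution intersects the flow lines.

def focus_of_expansion(points, flows):
    points = np.asarray(points, dtype=float)
    flows = np.asarray(flows, dtype=float)
    A = np.stack([flows[:, 1], -flows[:, 0]], axis=1)
    b = flows[:, 1] * points[:, 0] - flows[:, 0] * points[:, 1]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe
```

With noiseless radial flow the recovered point matches the true heading direction exactly; the psychophysical question in the abstract is how well central versus peripheral retina exploits this same geometric information.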

16.
A successful vision system must solve the problem of deriving geometrical information about three-dimensional objects from two-dimensional photometric input. The human visual system solves this problem with remarkable efficiency, and one challenge in vision research is to understand how neural representations of objects are formed and what visual information is used to form these representations. Ideal observer analysis has demonstrated the advantages of studying vision from the perspective of explicit generative models and a specified visual task, which divides the causes of image variations into the separate categories of signal and noise. Classification image techniques estimate the visual information used in a task from the properties of “noise” images that interact most strongly with the task. Both ideal observer analysis and classification image techniques rely on the assumption of a generative model. We show here how the ability of the classification image approach to understand how an observer uses visual information can be improved by matching the type and dimensionality of the model to that of the neural representation or internal template being studied. Because image variation in real world object tasks can arise from both geometrical shape and photometric (illumination or material) changes, a realistic image generation process should model geometry as well as intensity. A simple example is used to demonstrate what we refer to as a “classification object” approach to studying three-dimensional object representations.
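The classification-image estimate itself is simple to simulate. A minimal sketch (our illustration, not the paper's "classification object" extension): a linear observer classifies Gaussian noise fields against an internal template, and averaging the noise by response category recovers an estimate proportional to that template.

```python
import numpy as np

rng = np.random.default_rng(0)

# The simulated observer's internal template (6 "pixels" here).
template = np.array([1.0, -1.0, 1.0, -1.0, 0.0, 0.0])

def classification_image(n_trials=20000):
    # Pure-noise stimuli shown on each trial.
    noise = rng.normal(size=(n_trials, template.size))
    # Linear template observer: respond "yes" when the noise
    # correlates positively with the internal template.
    responses = noise @ template > 0
    # Classic reverse-correlation estimate:
    # mean "yes" noise minus mean "no" noise.
    return noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)
```

With enough trials the estimate aligns almost perfectly with the generating template, which is what licenses reading classification images as pictures of the observer's internal template.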

17.
Picture perception and ordinary perception of real objects differ in several respects. Two of their main differences are: (1) depicted objects are not perceived as present, and (2) we cannot perceive significant spatial shifts as we move with respect to them. Some special illusory pictures escape these visual effects obtained in usual picture perception. First, trompe l'oeil paintings violate (1): the depicted object looks, even momentarily, like a present object. Second, anamorphic paintings violate (2): they allow the viewer to appreciate spatial shifts resulting from movement. However, anamorphic paintings do not violate (1): they are still perceived as clearly pictorial, that is, nonpresent. What about the relation between trompe l'oeil paintings and (2)? Do trompe l'oeils allow us to perceive spatial shifts? Nobody has ever focused on this aspect of trompe l'oeil perception. I offer the first speculation about this question. I suggest that, if we follow our most recent theories in philosophy and vision science about the mechanisms of picture perception, then the only plausible answer, in line with phenomenological intuitions, is that, differently from nonillusory, usual picture perception, and similarly to ordinary perception, trompe l'oeil perception does allow us to perceive spatial shifts resulting from movement. I also discuss the philosophical implications of this claim.

18.
Mary A. Ashley, Sophia, 2018, 57(1): 103–118
Although a conventional environmentalism focuses on the health of ecological systems, Pope Francis’s 2015 environmental encyclical Laudato Sí invokes St. Francis of Assisi to emphasize God’s love for the individual organism, no matter how small. Decrying the tendency to regard other creatures as mere objects to be controlled and used, Pope Francis urges our enactment of a ‘universal communion’ governed by love. I suggest, however, that Laudato Sí’s animal ethic, as focused on ordering human and animal need, is inadequate to its overarching vision of cross-species communion. This vision requires the sort of cross-species relational bridge implicit in Maurice Merleau-Ponty’s view of agency as an irreducibly ‘animate’ expression of choice and afforded further definition in Kenneth J. Shapiro’s conception of a ‘kinesthetic empathy.’ As the phenomenological epistemology underlying both discourses makes possible a rough correspondence, I put these in conversation to demonstrate that a Merleau-Pontyan and reciprocal agency is a constitutive aspect of the fullest sort of cross-species relation, such that recognition of this agency can both deepen our understanding of ‘universal communion’ and foster engagement in its practice.

19.
Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modalities of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.

20.
Visual Cognition, 2013, 21(2): 113–142
Vision is critical for the efficient execution of prehension movements, providing information about the location of a target object with respect to the viewer, its spatial relationship to other objects, and intrinsic properties of the object such as its size and orientation. This paper reports three experiments which examined the role played by binocular vision in the execution of prehension movements. Specifically, transport and grasp kinematics were examined for prehension movements executed under binocular, monocular, and no-vision (memory-guided and open-loop) viewing conditions. The results demonstrated an overall advantage for reaches executed under binocular vision; movement duration and the length of the deceleration phase were longer, and movement velocity reduced, when movements were executed with monocular vision. Furthermore, the results indicated that binocular vision is particularly important during “selective” reaching, that is, reaching for target objects which are accompanied by flanker objects. These results are related to recent neuropsychological investigations suggesting that stereopsis may be critical for the visual control of prehension.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号