Similar Articles
20 similar articles found (search time: 31 ms)
1.
A sampling theoretical and experimental framework for the study of spatial vision is introduced. It is suggested that spatial Gestalt perception can be fruitfully analyzed by applying the concepts and methods of modern spatial filtering theory as they are known in the theory of image sampling and reconstruction. Demonstrations of the sampling processes in spatial vision are given and an experimental method for estimating the spatial reconstruction power of the human visual system is described. The experimental results presented suggest that high spatial frequency information has a special significance for human vision. Evidently, high frequency information is transmitted more easily through the visual system than has been generally assumed on the basis of contrast sensitivity studies.

2.
Brain and Cognition (2006), 60(3), 258-268
We investigate how vision affects haptic performance when task-relevant visual cues are reduced or excluded. The task was to remember the spatial location of six landmarks that were explored by touch in a tactile map. Here, we use specially designed spectacles that simulate residual peripheral vision, tunnel vision, diffuse light perception, and total blindness. Results for target locations differed, suggesting additional effects from adjacent touch cues. These are discussed. Touch with full vision was most accurate, as expected. Peripheral and tunnel vision, which reduce visuo-spatial cues, differed in error pattern. Both were less accurate than full vision, and significantly more accurate than touch with diffuse light perception, and touch alone. The important finding was that touch with diffuse light perception, which excludes spatial cues, did not differ from touch without vision in performance accuracy, nor in location error pattern. The contrast between spatially relevant versus spatially irrelevant vision provides new, rather decisive, evidence against the hypothesis that vision affects haptic processing even if it does not add task-relevant information. The results support optimal integration theories, and suggest that spatial and non-spatial aspects of vision need explicit distinction in bimodal studies and theories of spatial integration.

3.
Recent advances in technology and the increased use of tablet computers for mobile health applications such as vision testing necessitate an understanding of the behavior of the displays of such devices, to facilitate the reproduction of existing or the development of new vision assessment tests. The purpose of this study was to investigate the physical characteristics of one model of tablet computer (iPad mini Retina display) with regard to display consistency across a set of 15 devices and their potential application as clinical vision assessment tools. Once the tablet computer was switched on, it required about 13 min to reach luminance stability, while chromaticity remained constant. The luminance output of the device remained stable until a battery level of 5%. Luminance varied from center to peripheral locations of the display and with viewing angle, whereas the chromaticity did not vary. A minimal (1%) variation in luminance was observed due to temperature, and once again chromaticity remained constant. Also, these devices showed good temporal stability of luminance and chromaticity. All 15 tablet computers showed gamma functions approximating the standard gamma (2.20) and showed similar color gamut sizes, except for the blue primary, which displayed minimal variations. The physical characteristics across the 15 devices were similar and are known, thereby facilitating the use of this model of tablet computer as a visual stimulus display.
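The gamma function mentioned in this abstract describes how display luminance grows with normalized drive level, approximately L = L_max · v^γ, with γ ≈ 2.20 reported here. A minimal sketch of recovering γ from a set of luminance measurements via a log-log least-squares fit through the origin (the function name and sample values are illustrative, not taken from the study):

```python
import math

def fit_gamma(levels, luminances):
    """Estimate display gamma from measured luminances.

    levels: normalized drive values in (0, 1], last entry should be 1.0
    luminances: measured luminance (e.g., cd/m^2) at each drive level

    Fits log(L / L_max) = gamma * log(v) by least squares through
    the origin; the point v = 1.0 contributes nothing (log 1 = 0).
    """
    l_max = luminances[-1]  # luminance at full drive
    num = den = 0.0
    for v, lum in zip(levels, luminances):
        if v <= 0 or lum <= 0:
            continue  # log undefined at zero drive / zero luminance
        x = math.log(v)
        y = math.log(lum / l_max)
        num += x * y
        den += x * x
    return num / den
```

Measuring a handful of drive levels with a photometer and fitting them this way is a common quick check of whether a display tracks the standard 2.2 gamma.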

4.
It is often assumed that the spatial senses (vision, hearing and the tactual senses) operate as distinct and independent modalities and, moreover, that vision is crucial to the development of spatial abilities. However, well-controlled studies of blind persons with adequate experience show that they can function usefully in space. In other words, vision is not a necessary condition for spatial awareness. On the other hand, though the blind may be equal or even superior to the sighted when performing spatial tasks within the body space, they may be deficient, either developmentally or absolutely, in tasks which involve events at a distance from the body, principally in auditory localization. One possible explanation of the differences between blind and sighted (McKinney, 1964; Attneave & Benson, 1969; Warren, 1970) is that vision is the primary spatial reference, and inputs from other modalities are fitted to a visual map. Several criticisms of this theory are adduced and an alternative theory derived from Sherrington (1947), in which all sensory inputs map on to efferent patterns, is sketched.

5.
The modality by which object azimuths (directions) are presented affects learning of multiple locations. In Experiment 1, participants learned sets of three and five object azimuths specified by a visual virtual environment, spatial audition (3D sound), or auditory spatial language. Five azimuths were learned faster when specified by spatial modalities (vision, audition) than by language. Experiment 2 equated the modalities for proprioceptive cues and eliminated spatial cues unique to vision (optic flow) and audition (differential binaural signals). There remained a learning disadvantage for spatial language. We attribute this result to the cost of indirect processing from words to spatial representations.

6.
The loss of peripheral vision impairs spatial learning and navigation. However, the mechanisms underlying these impairments remain poorly understood. One advantage of having peripheral vision is that objects in an environment are easily detected and readily foveated via eye movements. The present study examined this potential benefit of peripheral vision by investigating whether competent performance in spatial learning requires effective eye movements. In Experiment 1, participants learned room-sized spatial layouts with or without restriction on direct eye movements to objects. Eye movements were restricted by having participants view the objects through small apertures in front of their eyes. Results showed that impeding effective eye movements made subsequent retrieval of spatial memory slower and less accurate. The small apertures also occluded much of the environmental surroundings, but the importance of this kind of occlusion was ruled out in Experiment 2 by showing that participants exhibited intact learning of the same spatial layouts when luminescent objects were viewed in an otherwise dark room. Together, these findings suggest that one of the roles of peripheral vision in spatial learning is to guide eye movements, highlighting the importance of spatial information derived from eye movements for learning environmental layouts.

7.
Having been used for decades to present visual stimuli in psychophysical and psychophysiological studies, cathode ray tubes (CRTs) were long the gold standard for stimulus presentation in vision research. Recently, as CRTs have become increasingly rare in the market, researchers have started using various types of liquid-crystal display (LCD) monitors as a replacement for CRTs. However, LCDs designed for vision research are typically not cost-effective, and consumer-grade models often cannot reach the full capacity of a high refresh rate. In this study we measured the temporal and spatial characteristics of a consumer-grade LCD, and the results suggested that a consumer-grade LCD can successfully meet the technical demands of vision research. The tested LCD, working in a flash style like that of CRTs, demonstrated perfect consistency of initial latencies across locations, yet showed poor spatial uniformity and sluggishness in reaching the requested luminance within the first frame. After these drawbacks were addressed through software corrections, the candidate monitor showed performance comparable or superior to that of CRTs in terms of both spatial and temporal homogeneity. The proposed solution can be used as a replacement for CRTs in vision research.
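The abstract does not specify what the "software corrections" were; one standard approach to correcting luminance nonlinearity (a sketch under that assumption, not the authors' actual procedure) is to invert the measured drive-to-luminance curve with a lookup function, so that a requested luminance is mapped back to the drive level that produces it:

```python
import bisect

def build_inverse_lut(measured):
    """Build an inverse lookup from measured (drive_level, luminance) pairs.

    measured: pairs sorted by drive level; luminance is assumed to
    increase monotonically with drive (true for a working display).
    Returns a function mapping a target luminance to a drive level.
    """
    levels = [m[0] for m in measured]
    lums = [m[1] for m in measured]

    def requested_to_drive(target_lum):
        # Clamp to the measured range, then linearly interpolate
        # between the two bracketing measurements.
        if target_lum <= lums[0]:
            return levels[0]
        if target_lum >= lums[-1]:
            return levels[-1]
        i = bisect.bisect_left(lums, target_lum)
        frac = (target_lum - lums[i - 1]) / (lums[i] - lums[i - 1])
        return levels[i - 1] + frac * (levels[i] - levels[i - 1])

    return requested_to_drive
```

In practice such a table would be built per screen location from photometer measurements, which is one way the spatial non-uniformity reported above could be compensated.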

8.
According to feature-integration theory (Treisman & Gelade, 1980), separable features such as color and shape exist in separate maps in preattentive vision and can be integrated only through the use of spatial attention. Many perceptual aftereffects, however, which are also assumed to reflect the features available in preattentive vision, are sensitive to conjunctions of features. One possible resolution of these views holds that adaptation to conjunctions depends on spatial attention. We tested this proposition by presenting observers with gratings varying in color and orientation. The resulting McCollough aftereffects were independent of whether the adaptation stimuli were presented inside or outside of the focus of spatial attention. Therefore, color and shape appear to be conjoined preattentively, when perceptual aftereffects are used as the measure. These same stimuli, however, appeared to be separable in two additional experiments that required observers to search for gratings of a specified color and orientation. These results show that different experimental procedures may be tapping into different stages of preattentive vision.

9.
The general observation that handwriting is not noticeably impaired by the withdrawal of vision can be explained in two ways. One might argue that vision is not needed during the act of writing. Micro-analyses should then reveal that spatial as well as temporal writing features are identical in conditions of vision and no vision. Alternatively, it is possible that vision is needed during the act of writing, but that without vision possible errors and inaccuracies have to be prevented. Assuming that the latter would place an extra demand on movement control, this should be revealed by an increase in processing time. We found evidence for the latter view in the present study, in which 12 subjects wrote a nonsense letter sequence with and without vision. Close examination showed that writing shapes remained equally invariant under both vision conditions, suggesting that spatial control was unaffected by withdrawing vision. The prediction that invariance of shapes is preserved in the absence of vision at the expense of processing-time increments was confirmed. The increase in reaction time observed when visual guidance was withdrawn suggests that more processing time was needed prior to movement start. Moreover, the RT increment was larger when a short writing duration was instructed. The present findings are discussed in light of the remarkable flexibility of writing as a motor skill, in which writers appear to be able to employ specific strategies to preserve shape in the absence of visual guidance.

10.
Four perceptual identification experiments examined the influence of spatial cues on the recognition of words presented in central vision (with fixation on either the first or last letter of the target word) and in peripheral vision (displaced left or right of a central fixation point). Stimulus location had a strong effect on word identification accuracy in both central and peripheral vision, showing a strong right visual field superiority that did not depend on eccentricity. Valid spatial cues improved word identification for peripherally presented targets but were largely ineffective for centrally presented targets. Effects of spatial cuing interacted with visual field effects in Experiment 1, with valid cues reducing the right visual field superiority for peripherally located targets, but this interaction was shown to depend on the type of neutral cue. These results provide further support for the role of attentional factors in visual field asymmetries obtained with targets in peripheral vision but not with centrally presented targets.

11.
This paper explores the relationship between technology and technique in the use of computers as tools and how it is leading the cognitive sciences into an era of “webs.” Ernst Kapp suggested that it is humans who determine the “appropriate form” of any tool through the way they use and think about it; Douglas Engelbart, a pioneering computer researcher, suggested that tools change to meet our expectations, pushing us to understand the world in different ways. These two interrelated observations about technology are especially salient for our burgeoning information age. The current intersection of technologies leads to two competing visions of the computer – both deeply influenced by the concept of human–computer symbiosis – and to very different conceptions of human thinking. The vision of the computer as a recreation of human thinking, heavily influenced by the development of tools such as the personal computer and object-oriented programming, leads to viewing ideal human thinking as efficiently designed, well organized, and locally regulated by executive functions. The second vision of computers, as augmenting the human mind by extending brain activity out into the information universe, leads to web- or trails-related themes that focus on non-linear, non-hierarchical inter-linking of information into cohesive patterns. This paper suggests that, because of the pace of tool development in these two computer capabilities, the theme of the central processing unit dominated early, but we are now entering a new, more complex “age of webs.”

12.
Is vision informationally encapsulated from cognition or is it cognitively penetrated? I shall argue that intentions penetrate vision in the experience of visual spatial constancy: the world appears to be spatially stable despite our frequent eye movements. I explicate the nature of this experience and critically examine and extend current neurobiological accounts of spatial constancy, emphasizing the central role of motor signals in computing such constancy. I then provide a stringent condition for failure of informational encapsulation that emphasizes a computational condition for cognitive penetration: cognition must serve as an informational resource for visual computation. This requires proposals regarding semantic information transfer, a crucial issue in any model of informational encapsulation. I then argue that intention provides an informational resource for computation of visual spatial constancy. Hence, intention penetrates vision.

13.
It is generally assumed that emotion facilitates human vision in order to promote adaptive responses to a potential threat in the environment. Surprisingly, we recently found that emotion in some cases impairs the perception of elementary visual features (Bocanegra & Zeelenberg, 2009b). Here, we demonstrate that emotion improves fast temporal vision at the expense of fine-grained spatial vision. We tested participants' threshold resolution with Landolt circles containing a small spatial or brief temporal discontinuity. The prior presentation of a fearful face cue, compared with a neutral face cue, impaired spatial resolution but improved temporal resolution. In addition, we show that these benefits and deficits were triggered selectively by the global configural properties of the faces, which were transmitted only through low spatial frequencies. Critically, the common locus of these opposite effects suggests a trade-off between magno- and parvocellular-type visual channels, which contradicts the common assumption that emotion invariably improves vision. We show that, rather than being a general "boost" for all visual features, affective neural circuits sacrifice the slower processing of small details for a coarser but faster visual signal.

14.
When places are explored without vision, observers go from temporally sequenced, circuitous inputs available along walks to knowledge of spatial structure (i.e., straight-line distances and directions characterizing the simultaneous arrangement of the objects passed along the way). Studies show that a life history of vision helps develop nonvisual sensitivity, but they are unspecific on the formative experiences or the underlying processes. This study compared judgments of straight-line distances and directions among landmarks in a familiar area of town by partially sighted persons who varied in types and ages of visual impairment. Those with early childhood loss of broad-field vision and those blind from birth performed significantly worse than those with early or late acuity loss and those with late field loss. Broad-field visual experience facilitates perceptual development by providing a basis for proprioceptive and efferent information from locomotion against distances and directions relative to the surrounding environment. Differences in the perception of walking, in turn, cause the observed differences in sensitivity to spatial structure.

15.
Recently, vision scientists have begun to explore fractals. This paper describes a set of programs that can be used to create fractal and fractal-like drawings. The programs were implemented on the Apple II series of computers. The programs were primarily designed to create deterministic and random fractal-like patterns with fractal dimensionality between 1 and 2. A supplementary program computes the box dimensionality, a measure of dimensionality that does not assume an infinite recursive process. The advantages of this measure of dimensionality over the more typical self-similar measure are discussed.
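The box dimensionality mentioned above can be computed directly from a finite pattern, with no infinite recursion: cover the pattern with grids of shrinking box size ε, count the occupied boxes N(ε), and take the slope of log N(ε) against log(1/ε). A minimal sketch for a 2-D point set (the function name and grid sizes are illustrative; this is not the original Apple II program):

```python
import math

def box_dimension(points, sizes):
    """Estimate the box-counting dimension of a 2-D point set.

    points: iterable of (x, y) coordinates in the unit square
    sizes: decreasing box edge lengths, e.g. [1/2, 1/4, 1/8, 1/16]
    Returns the least-squares slope of log N(eps) vs log(1/eps).
    """
    xs, ys = [], []
    for eps in sizes:
        # A box is "occupied" if at least one point falls inside it.
        occupied = {(int(x / eps), int(y / eps)) for x, y in points}
        xs.append(math.log(1.0 / eps))
        ys.append(math.log(len(occupied)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

For a pattern that fills the plane the slope approaches 2, for a smooth curve it approaches 1, and fractal-like patterns fall in between, matching the 1-to-2 range of the drawing programs described here.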

16.
Although the five primary senses have traditionally been thought of as separate, examples of their interactions, as well as the neural substrate possibly underlying them, have been identified. Arm position sense, for example, depends on touch, proprioception, and spatial vision of the limb. It is, however, unknown whether position sense is also influenced by more fundamental, nonspatial visual information. Here, we report an illusion that demonstrates that the position sense of the eyelid partly depends on information regarding the relative illumination reported by the two eyes. When only one eye is dark-adapted and both eyes are exposed to a dim environment, the lid of the light-adapted eye feels closed or "droopy." The effect decreases when covering the eye by hand or a patch, thus introducing tactile information congruent with the interocular difference in vision. This reveals that the integration of vision with touch and proprioception is not restricted to higher-level spatial vision, but is instead a more fundamental aspect of sensory processing than has been previously shown.

17.
Critical to low-vision navigation are the abilities to recover scale and update a 3-D representation of space. In order to investigate whether these abilities are present under low-vision conditions, we employed the triangulation task of eyes-closed indirect walking to previously viewed targets on the ground. This task requires that the observer continually update the location of the target without any further visual feedback of his/her movement or the target’s location. Normally sighted participants were tested monocularly in a degraded vision condition and a normal vision condition on both indirect and direct walking to previously viewed targets. Surprisingly, we found no difference in walked distances between the degraded and normal vision conditions. Our results provide evidence for intact spatial updating even under severely degraded vision conditions, indicating that participants can recover scale and update a 3-D representation of space under simulated low vision.

18.
19.
Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modalities of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.

20.
The objects we see are not given in the images at the eyes, but must be constructed by the human visual system. Indeed, damage to specific brain regions often leads to specific impairments of visual abilities (for example, the perception of shape, color or motion). Human vision constructs the various properties of visual objects, not independently of each other, but in a highly coordinated fashion. The construction of one visual property strongly influences the constructions of other properties. Visual shape is an important construction for successfully recognizing objects. There is growing consensus that human vision represents shapes in terms of component parts and their spatial relationships. These parts and their spatial relationships provide a powerful first index into one's visual memory of shapes.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号