Similar Documents
20 similar documents were retrieved.
1.
ABSTRACT— When we move, the visual world moves toward us. That is, self-motion normally produces visual signals (flow) that tell us about our own motion. But these signals are distorted by our motion: Visual flow actually appears slower while we are moving than it does when we are stationary and our surroundings move past us. Although for many years these kinds of distortions have been interpreted as a suppression of flow to promote the perception of a stable world, current research has shown that these shifts in perceived visual speed may have an important function in measuring our own self-motion. Specifically, by slowing down the apparent rate of visual flow during self-motion, our visual system is able to perceive differences between actual and expected flow more precisely. This is useful in the control of action.

2.
Experiments are reported in which it was found that, with the angular speed of a visual surround held constant, the perceived speed of rotary self-motion increased linearly with increasing perceived distance of this surround. This finding was in agreement with a motion constancy equation derived from a consideration of object-referred motion perception. Since information concerning distance is necessary for the perception of linear but not angular speed, this finding supports the conclusion that the visual perception of rotary self-motion depends upon perceived linear surround motion, at least in the horizontal plane. The visual motion constancy mechanism which operates for object-referred motion apparently cannot be switched off for the special case of self-motion perception.
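As a rough illustration of the motion constancy relation described above (a sketch only, under the assumption that perceived surround speed follows classical speed constancy; the symbols below are introduced here for illustration and are not taken from the original paper):

\[
\hat{v}_{\mathrm{surround}} = \omega_{\mathrm{retinal}}\,\hat{D},
\qquad
\hat{\omega}_{\mathrm{self}} \propto \hat{v}_{\mathrm{surround}} = \omega_{\mathrm{retinal}}\,\hat{D}.
\]

With the retinal angular speed \(\omega_{\mathrm{retinal}}\) held constant, perceived self-rotation would then grow linearly with the perceived distance \(\hat{D}\) of the surround, which is the pattern the experiments report.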

3.
张弢  李胜光 《心理科学进展》2011,19(10):1405-1416
Using optic flow to guide effective locomotion through the environment is a core task of our visual nervous system. In the primate cerebral cortex, visual motion information is processed by a series of areas along the dorsal pathway, a pathway principally involved in the analysis of motion and spatially directed action. In higher visual cortex, the visual system most likely uses non-visual information to compensate for the distortion of the optic flow pattern caused by eye movements, thereby reconstructing a correct representation of heading. According to current research, the two parietal areas MST and VIP both participate in self-motion perception and are indispensable for precise heading judgments. This article systematically reviews recent progress in research on the neural mechanisms of self-motion perception, in particular the results obtained by neurophysiologists using non-human primate models to study the cortical processing of self-motion. It also raises several key questions that further research urgently needs to address.

4.
The aim of this study was to investigate the perception of possibilities for action (i.e., affordances) that depend on one's movement capabilities, and more specifically, the passability of a shrinking gap between converging obstacles. We introduce a new optical invariant that specifies in intrinsic units the minimum locomotor speed needed to safely pass through a shrinking gap. Detecting this information during self-motion requires recovering the component of the obstacles' local optical expansion attributable to obstacle motion, independent of self-motion. In principle, recovering the obstacle motion component could involve either visual or non-visual self-motion information. We investigated the visual and non-visual contributions in two experiments in which subjects walked through a virtual environment and made judgments about whether it was possible to pass through a shrinking gap. On a small percentage of trials, visual and non-visual self-motion information were independently manipulated by varying the speed with which subjects moved through the virtual environment. Comparisons of judgments on such catch trials with judgments on normal trials revealed both visual and non-visual contributions to the detection of information about minimum walking speed.
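The kinematic constraint underlying the task can be sketched in world variables (this is not the optical invariant introduced in the paper, only a hedged reconstruction of the physical relation it is assumed to encode; d, g, w, and \(\dot{g}\) are symbols introduced here for illustration): a walker at distance d from a gap of current width g that is closing at rate \(\lvert\dot{g}\rvert\) must arrive before the gap shrinks below the body width w, so the minimum locomotor speed is roughly

\[
v_{\min} \;\approx\; \frac{d\,\lvert\dot{g}\rvert}{\,g - w\,}.
\]

The paper's point is that an equivalent quantity is available optically, in intrinsic (body-scaled) units, provided the component of the obstacles' local optical expansion due to obstacle motion can be separated from the component due to self-motion.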

5.
During self-motion, the world normally appears stationary. In part, this may be due to reductions in visual motion signals during self-motion. In 8 experiments, the authors used magnitude estimation to characterize changes in visual speed perception as a result of biomechanical self-motion alone (treadmill walking), physical translation alone (passive transport), and both biomechanical self-motion and physical translation together (walking). Their results show that each factor alone produces subtractive reductions in visual speed but that subtraction is greatest with both factors together, approximating the sum of the 2 separately. The similarity of results for biomechanical and passive self-motion supports H. B. Barlow's (1990) inhibition theory of sensory correlation as a mechanism for implementing H. Wallach's (1987) compensation for self-motion.
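A minimal sketch of the additive-subtraction pattern reported above (the subtraction terms are hypothetical labels for the biomechanical and passive components, not notation from the paper):

\[
\hat{v}_{\mathrm{walking}} \;\approx\; v_{\mathrm{retinal}} - \bigl(s_{\mathrm{biomech}} + s_{\mathrm{passive}}\bigr),
\]

with either factor alone yielding roughly \(v_{\mathrm{retinal}} - s_{\mathrm{biomech}}\) or \(v_{\mathrm{retinal}} - s_{\mathrm{passive}}\); the reduction measured during walking approximated the sum of the two separate reductions.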

6.
We examined how spatial and temporal characteristics of the perception of self-motion, generated by constant-velocity visual motion, were reflected in the orientation of the head and whole body of young adults standing in a CAVE, a virtual environment that presents wide field of view stereo images with context and texture. Center of pressure responses from a force plate and perception of self-motion through orientation of a hand-held wand were recorded. The influence of the perception of self-motion on postural kinematics differed depending upon the plane and complexity of visual motion. Postural behaviors generated through the perception of self-motion appeared to contain a confluence of the cortically integrated visual and vestibular signals and of other somatosensory inputs. This would suggest that spatial representation during motion in the environment is modified by both ascending and descending controls. We infer from these data that motion of the visual surround can be used as a therapeutic tool to influence posture and spatial orientation, particularly in more visually sensitive individuals following central nervous system (CNS) impairment.

7.
This paper first reviews briefly the literature on the acoustics of infant cry sounds and then presents two empirical studies on the perception of cry and noncry sounds in their social-communicative context. Acoustic analysis of cry sounds has undergone dramatic changes in the last 35 years, including the introduction of more than a hundred different acoustic measures. The study of cry acoustics, however, remains largely focused on neonates who have various medical problems or are at risk for developmental delays. Relatively little is known about how cry sounds and cry perception change developmentally, or about how they compare with noncry sounds. The data presented here support the notion that both auditory and visual information are important in caregivers' interpretations of infant sounds in naturalistic contexts. When only auditory information is available (Study 1), cry sounds become generally more recognizable from 3 to 12 months of age; perception of noncry sounds, however, generally does not change over age. When auditory and visual information contradict each other (Study 2), adults tend to perform at chance levels, with a few interesting exceptions. It is suggested that broadening studies of acoustic analysis and perception to include both cry and noncry sounds should increase our understanding of the development of communication in infancy. Finally, we suggest that examining the cry in its developmental context holds great possibility for delineating the factors that underlie adults' responses to crying.

8.
Accurate and efficient control of self-motion is an important requirement for our daily behavior. Visual feedback about self-motion is provided by optic flow. Optic flow can be used to estimate the direction of self-motion (‘heading’) rapidly and efficiently. Analysis of oculomotor behavior reveals that eye movements usually accompany self-motion. Such eye movements introduce additional retinal image motion so that the flow pattern on the retina usually consists of a combination of self-movement and eye movement components. The question of whether this ‘retinal flow’ alone allows the brain to estimate heading, or whether an additional ‘extraretinal’ eye movement signal is needed, has been controversial. This article reviews recent studies that suggest that heading can be estimated visually but extraretinal signals are used to disambiguate problematic situations. The dorsal stream of primate cortex contains motion processing areas that are selective for optic flow and self-motion. Models that link the properties of neurons in these areas to the properties of heading perception suggest possible underlying mechanisms of the visual perception of self-motion.
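The decomposition at issue can be written schematically (a standard description of retinal flow during combined observer translation and eye rotation, not a formula taken from the article):

\[
\vec{r}(\mathbf{x}) \;=\; \vec{f}_{T}(\mathbf{x};\,\mathbf{h}) \;+\; \vec{f}_{R}(\mathbf{x};\,\boldsymbol{\omega}_{\mathrm{eye}}),
\]

where \(\vec{r}\) is the retinal flow at image location \(\mathbf{x}\), \(\vec{f}_{T}\) is the translational component that radiates from the heading direction \(\mathbf{h}\), and \(\vec{f}_{R}\) is the rotational component added by the eye movement. Heading estimation requires discounting \(\vec{f}_{R}\), either from the retinal flow alone or with the help of an extraretinal eye movement signal.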

9.
Nakamura S  Seno T  Ito H  Sunaga S 《Perception》2010,39(12):1579-1590
The effects of dynamic colour modulation on vection were investigated to examine whether perceived variation of illumination affects self-motion perception. Participants observed expanding optic flow which simulated their forward self-motion. Onset latency, accumulated duration, and estimated magnitude of the self-motion were measured as indices of vection strength. Colour of the dots in the visual stimulus was modulated between white and red (experiment 1), white and grey (experiment 2), and grey and red (experiment 3). The results indicated that coherent colour oscillation in the visual stimulus significantly suppressed the strength of vection, whereas incoherent or static colour modulation did not affect vection. There was no effect of the type of colour modulation; both achromatic and chromatic modulations turned out to be effective in inhibiting self-motion perception. Moreover, in a situation where the simulated direction of a spotlight was manipulated dynamically, vection strength was also suppressed (experiment 4). These results suggest that the observer's perception of illumination is critical for self-motion perception, and that rapid variation of perceived illumination would impair the reliability of visual information in determining self-motion.

10.
One of the fundamental issues in visual awareness is how we are able to perceive the scene in front of our eyes on time despite the delay in processing visual information. The prediction theory postulates that our visual system predicts the future to compensate for such delays. On the other hand, the postdiction theory postulates that our visual awareness is inevitably a delayed product. In the present study we used flash-lag paradigms in the motion and color domains and examined how the perception of visual information at the time of the flash is influenced by prior and subsequent visual events. We found that both types of event additively influence the perception of the present visual image, suggesting that our visual awareness results from the joint contribution of predictive and postdictive mechanisms.

11.
The currency of our visual experience consists not only of visual features such as color and motion, but also seemingly higher-level features such as causality--as when we see two billiard balls collide, with one causing the other to move. One of the most important and controversial questions about causal perception involves its origin: do we learn to see causality, or does this ability derive in part from innately specified aspects of our cognitive architecture? Such questions are difficult to answer, but can be indirectly addressed via experiments with infants. Here we explore causal perception in 7-month-old infants, using a different approach from previous work. Recent work in adult visual cognition has demonstrated a postdictive aspect to causal perception: in certain situations, we can perceive a collision between two objects in an ambiguous display even after the moment of potential 'impact' has already passed. This illustrates one way in which our conscious perception of the world is not an instantaneous moment-by-moment construction, but rather is formed by integrating information over short temporal windows. Here we demonstrate analogous postdictive processing in infants' causal perception. This result demonstrates that even infants' visual systems process information in temporally extended chunks. Moreover, this work provides a new way of demonstrating causal perception in infants that differs from previous strategies, and is immune to some previous types of critiques.

12.
Estimating the size of bodies is crucial for interactions with physical and social environments. Body-size perception is malleable and can be altered using visual adaptation paradigms. However, it is unclear whether such visual adaptation effects also transfer to other modalities and influence, for example, the perception of tactile distances. In this study, we employed a visual adaptation paradigm. Participants were exposed to images of expanded or contracted versions of self- or other-identity bodies. Before and after this adaptation, they were asked to manipulate the width of body stimuli to appear as ‘normal’ as possible. We replicated an effect of visual adaptation such that the body-size selected as most ‘normal’ was larger after exposure to expanded and thinner after exposure to contracted adaptation stimuli. In contrast, we did not find evidence that this adaptation effect transfers to distance estimates for paired tactile stimuli delivered to the abdomen. A Bayesian analysis showed that our data provide moderate evidence that there is no effect of visual body-size adaptation on the estimation of spatial parameters in a tactile task. This suggests that visual body-size adaptation effects do not transfer to somatosensory body-size representations.

13.
Regarding Scenes (total citations: 2; self-citations: 0; citations by others: 2)
ABSTRACT— When we view the visual world, our eyes flit from one location to another about three times each second. These frequent changes in gaze direction result from very fast saccadic eye movements. Useful visual information is acquired only during fixations, periods of relative gaze stability. Gaze control is defined as the process of directing fixation through a scene in real time in the service of ongoing perceptual, cognitive, and behavioral activity. This article discusses current approaches and new empirical findings that are allowing investigators to unravel how human gaze control operates during active real-world scene perception.

14.
We investigated the role of extraretinal information in the perception of absolute distance. In a computer-simulated environment, monocular observers judged the distance of objects positioned at different locations in depth while performing frontoparallel movements of the head. The objects were spheres covered with random dots subtending three different visual angles. Observers viewed the objects at eye level, either in isolation or superimposed on a ground floor. The distance and size of the spheres were covaried to suppress relative size information. Hence, the main cues to distance were motion parallax and extraretinal signals. In three experiments, we found evidence that (1) perceived distance is correlated with simulated distance in terms of precision and accuracy, (2) the accuracy of the distance estimate is slightly improved by the presence of a ground-floor surface, (3) perceived distance is not altered significantly when the visual field size increases, and (4) absolute distance is estimated correctly during self-motion. Conversely, stationary subjects failed to report absolute distance when they passively observed a moving object producing the same retinal stimulation, unless they could rely on knowledge of the three-dimensional movements.
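The geometric reason self-motion information can scale motion parallax into absolute distance can be sketched as follows (simple small-angle geometry for a frontoparallel head translation; the symbols are introduced here for illustration): a point at distance d viewed during lateral head motion at speed T produces retinal angular motion of roughly

\[
\dot{\theta} \;\approx\; \frac{T}{d}
\quad\Longrightarrow\quad
d \;\approx\; \frac{T}{\dot{\theta}}.
\]

Retinal motion \(\dot{\theta}\) alone therefore leaves distance ambiguous; an estimate of the head's translation speed T, available extraretinally during active self-motion, is what allows absolute distance to be recovered. This is consistent with the finding that stationary observers watching an equivalent moving object failed unless they had knowledge of the object's three-dimensional movements.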

15.
Visual cognition in our 3D world requires understanding how we accurately localize objects in 2D and depth, and what influence both types of location information have on visual processing. Spatial location is known to play a special role in visual processing, but most of these findings have focused on the special role of 2D location. One such phenomenon is the spatial congruency bias, where 2D location biases judgments of object features but features do not bias location judgments. This paradigm has recently been used to compare different types of location information in terms of how much they bias different types of features. Here we used this paradigm to ask a related question: whether 2D and depth-from-disparity location bias localization judgments for each other. We found that presenting two objects in the same 2D location biased position-in-depth judgments, but presenting two objects at the same depth (disparity) did not bias 2D location judgments. We conclude that an object’s 2D location may be automatically incorporated into perception of its depth location, but not vice versa, which is consistent with a fundamentally special role for 2D location in visual processing.

16.
Although there are many well-known forms of visual cues specifying absolute and relative distance, little is known about how visual space perception develops at small temporal scales. How much time does the visual system require to extract the information in the various absolute and relative distance cues? In this article, we describe a system that may be used to address this issue by presenting brief exposures of real, three-dimensional scenes, followed by a masking stimulus. The system is composed of an electronic shutter (a liquid crystal smart window) for exposing the stimulus scene, and a liquid crystal projector coupled with an electromechanical shutter for presenting the masking stimulus. This system can be used in both full- and reduced-cue viewing conditions, under monocular and binocular viewing, and at distances limited only by the testing space. We describe a configuration that may be used for studying the microgenesis of visual space perception in the context of visually directed walking.

17.
Thresholds for the perception of linear vection were measured. These thresholds allowed us to define the spatiotemporal contrast sensitivity surface and the spatiotemporal domain of the perception of rectilinear vection (a visually induced self-motion in a straight line). Moreover, a Weber’s law was found, such that a mean relative differential threshold in angular velocity of about 41% is necessary to perceive curvilinear vection. This visually induced self-motion corresponds to the sensation of moving in a curved path. It is proposed that curvilinear vection is induced when the apparent velocity difference is detectable. The spatiotemporal domain of perception of rectilinear vection and its spatiotemporal contrast sensitivity surface are centered on low spatial frequencies. Concurrently, the relative differential thresholds for curvilinear vection likewise correspond to low spatial frequencies. Accordingly, the peripheral ambient visual system seems to be involved in perceiving linear vection. It is argued further that the central ambient system might also be involved in the processing of linear vection.
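Expressed as a Weber fraction (using the mean value reported above, with \(\Delta\omega\) denoting the just-detectable change in angular velocity):

\[
\frac{\Delta\omega}{\omega} \;\approx\; 0.41,
\]

i.e., the angular velocity has to differ from the expected value by roughly 41% before the self-motion is perceived as curvilinear rather than rectilinear.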

18.
During self-motions, different patterns of optic flow are presented to the left and right eyes. Previous research has, however, focused mainly on the self-motion information contained in a single pattern of optic flow. The present experiments investigated the role that binocular disparity plays in the visual perception of self-motion, showing that the addition of stereoscopic cues to optic flow significantly improves forward linear vection in central vision. Improvements were also achieved by adding changing-size cues to sparse (but not dense) flow patterns. These findings showed that the assumption in the heading literature that stereoscopic cues facilitate self-motion perception only when the optic flow has ambiguous depth ordering does not apply to vection. Rather, it was concluded that both stereoscopic and changing-size cues provide additional motion-in-depth information that is used in perceiving self-motion.

19.
Identity perception often takes place in multimodal settings, where perceivers have access to both visual (face) and auditory (voice) information. Despite this, identity perception is usually studied in unimodal contexts, where face and voice identity perception are modelled independently from one another. In this study, we asked whether and how much auditory and visual information contribute to audiovisual identity perception from naturally-varying stimuli. In a between-subjects design, participants completed an identity sorting task with either dynamic video-only, audio-only or dynamic audiovisual stimuli. In this task, participants were asked to sort multiple, naturally-varying stimuli from three different people by perceived identity. We found that identity perception was more accurate for video-only and audiovisual stimuli compared with audio-only stimuli. Interestingly, there was no difference in accuracy between video-only and audiovisual stimuli. Auditory information nonetheless played a role alongside visual information, as audiovisual identity judgements per stimulus could be predicted from both auditory and visual identity judgements, respectively. While the relationship was stronger between visual and audiovisual judgements, auditory information still uniquely explained a significant portion of the variance in audiovisual identity judgements. Our findings thus align with previous theoretical and empirical work proposing that, compared with faces, voices are an important but relatively less salient and weaker cue to identity perception. We expand on this work to show that, at least in the context of this study, having access to voices in addition to faces does not result in better identity perception accuracy.

20.
Visual motion is used to control direction and speed of self-motion and time-to-contact with an obstacle. In earlier work, we found that human subjects can discriminate between the distances of different visually simulated self-motions in a virtual scene. Distance indication in terms of an exocentric interval adjustment task, however, revealed a linear correlation between perceived and indicated distances, but with a profound distance underestimation. One possible explanation for this underestimation is the perception of visual space in virtual environments. Humans perceive visual space in natural scenes as curved, and distances are increasingly underestimated with increasing distance from the observer. Such spatial compression may also exist in our virtual environment. We therefore surveyed perceived visual space in a static virtual scene. We asked observers to compare two horizontal depth intervals, similar to experiments performed in natural space. Subjects had to indicate the size of one depth interval relative to a second interval. Our observers perceived visual space in the virtual environment as compressed, similar to the perception found in natural scenes. However, the nonlinear depth function we found cannot explain the observed distance underestimation of visually simulated self-motions in the same environment.
