Similar documents
Retrieved 20 similar documents (search time: 31 ms)
1.
We examined the hypothesis (Ono & Wade, 1985) that occlusion of far stimuli by a near one on the same visual line can operate as a depth cue in stereograms containing different numbers of targets in the two eyes. By controlling eye positions, we created conditions in which the visual system could interpret the retinal images as originating from stimuli on the visual axis of one eye and also created other conditions in which the origin of the retinal images was ambiguous. In Experiment 1, we presented two lines to one eye and a single line to the other eye. When the image of the line on the temporal side of the line pair on one retina fused with the image of the single line on the other retina, the nonfused line appeared farther away more often than it did when the image on the nasal side fused. In Experiment 2, we used two differently shaped stimuli. In the condition in which the nonfused stimulus represented an object being occluded, it appeared farther away more often than in the four conditions in which it did not. In Experiment 3, we extended the idea to three different objects. When the middle of the three images fused with the single image, the nonfused stimulus appeared farther when it could be interpreted as being occluded than when it could not. In the condition in which the most temporal image fused with the single image, the nonfused stimuli appeared farther than in the condition in which the most nasal one fused. The results supported the hypothesis that occlusion plays a role in depth perception in the Wheatstone-Panum limiting case.


3.
Acta Psychologica, 2013, 143(3), 317-321
Recent studies have demonstrated that central cues, such as eyes and arrows, reflexively trigger attentional shifts. However, it is not clear whether the attention induced by these two cues can be attached to objects within the visual scene. In the current study, subjects' attention was directed to one of two objects (square outlines) via the observation of uninformative directional arrows or eye gaze. Then, the objects rotated 90° clockwise or counter-clockwise to a new location and the target stimulus was presented within one of these two objects. Results showed that, independent of the cue type, participants responded faster to targets in the cued object than to those in the uncued object. This suggests that in dynamic displays, both gaze and arrow cues are able to trigger reflexive shifts of attention to objects moving within the visual scene.

4.
Visual attention functions to select relevant information from a vast amount of visual input that is available for further processing. Information from the two eyes is processed separately in early stages before converging and giving rise to a coherent percept. Observers normally cannot access eye-of-origin information. In the research reported here, we demonstrated that voluntary attention can be eye-specific, modulating visual processing within a specific monocular channel. Using a modified binocular-rivalry paradigm, we found that attending to a monocular cue while remaining oblivious to its eye of origin significantly enhanced the competition strength of a stimulus presented to the cued eye, even when the stimulus was suppressed from consciousness. Furthermore, this eye-specific attentional effect was insensitive to low-level properties of the cue (e.g., size and contrast) but sensitive to the attentional load. Together, these findings suggest that top-down attention can have a significant modulation effect at the eye-specific stage of visual information processing.

5.
Hering's model of egocentric visual direction assumes implicitly that the effect of eye position on direction is both linear and equal for the two eyes; these two assumptions were evaluated in the present experiment. Five subjects pointed (open-loop) to the apparent direction of a target seen under conditions in which the position of one eye was systematically varied while the position of the other eye was held constant. The data were analyzed through examination of the relationship between the variations in perceived egocentric direction and variations in expected egocentric direction based on the positions of the varying eye. The data revealed that the relationship between eye position and egocentric direction is indeed linear. Further, the data showed that, for some subjects, variations in the positions of the two eyes do not have equal effects on egocentric direction. Both the between-eye differences and the linear relationship may be understood in terms of individual differences in the location of the cyclopean eye, an unequal weighting of the positions of the eyes in the processing of egocentric direction, or some combination of these two factors.
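The two candidate accounts in this abstract, a shifted cyclopean eye and an unequal weighting of the two eyes' positions, can both be expressed as a single linear combination. The sketch below is an illustrative reconstruction, not the authors' analysis, and the weight values are hypothetical:

```python
def egocentric_direction(left_eye_pos, right_eye_pos, w_left=0.5, w_right=0.5):
    """Perceived egocentric direction (deg) as a linear, weighted combination
    of the two eyes' positions (deg). Equal weights correspond to a cyclopean
    eye midway between the eyes; unequal weights shift the effective origin
    toward the more heavily weighted eye."""
    return w_left * left_eye_pos + w_right * right_eye_pos

# Varying one eye while holding the other fixed changes the percept linearly,
# with a slope equal to that eye's weight:
equal = egocentric_direction(10.0, 0.0)                              # 5.0
unequal = egocentric_direction(10.0, 0.0, w_left=0.75, w_right=0.25)  # 7.5
```

Linearity appears here as the constant slope with respect to either eye's position; the between-eye differences the study reports correspond to `w_left != w_right`.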


7.
In recent years philosophers such as Paul Boghossian, David Velleman and Colin McGinn have argued against the view that colours are dispositional properties, on the grounds that they do not look like dispositional properties, and in particular that they are not represented in visual experience as dispositions to present certain kinds of appearances. Rather, colours are represented as being these appearances, i.e., simple, non-dispositional properties. I argue that a proper understanding of how visual experiences represent physical objects as being coloured shows that colours do look like dispositions. In particular, I argue that if visual experiences are to represent properties as properties of physical objects, they must distinguish between these properties and their appearances, and thus cannot represent such properties as colours as being identical with their corresponding appearances.

8.
Pursuit eye movements give rise to retinal motion. To judge stimulus motion relative to the head, the visual system must correct for the eye movement by using an extraretinal, eye-velocity signal. Such correction is important in a variety of motion estimation tasks, including judgments of object motion relative to the head and judgments of self-motion direction from optic flow. The Filehne illusion (where a stationary object appears to move opposite to the pursuit) results from a mismatch between retinal and extraretinal speed estimates. A mismatch in timing could also exist. Speed and timing errors were investigated using sinusoidal pursuit eye movements. We describe a new illusion, the slalom illusion, in which the perceived direction of self-motion oscillates left and right when the eyes move sinusoidally. A linear model is presented that determines the gain ratio and phase difference of extraretinal and retinal signals accompanying the Filehne and slalom illusions. The speed mismatch and timing differences were measured in the Filehne and self-motion situations using a motion-nulling procedure. Timing errors were very small for the Filehne and slalom illusions. However, the ratios of extraretinal to retinal gain were consistently less than 1, so both illusions are the consequence of a mismatch between estimates of retinal and extraretinal speed. The relevance of the results for recovering the direction of self-motion during pursuit eye movements is discussed.
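The linear model's two parameters, the extraretinal-to-retinal gain ratio and the phase difference, can be sketched for sinusoidal pursuit of a stationary background. This is an illustrative reconstruction under stated assumptions, not the authors' implementation, and the parameter values are hypothetical:

```python
import math

def perceived_background_motion(t, amplitude=10.0, freq=1.0,
                                gain_ratio=0.8, phase_lag=0.0):
    """Head-centric percept of a stationary background during sinusoidal
    pursuit: retinal slip (equal and opposite to eye velocity) plus an
    extraretinal eye-velocity estimate with relative gain `gain_ratio` and
    phase lag `phase_lag` (rad). With gain_ratio < 1 the percept is nonzero
    and opposite to the pursuit: the Filehne illusion."""
    omega = 2.0 * math.pi * freq
    eye_velocity = amplitude * omega * math.cos(omega * t)   # deg/s
    retinal_slip = -eye_velocity                             # stationary scene
    extraretinal = gain_ratio * amplitude * omega * math.cos(omega * t - phase_lag)
    return retinal_slip + extraretinal
```

With a gain ratio of 1 and zero phase lag the two signals cancel and the background is correctly seen as stationary; a gain ratio below 1, as the study measured, leaves a residual oscillation opposite to the eyes.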

9.
Visual input is frequently disrupted by eye movements, blinks, and occlusion. The visual system must be able to establish correspondence between objects visible before and after a disruption. Current theories hold that correspondence is established solely on the basis of spatiotemporal information, with no contribution from surface features. In five experiments, we tested the relative contributions of spatiotemporal and surface feature information in establishing object correspondence across saccades. Participants generated a saccade to one of two objects, and the objects were shifted during the saccade so that the eyes landed between them, requiring a corrective saccade to fixate the target. To correct gaze to the appropriate object, correspondence must be established between the remembered saccade target and the target visible after the saccade. Target position and surface feature consistency were manipulated. Contrary to existing theories, surface features and spatiotemporal information both contributed to object correspondence, and the relative weighting of the two sources of information was governed by the demands of the task. These data argue against a special role for spatiotemporal information in object correspondence, indicating instead that the visual system can flexibly use multiple sources of relevant information.

10.
S. Mateeff & J. Hohnsbein, Perception, 1989, 18(1), 93-104
Subjects used eye movements to pursue a light target that moved from left to right with a velocity of 15 deg/s. The stimulus was a sudden five-fold decrease in target intensity during the movement. The subject's task was to localize the stimulus relative to either a single stationary background point or the midpoint between two points (28 deg apart) placed 0.5 deg above the target path. The stimulus was usually mislocated in the direction of eye movement; the mislocation was affected by the spatial adjacency between background and stimulus. When an auditory, rather than a visual, stimulus was presented during tracking, target position at the time of stimulus presentation was visually mislocated in the direction opposite to that of eye movement. The effect of adjacency between background and target remained the same. The involvement of processes of subject-relative and object-relative visual perception is discussed.

11.
The saccadic latency to visual targets is susceptible to the properties of the currently fixated objects. For example, the disappearance of a fixation stimulus prior to presentation of a peripheral target shortens saccadic latencies (the gap effect). In the present study, we investigated the influences of a social signal from a facial fixation stimulus (i.e., gaze direction) on subsequent saccadic responses in the gap paradigm. In Experiment 1, a cartoon face with a direct or averted gaze was used as a fixation stimulus. The pupils of the face were unchanged (overlap), disappeared (gap), or were translated vertically to make or break eye contact (gaze shift). Participants were required to make a saccade toward a target to the left or the right of the fixation stimulus as quickly as possible. The results showed that the gaze direction influenced saccadic latencies only in the gaze shift condition, but not in the gap or overlap condition; the direct-to-averted gaze shift (i.e., breaking eye contact) yielded shorter saccadic latencies than did the averted-to-direct gaze shift (i.e., making eye contact). Further experiments revealed that this effect was eye contact specific (Exp. 2) and that the appearance of an eye gaze immediately before the saccade initiation also influenced the saccadic latency, depending on the gaze direction (Exp. 3). These results suggest that the latency of target-elicited saccades can be modulated not only by physical changes of the fixation stimulus, as has been seen in the conventional gap effect, but also by a social signal from the attended fixation stimulus.

12.
Research suggests that the neural concomitants of visual rivalry are contingent on the stimulus parameters, implying the existence of three different types of rivalry. Binocular rivalry (dissimilar patterns are presented, one to each eye) is seemingly mediated by interactions between pools of monocular neurons. Monocular rivalry (superimposed patterns are presented to one or both eyes) is presumably the result of competition between neural representations of the patterns. Stimulus rivalry (dissimilar patterns are swapped rapidly between the two eyes) is independent of eye of origin. In the experiment reported here, we integrated these three different types of rivalry into one stimulus. We found that perceptual alternations span the three types of rivalry, demonstrating that the brain can produce a coherent percept sourced from three different types of visual conflict. This result is in agreement with recent work suggesting that the resolution of competitive visual stimuli is mediated by a general mechanism spanning different levels of the visual-processing hierarchy.

13.
Figure-ground segregation, that is, the segmentation of visual information into objects and their surrounding backgrounds, provides structure for visual attention. Recent evidence suggests a novel role for vergence eye movements in visual attention. In the present work, vergence responses during figure-ground segregation tasks were investigated psychophysically. We show that during a figure-ground detection task, subjects converge their eyes. Vergence eye movements are larger in figure trials than in ground trials. In trials where the figure was detected, vergence responses are stronger than in trials where the figure went unnoticed. Moreover, in figure trials, vergence responses are stronger to low-contrast figures than to high-contrast figures. We argue that these discriminative vergence responses play a role in figure-ground segregation.

14.
The face communicates an impressive amount of visual information. We use it to identify its owner, how they are feeling, and to help us understand what they are saying. Models of face processing have considered how we extract such meaning from the face but have ignored another important signal: eye gaze. In this article we begin by reviewing evidence from recent neurophysiological studies that suggests that the eyes constitute a special stimulus in at least two senses. First, the structure of the eyes is such that it provides us with a particularly powerful signal to the direction of another person's gaze, and second, we may have evolved neural mechanisms devoted to gaze processing. As a result, gaze direction is analysed rapidly and automatically, and is able to trigger reflexive shifts of an observer's visual attention. However, understanding where another individual is directing their attention involves more than simply analysing their gaze direction. We go on to describe research with adult participants, children and non-human primates that suggests that other cues such as head orientation and pointing gestures make significant contributions to the computation of another's direction of attention.

15.
16.
When the eyes pursue a fixation point that sweeps across a moving background pattern, and the fixation point is suddenly made to stop, the ongoing motion of the background pattern seems to accelerate to a higher velocity. Experiment I showed that this acceleration illusion is not caused by the sudden change in (i) the relative velocity between background and fixation point, (ii) the velocity of the retinal image of the background pattern, or (iii) the motion of the retinal image of the rims of the CRT screen on which the experiment was carried out. In Experiment II the magnitude of the illusion was quantified. It is strongest when background and eyes move in the same direction. When they move in opposite directions it becomes less pronounced (and may disappear) with higher background velocities. The findings are explained in terms of a model proposed by the first author, in which the perception of object motion and velocity derives from the interaction between retinal slip velocity information and the brain's 'estimate' of eye velocity in space. They illustrate that the classic Aubert-Fleischl phenomenon (a stimulus seems to be moving slower when pursued with the eyes than when moving in front of stationary eyes) is a special case of a more general phenomenon: whenever we make a pursuit eye movement we underestimate the velocity of all stimuli in our visual field which happen to move in the same direction as our eyes, or which move slowly in the direction opposite to our eyes.
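The model described in this abstract, retinal slip interacting with an internal estimate of eye velocity, can be written in a few lines. This is a schematic sketch under the assumption that the eye-velocity estimate is a simple undervalued gain; the gain value is hypothetical:

```python
def perceived_velocity(stimulus_velocity, eye_velocity, eye_gain=0.8):
    """Perceived stimulus velocity = retinal slip + the brain's (undervalued)
    estimate of eye velocity. With eye_gain < 1, a pursued stimulus
    (eye_velocity == stimulus_velocity, so zero retinal slip) is seen as
    slower than the same stimulus viewed with stationary eyes: the
    Aubert-Fleischl phenomenon."""
    retinal_slip = stimulus_velocity - eye_velocity
    eye_estimate = eye_gain * eye_velocity
    return retinal_slip + eye_estimate

# Pursued (eyes track the stimulus) vs. fixation (eyes stationary):
pursued = perceived_velocity(10.0, 10.0)  # looks slower than the true 10 deg/s
fixated = perceived_velocity(10.0, 0.0)   # veridical
```

The general phenomenon the authors describe falls out directly: any stimulus moving in the pursuit direction has part of its velocity carried by the undervalued eye-velocity term, so its speed is underestimated.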

17.
18.
S. R. Mitroff & B. J. Scholl, Perception, 2004, 33(10), 1267-1273
Because of the massive amount of incoming visual information, perception is fundamentally selective. We are aware of only a small subset of our visual input at any given moment, and a great deal of activity can occur right in front of our eyes without reaching awareness. While previous work has shown that even salient visual objects can go unseen, here we demonstrate the opposite pattern, wherein observers perceive stimuli which are not physically present. In particular, we show in two motion-induced blindness experiments that unseen objects can momentarily reenter awareness when they physically disappear: in some situations, you can see the disappearance of something you can't see. Moreover, when a stimulus changes outside of awareness in this situation and then physically disappears, observers momentarily see the altered version, thus perceiving properties of an object that they had never seen before, after that object is already gone. This phenomenon of 'perceptual reentry' yields new insights into the relationship between visual memory and conscious awareness.

19.
Warren (1970) has claimed that there are visual facilitation effects on auditory localization in adults but not in children. He suggests that a "visual map" organizes spatial information and that considerable experience of correlated auditory and visual events is necessary before normal spatial perception is developed. In the present experiment, children in Grades 1, 4, and 7 had to identify the position, right or left, of a single tone either blindfolded or with their eyes open. Analysis of the proportion of area under the ROC curve (obtained using reaction times) in the respective conditions showed that subjects were more sensitive to auditory position when vision was available. Reaction time was also generally faster in the light. I argue that the increase in sensitivity in the light represents updating of auditory position memory by voluntary eye movement. In the dark, eye movements are subject to involuntary and unperceived drift, which would introduce noise into the eye control mechanism and hence into auditory spatial memory.
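The sensitivity measure used here, the proportion of area under the ROC curve, has a standard nonparametric form: it equals the probability that a randomly drawn score from one condition exceeds one from the other (the Mann-Whitney statistic). A minimal sketch (the sample values below are made up for illustration):

```python
def roc_area(scores_a, scores_b):
    """Nonparametric area under the ROC curve: the probability that a random
    draw from scores_a exceeds a random draw from scores_b, counting ties as
    half. This equals the Mann-Whitney U statistic divided by len(a) * len(b);
    0.5 means no sensitivity, 1.0 means perfect separation."""
    wins = sum(1.0 if a > b else 0.5 if a == b else 0.0
               for a in scores_a for b in scores_b)
    return wins / (len(scores_a) * len(scores_b))

# Perfect separation vs. complete overlap:
print(roc_area([4, 5, 6], [1, 2, 3]))  # 1.0
print(roc_area([5], [5]))              # 0.5
```

Comparing the areas obtained in the eyes-open and blindfolded conditions gives the sensitivity difference the abstract reports.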

20.
The eye movements of infants, aged 4–5, 7–8, and 10–11 weeks, were recorded while they viewed either a representation of a face or a nonface stimulus. Presentation of the visual stimulus was paired with the presentation of an auditory stimulus (either voice or tone) or silence. Attention to the visual stimulus was greater for the older two groups than for the youngest group. The effect of the addition of sound was to increase attention to the visual stimulus. In general, the face was looked at more than the nonface stimulus. The difference in visual attention between the face and the nonface stimulus did not appear to be based solely on the physical characteristics of the stimuli. A sharp increase in the amount of looking at the eyes of the face stimulus at 7–8 weeks of age seemed to be related to a developing appreciation of the meaning of the face as a pattern.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号