Similar Articles
20 similar articles retrieved.
1.
To examine the relationship between visual motion processing for perception and pursuit, we measured the pursuit eye-movement and perceptual responses to the same complex-motion stimuli. We show that humans can both perceive and pursue the motion of line-figure objects, even when partial occlusion makes the resulting image motion vastly different from the underlying object motion. Our results show that both perception and pursuit can perform largely accurate motion integration, i.e. the selective combination of local motion signals across the visual field to derive global object motion. Furthermore, because we manipulated perceived motion while keeping image motion identical, the observed parallel changes in perception and pursuit show that the motion signals driving steady-state pursuit and perception are linked. These findings disprove current pursuit models whose control strategy is to minimize retinal image motion, and suggest a new framework for the interplay between visual cortex and cerebellum in visuomotor control.
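The "selective combination of local motion signals" invoked here can be made concrete with the standard intersection-of-constraints idea: each local measurement on a partially occluded contour constrains only the velocity component normal to that contour, and the global object velocity is the single vector consistent with all the constraints. The sketch below is a minimal least-squares version of that computation, assuming idealized noise-free measurements; the function name and formulation are illustrative, not the authors' model.

```python
import numpy as np

def estimate_object_velocity(normals, normal_speeds):
    """Recover a global 2-D object velocity from local 1-D (normal) motion signals.

    normals       : (N, 2) unit vectors normal to each local contour segment
    normal_speeds : (N,)   measured speed along each normal

    Each local signal constrains the object velocity v by normals[i] . v = normal_speeds[i];
    solving all constraints jointly in a least-squares sense is one simple form of
    intersection-of-constraints motion integration.
    """
    A = np.asarray(normals, dtype=float)
    b = np.asarray(normal_speeds, dtype=float)
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v  # (vx, vy)

# Example: two visible edges of a line figure translating rightwards at 1 deg/s.
normals = [(1.0, 0.0), (np.cos(np.pi / 4), np.sin(np.pi / 4))]
speeds = [1.0, np.cos(np.pi / 4)]
print(estimate_object_velocity(normals, speeds))  # approximately [1.0, 0.0]
```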

2.
Forest before trees: The precedence of global features in visual perception
The idea that global structuring of a visual scene precedes analysis of local features is suggested, discussed, and tested. In the first two experiments subjects were asked to respond to an auditorily presented name of a letter while looking at a visual stimulus that consisted of a large character (the global level) made out of small characters (the local level). The subjects' auditory discrimination responses were subject to interference only by the global level and not by the local one. In Experiment 3 subjects were presented with large characters made out of small ones, and they had to recognize either just the large characters or just the small ones. Whereas the identity of the small characters had no effect on recognition of the large ones, global cues which conflicted with the local ones did inhibit the responses to the local level. In Experiment 4 subjects were asked to judge whether pairs of simple patterns of geometrical forms which were presented for a brief duration were the same or different. The patterns within a pair could differ either at the global or at the local level. It was found that global differences were detected more often than local differences.

3.
Fagot and Deruelle (1997) demonstrated that, when tested with identical visual stimuli, baboons exhibit an advantage in processing local features, whereas humans show the “global precedence” effect initially reported by Navon (1977). In the present experiments, we investigated the cause of this species difference. Humans and baboons performed a visual search task in which the target differed from the distractors at either the global or the local level. Humans responded more quickly to global than to local targets, whereas baboons did the opposite (Experiment 1). Human response times (RTs) were independent of display size, for both local and global processing. Baboon RTs increased linearly with display size, more so for global than for local processing. The search slope for baboons disappeared for continuous targets (Experiment 2). That effect was not due to variations in stimulus luminance (Experiment 3). Finally, variations in stimulus density affected global search slopes in baboons but not in humans (Experiment 4). Overall, results suggest that perceptual grouping operations involved during the processing of hierarchical stimuli are attention demanding for baboons, but not for humans.
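The display-size effects reported here follow the standard visual-search logic: reaction time is modeled as approximately linear in the number of display items, $RT \approx RT_0 + s \cdot N$, where $N$ is display size and the slope $s$ (ms per item) indexes the attentional cost of adding an item. A near-zero slope, as in the human data, is the signature of parallel (preattentive) processing, whereas the positive and condition-dependent slopes in baboons indicate attention-demanding grouping. (The notation is the conventional search-slope formulation, not taken from the paper itself.)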

4.
Crowell JA, Andersen RA. Perception, 2001, 30(12): 1465-1488
The pattern of motion in the retinal image during self-motion contains information about the person's movement. Pursuit eye movements perturb the pattern of retinal-image motion, complicating the problem of self-motion perception. A question of considerable current interest is the relative importance of retinal and extra-retinal signals in compensating for these effects of pursuit on the retinal image. We addressed this question by examining the effect of prior motion stimuli on self-motion judgments during pursuit. Observers viewed 300 ms random-dot displays simulating forward self-motion during pursuit to the right or to the left; at the end of each display a probe appeared and observers judged whether they would pass left or right of it. The display was preceded by a 300 ms dot pattern that was either stationary or moved in the same direction as, or opposite to, the eye movement. This prior motion stimulus had a large effect on self-motion judgments when the simulated scene was a frontoparallel wall (experiment 1), but not when it was a three-dimensional (3-D) scene (experiment 2). Corresponding simulated-pursuit conditions controlled for purely retinal motion aftereffects, implying that the effect in experiment 1 is mediated by an interaction between retinal and extra-retinal signals. In experiment 3, we examined self-motion judgments with respect to a 3-D scene with mixtures of real and simulated pursuit. When real and simulated pursuits were in opposite directions, performance was determined by the total amount of pursuit-related retinal motion, consistent with an extra-retinal 'trigger' signal that facilitates the action of a retinally based pursuit-compensation mechanism. However, results of experiment 1 without a prior motion stimulus imply that extra-retinal signals are more informative when retinal information is lacking. We conclude that the relative importance of retinal and extra-retinal signals for pursuit compensation varies with the informativeness of the retinal motion pattern, at least for short durations. Our results provide partial explanations for a number of findings in the literature on perception of self-motion and motion in the frontal plane.

5.
To interpret our environment, we integrate information from all our senses. For moving objects, auditory and visual motion signals are correlated and provide information about the speed and the direction of the moving object. We investigated at what level the auditory and the visual modalities interact and whether the human brain integrates only motion signals that are ecologically valid. We found that the sensitivity for identifying motion was improved when motion signals were provided in both modalities. This improvement in sensitivity can be explained by probability summation. That is, auditory and visual stimuli are combined at a decision level, after the stimuli have been processed independently in the auditory and the visual pathways. Furthermore, this integration is direction blind and is not restricted to ecologically valid motion signals.
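The probability-summation account mentioned here has a simple closed form under the assumption that the two modalities are detected independently and combined only at the decision stage: if the auditory and visual signals alone are identified with probabilities $P_A$ and $P_V$, the bimodal prediction is $P_{AV} = 1 - (1 - P_A)(1 - P_V)$, i.e. the observer succeeds whenever either independent channel does. Bimodal gains that do not exceed this prediction are taken as evidence against early sensory fusion. (This is the standard independent-channels formulation; the study may have used a closely related pooling rule.)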

6.
We investigated the role of extraretinal information in the perception of absolute distance. In a computer-simulated environment, monocular observers judged the distance of objects positioned at different locations in depth while performing frontoparallel movements of the head. The objects were spheres covered with random dots subtending three different visual angles. Observers viewed the objects at eye level, either in isolation or superimposed on a ground floor. The distance and size of the spheres were covaried to suppress relative size information. Hence, the main cues to distance were the motion parallax and the extraretinal signals. In three experiments, we found evidence that (1) perceived distance is correlated with simulated distance in terms of precision and accuracy, (2) the accuracy in the distance estimate is slightly improved by the presence of a ground-floor surface, (3) the perceived distance is not altered significantly when the visual field size increases, and (4) the absolute distance is estimated correctly during self-motion. Conversely, stationary subjects failed to report absolute distance when they passively observed a moving object producing the same retinal stimulation, unless they could rely on knowledge of the three-dimensional movements.

7.
The gradedness or discreteness of our visual awareness has been debated. Here, we investigate the influence of the spatial scope of attention on the gradedness of visual awareness. We manipulated the scope of attention using hierarchical letter-based tasks (global: broad scope; local: narrow scope). Participants reported the identity of a masked hierarchical letter either at the global level or at the local level. We measured subjective awareness using perceptual awareness scale (PAS) ratings together with objective performance. The results indicate more graded visual awareness (a shallower slope of the awareness rating curve) at the global level than at the local level. Graded perception was also evident in how the visibility ratings were used, with the global-level task showing greater use of the middle PAS ratings. Our results are in line with the prediction of the level-of-processing hypothesis and show that global/local attentional scope and contextual endogenous factors influence the graded nature of our visual awareness.

8.
The present study was designed to trace the normal development of local and global processing of hierarchical visual forms. We presented pairs of hierarchical shapes to children and adults and asked them to indicate whether the two shapes were the same or different at either the global or the local level. In Experiments 1 (6-year-olds, 10-year-olds, adults) and 2 (10-year-olds, 14-year-olds, adults), we presented stimuli centrally. All age groups responded faster on global trials than local trials (global precedence effect), but the bias was stronger in children and diminished to the adult level between 10 and 14 years of age. In Experiment 3 (10-year-olds, 14-year-olds, adults), we presented stimuli in the left or right visual field so that they were transmitted first to the contralateral hemisphere. All age groups responded faster on local trials when stimuli were presented in the right visual field (left hemisphere); reaction times on global trials were independent of visual field. The results of Experiment 3 suggest that by 10 years of age the hemispheres have adult-like specialization for the processing of hierarchical shapes, at least when attention is directed to the global versus local level. Nevertheless, their greater bias in Experiments 1 and 2 suggests that 10-year-olds are less able than adults to modulate attention to the output from local versus global channels, perhaps because they are less able to ignore distractors and perhaps because the cerebral hemispheres are less able to engage in parallel processing.

9.
We investigated the effect of local texture motion on time-to-contact (TTC) estimation. In Experiment 1, observers estimated the TTC of a looming disk with a spiral texture pattern in a prediction-motion task. Rotation of the spiral texture in a direction causing illusory contraction resulted in a significant TTC overestimation, relative to a condition without texture rotation. This would be consistent with an intrusion of task-irrelevant local upon task-relevant global information. However, illusory expansion did not cause a relative TTC underestimation but rather also a tendency towards overestimation. In Experiment 2, a vertical cylinder moved on the frontoparallel plane. Observers judged its TTC with a finish line. The cylinder was textured with stripes oriented in parallel to its longitudinal axis. It was either not rotating, rotating such that the stripes moved towards the finish line (i.e., in the same direction as the contour), or rotating such that the stripes moved away from the finish line. Both types of texture motion caused TTC overestimation compared to the static condition. Experiment 3 showed that the different effects of task-relevant and task-irrelevant texture motion are not a mere procedural effect of the prediction-motion task. In conclusion, task-irrelevant local motion and global motion are neither averaged in a simple manner nor are they processed independently.
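One way to see why illusory contraction should lengthen TTC estimates: under the classic optical account, the time to contact of an approaching object can be estimated from the image alone as $\tau = \theta / \dot{\theta}$, the ratio of the object's angular size to its rate of expansion. Texture rotation that produces apparent contraction reduces the effective $\dot{\theta}$, which inflates $\tau$ and therefore predicts overestimation. (The tau formulation is offered as standard background, not as a claim about the estimator actually used by the observers in this study.)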

10.
Many experiments have shown that the human visual system makes extensive use of contextual information for facilitating object search in natural scenes. However, the question of how to formally model contextual influences is still open. On the basis of a Bayesian framework, the authors present an original approach to attentional guidance by global scene context. The model comprises 2 parallel pathways; one pathway computes local features (saliency) and the other computes global (scene-centered) features. The contextual guidance model of attention combines bottom-up saliency, scene context, and top-down mechanisms at an early stage of visual processing and predicts the image regions likely to be fixated by human observers performing natural search tasks in real-world scenes.
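A minimal sketch of the kind of two-pathway combination described: a bottom-up saliency map computed from local features is reweighted by a scene-level prior over likely target locations, so regions that are both locally salient and contextually plausible receive the highest predicted fixation priority. The specific choices below (multiplicative combination, the exponent gamma, a Gaussian band as the scene prior) are illustrative assumptions, not the published implementation.

```python
import numpy as np

def contextual_guidance_map(local_saliency, context_prior, gamma=0.5):
    """Combine a local-saliency map with a global scene-context prior.

    local_saliency : (H, W) bottom-up saliency from the local-features pathway
    context_prior  : (H, W) probability of target location given global scene features
    gamma          : weight on the saliency term (illustrative value)
    """
    s = np.clip(local_saliency, 1e-6, None) ** gamma
    p = np.clip(context_prior, 1e-6, None)
    combined = s * p
    return combined / combined.sum()

# Example: a horizontal band near the bottom of the image (e.g., "pedestrians appear
# near street level") reweights an otherwise uniform random saliency map.
H, W = 64, 64
saliency = np.random.rand(H, W)
rows = np.arange(H)[:, None].astype(float)
prior = np.exp(-0.5 * ((rows - 48.0) / 6.0) ** 2) * np.ones((1, W))
guidance = contextual_guidance_map(saliency, prior)
print(np.unravel_index(guidance.argmax(), guidance.shape))  # row index near 48
```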

11.
Three experiments are reported in which participants identified target letters that appeared at either the global or local level of hierarchically organized stimuli. It has been previously reported that response time is facilitated when targets on successive trials appear at the same level (L. M. Ward, 1982; L. C. Robertson, 1996). Experiments 1 and 2 showed that this sequential priming effect can be mediated by target-level information alone, independent of the resolution, or actual physical size, of targets. Target level and resolution were unconfounded by manipulating total stimulus size, such that global elements of the smaller stimuli subtended the same amount of visual angle as local elements of the larger stimuli. Experiment 3, however, showed that when level information is less useful than resolution in parsing targets from distractors, resolution does become critical in intertrial priming. These data are discussed as they relate to the role of attention in local vs. global (part vs. whole) processing.

12.
We describe and evaluate a model of motion perception based on the integration of information from two parallel pathways: a motion pathway and a luminance pathway. The motion pathway has two stages. The first stage measures and pools local motion across the input animation sequence and assigns reliability indices to these pooled measurements. The second stage groups locations on the basis of these measurements. In the luminance pathway, the input scene is segmented into regions on the basis of similarities in luminance. In a subsequent integration stage, motion and luminance segments are combined to obtain the final estimates of object motion. The neural network architecture we employ is based on LEGION (locally excitatory globally inhibitory oscillator networks), a scheme for feature binding and region labeling based on oscillatory correlation. Many aspects of the model are implemented at the neural network level, whereas others are implemented at a more abstract level. We apply this model to the computation of moving, uniformly illuminated, two-dimensional surfaces that are either opaque or transparent. Model performance replicates a number of distinctive features of human motion perception.
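The integration stage can be illustrated schematically: local motion estimates carrying reliability indices are pooled within the regions delivered by the luminance pathway, so each luminance-defined surface ends up with a single motion estimate. The sketch below is a toy version of that stage under those assumptions; it replaces the LEGION oscillator dynamics with explicit region labels and a weighted average.

```python
import numpy as np

def pool_motion_by_region(flow, reliability, labels):
    """Assign one motion estimate per luminance-defined region.

    flow        : (H, W, 2) local motion measurements (vx, vy)
    reliability : (H, W)    confidence assigned to each local measurement
    labels      : (H, W)    integer region labels from the luminance pathway

    Returns a dict mapping region label -> reliability-weighted mean velocity.
    """
    estimates = {}
    for region in np.unique(labels):
        mask = labels == region
        w = reliability[mask]
        v = flow[mask]                                   # (n, 2)
        estimates[int(region)] = (w[:, None] * v).sum(axis=0) / max(w.sum(), 1e-6)
    return estimates

# Example: the right half of the scene translates rightwards; measurements are noisy.
H, W = 32, 32
labels = np.zeros((H, W), dtype=int)
labels[:, W // 2:] = 1
flow = np.random.normal(0.0, 0.05, (H, W, 2))
flow[:, W // 2:, 0] += 1.0
reliability = np.ones((H, W))
print(pool_motion_by_region(flow, reliability, labels))  # region 1 ~ (1.0, 0.0)
```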

13.
The global precedence hypothesis has been operationally defined as a faster or earlier processing of the global than of the local properties of an image (global advantage) and as interference by processing at the global level with processing at the local level (global interference). Navon (1977) proposed an association between the global advantage and interference effects. Other studies have shown a dissociation between the two effects (e.g., Lamb & Robertson, 1988). It seems that the controversy in previous research resulted from not equalizing the eccentricities of global and local properties. In the present study, the eccentricities of the two levels were equalized by using stimuli with all their elements located along their perimeters. The results of the first experiment demonstrated that although the global level was identified faster than the local level in both the central and the peripheral locations of the visual field (global advantage), the pattern of global interference varied across the visual field. Consistency of global and local levels increased the speed of processing of the local level displayed at the center of the visual field but slowed down the processing of that level at peripheral locations. The results of Experiment 2 demonstrated that it was most likely that the variation in the pattern of global interference was determined by the variable of eccentricity, rather than by the sizes of the global and local levels.

14.
The representation of uniform motion in vision
M T Swanston, N J Wade, R H Day. Perception, 1987, 16(2): 143-159
For veridical detection of object motion any moving detecting system must allocate motion appropriately between itself and objects in space. A model for such allocation is developed for simplified situations (points of light in uniform motion in a frontoparallel plane). It is proposed that motion of objects is registered and represented successively at four levels within frames of reference that are defined by the detectors themselves or by their movements. The four levels are referred to as retinocentric, orbitocentric, egocentric, and geocentric. Thus the retinocentric signal is combined with that for eye rotation to give an orbitocentric signal, and the left and right orbitocentric signals are combined to give an egocentric representation. Up to the egocentric level, motion representation is angular rather than three-dimensional. The egocentric signal is combined with signals for head and body movement and for egocentric distance to give a geocentric representation. It is argued that although motion perception is always geocentric, relevant registrations also occur at the three earlier levels. The model is applied to various veridical and nonveridical motion phenomena.
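The chain of combinations described in this abstract can be written schematically. In a deliberately linearized notation that is an assumption about the verbal model rather than a reproduction of the paper's equations:

$$
\begin{aligned}
o &= r + e &&\text{(orbitocentric signal = retinocentric motion signal + eye-rotation signal)}\\
g &= \tfrac{1}{2}\,(o_{L} + o_{R}) &&\text{(egocentric representation = combined left and right orbitocentric signals)}\\
G &\approx D\, g + h &&\text{(geocentric representation = egocentric angular motion scaled by egocentric distance } D\text{, plus head/body movement } h\text{)}
\end{aligned}
$$

Up to $g$ the quantities are angular, as the abstract notes; the distance term $D$ is what converts the egocentric angular signal into the three-dimensional geocentric representation.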

15.
Previous studies have shown that two-frame motion detection thresholds are elevated if one frame's contrast is raised, despite the increase in average contrast (the "contrast paradox"). In this study, we investigated whether such contrast interactions occurred at a monocular or binocular site of visual processing. Two-frame motion direction discrimination thresholds were measured for motion frames that were presented binocularly, dichoptically or interocularly. Thresholds for each presentation condition were measured for motion frames that comprised either matched or unmatched contrasts. The results showed that contrast mechanisms producing the contrast paradox combine contrast signals from both eyes prior to motion computation. Furthermore, the results are consistent with the existence of monocular and binocular contrast gain control mechanisms that coexist either as combined or independent systems.

16.
We examined the ability of older adults to select local and global stimuli varying in perceptual saliency—a task requiring nonspatial visual selection. Participants were asked to identify in separate blocks a target at either the global or the local level of a hierarchical stimulus, while the saliency of each level was varied (across different conditions, either the local or the global form was the more salient and relatively easier to identify). Older adults were less efficient than young adults in ignoring distractors that were higher in saliency than were targets, and this occurred across both the global and local levels of form. The increased effects of distractor saliency on older adults occurred even when the effects were scaled by overall differences in task performance. The data provide evidence for an age-related decline in nonspatial attentional selection of low-salient hierarchical stimuli, not determined by the (global or local) level at which selection was required. We discuss the implications of these results for understanding both the interaction between saliency and hierarchical processing and the effects of aging on nonspatial visual attention.

17.
18.
The size and exposure duration of stimuli have been found to be relevant factors to the issue of processing dominance. Nevertheless, the relation between these two factors and their possible effects on processing dominance have never been studied. The aim of the present research was twofold: (a) to examine whether size and the exposure duration of stimuli affect processing dominance; (b) to examine whether these effects depend on the same/different eccentricity of global and local levels. Stimuli were presented at three exposure durations: 140 msec, 70 msec, and 40 msec. The overall sizes of stimuli were varied at three levels: small (3°), intermediate (6°), and large (12°). In Experiment 1 stimuli were used whose global and local levels were at different eccentricity (Hs and Ss stimuli). In Experiment 2 stimuli whose global and local levels were at the same eccentricity (Cs stimuli) were used. The results showed that the effects of visual angle on processing dominance are independent of the exposure duration of stimuli used. The transition from global to local dominance as visual angle is increased depends on the eccentricity of global and local information: It only appears when the eccentricity is different and biased towards the local level (Hs and Ss stimuli). Finally, the size of the effect is modulated by the visual angle subtended by the stimuli: the size of the effect of global advantage was inversely related to visual angle. The size of the interference effect from the local level to the global level was directly related to the visual angle, whereas that from the global level to the local level was inversely related to the visual angle subtended by the stimuli.

19.
Superior processing of the local elements of hierarchical patterns, carried out independently of an otherwise intact global influence, is a well-established characteristic of autistic visual perception. However, whether this confirmed finding has an equivalent in the auditory modality is still unknown. To fill this gap, 18 autistic and 18 typical participants completed a melodic decision task in which global- and local-level information could be congruent or incongruent. While focusing either on the global level (melody) or the local level (group of notes) of hierarchical auditory stimuli, participants had to decide whether the focused level was rising or falling. Autistic participants showed intact global processing, superior performance when processing local elements, and reduced global-to-local interference compared with typical participants. These results are the first to demonstrate that autistic processing of auditory hierarchical stimuli closely parallels processing of visual hierarchical stimuli. When analyzing complex auditory information, autistic participants present a local bias and more autonomous local processing, but not to the detriment of global processing.

20.
The purpose of this study was to assess the visual processing of global and local levels of hierarchical stimuli in domestic dogs. Fourteen dogs were trained to recognise a compound stimulus in a simultaneous conditioned discrimination procedure and were then tested for their local/global preference in a discrimination test. As a group, dogs showed a non-significant trend for global precedence, although large inter-individual variability was observed. Choices in the test were not affected by either the dogs' sex or the type of stimulus used for training. However, the less time a dog took to complete the discrimination training phase, the higher the probability that it chose the global level of the test stimulus. Moreover, dogs that showed a clear preference for the global level in the test were significantly less likely to show positional responses during discrimination training. These differences in the speed of acquisition and response patterns may reflect individual differences in the cognitive requirements during discrimination training. The individual variability in global/local precedence suggests that experience in using visual information may be more important than predisposition in determining global/local processing in dogs.
