Similar Literature
20 similar documents retrieved.
1.
When a stationary object begins to move, visual spatial attention is reflexively deployed to the location of that object. We tested whether this capture of attention by new motion is entirely stimulus driven, or whether it is contingent on an observer's goals. Participants monitored a visual display for a colour change, inducing an attentional control set (ACS) for colour. Across three experiments, irrelevant new-motion cues always captured visual spatial attention, despite the ACS for colour. This persistence of the attentional cueing effect demonstrates that ACSs, in particular an ACS for colour, cannot prevent new motion from capturing attention. Unlike other stimulus types, such as luminance changes, colour singletons, and new objects, new motion may always capture attention regardless of an observer's goals. This conclusion entails that new motion is an important determinant of when, and to where, visual spatial attention is deployed.

2.
When a person moves in a straight line through a stationary environment, the images of object surfaces move in a radial pattern away from a single point. This point, known as the focus of expansion (FOE), corresponds to the person’s direction of motion. People judge their heading from image motion quite well in this situation. They perform most accurately when they can see the region around the FOE, which contains the most useful information for this task. Furthermore, a large moving object in the scene has no effect on observer heading judgments unless it obscures the FOE. Therefore, observers may obtain the most accurate heading judgments by focusing their attention on the region around the FOE. However, in many situations (e.g., driving), the observer must pay attention to other moving objects in the scene (e.g., cars and pedestrians) to avoid collisions. These objects may be located far from the FOE in the visual field. We tested whether people can accurately judge their heading and the three-dimensional (3-D) motion of objects while paying attention to one or the other task. The results show that differential allocation of attention affects people’s ability to judge 3-D object motion much more than it affects their ability to judge heading. This suggests that heading judgments are computed globally, whereas judgments about object motion may require more focused attention.
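As a concrete illustration of the geometry this abstract describes (a minimal sketch with synthetic data, not the authors' procedure; the function name and numbers are ours), the FOE can be recovered as the least-squares intersection of the lines defined by each image point and its flow vector:

```python
import numpy as np

def estimate_foe(points, flows):
    """points: (N, 2) image positions; flows: (N, 2) image velocities."""
    d = flows / np.linalg.norm(flows, axis=1, keepdims=True)  # unit flow directions
    # Projector onto the component perpendicular to each flow direction:
    # A_i = I - d_i d_i^T, so A_i (p - x_i) = 0 when p lies on the flow line through x_i.
    A = np.eye(2)[None] - d[:, :, None] * d[:, None, :]
    # Normal equations of the least-squares problem sum_i ||A_i (p - x_i)||^2.
    lhs = A.sum(axis=0)
    rhs = np.einsum('nij,nj->i', A, points)
    return np.linalg.solve(lhs, rhs)

# Synthetic check: purely radial flow expanding away from a known FOE.
rng = np.random.default_rng(0)
true_foe = np.array([40.0, -25.0])
pts = rng.uniform(-200, 200, size=(50, 2))
vel = 0.1 * (pts - true_foe)           # expansion away from the FOE
print(estimate_foe(pts, vel))          # approximately [40., -25.]
```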

3.
Several tendencies found in explicit judgments about object motion have been interpreted as evidence that people possess a naive theory of impetus. The theory states that objects that are caused to move by other objects acquire force that determines the kind of motion exhibited by the object, and that this force gradually dissipates over time. I argue that the findings can better be understood as manifestations of a general understanding of externally caused motion based on experiences of acting on objects. Experiences of acting on objects yield the idea that properties of the cause of motion are transmitted to the effect object. This idea functions as a heuristic for explicit predictions of object motion under conditions of uncertainty. This accounts not only for the findings taken as evidence for the impetus theory, but also for several findings that fall outside the scope of the impetus theory. It has also been claimed that judgments about the location at which a moving object disappeared are influenced by the impetus theory. I argue that these judgments are better explained in a different way, as best-guess extrapolations made by the visual system as a practical guide to interactions with the object, such as interception.

4.
In dynamic environments in which many stimulus elements are in motion, visual search may depend upon specific characteristics of target motion that are known in advance. When stimulus elements move in various directions but each element can move in only one direction, prior knowledge of the target's direction of motion reduces search time.

5.
S. N. Watamaniuk, Perception, 1992, 21(6), 791-802
Despite the sluggish temporal response of the human visual system, moving objects appear clear and without blur, which suggests that visible persistence is reduced when objects move. It has been argued that spatiotemporal proximity alone can account for this modulation of visible persistence and that activation of a motion mechanism per se is not necessary. Experiments are reported which demonstrate that there is a motion-specific influence on visible persistence. Specifically, points moving in constant directions, or fixed trajectories, show less persistence than points moving with the same spatial and temporal displacements but taking random walks, randomly changing direction each frame. Subjects estimated the number of points present in the display for these two types of motion conditions. Under conditions chosen to produce 'good' apparent motion, i.e., small temporal and spatial increments, the apparent number of points for the fixed-trajectory condition was significantly lower than the apparent number in the random-walk condition. The traditional explanation of the suppression of persistence based on the spatiotemporal proximity of objects cannot account for these results. The enhanced suppression of persistence observed for a target moving in a consistent direction depends upon the activation of a directionally tuned motion mechanism extended over space and time.

6.
Visual search in real environments is a vital ability on which humans and animals depend for survival. Most current visual search research uses stationary observers and static two-dimensional search items, and focuses on the role of attention in search. Existing theoretical models of visual search mainly summarize the top-down attentional factors that influence search, while reducing the bottom-up factors to mere image salience. In real environments, however, the observer or the search items can move, and the visual information available during search includes dynamic optic flow as well as static image-structure information. Previous research on visual recognition has found that combining these two kinds of information allows observers to recognize scenes, events, and three-dimensional structure accurately and robustly. Introducing both kinds of visual information into existing theoretical models of visual search can better reproduce search tasks in real environments. We propose a research framework and experimental plans to investigate the visual search process that exploits dynamic and static visual information, in order to refine existing models of visual search. We argue that fully exploiting environmental information can improve search efficiency, and that this has important applications in visual search training and in the design of intelligent search systems.

7.
In three visual search experiments, the processes involved in the efficient detection of motion-form conjunction targets were investigated. Experiment 1 was designed to estimate the relative contributions of stationary and moving nontargets to the search rate. Search rates were primarily determined by the number of moving nontargets; stationary nontargets sharing the target form also exerted a significant effect, but this was only about half as strong as that of moving nontargets; stationary nontargets not sharing the target form had little influence. In Experiments 2 and 3, the effects of display factors influencing the visual (form) quality of moving items (movement speed and item size) were examined. Increasing the speed of the moving items (> 1.5 degrees/sec) facilitated target detection when the task required segregation of the moving from the stationary items. When no segregation was necessary, increasing the movement speed impaired performance: With large display items, motion speed had little effect on target detection, but with small items, search efficiency declined when items moved faster than 1.5 degrees/sec. This pattern indicates that moving nontargets exert a strong effect on the search rate (Experiment 1) because of the loss of visual quality for moving items above a certain movement speed. A parallel-continuous processing account of motion-form conjunction search is proposed, which combines aspects of Guided Search (Wolfe, 1994) and attentional engagement theory (Duncan & Humphreys, 1989).

8.
When a person moves in a straight line through a stationary environment, the images of object surfaces move in a radial pattern away from a single point. This point, known as the focus of expansion (FOE), corresponds to the person's direction of motion. People judge their heading from image motion quite well in this situation. They perform most accurately when they can see the region around the FOE, which contains the most useful information for this task. Furthermore, a large moving object in the scene has no effect on observer heading judgments unless it obscures the FOE. Therefore, observers may obtain the most accurate heading judgments by focusing their attention on the region around the FOE. However, in many situations (e.g., driving), the observer must pay attention to other moving objects in the scene (e.g., cars and pedestrians) to avoid collisions. These objects may be located far from the FOE in the visual field. We tested whether people can accurately judge their heading and the three-dimensional (3-D) motion of objects while paying attention to one or the other task. The results show that differential allocation of attention affects people's ability to judge 3-D object motion much more than it affects their ability to judge heading. This suggests that heading judgments are computed globally, whereas judgments about object motion may require more focused attention.

9.
The interpretation of a dynamic visual scene requires integrating information within frames (grouping and completion) and across frames (correspondence matching). Fragmentary views of objects were used in five experiments. These views could not be matched with each other by any similarity transformation on the basis of their explicit visual features, but their completed versions were related by a rotational transformation. When the fragmentary images were successively presented to observers, it was found that they produced apparent motion in the picture plane and in depth. Thus, apparent motion is capable of establishing correspondence at the level of perceptually recovered objects in three-dimensional space.

10.
Motion lines (MLs) are a pictorial technique used to represent object movement in a still picture. This study explored how MLs contribute to motion perception. In Experiment 1, we reported the creation of a motion illusion caused by MLs: random displacements of objects with MLs on each frame were perceived as unidirectional global motion along the pictorial motion direction implied by MLs. In Experiment 2, we showed that the illusory global motion in the peripheral visual field captured the perceived motion direction of random displacement of objects without MLs in the central visual field, and confirmed that the results in Experiment 1 did not stem simply from response bias, but resulted from perceptual processing. In Experiment 3, we showed that the spatial arrangement of orientation information rather than ML length is important for the illusory global motion. Our results indicate that the ML effect is based on perceptual processing rather than response bias, and that comparison of neighboring orientation components may underlie the determination of pictorial motion direction with MLs.

11.
Recent work has shown that observers are remarkably effective in searching displays of randomly moving items. In two experiments, we combined working memory tasks with visual search, to test whether search through such complex motion displays, as compared with search through static items, places an extra burden on spatial working memory. In our first experiment, we show that the dual-task interference observed for motion search is specific to spatial working memory, in line with earlier work for static search. In our second experiment, we found dual-task interference for both static and motion search, but no difference between them. The results support the suggestion that the same search process is active during search among static and search among moving items.

12.
When some distractors (old items) appear before others (new items) in an inefficient visual search task, the old items are excluded from the search (visual marking). Previous studies have shown that changing the shape of old items eliminates this effect, suggesting that shape identity must be maintained for successful visual marking. However, the contribution of top-down target knowledge to the maintenance of visual marking under shape change conditions has not been systematically examined. The present study tested whether the vulnerability of visual marking to shape change is contingent on observers' attentional set, by manipulating compatibility of the set and the domains in which the change occurs. The results indicated that visual marking survived shape changes when the observer's attentional set was consistent with critical features between the old and new items. This protection was observed when the set was based on explicit instructions at the beginning of the experiment, and when the task set was implicitly carried over from the previous task. These results suggest that top-down processes play a role in maintaining memory templates by enhancing the grouping and suppression processes during visual search, despite disruptive bottom-up signals.

13.
Across humans' evolutionary history, detecting animate entities in the visual field (such as prey and predators) has been critical for survival. One of the defining features of animals is their motion: self-propelled and self-directed. Does such animate motion capture visual attention? To answer this question, we compared the time to detect targets involving objects that were moving predictably as a result of collisions (inanimate motion) with the time to detect targets involving objects that were moving unpredictably, having been in no such collisions (animate motion). Across six experiments, we consistently found that targets involving objects that underwent animate motion were responded to more quickly than targets involving objects that underwent inanimate motion. Moreover, these speeded responses appeared to be due to the perceived animacy of the objects, rather than due to their uniqueness in the display or involvement of a top-down strategy. We conclude that animate motion does indeed capture visual attention.

14.
It is shown that geometrically changing projections of objects which move and/or change their shape carry no specific information about form and three-dimensional motion. How, then, does the visual apparatus produce specific percepts from such non-specific changing stimuli? By applying an analogue computer technique, changing projections of artificial objects are generated on a CRT screen. These projections are fed into the eye by means of an optical device where they form a continuously changing solid angle of homogeneous light. The main conclusion is that it is a principle of perceptual three-dimensionality which gives specificity to the percepts. Preliminary statements of principles for prediction of perceived motion in depth from a given change in proximal stimulus are presented.

15.
Visual marking inhibits singleton capture
This paper is concerned with how we prioritize the selection of new objects in visual scenes. We present four experiments investigating the effects of distractor previews on visual search through new objects. Participants viewed a set of to-be-ignored nontargets, with the task being to search for a target in a second set, added to the first after 1000 ms. This second set could contain a salient feature singleton, defined in terms of its color, orientation, or both color and orientation. When the singleton was a distractor, search was slowed relative to when there was no singleton. Search was facilitated when the singleton was a target. Interestingly, both the interference and facilitation effects were modulated when the preview shared features with the singleton. Follow-up experiments showed that this reduction of singleton effects was not due to: (i) low-level sensory aspects of the displays, (ii) increased heterogeneity in the search set in the preview condition, or (iii) color-based grouping of old and new items. Instead, we suggest that there is an inhibitory carry-over from the first to the second set of items based on feature similarity. We suggest the suppression stems from a process termed visual marking, which suppresses irrelevant visual objects in anticipation of more relevant new objects (Watson & Humphreys, 1997). The findings argue against alternative explanations, such as an account based on automatic capture by abrupt new onsets.

16.
In daily life our visual system is bombarded with motion information. We see cars driving by, flocks of birds flying in the sky, clouds passing behind trees that are dancing in the wind. Vision science has a good understanding of the first stage of visual motion processing, that is, the mechanism underlying the detection of local motions. Currently, research is focused on the processes that occur beyond the first stage. At this level, local motions have to be integrated to form objects, define the boundaries between them, construct surfaces and so on. An interesting, if complicated, case is known as motion transparency: the situation in which two overlapping surfaces move transparently over each other. In that case two motions have to be assigned to the same retinal location. Several researchers have tried to solve this problem from a computational point of view, using physiological and psychophysical results as a guideline. We will discuss two models: one uses the traditional idea known as ‘filter selection’ and the other a relatively new approach based on Bayesian inference. Predictions from these models are compared with our own visual behaviour and that of the neural substrates that are presumed to underlie these perceptions.
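As a rough illustration of the Bayesian-inference flavour of model the abstract mentions (a sketch under our own assumptions, not the specific models discussed; the mixture-model/BIC comparison is a stand-in for a fuller Bayesian treatment), local velocity samples from one retinal region can be tested against a one-motion versus a two-motion, transparent interpretation:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Two transparent surfaces drifting left and right, plus measurement noise.
v_left  = rng.normal(loc=[-2.0, 0.0], scale=0.3, size=(200, 2))
v_right = rng.normal(loc=[+2.0, 0.0], scale=0.3, size=(200, 2))
samples = np.vstack([v_left, v_right])          # local velocity samples (vx, vy)

# Fit a one-motion and a two-motion interpretation and compare them with BIC,
# a penalized-likelihood stand-in for Bayesian model comparison.
models = {k: GaussianMixture(n_components=k, random_state=0).fit(samples) for k in (1, 2)}
bic = {k: m.bic(samples) for k, m in models.items()}
best = min(bic, key=bic.get)                    # lower BIC = preferred interpretation
print("preferred number of motions:", best)
if best == 2:
    print("estimated motion vectors:\n", models[2].means_)
```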

17.
We report results from six experiments in which participants had to search for a "C" among "O" distractors. The search items were either holes or objects, defined by motion, contrast, or both. Our main findings were (1) it was easier to search among objects than to search among holes, and (2) the difference between search among objects and search among holes was primarily caused by grouping with the background. The data support the hypothesis that the shape of a hole is only available indirectly. We further note that, in our experiments, search performance for both holes and objects depended on the surface medium used to define the search items.

18.
The present study examined attentional capture by an unannounced motion singleton in a visual search task. The results showed that a motion singleton only captured attention on its first unannounced occurrence, when the observers had not encountered moving items before in the experiment, whereas it failed to capture attention when observers were familiar with moving items. This indicates that motion can capture attention independently of top-down attentional control settings, but only when motion as a feature is unexpected and new. An additional experiment tested whether salient items can capture attention when all stimuli possess new and unexpected features, and novelty information cannot guide attention. The results showed that attention was shifted to the location of the salient item when all items were new and unexpected, reinforcing the view that salient items receive attentional priority. The implications of these results for current theories of attention are discussed.

19.
The onset of motion captures attention during visual search even if the motion is not task relevant, which suggests that motion onsets capture attention in a stimulus-driven manner. However, we have recently shown that stimulus-driven attentional capture by abruptly appearing objects is attenuated under conditions of high perceptual load. In the present study, we examined the influence of perceptual load on attentional capture by another type of dynamic stimulus: the onset of motion. Participants searched for a target letter through briefly presented low- and high-load displays. On each trial, two irrelevant flankers also appeared, one with a motion onset and one that was static. Flankers defined by a motion onset captured attention in the low-load but not in the high-load displays. This modulation of capture in high-load displays was not the result of overall lengthening of reaction times (RTs) in this condition, since search for a single low-contrast target lengthened RTs but did not influence capture. These results, together with those of previous studies, suggest that perceptual load can modulate attentional capture by dynamic stimuli.

20.
Visual Search Remains Efficient When Visual Working Memory is Full
Many theories of attention have proposed that visual working memory plays an important role in visual search tasks. The present study examined the involvement of visual working memory in search using a dual-task paradigm in which participants performed a visual search task while maintaining no, two, or four objects in visual working memory. The presence of a working memory load added a constant delay to the visual search reaction times, irrespective of the number of items in the visual search array. That is, there was no change in the slope of the function relating reaction time to the number of items in the search array, indicating that the search process itself was not slowed by the memory load. Moreover, the search task did not substantially impair the maintenance of information in visual working memory. These results suggest that visual search requires minimal visual working memory resources, a conclusion that is inconsistent with theories that propose a close link between attention and working memory.
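The slope/intercept logic in this abstract can be made concrete with a small sketch (hypothetical numbers, not the study's data): a constant added delay shifts the intercept of the RT-by-set-size function, whereas a slowed search process would steepen its slope.

```python
import numpy as np

set_sizes = np.array([4, 8, 12, 16])
rt_no_load = np.array([620., 700., 780., 860.])   # ms, illustrative means
rt_load    = np.array([680., 760., 840., 920.])   # same slope, intercept shifted by ~60 ms

for label, rt in (("no load", rt_no_load), ("memory load", rt_load)):
    slope, intercept = np.polyfit(set_sizes, rt, 1)
    print(f"{label:12s} slope = {slope:5.1f} ms/item, intercept = {intercept:6.1f} ms")

# Equal slopes with different intercepts: the load adds a constant delay,
# i.e., the search process itself is not slowed.
```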
