Similar Documents
 20 similar documents found (search time: 165 ms)
1.
Multiple-object tracking (MOT) studies have shown that tracking ability declines as object speed increases. However, this might be attributed solely to the increased number of times that target and distractor objects usually pass close to each other (“close encounters”) when speed is increased, resulting in more target–distractor confusions. The present study investigates whether speed itself affects MOT ability by using displays in which the number of close encounters is held constant across speeds. Observers viewed several pairs of disks, and each pair rotated about the pair’s midpoint and, also, about the center of the display at varying speeds. Results showed that even with the number of close encounters held constant across speeds, increased speed impairs tracking performance, and the effect of speed is greater when the number of targets to be tracked is large. Moreover, neither the effect of number of distractors nor the effect of target–distractor distance was dependent on speed, when speed was isolated from the typical concomitant increase in close encounters. These results imply that increased speed does not impair tracking solely by increasing close encounters. Rather, they support the view that speed affects MOT capacity by requiring more attentional resources to track at higher speeds.

2.
This study investigates how speed of motion is processed in language. In three eye‐tracking experiments, participants were presented with visual scenes and spoken sentences describing fast or slow events (e.g., The lion ambled/dashed to the balloon). Results showed that looking time to relevant objects in the visual scene was affected by the speed of verb of the sentence, speaking rate, and configuration of a supporting visual scene. The results provide novel evidence for the mental simulation of speed in language and show that internal dynamic simulations can be played out via eye movements toward a static visual scene.

3.
Everyday tasks often require us to keep track of multiple objects in dynamic scenes. Past studies show that tracking becomes more difficult as objects move faster. In the present study, we show that this trade-off may not be due to increased speed itself but may, instead, be due to the increased crowding that usually accompanies increases in speed. Here, we isolate changes in speed from variations in crowding, by projecting a tracking display either onto a small area at the center of a hemispheric projection dome or onto the entire dome. Use of the larger display increased retinal image size and object speed by a factor of 4 but did not increase interobject crowding. Results showed that tracking accuracy was equally good in the large-display condition, even when the objects traveled far into the visual periphery. Accuracy was also not reduced when we tested object speeds that limited performance in the small-display condition. These results, along with a reinterpretation of past studies, suggest that we might be able to track multiple moving objects as fast as we can track a single moving object, once the effect of object crowding is eliminated.

4.
The attentional processes for tracking moving objects may be largely hemisphere-specific. Indeed, in our first two experiments the maximum object speed (speed limit) for tracking targets in one visual hemifield (left or right) was not significantly affected by a requirement to track additional targets in the other hemifield. When the additional targets instead occupied the same hemifield as the original targets, the speed limit was reduced. At slow target speeds, however, adding a second target to the same hemifield had little effect. At high target speeds, the cost of adding a same-hemifield second target was approximately as large as would occur if observers could only track one of the targets. This shows that performance with a fast-moving target is very sensitive to the amount of resource allocated. In a third experiment, we investigated whether the resources for tracking can be distributed unequally between two targets. The speed limit for a given target was higher if the second target was slow rather than fast, suggesting that more resource was allocated to the faster of the two targets. This finding was statistically significant only for targets presented in the same hemifield, consistent with the theory of independent resources in the two hemifields. Some limited evidence was also found for resource sharing across hemifields, suggesting that attentional tracking resources may not be entirely hemifield-specific. Together, these experiments indicate that the largely hemisphere-specific tracking resource can be differentially allocated to faster targets.

5.
In this study, we evaluated observers' ability to compare naturally shaped three-dimensional (3-D) objects, using their senses of vision and touch. In one experiment, the observers haptically manipulated 1 object and then indicated which of 12 visible objects possessed the same shape. In the second experiment, pairs of objects were presented, and the observers indicated whether their 3-D shape was the same or different. The 2 objects were presented either unimodally (vision-vision or haptic-haptic) or cross-modally (vision-haptic or haptic-vision). In both experiments, the observers were able to compare 3-D shape across modalities with reasonably high levels of accuracy. In Experiment 1, for example, the observers' matching performance rose to 72% correct (chance performance was 8.3%) after five experimental sessions. In Experiment 2, small (but significant) differences in performance were obtained between the unimodal vision-vision condition and the two cross-modal conditions. Taken together, the results suggest that vision and touch have functionally overlapping, but not necessarily equivalent, representations of 3-D shape.

6.
The effects of a background scene on the perception of the trajectory of an approaching object and its relation to changes in angular speed and angular size were examined in five experiments. Observers judged the direction (upward or downward) of two sequentially presented motion trajectories simulating a sphere traveling toward the observer at a constant 3-D speed from a fixed distance. In Experiments 1–4, we examined the effects of changes in angular speed and the presence of a scene background, with changes in angular size based either on the trajectories being discriminated or on an intermediate trajectory. In Experiment 5, we examined the effects of changes in angular speed and scene background, with angular size either constant or consistent with an intermediate 3-D trajectory. Overall, we found that (1) observers were able to judge the direction of object motion trajectories from angular speed changes; (2) observers were more accurate with a 3-D scene background, as compared with a uniform background, suggesting that scene information is important for recovering object motion trajectories; and (3) observers were more accurate in judging motion trajectories based on angular speed when the angular size function was consistent with motion in depth than when the angular size was constant.

7.
When a person moves in a straight line through a stationary environment, the images of object surfaces move in a radial pattern away from a single point. This point, known as the focus of expansion (FOE), corresponds to the person's direction of motion. People judge their heading from image motion quite well in this situation. They perform most accurately when they can see the region around the FOE, which contains the most useful information for this task. Furthermore, a large moving object in the scene has no effect on observer heading judgments unless it obscures the FOE. Therefore, observers may obtain the most accurate heading judgments by focusing their attention on the region around the FOE. However, in many situations (e.g., driving), the observer must pay attention to other moving objects in the scene (e.g., cars and pedestrians) to avoid collisions. These objects may be located far from the FOE in the visual field. We tested whether people can accurately judge their heading and the three-dimensional (3-D) motion of objects while paying attention to one or the other task. The results show that differential allocation of attention affects people's ability to judge 3-D object motion much more than it affects their ability to judge heading. This suggests that heading judgments are computed globally, whereas judgments about object motion may require more focused attention.

8.
Observers can visually track multiple objects that move independently even if the scene containing the moving objects is rotated in a smooth way. Abrupt scene rotations render tracking more difficult but not impossible. For nonrotated, stable dynamic displays, the strategy of looking at the targets' centroid has been shown to be of importance for visual tracking. But which factors determine successful visual tracking in a nonstable dynamic display? We report two eye tracking experiments that present evidence for centroid looking. Across abrupt viewpoint changes, gaze on the centroid is more stable than gaze on targets, indicating a process of realigning targets as a group. Further, we show that the relative importance of centroid looking increases with object speed.

9.
Three experiments investigated whether the semantic informativeness of a scene region (object) influences its representation between successive views. In Experiment 1, a scene and a modified version of that scene were presented in alternation, separated by a brief retention interval. A changed object was either semantically consistent with the scene (non-informative) or inconsistent (informative). Change detection latency was shorter in the semantically inconsistent versus consistent condition. In Experiment 2, eye movements were eliminated by presenting a single cycle of the change sequence. Detection accuracy was higher for inconsistent versus consistent objects. This inconsistent object advantage was obtained when the potential strategy of selectively encoding inconsistent objects was no longer advantageous (Experiment 3). These results indicate that the semantic properties of an object influence whether the representation of that object is maintained between views of a scene, and this influence is not caused solely by the differential allocation of eye fixations to the changing region. The potential cognitive mechanisms supporting this effect are discussed.

10.
Spatial reference in multiple object tracking is available from configurations of dynamic objects and static reference objects. In three experiments, we studied the use of spatial reference in tracking and in relocating targets after abrupt scene rotations. Observers tracked 1, 2, 3, 4, and 6 targets in 3D scenes, in which white balls moved on a square floor plane. The floor plane was either visible, thus providing static spatial reference, or it was invisible. Without scene rotations, the configuration of dynamic objects provided sufficient spatial reference, and static spatial reference was not advantageous. In contrast, with abrupt scene rotations of 20°, static spatial reference supported the relocation of targets. A wireframe floor plane lacking local visual detail was as effective as a checkerboard. Individually colored geometric forms as static reference objects provided no additional benefit either, even if targets were centered on these forms at the abrupt scene rotation. Individualizing the dynamic objects themselves by color for a brief interval around the abrupt scene rotation, however, did improve performance. We conclude that attentional tracking of moving targets proceeds within dynamic configurations but detached from static local background.

11.
The reported experiment tested the effect of abrupt and unpredictable viewpoint changes on the attentional tracking of multiple objects in dynamic 3-D scenes. Observers tracked targets that moved independently among identically looking distractors on a rectangular floor plane. The tracking interval was 11 s. Abrupt rotational viewpoint changes of 10°, 20°, or 30° occurred after 8 s. Accuracy of tracking targets across a 10° viewpoint change was comparable to accuracy in a continuous control condition, whereas viewpoint changes of 20° and 30° impaired tracking performance considerably. This result suggests that tracking is mainly dependent on a low-level process whose performance is preserved against small disturbances by the visual system's ability to compensate for small changes of retinocentric coordinates. Tracking across large viewpoint changes succeeds only if allocentric coordinates are remembered to relocate targets after displacements.

12.
13.
Previous research on the perception of 3-D object motion has considered time to collision, time to passage, collision detection, and judgments of speed and direction of motion but has not directly studied the perception of the overall shape of the motion path. We examined the perception of the magnitude of curvature and sign of curvature of the motion path for objects moving at eye level in a horizontal plane parallel to the line of sight. We considered two sources of information for the perception of motion trajectories: changes in angular size and changes in angular speed. Three experiments examined judgments of relative curvature for objects moving at different distances. At the closest distance studied, accuracy was high with size information alone but near chance with speed information alone. At the greatest distance, accuracy with size information alone decreased sharply, but accuracy for displays with both size and speed information remained high. We found similar results in two experiments with judgments of sign of curvature. Accuracy was higher for displays with both size and speed information than with size information alone, even when the speed information was based on parallel projections and was not informative about sign of curvature. For both magnitude of curvature and sign of curvature judgments, information indicating that the trajectory was curved increased accuracy, even when this information was not directly relevant to the required judgment.

14.
In two experiments we examined whether the allocation of attention in natural scene viewing is influenced by the gaze cues (head and eye direction) of an individual appearing in the scene. Each experiment employed a variant of the flicker paradigm in which alternating versions of a scene and a modified version of that scene were separated by a brief blank field. In Experiment 1, participants were able to detect the change made to the scene sooner when an individual appearing in the scene was gazing at the changing object than when the individual was absent, gazing straight ahead, or gazing at a nonchanging object. In addition, participants' ability to detect change deteriorated linearly as the changing object was located progressively further from the line of regard of the gazer. Experiment 2 replicated this change detection advantage of gaze-cued objects in a modified procedure using more critical scenes, a forced-choice change/no-change decision, and accuracy as the dependent variable. These findings establish that in the perception of static natural scenes and in a change detection task, attention is preferentially allocated to objects that are the target of another's social attention.

15.
16.
Tombu, M., & Seiffert, A. E. (2008). Cognition, 108(1), 1–25.
Attentional demands of multiple-object tracking were demonstrated using a dual-task paradigm. Participants were asked to make speeded responses based on the pitch of a tone, while at the same time tracking four of eight identical dots. Tracking difficulty was manipulated either concurrent with or after the tone task. If increasing tracking difficulty increases attentional demands, its effect should be larger when it occurs concurrent with the tone. In Experiment 1, tracking difficulty was manipulated by having all dots briefly attract one another on some trials, causing a transient increase in dot proximity and speed. Results showed that increasing proximity and speed had a significantly larger effect when it occurred at the same time as the tone task. Experiments 2 and 3 showed that manipulating either proximity or speed independently was sufficient to produce this pattern of results. Experiment 4 manipulated object contrast, which affected tracking performance equally whether it occurred concurrent with or after the tone task. Overall, results support the view that the moment-to-moment tracking of multiple objects demands attention. Understanding what factors increase the attentional demands of tracking may help to explain why tracking is sometimes successful and at other times fails.

17.
Research on dynamic attention has shown that visual tracking is possible even if the observer’s viewpoint on the scene holding the moving objects changes. In contrast to smooth viewpoint changes, abrupt changes typically impair tracking performance. The lack of continuous information about scene motion, resulting from abrupt changes, seems to be the critical variable. However, hard onsets of objects after abrupt scene motion could explain the impairment as well. We report three experiments employing object invisibility during smooth and abrupt viewpoint changes to examine the influence of scene information on visual tracking, while equalizing hard onsets of moving objects after the viewpoint change. Smooth viewpoint changes provided continuous information about scene motion, which supported the tracking of temporarily invisible objects. However, abrupt and, therefore, discontinuous viewpoint changes strongly impaired tracking performance. Object locations retained with respect to a reference frame can account for the attentional tracking that follows invisible objects through continuous scene motion.

18.
Holcombe, A. O., & Chen, W. Y. (2012). Cognition, 123(2), 218–228.
Driving on a busy road, eluding a group of predators, or playing a team sport involves keeping track of multiple moving objects. In typical laboratory tasks, the number of visual targets that humans can track is about four. Three types of theories have been advanced to explain this limit. The fixed-limit theory posits a set number of attentional pointers available to follow objects. Spatial interference theory proposes that when targets are near each other, their attentional spotlights mutually interfere. Resource theory asserts that a limited resource is divided among targets, and performance reflects the amount available per target. Utilising widely separated objects to avoid spatial interference, the present experiments validated the predictions of resource theory. The fastest target speed at which two targets could be tracked was much slower than the fastest speed at which one target could be tracked. This speed limit for tracking two targets was approximately that predicted if at high speeds, only a single target could be tracked. This result cannot be accommodated by the fixed-limit or interference theories. Evidently a fast target, if it moves fast enough, can exhaust attentional resources.

19.
In the first three experiments, subjects felt solid geometrical forms and matched raised-line pictures to the objects. Performance was best in experiment 1 for top views, with shorter response latencies than for side views, front views, or 3-D views with foreshortening. In a second experiment with blind participants, matching accuracy was not significantly affected by prior visual experience, but speed advantages were found for top views, with 3-D views also yielding better matching accuracy than side views. There were no performance advantages for pictures of objects with a constant cross section in the vertical axis. The early-blind participants had lower performance for side and frontal views. The objects were rotated to oblique orientations in experiment 3. Early-blind subjects performed worse than the other subjects given object rotation. Visual experience with pictures of objects at many angles could facilitate identification at oblique orientations. In experiment 5 with blindfolded sighted subjects, tangible pictures were used as targets and as choices. The results yielded superior overall performance for 3-D views (mean, M = 74% correct) and much lower matching accuracy for top views as targets (M = 58% correct). Performance was highest when the target and matching viewpoint were identical, but 3-D views (M = 96% correct) were still far better than top views. The accuracy advantage of the top views also disappeared when more complex objects were tested in experiment 6. Alternative theoretical implications of the results are discussed.

20.
We investigated, in two experiments, the discrimination of bilateral symmetry in vision and touch using four sets of unfamiliar displays. They varied in complexity from 3 to 30 turns. Two sets were 2-D flat forms (raised-line shapes and raised surfaces), while the other two were 3-D objects constructed by extending the 2-D shapes in height (short and tall objects). Experiment 1 showed that visual accuracy was excellent, but latencies increased for raised-line shapes compared with 3-D objects. Experiment 2 showed that unimanual exploration was more accurate for asymmetric than for symmetric judgments, but only for 2-D shapes and short objects. Bimanual exploration at the body midline facilitated the discrimination of symmetric shapes without changing performance with asymmetric ones. Accuracy for haptically explored symmetric stimuli improved as the stimuli were extended in the third dimension, while no such trend appeared for asymmetric stimuli. Unlike vision, haptic response latency decreased for 2-D shapes compared with 3-D objects. The present results are relevant to the understanding of symmetry discrimination in vision and touch.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号