Similar Literature
 20 similar documents found
1.
Spatial reference in multiple object tracking is available from configurations of dynamic objects and static reference objects. In three experiments, we studied the use of spatial reference in tracking and in relocating targets after abrupt scene rotations. Observers tracked 1, 2, 3, 4, and 6 targets in 3D scenes in which white balls moved on a square floor plane. The floor plane was either visible, thus providing static spatial reference, or invisible. Without scene rotations, the configuration of dynamic objects provided sufficient spatial reference, and static spatial reference conferred no advantage. In contrast, with abrupt scene rotations of 20°, static spatial reference aided the relocation of targets. A wireframe floor plane lacking local visual detail was as effective as a checkerboard. Individually colored geometric forms as static reference objects provided no additional benefit either, even when targets were centered on these forms at the moment of the abrupt scene rotation. Individualizing the dynamic objects themselves by color for a brief interval around the abrupt scene rotation, however, did improve performance. We conclude that attentional tracking of moving targets proceeds within dynamic configurations but detached from the static local background.

2.
Acta Psychologica, 2013, 143(3), 317–321
Recent studies have demonstrated that central cues, such as eyes and arrows, reflexively trigger attentional shifts. However, it is not clear whether the attention induced by these two cues can be attached to objects within the visual scene. In the current study, subjects' attention was directed to one of two objects (square outlines) via the observation of uninformative directional arrows or eye gaze. The objects then rotated 90° clockwise or counter-clockwise to a new location, and the target stimulus was presented within one of the two objects. Results showed that, independent of cue type, participants responded faster to targets in the cued object than to those in the uncued object. This suggests that in dynamic displays, both gaze and arrow cues are able to trigger reflexive shifts of attention to objects moving within the visual scene.

3.
Factors affecting joint visual attention in 12- and 18-month-olds were investigated. In Experiment 1, infants responded to 1 of 3 parental gestures: looking; looking and pointing; or looking, pointing, and verbalizing. Target objects were either identical to or distinctive from distractor objects. Targets were in front of or behind the infant to test G. E. Butterworth's (1991b) hypothesis that 12-month-olds do not follow gaze to objects behind them. Pointing elicited more episodes of joint visual attention than looking alone. Distinctive targets elicited more episodes of joint visual attention than identical targets. Although infants most reliably followed gestures to targets in front of them, even 12-month-olds followed gestures to targets behind them. In Experiment 2, parents were rotated so that the magnitude of their head turns to fixate front and back targets was equivalent. Infants looked more at front than at back targets, but there was also an effect of the magnitude of head turn. Infants' relative neglect of back targets is thus partly due to the "size" of the adult's gesture.

4.
This study investigates how speed of motion is processed in language. In three eye‐tracking experiments, participants were presented with visual scenes and spoken sentences describing fast or slow events (e.g., The lion ambled/dashed to the balloon). Results showed that looking time to relevant objects in the visual scene was affected by the speed conveyed by the verb of the sentence, the speaking rate, and the configuration of the supporting visual scene. The results provide novel evidence for the mental simulation of speed in language and show that internal dynamic simulations can be played out via eye movements toward a static visual scene.

5.
Research on dynamic attention has shown that visual tracking is possible even if the observer’s viewpoint on the scene holding the moving objects changes. In contrast to smooth viewpoint changes, abrupt changes typically impair tracking performance. The lack of continuous information about scene motion, resulting from abrupt changes, seems to be the critical variable. However, hard onsets of objects after abrupt scene motion could explain the impairment as well. We report three experiments employing object invisibility during smooth and abrupt viewpoint changes to examine the influence of scene information on visual tracking, while equalizing hard onsets of moving objects after the viewpoint change. Smooth viewpoint changes provided continuous information about scene motion, which supported the tracking of temporarily invisible objects. However, abrupt and, therefore, discontinuous viewpoint changes strongly impaired tracking performance. Object locations retained with respect to a reference frame can account for the attentional tracking that follows invisible objects through continuous scene motion.

6.
The reported experiment tested the effect of abrupt and unpredictable viewpoint changes on the attentional tracking of multiple objects in dynamic 3-D scenes. Observers tracked targets that moved independently among identical-looking distractors on a rectangular floor plane. The tracking interval was 11 s. Abrupt rotational viewpoint changes of 10°, 20°, or 30° occurred after 8 s. Accuracy of tracking targets across a 10° viewpoint change was comparable to accuracy in a continuous control condition, whereas viewpoint changes of 20° and 30° impaired tracking performance considerably. This result suggests that tracking is mainly dependent on a low-level process whose performance is preserved across small disturbances by the visual system's ability to compensate for small changes in retinocentric coordinates. Tracking across large viewpoint changes succeeds only if allocentric coordinates are remembered and used to relocate targets after displacements.
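
As a rough illustration of the allocentric account above (a minimal sketch; the 20° angle comes from the abstract, while the function names and coordinate values are assumptions for illustration), relocating targets after an abrupt scene rotation amounts to applying the same rotation to the remembered allocentric positions:

```python
import math

def rotate_point(x, y, angle_deg, cx=0.0, cy=0.0):
    """Rotate (x, y) about (cx, cy) by angle_deg, counter-clockwise."""
    a = math.radians(angle_deg)
    dx, dy = x - cx, y - cy
    return (cx + dx * math.cos(a) - dy * math.sin(a),
            cy + dx * math.sin(a) + dy * math.cos(a))

# Remembered allocentric target positions on the floor plane (hypothetical values).
targets = [(0.3, -0.2), (-0.5, 0.4), (0.1, 0.6)]

# After an abrupt 20° viewpoint change, predicted post-rotation positions follow
# from applying the same 20° rotation to the remembered coordinates.
predicted = [rotate_point(x, y, 20.0) for x, y in targets]
print(predicted)
```

A retinocentric process, by contrast, has no access to such a transformation and can only tolerate displacements small enough to fall within its compensation range.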

7.
Conceptual representations of everyday scenes are built in interaction with the visual environment, and these representations guide our visual attention. Perceptual features and object-scene semantic consistency have been found to attract our attention during scene exploration. The present study examined how visual attention in 24-month-old toddlers is attracted by semantic violations and how perceptual features (i.e., saliency, centre distance, clutter, and object size) and linguistic properties (i.e., object label frequency and label length) affect gaze distribution. We compared eye movements of 24-month-old toddlers and adults while they explored everyday scenes containing either an inconsistent (e.g., soap on a breakfast table) or a consistent (e.g., soap in a bathroom) object. Perceptual features such as saliency, centre distance, and clutter of the scene affected looking times in the toddler group throughout the viewing period, whereas looking times in adults were affected only by centre distance during the early viewing period. Adults looked longer at inconsistent than at consistent objects whether the objects had high or low saliency. In contrast, toddlers showed a semantic consistency effect only when objects were highly salient. Additionally, toddlers with lower vocabulary skills looked longer at inconsistent objects, whereas toddlers with higher vocabulary skills looked equally long at consistent and inconsistent objects. Our results indicate that 24-month-old children use scene context to guide visual attention when exploring the visual environment. However, perceptual features have a stronger influence on eye-movement guidance in toddlers than in adults. Our results also indicate that language skills influence the cognitive, but not the perceptual, guidance of eye movements during scene perception in toddlers.

8.
Staudte, M., & Crocker, M. W. Cognition, 2011, (2), 268–291
Referential gaze during situated language production and comprehension is tightly coupled with the unfolding speech stream (Griffin, 2001; Meyer et al., 1998; Tanenhaus et al., 1995). In a shared environment, utterance comprehension may further be facilitated when the listener can exploit the speaker’s focus of (visual) attention to anticipate, ground, and disambiguate spoken references. To investigate the dynamics of such gaze-following and its influence on utterance comprehension in a controlled manner, we use a human–robot interaction setting. Specifically, we hypothesize that referential gaze is interpreted as a cue to the speaker’s referential intentions, which facilitates or disrupts reference resolution. Moreover, the use of a dynamic and yet extremely controlled gaze cue enables us to shed light on the simultaneous and incremental integration of the unfolding speech and gaze movement. We report evidence from two eye-tracking experiments in which participants saw videos of a robot looking at and describing objects in a scene. The results reveal a quantified benefit-disruption spectrum of gaze on utterance comprehension and, further, show that gaze is used, even during the initial movement phase, to restrict the spatial domain of potential referents. These findings more broadly suggest that people treat artificial agents similarly to human agents and, thus, validate such a setting for further explorations of joint attention mechanisms.

9.
In the present study, we investigated the influence of object-scene relationships on eye movement control during scene viewing. We specifically tested whether an object that is inconsistent with its scene context is able to capture gaze from the visual periphery. In four experiments, we presented rendered images of naturalistic scenes and compared baseline consistent objects with semantically, syntactically, or both semantically and syntactically inconsistent objects within those scenes. To disentangle the effects of extrafoveal and foveal object-scene processing on eye movement control, we used the flash-preview moving-window paradigm: A short scene preview was followed by an object search or free viewing of the scene, during which visual input was available only via a small gaze-contingent window. This method maximized extrafoveal processing during the preview but limited scene analysis to near-foveal regions during later stages of scene viewing. Across all experiments, there was no indication of an attraction of gaze toward object-scene inconsistencies. Rather than capturing gaze, the semantic inconsistency of an object weakened contextual guidance, resulting in impeded search performance and inefficient eye movement control. We conclude that inconsistent objects do not capture gaze from an initial glimpse of a scene.

10.
This study investigates the effects of attention‐guiding stimuli on 4‐month‐old infants' object processing. In the human head condition, infants saw a person turning her head and eye gaze towards or away from objects. When presented with the objects again, infants showed increased attention in terms of longer looking time measured by eye tracking and an increased Nc amplitude measured by event‐related potentials (ERP) for the previously uncued objects versus the cued objects. This suggests that the uncued objects were previously processed less effectively and appeared more novel to the infants. In a second condition, a car instead of a human head turned towards or away from objects. Eye‐tracking results did not reveal any significant difference in infants' looking time. ERPs indicated only a marginally significant effect in late slow‐wave activity associated with memory encoding for the uncued objects. We conclude that human head orientation and gaze direction affect infants' object‐directed attention, whereas movement and orientation of a car have only limited influence on infants' object processing.

11.
People have an amazing ability to identify objects and scenes with only a glimpse. How automatic is this scene and object identification? Are scene and object semantics—let alone their semantic congruity—processed to a degree that modulates ongoing gaze behavior even if they are irrelevant to the task at hand? Objects that do not fit the semantics of the scene (e.g., a toothbrush in an office) are typically fixated longer and more often than objects that are congruent with the scene context. In this study, we overlaid a letter T onto photographs of indoor scenes and instructed participants to search for it. Some of these background images contained scene-incongruent objects. Despite their lack of relevance to the search, participants spent more total time looking at semantically incongruent than at congruent objects occupying the same position in the scene. Subsequent tests of explicit and implicit memory showed that participants remembered few of the incongruent objects, and no more of the congruent ones. We argue that when we view natural environments, scene and object relationships are processed obligatorily, such that irrelevant semantic mismatches between scene and object identity can modulate ongoing eye-movement behavior.

12.
Previous research has identified multiple features of individual objects that are capable of guiding visual attention. However, in dynamic multi-element displays not only individual object features but also changing spatial relations between two or more objects might signal relevance. Here we report a series of experiments that investigated the hypothesis that reduced inter-object spacing guides visual attention toward the corresponding objects. Our participants discriminated between different probes that appeared on moving objects while we manipulated spatial proximity between the objects at the moment of probe onset. Indeed, our results confirm that there is a bias toward temporarily close objects, which persists even when such a bias is harmful for the actual task (Experiments 1a and 1b). Remarkably, this bias is mediated by oculomotor processes. Controlling for eye-movements reverses the pattern of results (Experiment 2a), whereas the location of the gaze tends toward the temporarily close objects under free viewing conditions (Experiment 2b). Taken together, our results provide insights into the interplay of attentional and oculomotor processes during dynamic scene processing. Thereby, they also add to the growing body of evidence showing that within dynamic perception, attentional and oculomotor processes act conjointly and are hardly separable.
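
A minimal sketch of the proximity signal described above (all names, positions, and the 50-pixel cutoff are assumptions, not values from the study): compute pairwise inter-object distances at the moment of probe onset and flag "temporarily close" pairs:

```python
import itertools
import math

def pairwise_distances(positions):
    """Return {(i, j): distance} for every pair of object indices."""
    return {
        (i, j): math.dist(positions[i], positions[j])
        for i, j in itertools.combinations(range(len(positions)), 2)
    }

# Hypothetical object positions (in pixels) at probe onset.
positions = [(100, 120), (115, 130), (400, 80), (390, 300)]

CLOSE_THRESHOLD = 50  # assumed cutoff for "temporarily close", in pixels

close_pairs = [pair for pair, d in pairwise_distances(positions).items()
               if d < CLOSE_THRESHOLD]
print(close_pairs)  # -> [(0, 1)]
```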

13.
Remote cooperation can be improved by transferring the gaze of one participant to the other. However, a partner's communicative intention can be difficult to interpret from gaze alone. Thus, gaze transfer has proven inferior to mouse transfer in remote spatial referencing tasks where locations had to be pointed out explicitly. Given that eye movements serve as an indicator of visual attention, it remains to be investigated whether gaze and mouse transfer differentially affect the coordination of joint action when the situation demands an understanding of the partner's search strategies. In the present study, a gaze or mouse cursor was transferred from a searcher to an assistant in a hierarchical decision task. The assistant could use this cursor to guide the movement of a window that continuously revealed the parts of the display the searcher needed to find the right solution. In this context, we investigated how the ease of using gaze transfer depended on whether a link could be established between the partner's eye movements and the objects he was looking at. Therefore, in addition to the searcher's cursor, the assistant saw either the positions of these objects or only a grey background. When the objects were visible, performance and the number of spoken words were similar for gaze and mouse transfer. Without them, however, gaze transfer resulted in longer solution times and more verbal effort, as participants relied more strongly on speech to coordinate the window movement. Moreover, an analysis of the spatio-temporal coupling of the transmitted cursor and the window indicated that when no visual object information was available, assistants confidently followed the searcher's mouse cursor but not his gaze cursor. Once again, the results highlight the importance of carefully considering task characteristics when applying gaze transfer in remote cooperation.
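
One plausible way to quantify the spatio-temporal coupling mentioned above (a sketch under assumed data shapes and sampling rate; this is not the authors' analysis code) is to cross-correlate the transmitted cursor trace with the window trace and take the lag of maximal correlation:

```python
import numpy as np

def coupling_lag(cursor, window, max_lag=60):
    """Find the lag (in samples) at which two 1-D position traces correlate best.

    A positive lag means the window trails the cursor.
    """
    cursor = (cursor - cursor.mean()) / cursor.std()
    window = (window - window.mean()) / window.std()
    best_lag, best_r = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            c, w = cursor[:-lag], window[lag:]
        elif lag < 0:
            c, w = cursor[-lag:], window[:lag]
        else:
            c, w = cursor, window
        r = np.corrcoef(c, w)[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r

# Hypothetical 60 Hz traces: a window that follows the cursor by ~200 ms.
t = np.arange(600)
cursor_x = np.sin(t / 30.0)
window_x = np.roll(cursor_x, 12)  # 12 samples ≈ 200 ms at 60 Hz
print(coupling_lag(cursor_x, window_x))  # -> (12, ~1.0)
```

A tight peak at a short positive lag would indicate that the assistant follows the cursor closely; a flat or noisy correlation profile would indicate the weak coupling reported for the gaze cursor without object information.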

14.
The current research uses eye‐tracking technology in a consumer context to explore the interactive effects of olfactory and visual cues on consumers' eye gaze patterns. We manipulate the semantic correspondence between pictorial objects depicted in print advertisements and odors smelled (or not) while looking at the ads. The results indicate that smelling a scent that shares learned semantic associations with an object in the advertisement diverts consumers' eye gazes to the semantically related object in the ad, with positive downstream effects on advertising recall and purchase intent. This is the first study we are aware of demonstrating multisensory integration of odors and pictures on consumer eye gaze patterns with clear implications for consumer choice.

15.
How observers distribute limited processing resources to regions of a scene is based on a dynamic balance between current goals and reflexive tendencies. Past research showed that these reflexive tendencies include orienting toward objects that expand as if they were looming toward the observer, presumably because this signal indicates an impending collision. Here we report that during visual search, items that loom abruptly capture attention more strongly when they approach from the periphery rather than from near the center of gaze (Experiment 1), and target objects are more likely to be attended when they are on a collision path with the observer rather than on a near-miss path (Experiment 2). Both effects are exaggerated when search is performed in a large projection dome (Experiment 3). These findings suggest that the human visual system prioritizes events that are likely to require a behaviorally urgent response.
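
To make the looming signal concrete (a sketch based on standard viewing geometry; object size, distance, and speed are assumed values, not the study's stimuli), the visual angle subtended by an approaching object grows ever faster as distance shrinks:

```python
import math

def angular_size_deg(radius_m, distance_m):
    """Visual angle (degrees) subtended by an object of given radius at given distance."""
    return 2 * math.degrees(math.atan(radius_m / distance_m))

# Hypothetical object: radius 0.2 m, starting 10 m away, approaching at 5 m/s.
radius, d0, speed = 0.2, 10.0, 5.0
for t in (0.0, 0.5, 1.0, 1.5):
    d = d0 - speed * t
    print(f"t={t:.1f}s  distance={d:.1f}m  angle={angular_size_deg(radius, d):.2f} deg")
# The expansion rate accelerates sharply as the object nears; this rapidly
# expanding retinal image is the looming cue that captures attention.
```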

16.
This study varied the number of targets and the angle of an abrupt rotation of the motion reference frame to examine how observers with field-independent versus field-dependent cognitive styles perform in a multiple object tracking task. The results showed: (1) Under low task difficulty (stable reference frame, 3 or 4 targets) and medium task difficulty (reference frame abruptly rotated 20° to the right, 4 targets), field-independent participants tracked significantly better than field-dependent participants; under high task difficulty (stable frame with 5 targets, or frame abruptly rotated 40° to the right with 4 targets), the two groups did not differ significantly, indicating that the effect of cognitive style on tracking performance depends on task difficulty. (2) As the number of targets increased from 3 to 5, the growing tracking load significantly reduced tracking accuracy. (3) Compared with a stable frame, abrupt rightward rotations of 20° and 40° both significantly impaired tracking performance; the rotation disrupted scene continuity and thereby degraded tracking.

17.
18.
Infants' ability to represent objects has received significant attention from the developmental research community. With the advent of eye-tracking technology, detailed analyses of infants' looking patterns during object occlusion have revealed much about the nature of infants' representations. The current study continues this research by analyzing infants' looking patterns in a novel manner and by comparing infants' looking at a simple display, in which a single three-dimensional (3D) object moves along a continuous trajectory, to a more complex display, in which two 3D objects undergo trajectories that are interrupted behind an occluder. Six-month-old infants saw an occlusion sequence in which a ball moved along a linear path, disappeared behind a rectangular screen, and then a ball (ball-ball event) or a box (ball-box event) emerged at the other edge. An eye-tracking system recorded infants' eye movements during the event sequence. Results from examination of infants' attention to the occluder indicate that during the occlusion interval infants looked longer at the side of the occluder behind which the moving occluded object was located, shifting gaze from one side of the occluder to the other as the object(s) moved behind the screen. Furthermore, when events included two objects, infants attended to the spatiotemporal coordinates of the objects longer than when a single object was involved. These results provide clear evidence that infants' visual tracking differs between one-object and two-object displays. This finding suggests that infants may require more focused attention to the hidden position of objects in more complex multiple-object displays and provides additional evidence that infants represent the spatial location of moving occluded objects.
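
A minimal sketch of this kind of occluder-side analysis (the AOI coordinates, sampling rate, and sample data are hypothetical, not the study's pipeline): classify each gaze sample into the left or right half of the occluder and accumulate looking time per side:

```python
# Hypothetical occluder spanning x = 200..440 px; gaze sampled at 120 Hz.
OCCLUDER_LEFT, OCCLUDER_RIGHT = 200, 440
MIDLINE = (OCCLUDER_LEFT + OCCLUDER_RIGHT) / 2
SAMPLE_MS = 1000 / 120  # duration of one gaze sample

def dwell_per_side(gaze_samples):
    """Sum looking time (ms) to each half of the occluder.

    gaze_samples: iterable of (x, y) gaze positions recorded during occlusion.
    """
    dwell = {"left": 0.0, "right": 0.0}
    for x, _y in gaze_samples:
        if OCCLUDER_LEFT <= x <= OCCLUDER_RIGHT:
            side = "left" if x < MIDLINE else "right"
            dwell[side] += SAMPLE_MS
    return dwell

# Toy samples: gaze shifting rightward as the hidden object moves behind the screen.
samples = [(210, 300), (250, 305), (330, 300), (390, 298), (430, 302)]
print(dwell_per_side(samples))
```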

19.
Conversation is supported by the beliefs that people have in common and the perceptual experience that they share. The visual context of a conversation has two aspects: the information that is available to each conversant, and their beliefs about what is present for each other. In our experiment, we separated these factors for the first time and examined their impact on a spontaneous conversation. We independently manipulated whether a visual scene was actually shared and whether conversants believed it was shared. Participants watched videos of actors talking about a controversial topic, then discussed their own views while looking at either a blank screen or the actors. Each believed (correctly or not) that their partner was looking either at a blank screen or at the same images. We recorded conversants' eye movements, quantified how they were coordinated, and analyzed their speech patterns. Gaze coordination has been shown to be causally related to the knowledge people share before a conversation and the information they later recall. Here, we found that both the presence of the visual scene and beliefs about its presence for another influenced language use and gaze coordination.

20.
Past research has shown that change detection performance is often more efficient for target objects that are semantically incongruent with the surrounding scene context than for target objects that are semantically congruent with it. One account of these findings is that attention is attracted to objects whose identity conflicts with the meaning of the scene, perhaps as a violation of expectancies created by earlier recruitment of scene gist information. An alternative account of the performance benefit for incongruent objects is that attention is more apt to linger on them, perhaps because identifying these objects is more difficult owing to conflicting information from the scene context. In the current experiment, we presented natural scenes in a change detection task while monitoring eye movements. We find that eye gaze is attracted to incongruent objects relatively early during scene processing.
