Similar Literature
A total of 20 similar documents were retrieved.
1.
Individual differences in children's online language processing were explored by monitoring their eye movements to objects in a visual scene as they listened to spoken sentences. Eleven skilled and 11 less-skilled comprehenders were presented with sentences containing verbs that were either neutral with respect to the visual context (e.g., Jane watched her mother choose the cake, where all of the objects in the scene were choosable) or supportive (e.g., Jane watched her mother eat the cake, where the cake was the only edible object). On hearing the supportive verb, the children made fast anticipatory eye movements to the target object (e.g., the cake), suggesting that children extract information from the language they hear and use this to direct ongoing processing. Less-skilled comprehenders did not differ from controls in the speed of their anticipatory eye movements, suggesting normal sensitivity to linguistic constraints. However, less-skilled comprehenders made a greater number of fixations to target objects, and these fixations were shorter in duration than those observed in the skilled comprehenders, especially in the supportive condition. This pattern of results is discussed in terms of possible processing limitations, including difficulties with memory, attention, or suppressing irrelevant information.

2.
The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye-movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but pose a challenge for theories assuming object-based visual indices.

3.
Altmann, G. T., & Kamide, Y. (1999). Cognition, 73(3), 247-264.
Participants' eye movements were recorded as they inspected a semi-realistic visual scene showing a boy, a cake, and various distractor objects. Whilst viewing this scene, they heard sentences such as 'the boy will move the cake' or 'the boy will eat the cake'. The cake was the only edible object portrayed in the scene. In each of two experiments, the onset of saccadic eye movements to the target object (the cake) was significantly later in the move condition than in the eat condition; saccades to the target were launched after the onset of the spoken word cake in the move condition, but before its onset in the eat condition. The results suggest that information at the verb can be used to restrict the domain within the context to which subsequent reference will be made by the (as yet unencountered) post-verbal grammatical object. The data support a hypothesis in which sentence processing is driven by the predictive relationships between verbs, their syntactic arguments, and the real-world contexts in which they occur.
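The latency measure described in this abstract, saccade onset to the target relative to the acoustic onset of the spoken target word, can be made concrete with a small sketch. The Python snippet below is not the original analysis; the trial layout, field names, and numbers are hypothetical and serve only to show how anticipatory (pre-word-onset) saccades in the eat condition would appear in such a measure.

```python
# Minimal sketch (hypothetical data, not the original analysis): compare the
# onset of the first saccade to the target object ("the cake") across the
# "eat" and "move" conditions, time-locked to the onset of the word "cake".
from statistics import mean

trials = [
    # condition, first saccade-to-target onset (ms), target-word onset (ms)
    {"condition": "eat",  "saccade_onset_ms": 780,  "word_onset_ms": 900},
    {"condition": "eat",  "saccade_onset_ms": 850,  "word_onset_ms": 910},
    {"condition": "move", "saccade_onset_ms": 1060, "word_onset_ms": 905},
    {"condition": "move", "saccade_onset_ms": 1150, "word_onset_ms": 915},
]

def relative_latencies(condition):
    """Saccade onset minus word onset; negative values are anticipatory."""
    return [t["saccade_onset_ms"] - t["word_onset_ms"]
            for t in trials if t["condition"] == condition]

for condition in ("eat", "move"):
    rel = relative_latencies(condition)
    print(condition, "mean latency:", mean(rel), "ms,",
          "anticipatory trials:", sum(r < 0 for r in rel))
```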

4.
The present study replicated the well-known demonstration by Altmann and Kamide (1999) that listeners make linguistically guided anticipatory eye movements, but used photographs of scenes rather than clip-art arrays as the visual stimuli. When listeners heard a verb for which a particular object in a visual scene was the likely theme, they made earlier looks to this object (e.g., looks to a cake upon hearing The boy will eat …) than when they heard a control verb (The boy will move …). New data analyses assessed whether these anticipatory effects are due to a linguistic effect on the targeting of saccades (i.e., the where parameter of eye movement control), the duration of fixations (i.e., the when parameter), or both. Participants made fewer fixations before reaching the target object when the verb was selectionally restricting (e.g., will eat). However, verb type had no effect on the duration of individual eye fixations. These results suggest an important constraint on the linkage between spoken language processing and eye movement control: Linguistic input may influence only the decision of where to move the eyes, not the decision of when to move them.
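The abstract's distinction between the where parameter (which object the next saccade targets, indexed by the number of fixations made before reaching the target) and the when parameter (how long individual fixations last) can be illustrated with a short sketch. The fixation records below are invented, and the field layout is an assumption rather than the study's actual data format.

```python
# Minimal sketch (invented data): derive a "where" measure (fixations made
# before the target is reached) and a "when" measure (mean duration of those
# pre-target fixations) from one trial's fixation sequence.
fixations = [
    # (fixated object, fixation duration in ms), in temporal order
    ("boy", 210), ("ball", 185), ("cake", 260), ("cake", 305),
]
target = "cake"

pre_target_durations = []
for obj, duration in fixations:
    if obj == target:
        break
    pre_target_durations.append(duration)

fixations_before_target = len(pre_target_durations)  # "where": saccade targeting
mean_pre_target_duration = (sum(pre_target_durations) / len(pre_target_durations)
                            if pre_target_durations else 0.0)  # "when": fixation timing

print(fixations_before_target, "pre-target fixations,",
      round(mean_pre_target_duration, 1), "ms mean duration")
```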

5.
6.
7.
Eye movements were recorded during the display of two images of a real-world scene that were inspected to determine whether they were the same or not (a comparative visual search task). In the displays where the pictures were different, one object had been changed, and this object was sometimes taken from another scene and was incongruent with the gist. The experiment established that incongruous objects attract eye fixations earlier than the congruous counterparts, but that this effect is not apparent until the picture has been displayed for several seconds. By controlling the visual saliency of the objects the experiment eliminates the possibility that the incongruency effect is dependent upon the conspicuity of the changed objects. A model of scene perception is suggested whereby attention is unnecessary for the partial recognition of an object that delivers sufficient information about its visual characteristics for the viewer to know that the object is improbable in that particular scene, and in which full identification requires foveal inspection.

8.
When viewing a visual scene, eye movements are often language-mediated: people look at objects as those objects are named. Eye movements can even reflect predictive language processing, moving to an object before it is named. Children are also capable of making language-mediated eye movements, even predictive ones, and prediction may be involved in language learning. The present study explored whether eye movements are language-mediated in a more naturalistic task – shared storybook reading. Research has shown that children fixate illustrations during shared storybook reading, ignoring text. The present study used high-precision eye-tracking to replicate this finding. Further, prereader participants showed an increased likelihood of fixating relevant storybook illustrations as words were read aloud, indicating that their eye movements were language-mediated, like those of the adult participants. Language-mediated eye movements to illustrations were reactive, not predictive, in both participant groups.

9.
Visual arguments     
Boland, J. E. (2005). Cognition, 95(3), 237-274.
Three experiments investigated the use of verb argument structure by tracking participants' eye movements across a set of related pictures as they listened to sentences. The assumption was that listeners would naturally look at relevant pictures as they were mentioned or implied. The primary hypothesis was that a verb would implicitly introduce relevant entities (linguistic arguments) that had not yet been mentioned, and thus a picture corresponding to such an entity would draw anticipatory looks. For example, upon hearing ...mother suggested..., participants would look at a potential recipient of the suggestion. The only explicit task was responding to comprehension questions. Experiments 1 and 2 manipulated both the argument structure of the verb and the typicality/co-occurrence frequency of the target argument/adjunct, in order to distinguish between anticipatory looks to arguments specifically and anticipatory looks to pictures that were strongly associated with the verb, but did not have the linguistic status of argument. Experiment 3 manipulated argument status alone. In Experiments 1 and 3, there were more anticipatory looks to potential arguments than to potential adjuncts, beginning about 500 ms after the acoustic onset of the verb. Experiment 2 revealed a main effect of typicality. These findings indicate that both real world knowledge and argument structure guide visual attention within this paradigm, but that argument structure has a privileged status in focusing listener attention on relevant aspects of a visual scene.

10.
Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.

11.
Conceptual representations of everyday scenes are built in interaction with the visual environment, and these representations guide our visual attention. Perceptual features and object-scene semantic consistency have been found to attract our attention during scene exploration. The present study examined how visual attention in 24-month-old toddlers is attracted by semantic violations and how perceptual features (i.e., saliency, centre distance, clutter, and object size) and linguistic properties (i.e., object label frequency and label length) affect gaze distribution. We compared eye movements of 24-month-old toddlers and adults while exploring everyday scenes which either contained an inconsistent (e.g., soap on a breakfast table) or consistent (e.g., soap in a bathroom) object. Perceptual features such as saliency, centre distance, and clutter of the scene affected looking times in the toddler group during the whole viewing time, whereas looking times in adults were affected only by centre distance during the early viewing time. Adults looked longer at inconsistent than at consistent objects whether the objects had high or low saliency. In contrast, toddlers showed a semantic consistency effect only when objects were highly salient. Additionally, toddlers with lower vocabulary skills looked longer at inconsistent objects, while toddlers with higher vocabulary skills looked equally long at both consistent and inconsistent objects. Our results indicate that 24-month-old children use scene context to guide visual attention when exploring the visual environment. However, perceptual features have a stronger influence on eye movement guidance in toddlers than in adults. Our results also indicate that language skills influence cognitive but not perceptual guidance of eye movements during scene perception in toddlers.
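The question posed in this abstract, how looking times vary with perceptual features of the critical object, is at heart a regression question. The sketch below illustrates that kind of analysis on invented numbers; the feature values, looking times, and the plain least-squares fit are stand-ins for illustration only, not the study's actual statistical models.

```python
# Minimal sketch (invented data): predict looking time from perceptual features
# of the critical object with ordinary least squares. Purely illustrative.
import numpy as np

# columns: saliency, centre distance, clutter, object size (arbitrary units)
features = np.array([
    [0.9, 2.0, 0.3, 1.2],
    [0.4, 5.5, 0.7, 0.8],
    [0.7, 3.1, 0.5, 1.0],
    [0.2, 6.0, 0.9, 0.6],
    [0.6, 4.2, 0.4, 1.1],
    [0.3, 5.0, 0.8, 0.7],
])
looking_time_ms = np.array([820.0, 430.0, 640.0, 350.0, 560.0, 410.0])

# add an intercept column and fit by least squares
X = np.column_stack([np.ones(len(features)), features])
coefs, *_ = np.linalg.lstsq(X, looking_time_ms, rcond=None)

for name, b in zip(["intercept", "saliency", "centre_distance", "clutter",
                    "object_size"], coefs):
    print(f"{name}: {b:+.1f}")
```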

12.
In the study, 33 participants viewed photographs from either a potential homebuyer's or a burglar's perspective, or in preparation for a memory test, while their eye movements were recorded. A free recall and a picture recognition task were performed after viewing. The results showed that perspective had rapid effects, in that the second fixation after the scene onset was more likely to land on perspective-relevant than on perspective-irrelevant areas within the scene. Perspective-relevant areas also attracted longer total fixation time, more visits, and longer first-pass dwell times than did perspective-irrelevant areas. As for the effects of visual saliency, the first fixation was more likely to land on a salient than on a nonsalient area; salient areas also attracted more visits and longer total fixation time than did nonsalient areas. Recall and recognition performance reflected the eye fixation results: Both were overall higher for perspective-relevant than for perspective-irrelevant scene objects. The relatively low error rates in the recognition task suggest that participants had gained an accurate memory for scene objects. The findings suggest that the role of bottom-up versus top-down factors varies as a function of viewing task and the time-course of scene processing.

13.
In this paper we briefly describe preliminary data from two experiments that we have carried out to investigate the relationship between visual encoding and memory for objects and their locations within scenes. In these experiments, we recorded participants' eye movements as they viewed a photograph of a cubicle with 12 objects positioned pseudo-randomly on a desk and shelves. After viewing the photograph, participants were taken to the actual cubicle where they undertook two memory tests. Participants were asked to identify the 12 target objects (from the photograph) presented amongst 12 distractors. They were then required to place each of the objects in the location that they occupied in the photograph. These tests assessed participants' memory for the identity of the objects and their locations. In Experiment 1, we assessed the influence of the encoding period and the test delay on object identity and location memory. In Experiment 2 we manipulated scanning behaviour during encoding by "boxing" some of the objects in the photo. We showed that using boxes to change eye movement behaviour during encoding directly affected the nature of memory for the scene. The results of these studies indicate a fundamental relationship between visual encoding and memory for objects and their locations. We explain our findings in terms of the Visual Memory Model (Hollingworth & Henderson, 2002).

14.
This study investigates how speed of motion is processed in language. In three eye-tracking experiments, participants were presented with visual scenes and spoken sentences describing fast or slow events (e.g., The lion ambled/dashed to the balloon). Results showed that looking time to relevant objects in the visual scene was affected by the speed of the verb in the sentence, the speaking rate, and the configuration of a supporting visual scene. The results provide novel evidence for the mental simulation of speed in language and show that internal dynamic simulations can be played out via eye movements toward a static visual scene.

15.

16.
Cultural differences have been observed in scene perception and memory: Chinese participants purportedly attend to the background information more than do American participants. We investigated the influence of culture by recording eye movements during scene perception and while participants made recognition memory judgements. Real-world pictures with a focal object on a background were shown to both American and Chinese participants while their eye movements were recorded. Later, memory for the focal object in each scene was tested, and the relationship between the focal object (studied, new) and the background context (studied, new) was manipulated. Receiver-operating characteristic (ROC) curves showed that both sensitivity and response bias changed when objects were tested in new contexts. However, neither the decrease in accuracy nor the response bias shift differed with culture. The eye movement patterns were also similar across cultural groups. Both groups made longer and more fixations on the focal objects than on the contexts. The similarity of eye movement patterns and recognition memory behaviour suggests that both Americans and Chinese use the same strategies in scene perception and memory.
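The sensitivity and response-bias results reported here reflect a signal-detection analysis of the old/new recognition judgements. The sketch below shows the standard d' and criterion computation from hit and false-alarm rates; the rates themselves are made-up numbers, not values from the study.

```python
# Minimal sketch: standard signal-detection measures for old/new recognition.
# Hit and false-alarm rates below are illustrative, not data from the study.
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit (inverse of the standard-normal CDF)

def sdt_measures(hit_rate, fa_rate):
    """Return sensitivity (d') and response bias (criterion c)."""
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Focal objects tested in their studied context vs. a new context (hypothetical).
for context, (hits, fas) in {"studied context": (0.85, 0.20),
                             "new context": (0.75, 0.30)}.items():
    d, c = sdt_measures(hits, fas)
    print(f"{context}: d' = {d:.2f}, c = {c:.2f}")
```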

17.

18.
Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading is somewhat more advanced than research on eye movements in scene perception and visual search, and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with "real-world" tasks and research utilizing the visual-world paradigm are also briefly discussed.

19.
Because of the strong associations between verbal labels and the visual objects that they denote, hearing a word may quickly guide the deployment of visual attention to the named objects. We report six experiments in which we investigated the effect of hearing redundant (noninformative) object labels on the visual processing of multiple objects from the named category. Even though the word cues did not provide additional information to the participants, hearing a label resulted in faster detection of attention probes appearing near the objects denoted by the label. For example, hearing the word chair resulted in more effective visual processing of all of the chairs in a scene relative to trials in which the participants attended to the chairs without actually hearing the label. This facilitation was mediated by stimulus typicality. Transformations of the stimuli that disrupted their association with the label while preserving the low-level visual features eliminated the facilitative effect of the labels. In the final experiment, we show that hearing a label improves the accuracy of locating multiple items matching the label, even when eye movements are restricted. We posit that verbal labels dynamically modulate visual processing via top-down feedback, an instance of linguistic labels greasing the wheels of perception.

20.
Using eye movement recording and partial-scene recognition, this study examined how two conditions in symmetric scenes of virtual buildings, uniform object orientation and salient object orientation, affect the establishment of an intrinsic reference frame. The results showed that (1) when all buildings in the scene had orientations and those orientations were uniform, participants were equally likely to establish the intrinsic reference frame on the basis of object orientation as on the basis of the axis of symmetry; and (2) when only one building in the scene had an orientation and the remaining objects had none (the orientation-salient condition), participants tended to establish the intrinsic reference frame on the basis of the axis of symmetry. The influence of object orientation on the establishment of an intrinsic reference frame is therefore limited and unstable.
